The Great War (1914-1918) Forum


Most successful British (UK) unit WW1


widavies


Surely an unanswerable question - any sort of result which could possibly be drawn from measurable factors is thrown to the winds by the fortunes of war and luck. Wasn't it Hitler who opined that he wanted lucky generals rather than great military minds ?

A unit can go from being spectacularly good to spectacularly bad - witness 51st Highland. Monty thought they were the best he had in Africa, but in Salerno they mutinied (in small, but significant numbers) and in Normandy they failed. Monty complained to Brooke that they had failed in every task given to them. It's explainable of course - after two years of fighting they were more inclined to be cautious than troops who were in contact for the first time. Nevertheless, this "best" became "worst" within 18 months.

George Patton used to say that "luck, in the long run, is given mainly to the efficient" and I think he lifted this from one of the von Moltkes.


If you want to gauge infantry battalions then the place to do it is when they are in the line. I've always said you can gauge a unit by its sentries: being a sentry is the most responsible job ever given to a private soldier, because he has to act on his own judgement without an NCO telling him what to do. Good sentries reflect a well trained and well led unit. Such units would probably be avoided by German patrols, which gives a measure. Well trained and well led units maintain their cohesion, which reflects strong morale, and in the end this is what wins or loses wars. This is the very good reason why the UK places 'maintenance of morale' as the pre-eminent principle of war, as enumerated by JFC Fuller in the early 1920s in his revision of Field Service Regulations.

Absolutely agree with you, but it is still worth knowing what well trained and led units with high morale look like in terms of their historical record. Discipline stats are interesting. Tim Bowman has done some interesting work looking at court martial records in Irish battalions (see his book published by Manchester UP). Are such stats an indicator of combat effectiveness? I think there is a lot more work to be done in this area, and it needs to focus not just on FGCM records but on lower-level offences.

Forget Dupuy; mathematics is irrelevant.

Dupuy has been controversial. It was his analysis that came up with the finding (based IIRC on Second World War data for infantry) that the Germans were 20% man-per-man more effective on the field of battle than the British and Americans. That has been controversial ever since and his methods have been subject to scrutiny.

I don't agree that mathematics and statistics are irrelevant. They can inform historical enquiry. If one poses an historical question, can statistics and statistical methods help to uncover the answer? Yes, there are lies, damned lies and statistics... of course there will always be issues as to whether one has posed the right question, and as to the quality of the raw data, etc. However, if history doesn't try to use such methods, it risks relying on broadly accepted generalities (which can become myths in their own right) and on the particular, i.e. case studies which either back up or 'prove' the rule.

Often, the only way to assess the 'myths' is to look at the statistics in detail to assess whether they are valid, or are merely fiction. I think back to John Terraine in the 1960s, who went through the British Official History counting up the number of German counter attacks on the Somme in 1916 and the distances covered by the armies in the 100 days. It may not have been particularly sophisticated, but it was an important step in moving the history of the BEF on the Western Front away from polemic.


I wish those involved in this project the very best of luck and strongly believe they are wasting their time. Operational analysis, even in retrospect, for a single unit (e.g. a single aircraft/ship/platoon in a single engagement) is hedged about with massive uncertainties and unknowns, all of which have to be given appropriate weighting in the analysis. Uplift these unknown factors to the level of several armies' worth of battalions during a world-wide, four-year war and the graph goes exponential. Usually these uncertain factors are given a range of values against which a sensitivity analysis can be performed. The result (surprise, surprise!) is a broad range of answers. Participants should not be deceived that, just because this is a retrospective analysis, the problem is merely a case of sorting known 'facts' into an eye-pleasing table of results. If only.
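The point about ranges and sensitivity can be sketched in a few lines. The factor names and bands below are entirely invented for illustration; the method (draw each uncertain input from its range, repeat many times, and report the spread rather than a single answer) is the standard Monte Carlo approach to sensitivity analysis described above.

```python
import random

# Hypothetical, made-up bands for the uncertain inputs to one engagement:
# each factor gets a low/high range rather than a point value.
FACTOR_RANGES = {
    "morale": (0.6, 1.2),
    "weather": (0.8, 1.1),
    "communications": (0.5, 1.0),
    "luck": (0.7, 1.3),
}

def one_run(rng):
    """One trial: draw each uncertain factor from its range and combine."""
    score = 1.0
    for low, high in FACTOR_RANGES.values():
        score *= rng.uniform(low, high)
    return score

def sensitivity(trials=10_000, seed=42):
    """Monte Carlo sweep: returns the spread of outcomes, not one answer."""
    rng = random.Random(seed)
    results = sorted(one_run(rng) for _ in range(trials))
    return results[0], results[len(results) // 2], results[-1]

low, median, high = sensitivity()
print(f"min={low:.2f} median={median:.2f} max={high:.2f}")
```

The wide gap between the minimum and maximum is precisely the "broad range of answers" the post warns about: the data never pins down one number.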

How does the analysis leader intend to factor in such unhelpful factors as the effects of 'the fog of war', communications, morale, weather, luck, etc, etc? Or will these battle-winning/losing effects just be ignored? What weighting (if any) will be given to each of them in individual circumstances? Who will agree those weightings?

A timely reminder: in 1982, a shed-load of OA scientists at the MOD's Defence Operational Analysis Establishment were asked to forecast the outcome of the Falklands War. With all their degrees and mathematical expertise and every OA tool at their disposal (including massive computing power), their clear advice to ministers was the following outcome: Argentina WIN, UK LOSE. If they could not do it...

I would drop this project before it gets really nasty.


Gents

I was going to ask how the various reshufflings of battalions would affect the ratings of various divisions - be they the bolstering of New Army Divisions early in the war or the 1918 disbandments. Surely the 'ratings' and performance of various divisions would alter with the departure or disbandment of strong or weak battalions. Some divisions finished the war with almost completely different battalions, commanders and staffs - there are too many variables and too few consistencies for really accurate statistical analysis.

As with writing personal reports for soldiers it's easy to work out who is rated in the very top and in the very bottom but working out where the middle ground sits is much more difficult.

I'd dispute the German aspect of things as well - I'm sure a German Army expert would debunk some of the ratings made in the British intelligence analysis of the 251 divisions of the German Army. I'm sure if a German 'list' existed it too would have inconsistencies.

I'll offer the best of British but I'm not sure this project will make it to fruition.

Regards

Colin


Would it have a point, in your view, Charles?

Quite possibly... but it depends on the question and the data being used. One might find that one has done a lot of work but is missing some critical data which makes it hard to come to an answer.

I think Will has asked a good question, but I think he has underestimated the extent to which the question itself needs to be clearly defined, the staggering volume of data required and the complexity of the analysis. I fear that his idea of asking us to give our assessments of a number of units and then averaging them might only say as much about our own perceptions of those units as it does about the units themselves. With insufficient variables to cover all critical inputs and outcomes I suspect that our subjective preconceptions are likely to creep in.

The SHLM project was a brave attempt to try and answer a similar question at the divisional level. It is nearly 20 years since it was conceived and those involved included not only the titular Peter Simkins, Bryn Hammond, John Lee and Chris McCarthy, but also luminaries such as Gary Sheffield, John Bourne and most of the academics in the war studies dept at Sandhurst. They clearly thought it an approach worth trying. However, as has been pointed out above, it collapsed under the weight of data required. However, IMHO, that doesn't mean that the questions posed by SHLM were pointless.


It may be seen as splitting hairs, but it seems to me that there is a difference between trying (on the basis of a statistical analysis) to "evaluate the effectiveness" of various units and rank ordering the success (i.e. identifying most (and by implication the least) successful) of all units. The former would seem to be a means towards other potential research goals, the latter an end in itself. I am not sure what one gains by the rank ordering?

Chris


I was going to ask how the various reshufflings of battalions would affect the ratings of various divisions - be they the bolstering of New Army Divisions early in the war or the 1918 disbandments. Surely the 'ratings' and performance of various divisions would alter with the departure or disbandment of strong or weak battalions. Some divisions finished the war with almost completely different battalions, commanders and staffs - there are too many variables and too few consistencies for really accurate statistical analysis.

Good question. I know of one PhD student who is actively looking at this question with regard to the divisional reorganisations of 1918. I believe it's more of a qualitative/case study approach. I understand he is trying to assess the impact of the restructuring on the BEF's ability to resist the March offensive. Did those divisions that suffered the most disruptive reorganisations perform worse in March/April 1918 than those that suffered least disruption?

His research is throwing up an interesting hypothesis. A number of divisions were reconstituted in the summer of 1918, largely with a number of battalions which had been withdrawn from Palestine and Macedonia. (For example, 9 battalions of the London Regt were withdrawn from 60 Div in Palestine, I think some ended up in 30 Div.) Those battalions were already highly experienced in open warfare and thus had a different 'skill set' from battalions that had only ever been in trenches on the Western Front. Did their new parent formations perform particularly well in the 100 days given that that period of the war saw a return to open warfare?

As with writing personal reports for soldiers it's easy to work out who is rated in the very top and in the very bottom but working out where the middle ground sits is much more difficult.

I'll offer the best of British but I'm not sure this project will make it to fruition.

That's certainly true. However, I don't think any leading scholar of the BEF would claim that we know definitively which divisions were in the very top and bottom. That's why the SHLM project was important. Did the 31st Div really deserve its epithet (The Thirty-Worst) for example?

Agreed. Will's project hasn't started yet, and SHLM sadly collapsed.


It may be seen as splitting hairs, but it seems to me that there is a difference between trying (on the basis of a statistical analysis) to "evaluate the effectiveness" of various units and rank ordering the success (i.e. identifying most (and by implication the least) successful) of all units. The former would seem to be a means towards other potential research goals, the latter an end in itself. I am not sure what one gains by the rank ordering?

A very important hair splitting! On its own, a simple rank ordered list will tell you very little (i.e. almost nothing), particularly when a number of the variables are subject to assessment and weighting. It won't tell you if there is a real qualitative difference between, say, the first and second on the list. A slight tweaking of the relative weighting could lead to a reordering of the list.
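A toy example shows how fragile a rank ordering can be. The divisions and scores below are invented; the point is only that a modest shift in the relative weighting of two assessed variables is enough to reorder the list.

```python
# Hypothetical scores for three notional divisions on two assessed variables
# (all numbers invented for illustration).
divisions = {
    "Div A": {"attack": 8.0, "defence": 5.0},
    "Div B": {"attack": 7.0, "defence": 7.5},
    "Div C": {"attack": 5.0, "defence": 9.0},
}

def rank(weights):
    """Order the divisions by weighted score, best first."""
    score = lambda d: sum(weights[k] * v for k, v in divisions[d].items())
    return sorted(divisions, key=score, reverse=True)

print(rank({"attack": 0.5, "defence": 0.5}))  # ['Div B', 'Div C', 'Div A']
print(rank({"attack": 0.4, "defence": 0.6}))  # ['Div C', 'Div B', 'Div A']
```

Shifting a tenth of the weight from attack to defence puts a different division at the top, yet nothing about the units themselves has changed - only the analyst's choice of weighting.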

I would expect a statistical analysis to cluster divisions into a small number of clusters - probably between 4 and 6. The divisions within each cluster would have similar levels of effectiveness. It might be a simple range from 'highly effective' to 'ineffective', or perhaps a more nuanced classification, e.g. 'highly effective in all attack and defence scenarios' or 'reliable in limited offensive operations but highly effective in defence'.
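To make the clustering idea concrete, here is a toy sketch. The 'effectiveness scores' are invented numbers for twelve notional divisions, and a real analysis would cluster on many variables at once rather than a single score; this just shows how a simple k-means pass groups similar units into bands.

```python
# Invented one-dimensional 'effectiveness scores' for twelve notional
# divisions, grouped into three bands with a tiny k-means.
scores = [2.1, 2.4, 2.8, 5.0, 5.2, 5.5, 5.9, 8.1, 8.4, 8.8, 9.0, 9.3]

def kmeans_1d(data, k=3, iters=20):
    # Spread the initial centroids evenly across the data range.
    lo, hi = min(data), max(data)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each point to its nearest centroid...
        clusters = [[] for _ in range(k)]
        for x in data:
            nearest = min(range(k), key=lambda i: abs(x - centroids[i]))
            clusters[nearest].append(x)
        # ...then move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return clusters

for label, cluster in zip(["ineffective", "middling", "highly effective"],
                          kmeans_1d(scores)):
    print(label, cluster)
```

The output is a handful of bands rather than a strict 1-to-N ranking, which matches the argument above: clusters say something defensible, an ordered list mostly reflects the weighting.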


A very important hair splitting! On its own, a simple rank ordered list will tell you very little (i.e. almost nothing), particularly when a number of the variables are subject to assessment and weighting. It won't tell you if there is a real qualitative difference between, say, the first and second on the list. A slight tweaking of the relative weighting could lead to a reordering of the list.

I would expect a statistical analysis to cluster divisions into a small number of clusters - probably between 4 and 6. The divisions within each cluster would have similar levels of effectiveness. It might be a simple range from 'highly effective' to 'ineffective', or perhaps a more nuanced classification, e.g. 'highly effective in all attack and defence scenarios' or 'reliable in limited offensive operations but highly effective in defence'.

Hi Charles,

I agree cluster analysis would be an ideal way of looking at the data sets.

What I would like to do is to gather data along the following lines:

1) Prepare a list of all variables considered important in influencing events at battalion level (I would appreciate suggestions from members for populating this list).

2) At divisional level, the variables at play during attacks/defence (e.g. artillery density, machine guns, tanks, transport, communications, intelligence, air support, etc.).

3) Corps level, as above.

4) Geography: different parts of the line would pose differing problems for attack or defence, so these would have to be taken into consideration as a major influence, as would weather conditions and seasons.

5) Other factors (e.g. opposition forces at the time of attack/defence).

Lots of variables, and possibly a long project, but it could be possible to gain valuable information on performance. Maybe we might be surprised by the findings.
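As a purely illustrative sketch of how such multi-level data might be recorded, here is one possible structure in Python. Every key and value is a placeholder of my own invention ('2nd Loamshires' is a deliberately fictitious unit), not a settled scheme:

```python
# One engagement record, nested by the levels listed above:
# battalion, division, corps, geography, opposition.
record = {
    "battalion": {
        "name": "2nd Loamshires",      # hypothetical unit
        "strength": 750,
        "days_since_relief": 6,
    },
    "division": {
        "artillery_density": 1.5,      # e.g. guns per 100 yards of front
        "machine_guns": 64,
        "tanks": 0,
        "air_support": False,
    },
    "corps": {
        "intelligence_grade": "B",
        "communications": "runner/telephone",
    },
    "geography": {
        "sector": "Ypres salient",
        "season": "autumn",
        "ground": "waterlogged",
    },
    "opposition": {
        "divisions_opposite": 2,
        "wire_state": "uncut",
    },
}

# Flatten for later statistical work: one row of named columns per engagement.
row = {f"{level}.{k}": v for level, vars_ in record.items()
       for k, v in vars_.items()}
print(sorted(row)[:3])
```

Flattening into named columns up front would make it far easier to feed hundreds of such records into whatever clustering or regression method is eventually chosen.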

Regards

Will Davies


It might be important to specify clearly what is understood as "effective" / "successful" and then operationalize that through the sorts of things you are discussing.

I have some related experience (in a different field) where I attempted to measure the effectiveness of organizations in influencing policy processes. One of the things which emerged in the research was that there were sometimes "internal" / "organisational" reasons for certain courses of action which were as important (and sometimes more important) than the prima facie reason for the action.

I am not entirely sure how this might play out in your approach, but I am thinking of something like Second Bellewaarde (25 September 1915), which is usually described as subsidiary to (or perhaps a diversion from) the main action at Loos. If that was indeed the main purpose of the attack then, even without the capture of any of the "objectives", it might be considered "successful" (in actual fact I do not think it was successful on either account) - but it would seem to me to be very difficult to factor this sort of thing in. Taking the same line of reasoning: whilst by most short-term, declaratory goals the Battle of the Somme would probably not register as a resounding success, if part of its purpose was to draw German attention away from Verdun, or if crucial lessons were learnt as a result of the battle (as Sheffield etc. have argued), then these too may be confounding elements.

I don't mean to sound overly negative about the project - I think it is interesting and has potential - but my experience is that one can waste a lot of time and effort unless the need for, approach to and methodology of the research are specified as precisely as possible from the outset (and even then, bitter experience tells me...), along the lines of an abbreviated dissertation prospectus [shudder!]

Cheers,

Chris


Among the data that would be needed are some important facts concerning the enemy. Where will that come from, given that most of the German operational records no longer exist?


A couple of points:

The 'mutiny' at Salerno in 1943 occurred because men coming out of the medical system were being posted to units other than the ones they had previously served with. They were happy to fight, but only with their 'own' unit. It's an indictment of the British regimental system, and an argument for why a corps of infantry should have been created.

I agree that discipline stats can also be an indicator. However, I'd observe that c. 1980 in Germany, in a divisional area with about 18 major units, one unit figured in 50% of all military crime. Needless to say they mostly recruited in a large city north of Cumbria, and alcohol was almost certainly a factor!

I don't think it's a matter of the Germans 'fearing' some BEF units, just a matter of tactical common sense: don't conduct fighting patrols against units likely to be alert and on the ball unless you really have to; it's much easier to go for the sloppy ones.


Having studied all 16 Royal Warwicks fighting battalions from August 1914 to 31 July 1917, I cannot possibly see how you can compare their relative performance, as the factors involved in success or failure at any time are so variable or impossible to evaluate at battalion level. Even comparing the four TF battalions who were together in 143 Brigade would be impossible. What sources would you use anyway? Where one war diary entry might give eight reasons why a significant attack failed, several others have no post-battle evaluation at all. If a battalion going forward finds its 'flanks in the air', who or what is at fault - the battalion, one or both of those on each side, Brigade, the plan itself, the German response, Division, an intelligence failure, the fireplan, etc., etc.?

Whereas I can demonstrate statistically the gradual/rapid dilution of Warwickshire men (including Brummies) in each battalion from the start of the war, the exercise being suggested here is pie in the sky!


Instead of a list of possible points of comparison, I think the OP should put forward a list of data points that can be treated as standard across fighting units, with his reasons for choosing them and the data he employed in setting them. He then needs to provide a scale or scales for measuring these data points within a unit and across units.

I believe that the factors involved in an action are infinitely variable and problematic in the extreme to isolate. For any useful comparison to be made between two measures, there needs to be a degree of constancy shared between them, and I do not believe that any two military actions share a sufficient degree of comparability. Different units fighting in different actions, with all the multifarious points of difference, make it impossible to compare how well they did. Different conditions as the war continued make it barely possible to compare actions from one year to the next. How do we compare the 1st Division of 1914, with its well trained regulars, with the 1st Division of late 1918, with its mixture of battle-hardened veterans of the Somme and Passchendaele and new recruits conscripted a few months before? These discrepancies multiply the closer one looks.

Setting all that to one side: if we did somehow set up a huge set of differential equations to measure and compare performance over time and across units - and that is what it would require - what purpose would the result serve? Does anyone think that champions of a particular regiment or battalion would accept that their lads were no longer the cream of the British Army? That the 17th Loamshires had emerged from this calculation as 'the best'? And if they did, what then? What exactly is the point of this exercise?


I would appreciate suggestions from members for populating this list .....

Maybe we might be surprised by findings.

I (and, I suspect, many others) have better things to do with our lives. As to "surprise" at the findings - I think that is a given, along with disbelief and anger. Let battle commence. I shall watch from my chateau.


Hi All,

Thanks for all the feedback - the good, the bad and the downright ugly (thinking about a can of worms here!).

I'll take all the many and varied comments on board and have a good think about how, or indeed whether, I'm going to proceed with this venture:

If I do proceed (gulp), I'm thinking of rating each unit by a set of pre-defined variables, limited only to the periods of attack or defence that the unit was involved in. Static periods of holding the line will not be taken into consideration, except where they had a direct influence on the condition of the unit during the above.

In summary, from all your feedback it seems that the main considerations affecting a unit from external sources are as follows:

1) Time-related (operations changed between 1914 and 1918); the study might be limited to comparisons at differing points in the war.

2) The corps/division/brigade or army the unit was attached to.

3) Weather conditions prevailing during the event.

4) The opposition's strength and fortifications facing any action.

5) Time in line, losses, quality of replacements.

6) Artillery.

7) Machine guns.

8) Intelligence/planning.

9) Commanders.

10) Sectors of operations.

11) Additional support.

12) Logistics.

13) Morale.

14) Regular/TF/New Army?

15) Luck!

If I have neglected to mention anything then please put me right.
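For what it's worth, the fifteen factors above could be captured as one record per engagement before any scoring is attempted. The sketch below is only one possible shape, with every field name, value and unit invented for illustration ('1st Loamshires' is a deliberately fictitious battalion); Will's actual scheme may differ.

```python
from dataclasses import dataclass, field

@dataclass
class Engagement:
    # Fields numbered to match the list of fifteen factors above.
    unit: str
    date: str                      # 1) point in the war
    formation: str                 # 2) corps/division/brigade attachment
    weather: str                   # 3) prevailing conditions
    opposition: str                # 4) enemy strength and fortifications
    days_in_line: int              # 5) time in line before the action
    losses: int                    # 5) recent casualties
    artillery_support: int         # 6) e.g. guns per 100 yards
    machine_guns: int              # 7)
    intelligence_quality: int      # 8) scored 1-5, say
    commander: str                 # 9)
    sector: str                    # 10)
    extra_support: list = field(default_factory=list)  # 11) tanks, air, etc.
    logistics_score: int = 3       # 12) scored 1-5
    morale_score: int = 3          # 13) scored 1-5
    unit_type: str = "Regular"     # 14) Regular / TF / New Army
    # 15) luck is deliberately omitted: it cannot be scored, only argued over.

example = Engagement(
    unit="1st Loamshires", date="1916-07-01", formation="X Corps",
    weather="clear", opposition="fortified village", days_in_line=4,
    losses=120, artillery_support=1, machine_guns=8,
    intelligence_quality=2, commander="Lt-Col Smith", sector="Somme",
)
print(example.unit_type)  # defaults apply where the diaries are silent
```

Defaults for the hard-to-document factors (logistics, morale) at least make the gaps in the sources explicit rather than silently skewing the comparison.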

Regards

Will
