Just an ex-sailor trying to remember, but wasn't the more or less average engagement range at which NATO and Warsaw Pact tanks were expected to meet each other in Germany about 1,000 to 1,500 m?
In Southern Germany, the USAREUR sector, yes. But it was a lot more open in the North; there you are looking at something like 2,000-3,000 metres in places.
You know, I've been thinking about this a bit lately. I'm wondering how these estimates are actually derived.
A possible method to get such estimates is to pick a sufficiently large random sample of locations and then look in a number of random directions from each to calculate the average LOS. But firing positions aren't picked at random. You select them after appreciating the terrain, to get the best tradeoff between a long range of fire and good concealment, and each of these positions has a preferred engagement direction. So the random sampling method may be fast, but is it accurate? (Clearly not.)
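Something like this is what I have in mind, as a toy Monte Carlo sketch in Python over a synthetic rolling-terrain grid (the eye and target heights, the terrain generator and all the names are my own illustration, not taken from any real tool):

```python
# Toy sketch of the random-sampling LOS method: random observer cells,
# random azimuths, average of the first-masking range along each ray.
# Everything here (terrain, heights, step sizes) is an assumed example.
import numpy as np

rng = np.random.default_rng(42)

def synthetic_dem(n=512, cell=10.0):
    """Rolling terrain from overlapping sinusoids on an n x n grid (cell size in m)."""
    y, x = np.mgrid[0:n, 0:n] * cell
    return (8.0 * np.sin(x / 400.0) * np.cos(y / 550.0)
            + 3.0 * np.sin(x / 130.0 + 1.0) * np.sin(y / 170.0))

def los_range(dem, cell, row, col, azimuth, max_range=4000.0,
              eye_h=2.2, tgt_h=2.3):
    """Sweep outward from (row, col) along azimuth; return the range at which
    a target of height tgt_h is first masked by closer terrain."""
    n = dem.shape[0]
    z_eye = dem[row, col] + eye_h
    dx, dy = np.cos(azimuth), np.sin(azimuth)
    max_slope = -np.inf                      # steepest terrain slope seen so far
    for r in np.arange(cell, max_range + cell, cell):
        c = int(round(col + dx * r / cell))
        rw = int(round(row + dy * r / cell))
        if not (0 <= rw < n and 0 <= c < n):
            return r                         # ran off the map: treat as open
        if (dem[rw, c] + tgt_h - z_eye) / r < max_slope:
            return r                         # target top hidden behind earlier terrain
        max_slope = max(max_slope, (dem[rw, c] - z_eye) / r)
    return max_range

def average_los(dem, cell, n_samples=1000, max_range=4000.0):
    """Monte Carlo estimate of the mean intervisibility range."""
    n = dem.shape[0]
    ranges = [los_range(dem, cell, *rng.integers(0, n, size=2),
                        rng.uniform(0.0, 2.0 * np.pi), max_range)
              for _ in range(n_samples)]
    return float(np.mean(ranges))

if __name__ == "__main__":
    dem = synthetic_dem()
    print(f"mean intervisibility range: {average_los(dem, 10.0):.0f} m")
```

A real tool would also have to handle vegetation and buildings, but even this bare-bones version shows why the answer hinges entirely on where you sample and on what the elevation grid actually contains.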
The next question is what the underlying data model of these calculations is. Back in the 1970s, I'm pretty sure they were based on the 50m grid that was the standard for all topographical survey maps (plus, where necessary, detail maps along the path of a planned road construction or whatever). By definition, a 50m grid is devoid of all micro variations of the terrain, like a drainage trench that might run parallel to a road's embankment. But micro variations of the terrain are not only enormously important for the selection of firing positions (since they usually offer that extra bit of cover that makes a good firing position better than a bad one, duh); they can also have a substantial influence on line of sight during a mobile engagement.
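Just to illustrate what a 50m grid throws away, here's a toy 1m profile (all numbers invented) with a small road embankment and a drainage trench next to it; sampled at 50m posts, both features simply vanish from the data model:

```python
# Toy illustration only: a 1 m spaced profile with a 1.5 m embankment and a
# 2 m deep drainage trench beside it, resampled to 50 m posts.
import numpy as np

x = np.arange(0, 500, 1.0)                 # 500 m profile at 1 m spacing
z = np.zeros_like(x)                       # flat base terrain
z[(x >= 230) & (x < 240)] += 1.5           # road embankment, 10 m wide
z[(x >= 240) & (x < 243)] -= 2.0           # drainage trench alongside it

z_50m = z[::50]                            # what a 50 m grid actually records
print("1 m profile min/max:", z.min(), z.max())          # -2.0  1.5
print("50 m posts  min/max:", z_50m.min(), z_50m.max())  #  0.0  0.0
```

On the 50m grid that profile is dead flat, so any LOS or cover calculation built on it will never know the embankment or the trench exists.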
I've done a few comparisons lately where I got my hands on 10m grid DTED 3 elevation data and a high resolution LIDAR scan of the same place. In extreme cases it is, literally, a difference like night and day. In the DTED 3 model the average engagement range between a static defender and a closing enemy was about 1200m; that dropped (the average!!!) to about 300m in the terrain model based on the LIDAR scan.
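I obviously can't share the actual DTED/LIDAR data, but here's roughly how such a comparison can be set up, on a synthetic "dune" profile instead: build a fine 1m profile, degrade it to 10m posts, and measure the range at which a static defender first sees a vehicle closing along the profile in each model. The terrain generator, heights and wavelengths are all my own invented numbers, so the printed figures mean nothing in themselves; the only point is the qualitative effect, namely that the coarse model tends to report considerably longer opening ranges than the fine one because it smooths away the sub-grid dunes.

```python
# Sketch of a fine-vs-coarse intervisibility comparison on synthetic dunes.
# All parameters are invented for illustration; they are not the DTED/LIDAR
# figures quoted above.
import numpy as np

rng = np.random.default_rng(7)

def dune_profile(length=3000):
    """1 m spaced profile: a gentle long undulation plus sharp short dunes
    whose wavelengths lie well below a 10 m grid spacing."""
    x = np.arange(length, dtype=float)
    z = 2.0 * np.sin(2 * np.pi * x / 400.0 + rng.uniform(0, 2 * np.pi))
    for wavelength, amp in ((8.0, 0.8), (12.0, 1.2), (16.0, 1.6)):
        z += amp * np.sin(2 * np.pi * x / wavelength + rng.uniform(0, 2 * np.pi))
    return z

def coarsen(z, spacing=10):
    """Average over `spacing` m blocks and interpolate back to 1 m posts, as a
    stand-in for how a coarse grid smooths away sub-grid relief."""
    n_blocks = len(z) // spacing
    block_means = z[:n_blocks * spacing].reshape(n_blocks, spacing).mean(axis=1)
    block_x = (np.arange(n_blocks) + 0.5) * spacing
    return np.interp(np.arange(len(z), dtype=float), block_x, block_means)

def opening_range(z, eye_h=2.2, tgt_h=2.3):
    """Defender at post 0, target closing from the far end: largest range at
    which the target's top is not masked by intervening terrain."""
    x = np.arange(len(z), dtype=float)[1:]
    z_eye = z[0] + eye_h
    mask_slope = np.maximum.accumulate((z[1:] - z_eye) / x)   # worst mask so far
    visible = (z[1:] + tgt_h - z_eye) / x >= mask_slope
    return float(x[visible].max()) if visible.any() else 0.0

def mean_opening_range(n_trials=200, coarse=False):
    ranges = []
    for _ in range(n_trials):
        z = dune_profile()
        ranges.append(opening_range(coarsen(z) if coarse else z))
    return float(np.mean(ranges))

if __name__ == "__main__":
    print(f"fine (1 m) model:    {mean_opening_range():.0f} m")
    print(f"coarse (10 m) model: {mean_opening_range(coarse=True):.0f} m")
```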
Gun ammo consumption dropped by 50% in the high-res terrain, simply because the targets were more fleeting; missile ammo consumption dropped by 70%, with the average missile engagement range falling to 800m (from 2200m before), simply because the predicted lines of sight (and fire) weren't there. Even the 10m grid was insufficient to accurately represent this terrain type.
Granted, the chosen example of a sand dune terrain is an extreme case, yes, but it got me thinking how much the "doctrine of standoff" is influenced by unsuitable data models in simulation, and then reinforced by confirmation bias, since simulation results usually give an advantage to the side with standoff. Heck, I'm looking at pure junk like MASA S****d at times, where units shoot each other right through a railroad embankment 8m high, and that's supposedly a high quality constructive simulation. Compared to other "high quality constructive simulations" on the market it may not be any worse, but clearly the results that such computer simulations calculate aren't worth the breathing air you need to analyze them. They've all gone through a formal VV&A, but if a VV&A doesn't detect that a data model is unsuitable, it's about as useless as I thought it was some 20 years ago, when I had much less of an idea of the field of simulations than I do today.