The UK's numbers have been fairly consistent, I think, with what the models from the latter half of the outbreak have been predicting, especially later ones such as Cambridge's nowcasting - https://www.mrc-bsu.cam.ac.uk/now-casting/.
I agree with your concerns, though. I wouldn't say I have an opinion here; it's difficult for me to say. But this is what a scientist working on one model told me: "The modelling exercise is a difficult one, but it's not astrology. It's a difficult problem. It's like weather prediction was 40-50 years ago. People would get it wrong, make jokes about it and so on. Now no one makes jokes about weather prediction because you get it correct 90% of the time."
Make what you will of it :)
Also, to add, there are some ways in which the risk of over-relying on models can be mitigated. I've outlined them in my story but let me just put them together here.
1) Ignore hard numbers entirely. Treat model outputs as indicating trends and ranges, not precise forecasts.
2) Do not rely on one model. Get inputs from multiple models and compare them.
3) To compare, you need to know what went into those models. Discard any model that is not 100% open about its methodology. Have independent researchers examine the accuracy of the models (just as RAMP is doing in the UK).
4) Based on informed, transparent inputs from multiple models, see what they recommend. Then weigh the pros and cons of going with the "most recommended" position versus the less recommended options, taking socioeconomic factors into account. The answer to what needs to be done cannot come from a model alone. India, for example, is in dire economic straits. Smaller, richer countries in Europe? Perhaps not so much. The final decision would, and should, vary based on such considerations.
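To make point 2 a bit more concrete, here is a toy sketch (all model names and numbers are invented, purely for illustration) of one way to compare several models' projected ranges instead of trusting any single point estimate: look at where the models' intervals overlap, and how wide the total spread is.

```python
# Hypothetical (low, high) projected case counts from three made-up models.
# The point is the comparison, not the numbers themselves.
projections = {
    "model_a": (40_000, 90_000),
    "model_b": (55_000, 120_000),
    "model_c": (60_000, 100_000),
}

# The interval where all models overlap is a rough "consensus" range;
# the full span shows how much the models disagree overall.
consensus_low = max(low for low, _ in projections.values())
consensus_high = min(high for _, high in projections.values())
full_low = min(low for low, _ in projections.values())
full_high = max(high for _, high in projections.values())

print(f"Consensus range: {consensus_low}-{consensus_high}")  # 60000-90000
print(f"Full spread:     {full_low}-{full_high}")            # 40000-120000
```

A wide gap between the consensus range and the full spread is itself useful information: it tells a decision-maker how much uncertainty they are actually dealing with before weighing the socioeconomic trade-offs in point 4.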
There are so many variables that any sane mind would need some sort of assistance in decision-making -- even if that assistance is imperfect and must be taken with caution.