Resilience is a concept that has evolved over the last 100 years. During this time the UK has gradually moved from civil defence (a focus on war risks, with some spillover benefits for civil emergencies) to Integrated Emergency Management (IEM, an all-hazards approach driven by risk assessment) to Resilience (preparedness for effects across networked systems). And in each case, embedding change has taken years.
Modern emergency powers have their roots in the post-WWI desire to be able to tackle any threat to the state, and to recognise broader civil contingency risks beyond war itself. In the run-up to WWII, this early concept of emergency planning remained focussed on security, though it did take account of critical supply chains and risks to national infrastructure. The formal extension to civil contingencies, and the assignment of official responsibilities to local responders, came just before the outbreak of WWII and naturally focussed on protecting local communities from the impacts of war.
After the war, civil defence continued to develop, expanding to cover smaller-scale civil crises such as the widespread disruption caused by strikes in the 1970s. But as the risks facing the UK evolved in the post-war period, so did our approach to tackling them. It was not until the 1980s that the concept of IEM emerged, taking a broader, risk-based approach to the whole range of hazards facing the UK as a nation. This was adapted further in the early 2000s into the new Resilience approach, driven in part by the 9/11 attacks.
Today, IEM and Resilience are employed across the world: IEM forms the basis of emergency management in most developed countries, and while the UK Government was an outlier when it adopted Resilience in the 2000s, that approach is now common practice internationally. But the key deficiency of both has been the inability to get ahead of problems - to tackle them at source.