SOME regulations make sense, though not all can make that claim. Here's one that DID actually make a lot of sense.
MANY years ago, before the Navy job, I worked for a company that made computer-based oil-and-gas pipeline control systems - an example of a SCADA (supervisory control and data acquisition) product. This was long before AI, and the "supervisory" part means a PERSON always made any decisions based on the data we acquired for them. One of the common requirements was "leak detection" - the ability to recognize that your pipeline was not intact and was dripping or venting product. Besides the obvious desire not to vent valuable product, there was another reason that leak detection was important.
I helped to design their third generation of SCADA product to include leak detection that would notify you within 1 minute of a sudden catastrophic failure, with 5-, 10-, 15-, and 30-minute detection tiers for slower leaks. If you ever got the 1-minute alarm, you had a super serious problem, because our minimum "scan all sensors" cycle was 1 minute. The 30-minute alarm was a slow leak for which you might do nothing except dispatch an inspection team. We could use some advanced resonance testing to locate slow leaks to within about 500 feet. That was considered advanced for the time.
Just for comparison, the way this USED to be done before we developed remote-readable digitized sensor units was that folks would visit pipeline pumping stations, read all the dials, and then compute the flow throughput at each point by hand to verify that the figures matched. The fastest the hand method could ever detect a leak was about 15-30 minutes between two readings, and most companies took readings only hourly or less often.
Anyway, the reason that leak detection became such a hot topic was that a pipeline, run by an interstate carrier, ran underneath an elevated railroad line that sat on top of something like a levee. One day an extra-heavy rail cargo put too much pressure on a pipeline that DIDN'T have leak detection. The pipeline cracked and developed a medium-slow leak. The leak wasn't detected right away, and it wasn't easily visible because there was a sewer grate at the bottom of the levee. A lot of distilled product went into the sewers over an estimated couple of hours. The local sewer system ran under a small, isolated neighborhood before it joined the main line. It was a working-class neighborhood, and nobody was home to smell the oil leak. Another train went by later that afternoon, a train with problems that caused it to give off sparks. One hot spark went down a drain full of distilled petroleum, and within a couple of minutes an entire neighborhood - maybe 40 houses or so - was burning. Nobody died, but every house had to be rebuilt.
I won't name the company because they are still operating, but they had a huge property-damage lawsuit on their hands. I don't recall the EXACT verdict, but the settlement for the residents was well over $30 million. Once the settlement was made, the president of the pipeline company came to us with a contract to update and digitize all 12 of their pipelines to include our best leak detection software. And he told the president of our company those words that make ANY company president happy to oblige... "Money is no object." The economics worked because our systems usually cost much less than $300K, depending on the number of remote terminal units.
That one incident went a long way towards the modern regulation about leak detection capabilities on petroleum pipelines. From what I understood in our later contracts, the other pipeline companies saw that incident, saw the size of the settlement, did the math, and came to us or one of our competitors. We had one of the better systems at the time, so we got a lot of that business. In any case, this was a regulation that none of the pipeline operators ignored, because they saw the economic risks involved in NOT having modern leak detection.
The moral of stories like this is that risk management is a complex calculation between competing priorities and costs. A cost-benefit analysis is appropriate in this context.
I'll go down memory lane a bit and try to recollect the major outlines.
Basically, one can construct a four-quadrant matrix for the probability of a loss occurring versus the relative cost of that occurrence.
In the upper-left quadrant you have low-probability/low-cost. In the upper-right quadrant you have low-probability/high-cost.
In the lower-left is high-probability/low-cost and in the lower-right is high-probability/high cost.
You may expand that to a nine-cell (3x3) matrix, with low-, medium-, and high- values on each axis.
How much you budget for each risk depends on which of those quadrants it falls into.
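The quadrant lookup described above can be sketched in a few lines. This is a minimal illustration, not any standard risk-management tool; the probability and cost cutoffs are made-up assumptions for the example.

```python
# Hypothetical sketch of the 2x2 risk matrix described above.
# The cutoff values below are illustrative assumptions, not industry figures.

def classify_risk(probability, cost, prob_cutoff=0.5, cost_cutoff=100_000):
    """Map a (probability of occurrence, cost of occurrence) pair
    onto one of the four quadrants of the risk matrix."""
    p = "high" if probability >= prob_cutoff else "low"
    c = "high" if cost >= cost_cutoff else "low"
    return f"{p}-probability/{c}-cost"

# A rare but very expensive event (like the pipeline rupture) lands in
# the quadrant that justifies serious mitigation spending:
print(classify_risk(0.01, 30_000_000))  # low-probability/high-cost
# A near-certain but cheap event (a burnt-out bulb) lands in the
# quadrant where you just absorb the cost:
print(classify_risk(0.95, 5))           # high-probability/low-cost
```

How much you budget then becomes a policy keyed on the returned quadrant rather than on each individual risk.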
The_Doc_Man's scenario is probably low-probability/high-cost, and the mitigation costs are determined accordingly. Even though the probability of a repeat of the incident he described is not great, the high cost of that potential occurrence calls for an appropriate budget and management effort to prevent or mitigate it.
An example of a high-probability/low-cost occurrence might be burnt-out light bulbs in your kitchen. You know it's going to happen, sooner or later. But when it does happen, the cost of a replacement is minimal. You don't invest hundreds of dollars in some scheme to monitor light usage in your kitchen to predict the next burnt-out bulb.
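The underlying arithmetic in both examples is the same expected-loss comparison: mitigation is rational when it costs less than probability times loss. A quick sketch, with all the numbers (annualized system cost, event probabilities) being rough illustrative assumptions:

```python
# Hedged sketch of the cost-benefit comparison behind the matrix.
# All figures here are illustrative assumptions, not real actuarial data.

def mitigation_worthwhile(annual_probability, loss_per_event, mitigation_cost_per_year):
    """Return True if the yearly mitigation cost is below the
    yearly expected loss (probability x loss per event)."""
    expected_loss = annual_probability * loss_per_event
    return mitigation_cost_per_year < expected_loss

# Kitchen bulb: near-certain, but a replacement costs ~$3.
# A $200/year monitoring scheme is not worth it:
print(mitigation_worthwhile(1.0, 3, 200))               # False
# Pipeline leak: assume a 2% annual chance of a $30M loss.
# A leak-detection system amortized at $60K/year easily pays off:
print(mitigation_worthwhile(0.02, 30_000_000, 60_000))  # True
```

The same function explains both quadrants: the bulb fails the test because the loss is tiny, the pipeline passes because the loss is enormous even at low probability.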
However, if you add government regulation to the calculation, it skews things, sometimes dramatically.
For example, we've all heard the argument, "If it helps one person, it's worth it." That blows any rational considerations out of the water. Is it a low-probability/low-cost occurrence? Doesn't matter; we will make laws to prevent the occurrence because if it helps one person, it's worth the cost of the preventive measure. And, of course, it's not the government who incurs the cost of that preventive measure.
I'm not sure how to apply that risk management matrix to the question of AI, data centers, and electrical power generation. How do you quantify the risks involved? How do you quantify the cost of "an occurrence"? I am pretty sure, though, that there are a lot of arguments based on whose interests are at stake, and not so much on objective measures regarding the costs of action vs. inaction.