Oh for cryin' out loud, ...
Modern systems do not crash because code gets "tangled." They do not crash the way that you describe. You clearly have no clue about operating system design and behavior. You exhibit such deep misunderstanding that I am astounded that you even dared post on the subject. Either you are arrogant enough to make terribly wrong assumptions and not care that you expose your ignorance, or else someone who explained a crash to you "talked down" to you because they didn't think you were worth the time of a proper explanation.
I will NOT talk down to you. What WOULD happen on Windows or UNIX or OpenVMS or VMware (the four systems I can talk about) is this: if you tried to launch yet another task after actually reaching a capacity limit, you would get an "Insufficient resources" message. And if the program launched OK but later tried to extend its virtual size past its limits, IT would have crashed (as an individual program) with an "Insufficient resources" error. In some situations the message would be more specific about which resource was depleted, e.g. "Insufficient Virtual Memory" or "Task Table Full" or something like that.
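If you want to see that behavior for yourself, here is a minimal sketch (Python, on a Unix-like system; I'm assuming Linux, since not every kernel enforces RLIMIT_AS the same way) of one process capping its own virtual size, blowing past it, and getting a clean error while everything else keeps running:

    # Sketch: a process that hits ITS OWN virtual-memory quota fails
    # cleanly; the operating system and every other program keep running.
    import resource

    # Cap this process's address space at roughly 256 MB (soft and hard).
    limit = 256 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

    try:
        hog = bytearray(512 * 1024 * 1024)  # try to grab ~512 MB
    except MemoryError:
        # The allocation is refused with an error to THIS program only.
        print("Insufficient resources: allocation refused")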
For YEARS computer operating systems have had the ability to detect resource limits. Any system that actually COULD take on so much work as to choke itself is either poorly designed or administered by someone too inexperienced or incompetent to determine the system's capacity. ALL systems potentially have the ability to know when they've gone too far, unlike some humans.
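And detecting the limit BEFORE you hit it is not magic either. Here is a minimal sketch of that kind of admission check (Linux-only, since it reads /proc/meminfo, and the headroom threshold is a number I made up for illustration):

    # Sketch: refuse new work BEFORE the system chokes on it.
    def mem_available_kb():
        # MemAvailable is the kernel's own estimate of memory that new
        # work can use without pushing the system into heavy swapping.
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemAvailable:"):
                    return int(line.split()[1])
        raise RuntimeError("MemAvailable not found")

    MIN_HEADROOM_KB = 512 * 1024  # hypothetical 512 MB floor

    def admit_new_task():
        if mem_available_kb() < MIN_HEADROOM_KB:
            raise RuntimeError("Insufficient resources: task refused")
        # ...otherwise launch the task...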
This is a case where a boss saying "I don't care, raise the quotas" results in GREATER down-time. It happened to me once. I documented the demand, my response, and the boss's "I don't care, do it." That boss was given a lateral transfer two months later because of his idiocy. The new boss told me to retune accordingly and give him specs on what to get to prevent further recurrences. (It turned out to be physical memory.)
A good analyst can monitor resource consumption and predict overloads from whatever growth is forecast. Such a person also knows when a system HAS reached a limit and thus needs "tuning" to throttle the job queue or the task list. (Or needs a memory upgrade, which also happens.) In my 28 1/2 years of service with the U.S. Navy Reserve, I predicted upgrade requirements no fewer than four times for CPU issues, maybe five or six times for disk issues, and at least twice for bandwidth consumption issues. I even automated the predictive processes to the point that they were accurate to the hour at which a particular disk would enter a danger zone. It is entirely possible to prevent exactly the failure mode you are describing.
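The prediction part is nothing fancier than fitting a trend line to the consumption data and solving for where it crosses the threshold. A minimal sketch of the idea (Python 3.10+ for statistics.linear_regression; the sample figures are invented for illustration):

    from statistics import linear_regression

    hours = [0, 24, 48, 72, 96]          # when each sample was taken
    used_gb = [410, 418, 427, 434, 443]  # disk space consumed at each sample
    danger_gb = 500                      # the "danger zone" threshold

    # Least-squares trend: GB consumed per hour, plus the starting point.
    slope, intercept = linear_regression(hours, used_gb)
    hours_to_danger = (danger_gb - intercept) / slope
    print(f"Disk enters the danger zone in about {hours_to_danger:.0f} hours")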
Before you ask: by actual measure, using log files and other performance records, I kept my systems at 99.93% availability during scheduled operational hours, i.e. less than 0.1% unexpected downtime over a 10-year period. Many months my availability was 100%, and much of the rest of the time it was 99.98% or 99.99%. I think my longest run was a 5-month stretch of continuous 100% availability, because that particular O/S (non-Windows) had no patches to apply. No patches, no down-time.
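If you want to check the arithmetic on what 99.93% means in hours (illustrative, assuming a round-number 24x7 schedule; my actual scheduled hours came from the logs):

    scheduled = 10 * 365 * 24                    # ~87,600 hours in 10 years
    downtime = scheduled * (100 - 99.93) / 100   # the 0.07% that wasn't up
    print(f"about {downtime:.0f} hours of unexpected downtime")  # ~61 hours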