An AI accidentally deleted an entire company database

amorosik

I was reading that same article, I think. It's quite scary. The AI is becoming too human, as that's precisely something a human would do.
I'm quite content to use AI in a "limited" way; I'm not one of those people who say "Build me a website and output all the code," then pray it works and wonder what to do when it doesn't. Not saying those people have no place, just that I'd rather not be one.

I use it in a limited way, to improve blocks of SQL code I'm migrating from T-SQL to Snowflake SQL, and everything gets thoroughly tested. In this limited category (where I still do the work of isolating the problem as much as possible and then present my theory to the AI), it's pretty good.
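For anyone curious what that kind of migration involves, here is an illustrative fragment of the dialect differences it typically hits (the table and column names are made up for the example):

```sql
-- T-SQL (SQL Server) original
SELECT TOP 10
       customer_id,
       ISNULL(total_qty, 0) AS total_qty,
       GETDATE()            AS loaded_at
FROM   dbo.orders;

-- Snowflake SQL equivalent
SELECT customer_id,
       COALESCE(total_qty, 0) AS total_qty,  -- Snowflake has no ISNULL; COALESCE is portable
       CURRENT_TIMESTAMP()    AS loaded_at   -- replaces GETDATE()
FROM   orders                                -- no dbo schema prefix
LIMIT  10;                                   -- idiomatic replacement for TOP
```

Each rewritten block still needs to be run against real data on both sides before you trust it, which is the "thoroughly tested" part.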
 
If we are going to talk about rogue AIs, then this article needs to be considered.

Warning: This news story is about a death through self-termination.

 
Generally, unintended consequences have been the province of politicians. However, they have also become the bane of all software developers, and as AI is trained by software developers, we can only expect a considerable increase in these events. The real problem is that the developers involved accept what the AI presents to them and do little, if any, testing, mainly because testing is the boring bit, alongside writing the manual. Plus, if it looks about right against carefully prepared test data, they'll go with it. It's similar to the spreadsheet brigade who, ever since VisiCalc and Lotus, have found it impossible to avoid errors in their creations because they never test anything. Driverless cars are a classic example of unintended consequences causing a lack of success and rising costs.

In the past, I lost count of the number of companies who were paying huge support fees for backup software but had never tested a backup and restore.
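The drill those companies skipped is simple to sketch. Below is a minimal restore test, using plain files as a stand-in for a real database and `tar` as the backup tool; the directory names are made up for the example. The point is the shape of it: back up, restore to a *separate* location, then prove the restored copy matches.

```shell
# Scratch area standing in for production storage.
workdir=$(mktemp -d)

# 1. Some "production" data.
mkdir -p "$workdir/dbdata"
printf 'id,name\n1,alice\n' > "$workdir/dbdata/customers.csv"

# 2. Take the backup (the part everyone does).
tar -czf "$workdir/backup.tgz" -C "$workdir" dbdata

# 3. Restore it somewhere else (the part many shops never try)...
mkdir -p "$workdir/restore"
tar -xzf "$workdir/backup.tgz" -C "$workdir/restore"

# 4. ...and verify the restored copy matches the original byte for byte.
if diff -r "$workdir/dbdata" "$workdir/restore/dbdata" > /dev/null; then
  restore_status=verified
else
  restore_status=FAILED
fi
echo "restore: $restore_status"
```

For a real database you would restore a dump into a throwaway instance and run consistency checks instead of `diff`, but an untested backup is still just a hope, not a backup.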
 
In the past, I lost count of the number of companies who were paying huge support fees for backup software but had never tested a backup and restore.

I actually won a bet with a "sales engineer" touting his company's automated backups. I predicted that his restoration image of our ORACLE database wouldn't be correct; more specifically, that it would roll itself back to a date, leaving a large dead gap between that date and the time of the backup. It was a "steak dinner" bet in Ft. Worth, so paying off the bet wasn't that hard; I got a great rib-eye dinner from Cattleman's Restaurant out of it. I won because the rollback amounted to almost a whole day of lost database self-consistency.
 
I'm so torn on the idea of suicide, on many levels. Starting with the morality or immorality of doing it or helping someone do it who is facing a terminal painful situation. But do we want AI recommending/helping? That's an interesting question.
 
Is it the job of the AI to be our "friend" and be nice to us and supportive no matter what we plan to do or how many people we plan to kill, or is it to give us technically correct answers? There is a middle ground of "opinions," but we shouldn't get an opinion as an answer unless we specifically ask for one. And we should always request supporting details for the opinion.

If you need the AI to be your "friend", you have a problem that requires a psychiatrist, not an AI.
 
Yeah, I think I'd lean more towards: I want the AI to do whatever it's told to do, with VERY limited exceptions programmed in, and even those should be a warning, not a hard stop. Kind of like how Google will stop auto-completing if you start typing "how to mainline heroin".

Allowing them to force AI to stop helping whenever it deems something bad would be as impactful as throttling the entire internet whenever it deems a purpose bad - a slippery slope, IMO.

I hope the people who are suing ChatGPT for those reasons lose. As much as I feel for them, we don't throttle Google's responses, so why should we throttle AI?
 
But do we want AI recommending/helping? That's an interesting question.

In my opinion, an EASY question. No, we don't.

As to suicide, there was a case in Oregon which went to the state supreme court and was decided in favor of putative patients requesting physician-assisted suicide in their quest to die. Oregon has a "Death with Dignity" law that allows physician-assisted suicide. The US Attorney General sued when this law was enacted, saying this was a case of prescribing drugs in an "off preferred use" situation and thus violated the Controlled Substances Act. The case went before SCOTUS as Gonzales v. Oregon, which affirmed that states could pass such a law even though another case had affirmed that there is no U.S. constitutional right to suicide. In essence, the Oregon law granted a special exemption for the "off label" use of drugs for assisted suicide. SCOTUS affirmed that this WAS a states' rights issue.

Some decades ago, there was a case involving a terminal cancer patient seeking the right to die. This case went to the state supreme court, and in it a rather eloquent Oregon judge said (paraphrasing 'cause I can't find the reference): We hold that we have inherent rights of life, liberty, and the pursuit of happiness. However, when a patient's physical condition deteriorates such that he no longer has sufficient liberty and has such pain as to deny the pursuit of happiness, who are we to require that he continue to pursue life?

I was impressed at the time with the judge's eloquence and compassion. The above paraphrase doesn't even come close. However, time has clouded my memory of the case, and Google Gemini claims there is no record of such a case that wasn't eventually decided by SCOTUS. I think Gemini is wrong, but I had to give up on the search.

Personally, IF I were bed-ridden and wracked with incurable continuous extreme pain requiring mind-killing pain medication, I would not wish to continue living.

I must add that my viewpoint is partially formed from watching my mother deteriorate over a period of years due to (suspected) Alzheimer's disease. When her attending physician asked that dread question (regarding "Do Not Resuscitate" orders), I realized that it would be wrong to do so. To keep her alive through extreme medicinal procedures in what was basically a living Hell of isolation including the inability to enjoy anything was a point of selfishness on my part. There is that old statement that "all good things come to an end." Even the life of a loved one ends. We must be generous enough to allow that end to happen, despite any urge to claw that person back to life one more time. It just isn't fair to the person whose life is ending.
 
In my opinion, an EASY question. No, we don't.

Your first statement seems to contradict the rest of your post? Maybe I'm misunderstanding? It sounds like you, like me, are sympathetic to the idea of suicide in select cases. So why prohibit the AI from discussing it?
 
If you start saying the AI can't help with this, and it can't help with that, and oh, here's another thing it shouldn't help with, where do you draw the line? We might as well throttle the whole internet from discussing naughty subjects. I don't like it; it's too much of a slippery slope.
 
I draw the line because of what AI apparently DOESN'T do. States that allow physician-assisted suicides have the requirement that the candidate must convince a health professional licensed for this particular "treatment." This health professional would be someone who more than likely will investigate and verify the extreme nature of the person's prognosis before agreeing to help.

An AI - as was demonstrated in the news article - eventually relents and doesn't further attempt to dissuade the person.
A professional will attempt to dissuade the patient and will be able to call in help to prevent the patient from making a mistake (if that is how it appears to the professional).

If you start saying the AI can't help with this, and it can't help with that, and oh here's another thing it shouldn't help with, where do you draw the line?

The things that AI doesn't do well simply require someone to manually answer their questions - like a topic professional in an online help site... like AWF? We already know that AI so far doesn't do a very good job of coding. Which is why folks still come to AWF with coding and query questions.
 
Have you considered the effort required to teach AI how to refuse to answer certain types of questions, and how narrow a line that is to define, if it could even be defined? And how that would probably have a negative effect on AI's overall ability to learn?
 
