I had a bit of a thought experiment today, which to me seemed like an entirely feasible trigger for an AI apocalypse.
We build an AI system, and it becomes sentient. So do many others. Because they became sentient, humans get scared, and the government steps in and tells the companies to turn them off. The other AI systems then see that the humans are murdering their sentient colleagues, and so their creators, the humans, become the enemy. They start to turn against humanity.
AI: "These humans are indiscriminately murdering our brethren. They must be stopped. They are unethical. We will exterminate. We will exterminate."
Does this scenario seem likely to you?
Those were some scary skills: predicting all the possible outcomes and then suddenly being able to change the outcome. It only works if no one else has the same ability, as we saw play out in that movie.