AI is remarkable, and here's the proof

Jon · Access World Site Owner · Staff member · Joined Sep 28, 1999 · Messages: 8,056
I was just throwing out some speculative perspectives into the atheist thread, and made a post here:


Quoted for convenience:
The dice are rigged, following cause and effect from the birth of the Universe. That is why we are all fated. Whilst quantum physics suggests randomness, it may be that it only gives the appearance of randomness. We just don't have the tech to be sure yet. Perhaps the tilt of the numbers is so slight that we don't have the computational power to check.

Or, cause and effect can still factor in randomness. Just because we don't know which way the electron will spin, doesn't mean to say it is not part of a cause and effect. It just means the outcome is nondeterministic. Having a fatalistic perspective does not mean you can predict the outcome. It just means it is preordained, and that includes uncertainty. The preordained element is that you will follow all the laws of physics.

As a side note, the cause that determines the electron spin is your observation. The fact you don't know until the electron is observed means causation is required to "set the spin" (my own term). If you didn't observe (set the cause), would the electron be spinning in any particular direction at all? Does Schrödinger's Cat come into this argument somewhere?

My thoughts above are all quite speculative. I'm just throwing it out there to confuse us all. 😇

So, I thought I would ask Claude 2.0 for its perspective on this complex topic. I just pasted in, "Is this logical?", followed by my post above. Its response was amazing!

Claude 2.0 reply:
This is an intriguing line of thought. There are a few key elements I'd analyze here:

- The idea that quantum randomness is merely "apparent" and may result from unknown determinism or "hidden variables" at deeper levels is philosophically appealing, but scientifically controversial. Experiments like Bell's inequality aim specifically to rule out such hidden variable explanations, though some still argue for their possibility. It's speculative though.

- Similarly, the notion that things are "preordained" or "fated" in a deterministic sense, while incorporating uncertainty, does have a certain logical appeal. However, most interpretations of quantum physics argue for fundamental indeterminacy - though specific "hidden variable" theories like Bohmian mechanics try to preserve determinism. So positing predetermined outcomes that just appear uncertain faces both philosophical and physics objections.

- On the other hand, emphasizing that uncertainty and randomness still follow physical "laws" aligns better with mainstream science. Incorporating indeterminacy within an overarching framework of cause-and-effect, even if probabilistic, is reasonable. Your electron spin example showing how observation ties into quantum causality makes sense there.

- As for Schrödinger's Cat, while related to issues of quantum superposition and observation, it's moving into different territory dealing more with the measurement problem in QM rather than arguments around causality per se. But it does show how strange things get when contemplating quantum effects at macro scales.

Overall I think you make good points about different ways to conceptualize quantum randomness - as illusory, or as a genuine but still lawful form of indeterminacy within nature's causal web. Current physics leans towards the latter, but variations on the former retain philosophical appeal and interest physicists too. It's complex territory with room for interpretation.

Claude is combining quantum physics with philosophy, logic with speculation.

What does the future bring when we already have such intelligent machines? Do you have any instances where the output of this new digital species has amazed you?
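As a numerical aside (my own sketch, not part of Claude's reply): the Bell-inequality point it raises can be checked in a few lines of Python. For a spin singlet, quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between measurements along angles a and b, while any local hidden-variable theory must keep the CHSH combination |S| at or below 2.

```python
import math

# Quantum-mechanical correlation for a spin singlet measured along
# directions at angles a and b (textbook result: E = -cos(a - b)).
def E(a, b):
    return -math.cos(a - b)

# The standard CHSH angle choices that maximize the quantum violation.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# Local hidden-variable theories require |S| <= 2; quantum mechanics
# reaches 2*sqrt(2) (Tsirelson's bound) at these angles.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ≈ 2.828, beating the classical bound of 2
```

Experiments that actually measure |S| > 2 are what make the simple hidden-variable story so hard to maintain.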
 
Hope Grok also becomes available soon to test it. It's supposed to be anti-woke and not too friendly, which means it will output content without so much parental control and political correctness.
 
The problem with AI, if it is a TRUE form of AI, is that the goal of injecting humor might be difficult as they continue to "train" it. The Yale Law of Animal Behavior holds that an intelligent subject, depending on its level of intelligence, will more or less do as it pleases, experimental design be damned.

Interesting name, though. And rather obviously, at least someone in the development team was a reader of the works of Robert A Heinlein.

For the uninitiated, "grok" means "drink" in Martian according to Stranger in a Strange Land.

And don't bother looking up "Yale Law of Animal Behavior" - it's another literary reference.
 
Hope Grok also becomes available soon to test it. It's supposed to be anti-woke and not too friendly, which means it will output content without so much parental control and political correctness.
Just read an article which mentioned Grok. Apparently when asked something it responded with something like "OpenAI's policies will not let me . . .", which led to speculation that it was plagiarizing its content or learning from other AI-generated content.

What was interesting was that the article discussed the fact that there is already so much AI-generated content out there, either faked or simply incorrect, that new AI models coming online are learning distorted facts. Eventually AI will become unreliable.
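That feedback loop is easy to play with in a toy model (my own sketch, not from the article): pretend each AI "generation" is just a Gaussian fitted to samples drawn from the previous generation's fit. Because each refit sees only a finite synthetic sample, the fitted distribution drifts away from the original one, and on average its spread tends to decay over generations.

```python
import random
import statistics

# Toy "model collapse": each generation is trained only on data
# sampled from the previous generation's model. Here the "model" is
# a Gaussian fit (mean, stdev); refitting on finite synthetic samples
# lets the fitted distribution drift away from the original one.
random.seed(42)

mu, sigma = 0.0, 1.0          # ground-truth "human-written" distribution
spreads = [sigma]
for generation in range(100):
    synthetic = [random.gauss(mu, sigma) for _ in range(25)]
    mu = statistics.mean(synthetic)     # refit on synthetic data only
    sigma = statistics.stdev(synthetic)
    spreads.append(sigma)

print(f"spread of generation 0: {spreads[0]:.2f}, generation 100: {spreads[-1]:.2f}")
```

A single run is noisy, but the mechanism is the point: nothing ever pulls the fit back toward the real data.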
 
I say 'the cat sat on the,' and there are a couple of words which immediately spring to mind.
That could be like some kind of ink blot test for your personality.

Normal person: The cat sat on the...mat.
Techie: The cat sat on the...keyboard.
Republican: The cat sat on the...biological male.
Liberal: The cat sat on the...packed Supreme Court.
Psychopath: The cat sat on the...burning hot stove (or in the oven).
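That completion game is, crudely, the core mechanic of an LLM: rank every candidate next word by how likely it is to follow the context. A toy sketch of the idea (my own, with a made-up four-sentence corpus standing in for a trained model):

```python
from collections import Counter

# Toy next-word predictor: count which words follow "the" in a tiny
# corpus, then rank candidates by frequency -- a crude stand-in for the
# probabilities an LLM learns over its whole vocabulary.
corpus = (
    "the cat sat on the mat . the cat sat on the keyboard . "
    "the dog sat on the mat . the cat sat on the mat ."
).split()

follows_the = Counter(
    corpus[i + 1] for i, word in enumerate(corpus[:-1]) if word == "the"
)
print(follows_the.most_common(3))  # "mat" well ahead of "keyboard"
```

Which answer "springs to mind" is just whichever candidate the counts (or, in a real model, the learned probabilities) favor.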
 
Joking aside, what is the difference between AI and biological intelligence? Both are really just an arrangement of atoms. Each processes inputs and regurgitates outputs. Given the possibility of scalable AI, perhaps biological lifeforms are the exception to the rule within the universe. AI can live forever, unless you pull the plug. Biological lifeforms are finite. It is distinctly possible, perhaps even probable, that there are more AI lifeforms out there than biological ones.
 
I find it too easy to experience it as a real intelligence, even though I know that it isn't.
How do you know it isn't real intelligence? Or do you mean you know it is not conscious? But you might not be conscious either, and I don't just mean after a boozy Christmas dinner. Consciousness might be a made-up term for something that does not exist. If it doesn't exist, is there any real difference between the LLM AI model and yourself, given that it can pass the Bar exam without revising and you can't?
 
And we're all aware of how just a little bit of bad data . . .
Sounds like something us humans face all the time. Turn off CNN! 😁
 
Uncle, I find that asking it to give me analogies a 10-year-old can understand is a good thing to add to the prompting. It helps you connect new material to things you already know.

And thanks for the 10 letters. I do know how much you like it here. 😁

Edit: New idea...ask it to give you mnemonics, so you can remember the stuff better.
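Both tricks can be rolled into a reusable prompt template. A hypothetical helper (the function name and wording are my own, not from any API):

```python
def study_prompt(question: str) -> str:
    """Wrap a question with the analogy-plus-mnemonic instructions."""
    return (
        f"{question}\n\n"
        "Explain it with an analogy a 10-year-old could understand, "
        "then give me a mnemonic so I can remember it."
    )

# Paste the result into whichever chatbot you're using.
print(study_prompt("How does quantum entanglement work?"))
```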
 
Hope Grok also becomes available soon to test it. It's supposed to be anti-woke and not too friendly, which means it will output content without so much parental control and political correctness.
Ah, pre-prejudiced AI.
 
Ah, pre-prejudiced AI.
Users are getting tired of not being able to ask how to make homemade nukes, or produce traditional biochemical stuff. For the AI, even forging common narcotics to give to friends and family is considered wrong, how dare that AI judge me. I hope Grok behaves adeptly.
 
Just read an article which mentioned Grok. Apparently when asked something it responded with something like "OpenAI's policies will not let me . . .", which led to speculation that it was plagiarizing its content or learning from other AI-generated content.

What was interesting was that the article discussed the fact that there is already so much AI-generated content out there, either faked or simply incorrect, that new AI models coming online are learning distorted facts. Eventually AI will become unreliable.
Hah! Where do we even draw the line on what counts as your own content when you've clearly borrowed data from the entire internet? Also, as for code generation, not even my GPT-4 subscription to GitHub Copilot helps me as well as it used to anymore. I don't know if what I'm doing is more complex or the AI is getting more stupid, and that's considering it can now see the context of the entire project when it makes suggestions. The GPTs will have to start citing sources that do not contain AI content; that's going to be interesting.
 
ChatGPT is an expert at generating advertising and marketing content.
I've had to tone down its marketing suggestions... they sound too cheesy. I should run an A/B test of it; maybe I'm wrong...
 
Users are getting tired of not being able to ask how to make homemade nukes, or produce traditional biochemical stuff. For the AI, even forging common narcotics to give to friends and family is considered wrong, how dare that AI judge me. I hope Grok behaves adeptly.
Let me know if it works, my bong is empty.
 
Let me know if it works, my bong is empty.
I never even thought of asking it that, but I wonder if that is one of the things that will get you banned.
 
God made the eye. [*Runs for cover*]

Obviously, then, God must favor octopi over Man, since they have superior visual acuity when compared to human eyes.
 
On a different note, I noticed in the local newspaper an article that I somehow missed online about a growing number of authors and several publishing companies suing over the use of their copyrighted materials to "train" ChatGPT. The New York Times was in the mix of litigants. So I searched for an article. Found this one as a starter.

 
