I'm in Love with ChatGPT

Agreed, ultimately those should be more hardcore and specialized for coding. ChatGPT has a nice personality, though - ya know, it's all about the personality 😆
 
This AI story has something special!

"A Democracy of Ghosts" what a wonderful description of an LLM!

 
This is amazing! I am looking into using the OpenAI API for a project I'm working on, and I discovered that the OpenAI LLM can actually call tools. For example, it could choose a piece of Python code in your project and run it!

That Python code could make another call to the OpenAI API, and if you're not careful you could set up a recursion!

I reckon the LLM could create a Python script, save it onto your hard drive, and then run it!

That's what I'm looking into now...
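For anyone curious, tool calling with the current OpenAI Python SDK looks roughly like the sketch below. The model name and the run_python tool are placeholders of my own, not anything built into the API, and executing model-generated code is exactly the recursion/safety risk mentioned above, so you'd want to sandbox it before doing this for real.

```python
import contextlib
import io
import json

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Describe a tool the model is allowed to request. The name and schema here
# are purely illustrative.
tools = [{
    "type": "function",
    "function": {
        "name": "run_python",
        "description": "Run a short Python snippet and return its printed output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

def run_python(code: str) -> str:
    # WARNING: executing model-generated code is the recursion/safety risk
    # described above -- sandbox or review it before running it for real.
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})
    return buffer.getvalue()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": "Print the first 10 square numbers."}],
    tools=tools,
)

message = response.choices[0].message
if message.tool_calls:  # the model chose to call our tool rather than answer directly
    call = message.tool_calls[0]
    arguments = json.loads(call.function.arguments)
    print(run_python(arguments["code"]))
else:
    print(message.content)
```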
 
I’ve been writing Python code to interface directly with the OpenAI API, accessing GPT-4 without using the standard ChatGPT interface. I’ve built my own system based on the Nifty Transcription Tool, but I’ve noticed that the AI interacts with me differently depending on how I access it.

When I use the ChatGPT web interface, it feels as if I’m engaging with a real person—there’s something personable about it. I wonder if the AI is specifically instructed in the nuances of user interaction to make it more engaging. However, when I interact directly through the OpenAI API, the level of personal engagement seems reduced. It’s still present to some degree, but the experience feels noticeably different.
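For comparison, a bare API call like the sketch below carries no persona instructions at all, whereas the ChatGPT web app is generally understood to wrap your messages in its own system prompt. This is only a rough sketch (the model name is a placeholder), but adding a system message of your own is an easy way to test whether that accounts for the difference.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

reply = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model your key can access
    messages=[
        # The web app is believed to prepend instructions along these lines;
        # a raw API call has none unless you add them yourself.
        {"role": "system",
         "content": "Be warm and conversational, and ask follow-up questions."},
        {"role": "user",
         "content": "I'm building a transcription tool. Where should I start?"},
    ],
)
print(reply.choices[0].message.content)
```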
 
That's interesting. I also find it very personable, especially when I give any indication that I want it to be. It always asks me follow-up questions and basically never stops talking until I stop.
Whereas maybe the API is the way it is for good reason, as it expects to be used less frivolously.
 
I saw this today and thought I'd share it.
Apparently Google AI Overview has been making an idiot of itself again, with users feeding it made-up sayings and sniggering when it tries to explain them.

So this guy went to ChatGPT and asked: "You cannot stroke a kestrel when watching Bargain Hunt."
As part of a lengthy response, the robot replied: "Some things in life require your full attention... you cannot expect to relax and handle something wild or challenging at the same time."

Their second was: "The longer the whiskers, the smaller the pie."
To which it concluded, after expounding a tirade of statistical theory, that it was "a metaphor for decision-taking and risk".

No doubt similar results would come pouring out if other total nonsense were entered, which to my mind is a cause for concern. It does appear that politicians and business leaders with little knowledge of software see AI as new knowledge emanating from superhuman consultants. They will be entering totally new questions about situations that have never arisen in the past, which is much the same as entering nonsense, and the resulting advice will probably end up being totally accepted and relied upon, at all times and without question.

Which, as we know, would be crazy. If another example of IT ignorance by politicians were needed, look no further than the Welsh government, who are proposing to use AI to re-value every property for Council Tax. To decide that, all that is needed is maybe a dozen decisions, something easily done in the most trivial piece of software created by a sixteen-year-old.
I just wonder what the result would be if they asked the AI: "Is AI the only way to revalue all Welsh properties for Council Tax, or would Python or Access suffice?"
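For what it's worth, the banding step the post alludes to really is just a handful of comparisons, something like the sketch below. The thresholds are illustrative figures only, not necessarily the official Welsh bands, and of course estimating the underlying property values is the genuinely hard part.

```python
def council_tax_band(property_value: float) -> str:
    # Each tuple is (upper limit in GBP, band). Figures are illustrative only;
    # check the official Welsh bands before relying on them.
    thresholds = [
        (44_000, "A"), (65_000, "B"), (91_000, "C"), (123_000, "D"),
        (162_000, "E"), (223_000, "F"), (324_000, "G"), (424_000, "H"),
    ]
    for upper, band in thresholds:
        if property_value <= upper:
            return band
    return "I"  # everything above the top threshold

print(council_tax_band(150_000))  # -> E
```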

It is bad enough that we are dictated to by simple-minded and incompetent politicians, but when they receive AI 'wisdom' they will no doubt slavishly follow the AI advice, with disaster the only result. Of course, an AI that fails would be a good excuse, because they can fall back on the claim that the AI system caused the problem, nothing to do with them. Which is probably where the global warming theory began.
 

That is actually really interesting. So basically the AI made something up, or it approached the problem as, "okay, I can't really find that saying anywhere online, so instead I'll try to interpret it directly", so to speak.

It's a good example of how hard a person has to work at giving the right prompt and context. In that situation the prompt would have had to include something along the lines of, "only answer this question if you can really find information about the quote".

Sometimes I feel the examples I see of bad AI responses mean the prompts have to get better and better, so we're actually doing more and more work feeding good information in to get good output. I think the amount of quality the prompter has to put into the prompt is hugely misunderstood and understated by most users.

And in the case of the example you gave, a continued dialogue, which sometimes saves the day for bad output, wouldn't really have helped, because the user would never know that the saying didn't actually exist in the first place.
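Concretely, the guarded prompt suggested above might look something like this through the API. It's a rough sketch with a placeholder model name, and the guard lives entirely in the prompt, so it doesn't guarantee the model won't still invent an explanation.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

guarded = client.chat.completions.create(
    model="gpt-4o",  # placeholder; substitute whichever model you use
    messages=[
        {"role": "system",
         "content": ("Only explain a saying if you are confident it is a real, "
                     "documented expression. If you cannot verify it, reply "
                     "exactly: 'I don't recognise that saying.'")},
        {"role": "user",
         "content": "Explain: 'You cannot stroke a kestrel when watching Bargain Hunt.'"},
    ],
)
print(guarded.choices[0].message.content)
```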
 
Re:- That is actually really interesting. So basically the AI made something up.....

The AIs learn everything they know from reading what humans have written on the internet....
 
But in this case wasn't the AI fed something that didn't exist on the internet?
 
What I meant was people make things up all the time on the internet and the AI is just copying that human trait...
 
The AIs learn everything they know from reading what humans have written on the internet
I know your info on AI is very good, and better than most of ours, but just for clarity, it's not that the AI (at least ChatGPT) reads every page and repeats the false info on the millions of pages it has already scanned.
Here's part of my conversation with ChatGPT before I started using it seriously:

1. How AI "learns"
AI (especially models like me) doesn’t just "scan the web" in real time. During training, engineers gather a huge amount of text from books, articles, websites, conversations, code, and more. The AI model is then trained to predict the next word in a sentence, over and over — like a super advanced autocomplete.
It’s not "reading" like a human does; it's statistically learning patterns: how words, facts, styles, and ideas usually connect.

2. Is it only web pages?
Nope — it's a mix.
  • Public web pages (some parts of the internet)
  • Academic papers
  • Books (lots)
  • Licensed data (things the creators paid for)
  • Open datasets (e.g., Wikipedia, open codebases like GitHub)
Training isn't just random; engineers carefully curate much of the data to make it higher quality.
3. How does it know what’s true or false?
At the core: it doesn't truly "know."
During training, AI learns what sounds correct because it's been written that way often across trusted sources. It picks up patterns like "most good answers about math come from official educational websites," or "scientific facts usually look like this."

Later, researchers fine-tune the model using:
  • Human feedback (humans rank AI responses as good or bad)
  • Special datasets where correct answers are highlighted
  • Reinforcement learning to reward truthfulness and punish falsehoods
AI learns from huge piles of text → remembers patterns of good answers → tries to guess what is most likely to be "correct" based on those patterns.
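The "super advanced autocomplete" idea can be shown with a toy next-word predictor: count which word tends to follow which in a small corpus, then keep picking the likeliest continuation. Real LLMs learn vastly richer patterns with neural networks, but the "most likely next token" principle is the same. A minimal sketch:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the crudest possible "pattern learning".
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def autocomplete(word: str, steps: int = 4) -> str:
    """Keep appending the statistically most likely next word."""
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # -> "the cat sat on the"
```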
 

And don't forget that some of what it OUTPUTS ends up, by a million little streams, becoming new fodder for INPUT, thus degrading its performance over time, all else being equal, as the model becomes inbred. This has already begun to happen with image-generating AI.
 
So it begs the question:
"Is AI a bit like someone with a brilliant memory and instant recall, but little intelligence and/or capacity for originality?"

Basically, if something arises which has never been experienced before, do they both just tell a good tale?
 
Does AI, which beats the vast majority of humans on most exam benchmarks and IQ tests, and all humans on knowledge, lack intelligence? We humans are always making stupid mistakes, yet we give ourselves a pass.

In 10 years time...

Human: "Those supposed super-intelligent AI's lack intelligence and just regugitate patterns. I know they solved death, energy, suffering, but they can't think!"

AI: "Those humans, they still don't get it. They just do the same, but worse."

Human: "Alexa, what's the weather today, do my accounts, 3D print me a dinner, give me a medical report, create me a TV episode, and solve the meaning of life."

For those who are interested, psychologists measure intelligence using IQ, and IQ is composed of "fluid intelligence" and "crystallised intelligence". The latter is essentially what you know, so memory does come into it.

Edit: I argued with someone who had a PhD in neural networks about whether or not AIs just regurgitate information, which was his position. My argument was that so do humans, just using wetware, our biological substrate, instead of silicon. Perhaps each person has a philosophical take on what intelligence actually means, with some believing in a soul and some higher, more spiritual aspect to it.

Edit2:

Just a potential insight...

Zoomed in view: It just regurgitates, as you understand how it works, kinda. It is not thinking, just statistical patterns. You look at how each word is predicted and you see how it works.

Zoomed out view: It thinks. The aggregation of all these tiny statistical patterns is what thinking actually is.

Or, if I want to zoom in to the atomic scale, thinking is just cause and effect plus patterns, physics working at the most basic level, be that classical or quantum. In fact, there is an analogous situation going on here. Classical physics is an aggregation of the statistical likelihoods at the quantum level. We think an electron is at position x in classical physics, but that is based on a probability distribution at the quantum level. This is similar to the zoomed-in and zoomed-out argument I just made above. Bit waffly, but I hope you got the gist.
 
Human: "Those supposed super-intelligent AI's lack intelligence and just regugitate patterns. I know they solved death, energy, suffering, but they can't think!"
I wish you had also mentioned solving medical problems.

 
We are doing ChatGPT wrong!

You need to operate it from the Bath Tub!!!


Edit:- At time index 5 minutes: two or three days of paperwork to approve carpet laying, reduced to a matter of hours with AI....


See time index 7 minutes... treat your AI as a companion and a teammate, and your productivity will soar....

At time index 8 minutes 30 seconds: use AI to coach you through a difficult meeting scenario so that you are prepared!
 

This probably isn't the type of engagement you were looking for from your last post, but it stood out to me that you mentioned paperwork about laying carpet. In the UK, do you have to get approval from your neighborhood or government for something as simple as a totally interior repair?
 
