I'm in Love with ChatGPT

probably due to wishful thinking
People are against AI just because they can't accept the changes. Those who know what's going on know for sure how tomorrow will look.


I agree, but I think there will be plenty of zig-zagging along the way: from failure to understand the human element involved in AI, from accidents where AI's output is degraded, and from people hurrying to trust something they don't understand. It will be a two-steps-forward, one-step-back kind of evolution, with plenty of potholes along the road.
 
First off, you need a proper AI platform for coding (GitHub Copilot, Code Llama, StarCoder, etc.). You can't really expect a generic AI platform like ChatGPT to do well at everything.
 
Agreed; ultimately those should be more hardcore and specialized for coding. ChatGPT has a nice personality, though - ya know, it's all about the personality 😆
 
I've been writing Python code to interface directly with the OpenAI API, accessing GPT-4 without using the standard interface. I've built my own system based on the Nifty Transcription Tool, but I've noticed that the AI interacts with me differently depending on how I access it.

When I use the ChatGPT web interface, it feels as if I’m engaging with a real person—there’s something personable about it. I wonder if the AI is specifically instructed in the nuances of user interaction to make it more engaging. However, when I interact directly through the OpenAI API, the level of personal engagement seems reduced. It’s still present to some degree, but the experience feels noticeably different.
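For anyone curious, here's a minimal sketch of the kind of direct API call I mean, assuming the official openai Python package (v1+); the model name and messages are illustrative, not my actual transcription setup:

```python
# Minimal sketch of calling the OpenAI API directly (illustrative values).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        # The web UI ships its own hidden system prompt; over the raw API
        # you supply your own, which may be one reason the "personality"
        # feels different between the two.
        {"role": "system", "content": "You are a friendly, conversational assistant."},
        {"role": "user", "content": "Summarise this meeting transcript for me..."},
    ],
)
print(response.choices[0].message.content)
```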
That's interesting. I also find it very personable, especially when I give any indication that I wish it so. It always asks me follow-up questions and basically never stops talking until I stop.
Whereas maybe the API is the way it is for good reason, as it expects to be used less frivolously.
 
I saw this today and thought I'd share it.
Apparently Google AI Overview has been making an idiot of itself again, with users feeding it false sayings and sniggering when it tries to explain them.

So this guy went to ChatGPT and asked: "You cannot stroke a kestrel when watching Bargain Hunt."
As part of a lengthy response, the robot replied: "Some things in life require your full attention... you cannot expect to relax and handle something wild or challenging at the same time."

Their second was: "The longer the whiskers, the smaller the pie."
To which it concluded, after expounding a tirade of statistical theory, that it was "a metaphor for decision-taking and risk".

No doubt similar results would come pouring out if other total nonsense were entered, which to my mind is a cause for concern. It does appear that politicians and business leaders with little knowledge of software see AI as new knowledge emanating from superhuman consultants. They will be entering totally new questions about situations that have never arisen in the past, which is much like entering nonsense. The resulting advice will then probably end up being totally accepted and relied upon, at all times and without question.

Which, as we know, would be crazy. If another example of politicians' IT ignorance were needed, look no further than the Welsh government, which is proposing to use AI to re-value every property for Council Tax. Deciding that needs maybe a dozen rules, something easily done in the most trivial piece of software, written by a sixteen-year-old.
I just wonder what the result would be if they asked the AI: "Is AI the only way to revalue all Welsh properties for Council Tax, or would Python or Access suffice?"

It is bad enough that we are dictated to by simple-minded and incompetent politicians, but when they receive AI 'wisdom' they will no doubt slavishly follow it, with disaster the only possible result. Of course, a failing AI would make a convenient scapegoat: they can claim the AI system caused the problem, nothing to do with them. Which is probably where the global warming theory began.
 

That is actually really interesting. So basically the AI either made something up, or it approached the problem as, "okay, I can't really find that saying anywhere online, so instead I'll try to interpret it directly", so to speak.

It's a good example of how hard a person has to work at giving the right prompt and context. In that situation, the prompt would have had to include something along the lines of, "only answer this question if you can really find information about the quote" - see the sketch below.
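A rough sketch of that kind of guarded prompt (the guard wording and model name are just illustrative, reusing the same official openai package as the API example earlier in the thread):

```python
# Illustrative only: a "guarded" prompt that asks the model to refuse
# rather than invent an interpretation for an unverifiable saying.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

guard = (
    "Only explain a saying if you can verify it is a real, documented "
    "saying. If you cannot, say that you cannot find it instead of "
    "inventing an interpretation."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": guard},
        {"role": "user", "content": "Explain: 'The longer the whiskers, the smaller the pie.'"},
    ],
)
print(response.choices[0].message.content)
```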

Sometimes I feel the examples I see of bad AI responses mean the prompts have to get better and better, so we're actually doing more and more work feeding good information in to get good output. I think the level of quality required of the prompter is badly misunderstood and understated by most users.

And in the case of the example you gave, a continued dialogue, which sometimes saves the day for bad output, wouldn't really have helped, because the user would never discover that the saying didn't actually exist in the first place.
 
Re:- That is actually really interesting. So basically the AI made something up.....

The AIs learn everything they know from reading what humans have written on the internet...
But in this case, wasn't the AI fed something that didn't exist on the internet?
 
The AIs learn everything they know from reading what humans have written on the internet
I know your info on AI is very good and better than most of ours, but just for clarity: it's not that AI (at least ChatGPT) reads every page and repeats the false info from the millions of pages it has already scanned.
Here's part of my conversation with ChatGPT before I started using it seriously:

1. How AI "learns"
AI (especially models like me) doesn’t just "scan the web" in real time. During training, engineers gather a huge amount of text from books, articles, websites, conversations, code, and more. The AI model is then trained to predict the next word in a sentence, over and over — like a super advanced autocomplete.
It’s not "reading" like a human does; it's statistically learning patterns: how words, facts, styles, and ideas usually connect.

2. Is it only web pages?

Nope — it's a mix.
  • Public web pages (some parts of the internet)
  • Academic papers
  • Books (lots)
  • Licensed data (things the creators paid for)
  • Open datasets (e.g., Wikipedia, open codebases like GitHub)
    Training isn't just random; engineers carefully curate much of the data to make it higher quality.
3. How does it know what’s true or false?
At the core: it doesn't truly "know."
During training, AI learns what sounds correct because it's been written that way often across trusted sources. It picks up patterns like "most good answers about math come from official educational websites," or "scientific facts usually look like this."

Later, researchers fine-tune the model using:
  • Human feedback (humans rank AI responses as good or bad)
  • Special datasets where correct answers are highlighted
  • Reinforcement learning to reward truthfulness and punish falsehoods
AI learns from huge piles of text → remembers patterns of good answers → tries to guess what is most likely to be "correct" based on those patterns.
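To make the "human feedback" bullet concrete in a toy way (illustrative only; real reward models are neural networks, this just shows the pairwise ranking idea):

```python
# Illustrative only: a Bradley-Terry style pairwise loss, the core idea
# behind training a reward model from human preference rankings. The loss
# is small when the human-preferred answer already scores higher.
import math

def pairwise_loss(score_preferred: float, score_rejected: float) -> float:
    """Negative log-probability that the preferred answer wins."""
    margin = score_preferred - score_rejected
    return -math.log(1 / (1 + math.exp(-margin)))

print(pairwise_loss(2.0, 0.5))  # ~0.20: ranking agrees with the human label
print(pairwise_loss(0.5, 2.0))  # ~1.70: ranking disagrees, so training pushes back
```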
 

And don't forget that some of what it OUTPUTS ends up, by a million little streams, becoming new fodder for INPUT - thus degrading its performance over time, all else being equal, as it becomes inbred. This has already begun to happen to image-generating AI. A toy way to see why is sketched below.
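Purely illustrative (nothing like a real model): retrain a simple word-frequency "model" on its own samples and watch the vocabulary shrink, because a word that misses one generation's sample can never come back:

```python
# Toy sketch of the feedback loop: a "model" that just estimates word
# frequencies is repeatedly retrained on its own output. Once a rare word
# fails to appear in a sample, its probability is zero forever, so
# diversity can only shrink generation by generation.
import random
from collections import Counter

vocab_probs = {f"word{i}": 1 / 50 for i in range(50)}  # 50 words, uniform start

for generation in range(1, 11):
    words, probs = list(vocab_probs), list(vocab_probs.values())
    sample = random.choices(words, weights=probs, k=100)   # the model's "output"
    counts = Counter(sample)
    vocab_probs = {w: c / 100 for w, c in counts.items()}  # retrain on that output
    print(f"generation {generation}: {len(vocab_probs)} distinct words left")
```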
 
So it raises the question:
"is AI a bit like someone with a brilliant memory and instant recall, but little intelligence and/or capacity for originality?"

Basically if something arises which has never been experienced before, do they both just tell a good tale?
 
So it raises the question:
"is AI a bit like someone with a brilliant memory and instant recall, but little intelligence and/or capacity for originality?"
Does AI, which beats the vast majority of humans on most exam benchmarks and IQ tests, and beats all humans on knowledge, lack intelligence? We humans are always making stupid mistakes, yet we give ourselves a pass.

In 10 years time...

Human: "Those supposed super-intelligent AI's lack intelligence and just regugitate patterns. I know they solved death, energy, suffering, but they can't think!"

AI: "Those humans, they still don't get it. They just do the same, but worse."

Human: "Alexa, what's the weather today, do my accounts, 3D print me a dinner, give me a medical report, create me a TV episode, and solve the meaning of life."

For those who are interested, psychologists measure intelligence using IQ, and IQ is composed of "fluid intelligence" and "crystallised intelligence". The latter is essentially what you know, so memory does come into it.

Edit: I argued with someone who had a PhD in neural networks about whether or not AIs just regurgitate information, which was his position. My argument was that so do humans, just using wetware, our biological substrate, instead of silicon. Perhaps each person has a philosophical take on what intelligence actually means, with some believing in a soul or some higher, more spiritual aspect to it.

Edit2:

Just a potential insight...

Zoomed in view: It just regurgitates, as you understand how it works, kinda. It is not thinking, just statistical patterns. You look at how each word is predicted and you see how it works.

Zoomed out view: It thinks. The aggregation of all these tiny statistical patterns is what thinking actually is.

Or, if I want to zoom in to the atomic scale, thinking is just cause and effect plus patterns: physics working at the most basic level, be that classical or quantum. In fact, there is an analogous situation going on here. Classical physics is an aggregation of the statistical likelihoods at the quantum level. We think an electron is at position x in classical physics, but that is based on a probability distribution at the quantum level. This is similar to the zoomed-in and zoomed-out argument I just made above. Bit waffly, but I hope you got the gist.
 
We are doing ChatGPT wrong!

You need to operate it from the Bath Tub!!!


Edit:- Time index 5 minutes: two or three days of paperwork to approve carpet laying, reduced to a matter of hours with AI....


See time index 7 minutes: treat your AI as a companion and a teammate, and your production will soar....

At time index 8 minutes 30 seconds, use AI to coach you through a difficult meeting scenario so that you are prepared!

This probably isn't the type of engagement you are looking for from your last post, but it stood out to me that you mentioned paperwork about laying carpet. In the UK do you have to get approval from your neighborhood or government for things as simple as a totally interior repair??
 
This probably isn't the type of engagement you are looking for from your last post, but it stood out to me that you mentioned paperwork about laying carpet. In the UK do you have to get approval from your neighborhood or government for things as simple as a totally interior repair??
You do for certain types of building.
Here we have Grade 1 and Grade 2 designations that are allocated to older buildings, or to buildings in particular locations like National Parks. In some cases even paint colours, inside and out, can be dictated.

Making unauthorised changes, inside or out, can result in out-of-proportion fines. Personally, I'd never consider a graded property, because work has to be completed in particular ways, or to specifications that don't totally make sense. Plus the type of people providing approval would irritate me and are simply not people I'd want to meet (nor them me, I suppose).

A couple of for-instances: you must replace all existing doors and windows exactly as original, so even if it would be beneficial, you cannot replace wood with plastic double-glazing, even when the two are difficult to tell apart. Modern plaster cannot be used; it must be the old lime-based material. Some tradesmen specialise in Grade 1 and 2 work and clearly charge a huge premium. A friend of mine bought a derelict building out in a national park with excellent views. He renovated it totally, then near the end mentioned that he'd like to drop the window ledge in the lounge to be able to see out when sat down. That was refused because "it wasn't his view". That's unfortunately a typical example of the people in charge. Avarice plays a big part in their decisions.
 
Well my initial reaction was no... But then I thought, hold on, there are certain building regs which have to be adhered to....

It's not like someone would come knocking on your door saying, "stop that, you're breaking the law!" It's more that a buyer can't get a mortgage on your house, making it hard to sell, because you have not followed the approved methods....

For instance, some people have had lofts insulated with spray-foam insulation and are now unable to get a mortgage, because the foam covers up a multitude of sins, making it difficult to inspect the structure for rot etc....
Very interesting.
Is it also true that the UK is one of those countries - or am I thinking of another one - where you can "take your mortgage loan with you" when you sell a house and buy a new one? When I read that, it sounded like the best system in the world (well, for everyone except the mortgage lender).
 
Depends on the lender; some offer the facility and some don't. If they do, the new property needs to meet or exceed the original requirements with regard to type of property (i.e. it meets their survey requirements) and loan-to-value ratio, and clearly you still need a good credit rating and sufficient income to keep paying the mortgage. I also suspect market conditions will come into play: if you are on a very low interest rate, and given that current rates are maybe five times higher than they were five years ago, they may not honour the lower rate - but at least you have a mortgage to proceed with.
 
