I'm in Love with ChatGPT

If you like your AI stuff, I've been watching the following podcaster. There is a caveat, though: it is an avatar of a podcaster. I didn't realise that until much later on! The realism is so good that it is very hard to tell most of the time. She also talks about someone who used ChatGPT to get answers about a cancer treatment that was costing them over $100K through the usual medical route.

 
I'm predicting the next money-making wave after watching that AI-generated video: an OnlyFans AI!! An AI that will cater for anyone's needs!!!
I never understood the OnlyFans phenomenon, and I understand AI-generated OnlyFans even less. Who pays for this? I mean, isn't every conceivable fetish already available online for free?
 
The way ChatGPT 'writes' its answers in a sort of live, as-you-go way has me pondering.

Do you think they did that because it looks cool and/or because of the way it makes you feel, rather than out of any real need for it?

If so, I wonder if that's the best approach. It can take too much time when you're getting a long response, and I suspect they did it because it looks cool.

I have to confess, I once created some Excel automation for someone with a long log of the outcome. Instead of writing it to a text file, I saved the outcome in a string variable and then used SendKeys to type it into Notepad. It had a "wave" visual effect that I liked, and for no other reason. It was a small temporary project and I was just having fun. I think ChatGPT is too. Surely the answers don't come to it slowly like that internally?
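For what it's worth, LLMs do generate their output one token at a time, so streaming the text as it is produced reduces perceived latency rather than being purely cosmetic. The typewriter effect itself is trivial to reproduce locally; here is a minimal sketch (the token list and delay are made up for illustration, no API involved):

```python
import sys
import time

def stream_tokens(tokens, delay=0.0):
    """Print tokens one at a time, flushing after each write,
    mimicking the chat UI's typewriter effect. Returns the full text."""
    pieces = []
    for tok in tokens:
        sys.stdout.write(tok)
        sys.stdout.flush()   # flush so each token appears immediately
        time.sleep(delay)    # artificial pause between tokens
        pieces.append(tok)
    return "".join(pieces)

# Example: a pretend stream of tokens, printed as they "arrive".
stream_tokens(["Hel", "lo", ", world"], delay=0.01)
```

The same pattern (write, flush, short pause) is essentially what the SendKeys trick above was doing by hand.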
 
AI Failing Coders?
I completely disagree. The rate of improvement in AI coding is dramatic. Sam Altman predicts AI will be the best coder in the world by the end of this year, and their internal model is reportedly already ranked about 100th in the world on coding benchmarks. My belief is that people are simply underestimating what is going to happen, probably due to wishful thinking.

AI is making coders far more productive. If that is the case, you need fewer coders to do the same amount of work. It's not rocket science. The better AI gets at coding, the fewer coders you need, until you hardly need any. It is ironic that coders are designing the systems that will lead to their own downfall.
 
probably due to wishful thinking
People are against AI just because they can't accept the changes. Those who understand what's happening can see clearly where tomorrow is heading.


 
I agree, but I think there will be plenty of zig-zagging along the way: from failures to understand the human element involved in AI, from accidents where AI's output is degraded, and from people hurrying to trust something they don't understand. It will be a two-steps-forward, one-step-back kind of evolution, with plenty of potholes along the road.
 
First off, you need a proper AI coding platform (GitHub Copilot, Code Llama, StarCoder, etc.). You can't really expect a generic AI platform like ChatGPT to do well at everything.
 
Agreed, ultimately those should be more hardcore and specialized for coding. ChatGPT has a nice personality, though - ya know, it's all about the personality 😆
 
This AI story has something special!

"A Democracy of Ghosts" what a wonderful description of an LLM!

 
This is amazing! I'm looking into using the OpenAI API for a project I'm working on, and I've discovered that the OpenAI LLM can actually call tools. For example, it could choose a piece of Python code in your project and have it run!

That Python code could make another call to the OpenAI API, and if you're not careful you could set up a recursion!

I reckon the LLM could create a Python script, save it to your hard drive, and then run it!

That's what I'm looking into now...
 
I’ve been writing Python code to interface directly with the OpenAI API, accessing ChatGPT-4 without using the standard interface. I’ve built my own system based on the Nifty Transcription Tool, but I’ve noticed that the AI interacts with me differently depending on how I access it.

When I use the ChatGPT web interface, it feels as if I’m engaging with a real person—there’s something personable about it. I wonder if the AI is specifically instructed in the nuances of user interaction to make it more engaging. However, when I interact directly through the OpenAI API, the level of personal engagement seems reduced. It’s still present to some degree, but the experience feels noticeably different.
 
Re:- I’ve been writing Python code to interface directly with the OpenAI API.....
That's interesting. I also find it very personable, especially when I give any indication that I want it to be. It always asks me follow-up questions and basically never stops talking until I stop.
Whereas maybe the API is the way it is for good reason, since it expects to be used less frivolously.
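One likely explanation for the difference described above: the web app ships with its own system prompt, while the raw API sends only what you put in the request, so you can restore the personable tone by supplying a system message yourself. A sketch of the request body (the model name and the wording of the instruction are placeholders, and no network call is made here):

```python
# Build a chat-completions style payload with an explicit system message.
# Sending this via the API client is the part omitted here.
payload = {
    "model": "gpt-4",  # placeholder model name
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a friendly, conversational assistant. "
                "Ask follow-up questions and keep the conversation going."
            ),
        },
        {"role": "user", "content": "Help me transcribe this recording."},
    ],
}
```

With no system message at all, the model falls back on much more neutral default behaviour, which would account for the flatter tone over the API.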
 
I saw this today and thought I'd share it.
Apparently Google AI Overview has been making an idiot of itself again, with users feeding it false sayings and sniggering when it tries to explain them.

So this guy went to ChatGPT and asked: "You cannot stroke a kestrel when watching Bargain Hunt"
As part of a lengthy response, the robot replied: "Some things in life require your full attention.......you cannot expect to relax and handle something wild or challenging at the same time"

Their second was: "The longer the whiskers, the smaller the pie"
To which it concluded, after expounding a tirade of statistical theory, that it was "a metaphor for decision-taking and risk".

No doubt similar results would come pouring out if other total nonsense were entered, which to my mind is a cause for concern. It does appear that politicians and business leaders with little knowledge of software see AI as new knowledge emanating from superhuman consultants. They will be entering totally new questions about situations that have never arisen before, which is much like entering nonsense, and the resulting advice will probably end up being totally accepted and relied upon, at all times and without question.

Which, as we know, would be crazy. As another example, if one were needed, of IT ignorance among politicians, look no further than the Welsh government. They are proposing to use AI to re-value every property for Council Tax. Deciding that needs maybe a dozen rules, something easily done in the most trivial piece of software, created by a sixteen-year-old.
I just wonder what the result would be if they asked AI: "Is AI the only way to revalue all Welsh properties for council tax? Or would Python, or Access, suffice?"

It is bad enough that we are dictated to by simple-minded and incompetent politicians, but when they receive AI 'wisdom' they will no doubt slavishly follow it, with disaster as the only result. Of course, an AI that fails makes a good excuse: they can claim the AI system caused the problem, nothing to do with them. Which is probably where the global warming theory began.
 
Re:- Apparently Google AI Overview has been making an idiot of itself again. With users feeding it false sayings.....

That is actually really interesting. So basically the AI made something up, or it approached the problem as, "okay, I can't really find that saying anywhere online, so instead I'll try to interpret it directly", so to speak.

It's a good example of how hard a person has to work at giving the right prompt and context. In that situation, the prompt would have had to include something along the lines of, "only answer this question if you can really find information about the quote".
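That kind of guard is easy to bolt onto every request programmatically rather than retyping it. A tiny sketch of the idea (the exact wording of the instruction is just illustrative, and whether the model obeys it still depends on the model):

```python
def guarded_prompt(saying: str) -> str:
    """Wrap a saying in an instruction telling the model to refuse
    rather than invent an interpretation for phrases it can't identify."""
    return (
        "Only explain the following saying if it is a real, documented "
        "phrase you can identify. If you do not recognise it, say so "
        "plainly instead of interpreting it.\n\n"
        f"Saying: {saying}"
    )

print(guarded_prompt("The longer the whiskers, the smaller the pie"))
```

This doesn't guarantee the model won't confabulate, but it gives it an explicit licence to say "I don't recognise that", which the bare question never did.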

Sometimes I feel the examples I see of bad AI responses mean the prompts have to get better and better, so we're actually doing more and more work feeding good information in to get good output. I think how much quality the prompt itself requires is badly misunderstood and understated by most users.

And in the case of the example you gave, a continued dialogue, which sometimes saves the day for bad output, wouldn't really have helped, because the user would never know that the saying didn't actually exist.
 
Re:- That is actually really interesting. So basically the AI made something up.....

The AIs learn everything they know from reading what humans have written on the internet....
 
