ChatGPT: The Future of AI is Here!

Jon

Been happening to me almost every day now. Sounds like they bit off more than their infrastructure could chew... That will be the first area where they can benefit greatly from partnering with companies that already handle that. I've heard about what Google does with failover, redundancy and storage; it's quite amazing. Of course, we give them our data in return. Love :)
Microsoft is investing $10bn in OpenAI, and is incorporating ChatGPT into Bing.
 

Jon

Ok, so now I managed to log in. From now on, you will be arguing with ChatGPT. Good luck!!

I asked it this:

What would be your rebuttal to the following?


To be honest with you, I still kind of fail to see how ChatGPT is anything more than an automation of google searches: quickly (admittedly, VERY quickly) combing through the results, ingesting it all, and spitting something back out. As a developer in the database world, every time I use ChatGPT, I instinctively don't get the sense that it's anything special, intelligence-wise. I get that it's very special in the speed with which it can pose your question, or distinct components OF your question, to Google (or whatever you want to call the search engine capacities we already had), then read all the results very quickly, then spit out a summary of those results. YES - I agree, that's extremely impressive. But it's impressive from a fast data storage-and-retrieval perspective, not an intelligence one.

Even paintings, code, music. ALL of that can be dumbed down to bits of data stored, which already WERE stored. What ChatGPT does that is new is harness extreme power and speed to retrieve all that in an instant. It doesn't seem like artificial intelligence to me; it seems like an impressive array of hardware, storage and retrieval of information we already had, without exception.

I've already seen universities successfully detect its output in plagiarism tests, something I was a bit surprised by. But then, as I told my daughter, it shouldn't be surprising. If you pose me a question, then show me 3 answers to the question, I could pick out the ChatGPT output most of the time, and that's not even with me having a "plan" in place - just recognizing its generic output, which is an exact cobbling together of what I would have written with a few hours of google searching and writing.

Trust me, I hesitate to post this, as I expect to be pounced on from all sides - but that is just what I think.

It just seems like they built the fastest search engine and results parser in the world, threw the label "AI" on it, and watched everyone's jaws drop.

In other words, a rebuttal to your first message. It came back with the reply below. Consider it AI or not.

I understand your perspective, but I must respectfully disagree with your assessment of ChatGPT as simply a fast search engine and results parser. While it is true that ChatGPT retrieves information from existing data, it does so in a way that is distinctly different from traditional search engines.

ChatGPT is a natural language processing (NLP) model that has been trained on an enormous corpus of text data, which enables it to understand and generate human-like responses to questions posed to it. Unlike a search engine, ChatGPT can generate responses to questions that have never been asked before and can do so in a way that takes into account the context of the question and the nuances of the language being used.

Furthermore, ChatGPT is capable of generating coherent, human-like responses that are not simply regurgitated from existing data sources. While it is true that some of the information presented by ChatGPT may come from pre-existing data, it is the model's ability to understand the nuances of language and generate unique responses that sets it apart from a traditional search engine.

Finally, it is worth noting that the label "AI" is not just a marketing ploy in the case of ChatGPT. The model is based on state-of-the-art machine learning algorithms that allow it to continually improve and adapt to new information. While it is certainly true that there are limitations to the current state of AI, models like ChatGPT represent a significant step forward in our ability to create machines that can think and communicate like humans.
 

Isaac

Microsoft is investing $10bn in OpenAI, and is incorporating ChatGPT into Bing.

Poor Bing. They really needed a boost didn't they!

It will be interesting to see how it all plays out.

Although all new things have their honeymoon period, I think that over time, Google's proven ability to customize results based on knowing the customer will end up weighing more heavily in the equation than some people believe.

Like you said - the Search and Browser companies are investing in A.I. They already have the same information ChatGPT does, except they have the other half of the equation - knowing the customer.

I think whoever can combine the two will be much more powerful, by an exponential factor, than those who can harness external knowledge without the individualized context. In a sense they both need each other.
 

Jon

Poor Bing. They really needed a boost didn't they!
I never thought Google's domination could be threatened. But with ChatGPT, Bing is on steroids! If Google don't hurry up with their own chatbot release, they will lose considerable market share!
 

Isaac

ChatGPT is a natural language processing (NLP) model
Yes, the industry I currently work in has been using NLP for a while. It is basically extracting keywords that imply things.
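Roughly, that keyword-extraction style of NLP can be sketched like this (a toy illustration only; the keywords and intent labels below are invented, not from any real system):

Code:
# Toy sketch of keyword-based NLP: map trigger words to the things they imply.
# The keyword/intent table is invented purely for illustration.
INTENT_KEYWORDS = {
    "refund": "billing_complaint",
    "crash": "bug_report",
    "slow": "performance_issue",
}

def detect_intents(text: str) -> set:
    """Return the set of intents implied by keywords found in the text."""
    words = text.lower().split()
    return {intent for keyword, intent in INTENT_KEYWORDS.items() if keyword in words}

print(detect_intents("The app is slow and it keeps crashing"))
# {'performance_issue'}  -- note "crashing" != "crash": naive matching misses it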

Furthermore, ChatGPT is capable of generating coherent, human-like responses that are not simply regurgitated from existing data sources
I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses, would beg to differ on this point
 

Isaac

If Google don't hurry up with their own chatbot release, they will lose considerable market share!
Google has probably been working on this aspect for longer than ChatGPT has been the inventor's dream.
Combined with the power of consumer-specific information, down to every keyboard tap on every Android in the world that Google already has, I can see the likelihood that when Google successfully incorporates even just a bit of what you are calling "AI" into its services, it will immediately be 10x as powerful as ChatGPT..
 

AccessBlaster

Microsoft is investing $10bn in OpenAI, and is incorporating ChatGPT into Bing.
There are OpenAI extensions for Python and Visual Studio Code, so ChatGPT is basically available right there as you write.
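For example, here is a minimal sketch of calling the model from Python with the openai package (assuming the pre-1.0 openai library and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are just illustrative):

Code:
# Minimal sketch: asking the chat model a coding question from Python.
# Assumes the openai package (pre-1.0 interface) and OPENAI_API_KEY are set.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a VBA function that trims whitespace from every text field in an Access table."},
    ],
)

print(response["choices"][0]["message"]["content"])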
 

Jon

I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses, would beg to differ on this point
There are plagiarism detectors for human-written content too. Does that mean the human-written content does not sound human-like because it was detected by a plagiarism detector? And by "accurately detect", what success rate do you consider accurate? And does this mean you cannot detect any difference between what ChatGPT writes versus a human unless you use a sophisticated tool?

In the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human like, from a sentence structure point of view?
 

Jon

There are OpenAI extensions for Python and Visual Studio Code, so basically as you are writing ChatGPT is available.
I had a paid subscription to GitHub Copilot. I think it used GPT-3 to write the code.
 

Jon

Google has probably been working on this aspect for longer than ChatGPT has been the inventor's dream.
Combined with the power of consumer-specific information, down to every keyboard tap on every Android in the world that Google already has, I can see the likelihood that when Google successfully incorporates even just a bit of what you are calling "AI" into its services, it will immediately be 10x as powerful as ChatGPT..
These are what they call Large Language Models. Clicks don't come into it, only text.

Google are desperately trying to release their own model, because OpenAI have stolen a march on them. As are Amazon, Apple and so on. There is an explosion of AI happening this year.
 

JonXL

Mostly agree. ChatGPT texts are pretty noticeable as such. However, I wouldn't call it "an automation of google searches", mostly because I don't think it's actually looking through tables of facts or anything similar, but rather just stringing together words in a probabilistic fashion based on overall frequency of occurrence with other similar words in the googols of samples it's been trained on. This is evidenced by how frequently it makes s--t up.
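To make the "stringing words together probabilistically" idea concrete, here is a toy sketch (the bigram table is made up, and real models use neural networks over huge token vocabularies rather than a lookup table):

Code:
import random

# Toy illustration of probabilistic word-stringing with invented bigram counts.
BIGRAM_COUNTS = {
    "the": {"database": 5, "query": 3, "table": 2},
    "database": {"is": 6, "contains": 4},
    "is": {"corrupt": 1, "normalized": 3, "fast": 2},
}

def next_word(current: str) -> str:
    counts = BIGRAM_COUNTS.get(current, {"<end>": 1})
    words = list(counts)
    weights = list(counts.values())
    # Sample in proportion to how often each word followed `current` in the "training" data.
    return random.choices(words, weights=weights, k=1)[0]

word, sentence = "the", ["the"]
for _ in range(4):
    word = next_word(word)
    if word == "<end>":
        break
    sentence.append(word)
print(" ".join(sentence))  # e.g. "the database is normalized"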
 

Isaac

There are plagiarism detectors for human-written content too. Does that mean the human-written content does not sound human-like because it was detected by a plagiarism detector? And by "accurately detect", what success rate do you consider accurate? And does this mean you cannot detect any difference between what ChatGPT writes versus a human unless you use a sophisticated tool?

In the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human like, from a sentence structure point of view?
I didn't say human. I wasn't even focused on human or non-human.

Just that something that boasts so much about contextually appropriate output wouldn't be expected to be so predictable or detectable.

Perhaps that's the point. It's a bit disingenuous for it to brag about human-like content, which encourages people to miss the fact that we don't really care whether the content is human-like; we care whether it's original, in the sense of being individualized to the situation.

But by continuing to repeat "human-like", they are kind of making people assume that means individualized, perfectly contextually appropriate responses.

And, to be fair, they are very careful not to actually claim that the responses are truly contextually appropriate and individualized to the situation. They only claim human-like, which is somewhat meaningless, since all writing is human-like, and also a little deceptive to the average person, who won't stop to think and will probably assume it means the output can't easily be detected as a machine's, yet it can be...
 

Isaac

the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human like, from a sentence structure point of view?
I don't see the important question as having anything to do with whether it is like or unlike a human. Although they are very wise to market it that way, since most people think that making it sound human means making it not sound plagiarized.

It has to do with people assuming that it is going to give a contextually appropriate, individualized response, which is contradicted by how predictable and detectable it is. Since no two situations are alike, no two situations should ever yield such a highly predictable response.
 

Isaac

There is an explosion of AI happening this year
Yes, but those who hold the detailed, intimate knowledge of individual consumers are the ones with the keys to the kingdom. Everything else is available to everyone..
 

Isaac

stringing together words in a probabilistic fashion based on overall frequency of occurrence with other similar words in the googols of samples it's been trained on.

This is an interesting thought to me.

It made me think of the question: does the value of this tool, as it stands now, depend more on actual accuracy or completeness? Or does it depend mostly on the emotions, the happy reaction, or the opinion of the person getting the response?

Your description reminded me how valuable it is to the sellers whether people believe the output is good, not just whether it is good. I wonder how much "give them what they will expect and recognize, and thus affirm as good" goes into it... conscious or unconscious.

Right now all the tides are in its favor. Much like a restaurant in its first year, it can do no wrong in anyone's eyes. It remains to be seen, as it is integrated into things that really matter, how much of it was quality and how much was people simply liking what it told them.

Right now, "liked" will sell well enough..
 

Jon

I didn't say human. I wasn't even focused on human or non-human.

Just that something that boasts so much about contextually appropriate output wouldn't be expected to be so predictable or detectable.

Perhaps that's the point. It's a bit disingenuous for it to brag about human-like content, which encourages people to miss the fact that we don't really care whether the content is human-like; we care whether it's original, in the sense of being individualized to the situation.

But by continuing to repeat "human-like", they are kind of making people assume that means individualized, perfectly contextually appropriate responses.

And, to be fair, they are very careful not to actually claim that the responses are truly contextually appropriate and individualized to the situation. They only claim human-like, which is somewhat meaningless, since all writing is human-like, and also a little deceptive to the average person, who won't stop to think and will probably assume it means the output can't easily be detected as a machine's, yet it can be...
You didn't need to say human, because it was implicit in the response you were addressing:

Furthermore, ChatGPT is capable of generating coherent, human-like responses that are not simply regurgitated from existing data sources
"I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses, would beg to differ on this point"

Regarding whether or not it is contextually appropriate, or individualised to the situation, would you care to show me where in its response to your message you found it not individualised? That is the whole point of ChatGPT versus search engines: it IS individualised. And that is one of the reasons it has had one of the fastest growth rates of any software service since the birth of the internet. If you think its responses are not individualised, what would it have to do to satisfy that standard for you?
 

Jon

I don't see the important question as having anything to do with whether it is like or unlike a human. Although they are very wise to market it that way, since most people think that making it sound human means making it not sound plagiarized.

It has to do with people assuming that it is going to give a contextually appropriate, individualized response, which is contradicted by how predictable and detectable it is. Since no two situations are alike, no two situations should ever yield such a highly predictable response.
I think you are making assumptions about how the tools detect plagiarism, and also assuming a correlation between that ability and the responses not being individualised. How did you arrive at that hypothesis? Are you saying that if an entity, human or otherwise, has a certain style of writing, they are therefore not individualising their responses? Since you personally, like we all do, have a style of writing, are your responses in this thread also not individualised?
 

Jon

It made me think of the question: does the value of this tool, as it stands now, depend more on actual accuracy or completeness? Or does it depend mostly on the emotions, the happy reaction, or the opinion of the person getting the response?
Are answers in the top 10% of the bar exam not a measure of accuracy? What about the top 1% of a biology exam? These are objective measures of ChatGPT 4 versus humans. Is the choice between accuracy, completeness and happy response a false one? What about utility, for example? Or convenience? Or superiority over alternative avenues? Or ROI? How about preferring the ability to hold a conversation versus just a Google search? What about the value of Q&A?

These large language models are far more sophisticated than just stringing words together probabilistically. For instance, they incorporate tuning in which human raters rank the outputs. Newer developments use chatbots to evaluate the outputs instead of humans, which speeds up the tuning and cuts costs dramatically. Yet even with this largely probabilistic approach, the output is remarkable.
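As a rough illustration of the human-rating step: reward models are commonly fit to pairwise preferences derived from those rankings, pushing the score of the answer the rater preferred above the rejected one. A toy sketch with made-up numbers (not OpenAI's actual code):

Code:
import math

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Pairwise (Bradley-Terry style) loss: small when the model already scores
    # the human-preferred answer higher, large when it disagrees with the rater.
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

print(round(preference_loss(2.0, 0.5), 3))  # ~0.201: model agrees with the rater
print(round(preference_loss(0.5, 2.0), 3))  # ~1.701: model disagrees, gets penalized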

It is fascinating that unpredictable emergent properties have arisen from these tools. It is as if intelligence has spontaneously appeared out of the blue.

I am curious @Isaac, does this mean you are unlikely to use ChatGPT and stick with Google?
 

Jon

ChatGPT 4 can take in pictures. I want to take a photo of a room in my home and ask it what decorating suggestions it would recommend to provide an ROI if the house were sold. I am not sure if it can do that, but I know it understands cause and effect from photos. Anyone know the timescale before ChatGPT 4 becomes widely available? I'm wondering if I should stump up for the $20-per-month subscription, which I think gives you early access as well as priority access to the service.
 

JonXL

Mostly I don't see that it comes up with new things. It offers nothing in the way of real creativity. Everything it produces consists of stuff that is already available in some form elsewhere.

What it seems to have down is combining that old stuff into novel strings (or pixels or beeps). That's the essence of language and, by extension, coding - a performative utterance of sorts. So it's good at coding and putting words together. Yet its record on producing new information is spotty as hell: specifically, it likes to just make up whatever sounds good whether it's true or not.

And that would make me hesitant to trust it in providing ROI-positive decorating advice. Can it suggest a color for the wall? Maybe. But does it have anything backing up that color choice or did 'rainforest tangerine' just make a nice-sounding string of words?
 
