ChatGPT: The Future of AI is Here!

Poor Bing. They really needed a boost, didn't they!
I never thought Google's domination could be threatened. But with ChatGPT, Bing is on steroids! If Google don't hurry up with their own chatbot release, they will lose considerable market share!
 
ChatGPT is a natural language processing (NLP) model
Yes, the industry I work in has been using NLP for a while. It is basically extracting keywords that imply things.

Furthermore, ChatGPT is capable of generating coherent, human-like responses that are not simply regurgitated from existing data sources
I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses would beg to differ on this point.
 
If Google don't hurry up with their own chatbot release, they will lose considerable market share!
Google has probably been working on this aspect for longer than ChatGPT has been the inventor's dream.
Combined with the power of consumer-specific information, down to every keyboard tap on every Android in the world that Google already has, I can see the likelihood that when Google successfully incorporates even just a bit of what you are calling "AI" into its services, it will immediately be 10x as powerful as ChatGPT.
 
Microsoft is investing $10bn in OpenAI and is incorporating ChatGPT into Bing.
There are OpenAI extensions for Python and Visual Studio Code, so ChatGPT is essentially available as you write.
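For anyone curious what "available as you write" looks like in practice, here is a minimal sketch of calling OpenAI's chat completions REST endpoint from Python's standard library. The endpoint and model name reflect the public API as commonly documented at the time; the function names are my own, and the network call only happens if an `OPENAI_API_KEY` environment variable is set.

```python
# Minimal sketch: querying the OpenAI chat completions API with only the
# Python standard library. Endpoint/model names should be checked against
# the current official OpenAI documentation.
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, model="gpt-3.5-turbo"):
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt):
    """Send the prompt if an API key is configured; otherwise return None."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        return None  # no key set; skip the network call
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Inspect the request shape without sending anything.
payload = build_request("Explain list comprehensions in one sentence.")
```

The editor extensions wrap essentially this request/response loop behind a keystroke, which is why it feels like the model is sitting alongside you while you code.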
 
I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses would beg to differ on this point.
There are plagiarism detectors for human-written content too. Does that mean the human-written content does not sound human-like because it was detected by a plagiarism detector? And by "accurately detect", what success rate do you consider accurate? And does this mean you cannot detect any difference between what ChatGPT writes versus a human unless you use a sophisticated tool?

In the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human-like, from a sentence-structure point of view?
 
There are OpenAI extensions for Python and Visual Studio Code, so ChatGPT is essentially available as you write.
I had a paid subscription to GitHub Copilot. I think it used GPT-3 to write the code.
 
Google has probably been working on this aspect for longer than ChatGPT has been the inventor's dream.
Combined with the power of consumer-specific information, down to every keyboard tap on every Android in the world that Google already has, I can see the likelihood that when Google successfully incorporates even just a bit of what you are calling "AI" into its services, it will immediately be 10x as powerful as ChatGPT.
These are what they call Large Language Models. Clicks don't come into it, only text.

Google are desperately trying to release their own model, because OpenAI have stolen a march on them. As are Amazon, Apple and so on. There is an explosion of AI happening this year.
 
Mostly agree. ChatGPT texts are pretty noticeable as such. However, I wouldn't call it "an automation of google searches", mostly because I don't think it's actually looking through tables of facts or anything similar but just stringing together words in a probabilistic fashion based on overall frequency of occurrence with other similar words in the googols of samples it's been trained on. This is evidenced by how frequently it makes s--t up.
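The "stringing together words probabilistically" idea can be illustrated with a toy bigram model: pick each next word in proportion to how often it followed the previous word in the training text. This is a deliberately simplified sketch; real LLMs condition on far longer contexts with neural networks, but the sampling principle is the same.

```python
# Toy bigram language model: generate text by sampling each next word
# according to how often it followed the previous word in a tiny corpus.
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows, start, length=8, seed=0):
    """Sample a word sequence, weighting each step by observed frequency."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word was never followed by anything
        words_, counts = zip(*options.items())
        out.append(rng.choices(words_, weights=counts)[0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the mat"
model = train_bigrams(corpus)
sample = generate(model, "the")
```

Every generated sentence is locally plausible (each word pair occurred in training) while nothing guarantees it is globally true, which is exactly the failure mode described above: fluent output that can still make things up.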
 
There are plagiarism detectors for human-written content too. Does that mean the human-written content does not sound human-like because it was detected by a plagiarism detector? And by "accurately detect", what success rate do you consider accurate? And does this mean you cannot detect any difference between what ChatGPT writes versus a human unless you use a sophisticated tool?

In the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human-like, from a sentence-structure point of view?
I didn't say human. I wasn't even focused on human or non-human.

Just that something boasting a lot about contextually appropriate output wouldn't be expected to be so predictable or detectable.

Perhaps that's the point. It's a bit disingenuous for it to brag about human-like content, which encourages people to miss the fact that we don't really care whether it's human-like content; we care whether it's original in the sense of being individualized to the situation.

But by continuing to repeat "human-like content", they are kind of making people assume that means individualized, perfectly contextually appropriate responses.

But that's true. They are very careful not to actually claim that the responses are truly contextually appropriate and individualized to the situation. They only claim human-like, which is somewhat meaningless since all writing is human, but also a little deceptive to the average person, who will probably assume that means it couldn't be easily detected as a machine, yet it is...
 
the rebuttal that ChatGPT gave to your post, would you care to illustrate something it said which does not sound human-like, from a sentence-structure point of view?
I don't see the important question as having anything to do with whether it is like or unlike a human. Although they are very wise to be marketing it that way, since most people think making it sound human means making it not sound plagiarized.

It has to do with people assuming that it is going to give a contextually appropriate, individualized response, which is contradicted by how predictable and detectable it is; since no two situations are alike, no two situations should ever give such a highly predictable response.
 
There is an explosion of AI happening this year
Yes, but those who hold the detailed, intimate knowledge of individual consumers are the ones with the keys to the kingdom. Everything else is available to everyone.
 
stringing together words in a probabilistic fashion based on overall frequency of occurrence with other similar words in the googols of samples it's been trained on.

This is an interesting thought to me.

It made me think of the question: does the value of this tool as it stands now depend more on actual accuracy or completeness? Or does it mostly depend on the emotions, happy response or opinion of the person getting the response?

Your description made me remember how valuable it is to the sellers whether people believe the output is good, not just whether it is good. I wonder how much "give them what they will expect and recognize, and thus affirm as good" goes into it... conscious or unconscious.

Right now all the tides are in its favor. Much like a restaurant in its first year, it can do no wrong in anyone's eyes. As it is integrated into things that really matter, it remains to be seen how much of it was quality and how much was that people simply liked what it told them.

Right now, "liked" will sell well enough.
 
I didn't say human. I wasn't even focused on human or non-human.

Just that something boasting a lot about contextually appropriate output wouldn't be expected to be so predictable or detectable.

Perhaps that's the point. It's a bit disingenuous for it to brag about human-like content, which encourages people to miss the fact that we don't really care whether it's human-like content; we care whether it's original in the sense of being individualized to the situation.

But by continuing to repeat "human-like content", they are kind of making people assume that means individualized, perfectly contextually appropriate responses.

But that's true. They are very careful not to actually claim that the responses are truly contextually appropriate and individualized to the situation. They only claim human-like, which is somewhat meaningless since all writing is human, but also a little deceptive to the average person, who will probably assume that means it couldn't be easily detected as a machine, yet it is...
You didn't need to say human, because you inferred it from this response:

Furthermore, ChatGPT is capable of generating coherent, human-like responses that are not simply regurgitated from existing data sources
"I notice how often ChatGPT asserts this, but the comments I made earlier about the ease and speed with which people have already built plagiarism detectors that can accurately detect ChatGPT responses would beg to differ on this point"

Regarding whether or not it is contextually appropriate, or individualised to the situation, would you care to show me where in its response to your message you found it not individualised? That is the whole point of ChatGPT versus search engines: it IS individualised. And that is one of the reasons it has had one of the fastest growth rates of any software service since the birth of the internet. If you think its responses are not individualised, what would it have to do to satisfy that standard for you?
 
I don't see the important question as having anything to do with whether it is like or unlike a human. Although they are very wise to be marketing it that way, since most people think making it sound human means making it not sound plagiarized.

It has to do with people assuming that it is going to give a contextually appropriate, individualized response, which is contradicted by how predictable and detectable it is; since no two situations are alike, no two situations should ever give such a highly predictable response.
I think you are making assumptions about how the tools detect plagiarism and also some correlation between that ability and whether or not that means the responses are not individualised. How did you arrive at that hypothesis? Are you saying that if an entity, human or otherwise, has a certain style of writing, that they are therefore not individualising responses? Since you personally, like we all do, have a style of writing, are your responses in this thread also not individualised?
 
It made me think of the question: does the value of this tool as it stands now depend more on actual accuracy or completeness? Or does it mostly depend on the emotions, happy response or opinion of the person getting the response?
Are answers in the top 10% of the bar exam not a measure of accuracy? What about the top 1% of a biology exam? These are objective measures of ChatGPT 4 versus humans. Is the choice between accuracy, completeness and happy response a false one? What about utility, for example? Or convenience? Or superiority over alternative avenues? Or ROI? How about preferring the ability to hold a conversation versus just a Google search? What about the value of Q&A?

These Large Language Models are far more sophisticated than just stringing together words probabilistically. For instance, they incorporate tuning with human responses rating the output. Newer developments use chatbots to evaluate the outputs (rather than humans), speeding up the tuning and cutting costs dramatically. Yet at the same time, using this largely probabilistic approach, the output is remarkable.
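The "humans rating the output" step can be sketched numerically. A common formulation (the Bradley-Terry pairwise preference model, widely used in reward modelling for RLHF) trains a reward model so that the response raters preferred scores higher than the one they rejected. The function and variable names below are illustrative, not from any particular library.

```python
# Sketch of the pairwise-preference idea behind tuning with human ratings:
# a reward model should assign a higher score to the response that raters
# preferred. The Bradley-Terry model turns two scores into a probability.
import math

def preference_probability(score_chosen, score_rejected):
    """Probability that the chosen response beats the rejected one."""
    return 1.0 / (1.0 + math.exp(score_rejected - score_chosen))

def pairwise_loss(score_chosen, score_rejected):
    """Loss is low when the reward model ranks the chosen response higher."""
    return -math.log(preference_probability(score_chosen, score_rejected))

# If the model scores the preferred answer higher, the loss is small;
# if it ranks them the wrong way round, the loss is large.
good_ranking = pairwise_loss(2.0, -1.0)
bad_ranking = pairwise_loss(-1.0, 2.0)
```

Replacing the human rater with another chatbot, as mentioned above, just swaps the source of the chosen/rejected labels; the scoring machinery stays the same, which is why it cuts cost without changing the training recipe.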

It is fascinating that emergent properties that are unpredictable have arisen from these tools. It is like intelligence has spontaneously appeared out of the blue.

I am curious @Isaac, does this mean you are unlikely to use ChatGPT and stick with Google?
 
ChatGPT 4 can take in pictures. I want to take a photo of a room in my home and ask it what decorating suggestions it would recommend that would provide an ROI if the house were sold. I am not sure if it can do that, but I know it understands cause and effect from photos. Does anyone know the timescale before ChatGPT 4 becomes widely available? Wondering if I should stump up the $20 per month subscription for early access; I think that's what the $20 gives you, besides priority access.
 
Mostly I don't see that it comes up with new things. It offers nothing in the way of real creativity. Everything it produces consists of stuff that is already available in some form elsewhere.

What it seems to have down is combining that old stuff into novel strings (or pixels or beeps). That's the essence of language and, by extension, coding - a performative utterance of sorts. So it's good at coding and putting words together. Yet its record on producing new information is spotty as hell: specifically, it likes to just make up whatever sounds good whether it's true or not.

And that would make me hesitant to trust it in providing ROI-positive decorating advice. Can it suggest a color for the wall? Maybe. But does it have anything backing up that color choice or did 'rainforest tangerine' just make a nice-sounding string of words?
 
Mostly I don't see that it comes up with new things. It offers nothing in the way of real creativity. Everything it produces consists of stuff that is already available in some form elsewhere.

What it seems to have down is combining that old stuff into novel strings (or pixels or beeps). That's the essence of language and, by extension, coding - a performative utterance of sorts. So it's good at coding and putting words together. Yet its record on producing new information is spotty as hell: specifically, it likes to just make up whatever sounds good whether it's true or not.

And that would make me hesitant to trust it in providing ROI-positive decorating advice. Can it suggest a color for the wall? Maybe. But does it have anything backing up that color choice or did 'rainforest tangerine' just make a nice-sounding string of words?
And we are back to the question of what is the basis of the value of its output?

How can we know whether or not a big chunk of that value currently rests on whether people recognize, and therefore affirm and trust, the output? It is definitely very good at outputting responses that make you think, "gee, that sounds perfect and well-rounded".

Now I'm starting to put a little effort into recalling what difference, if any, there typically is between what I find when I research a topic myself and what it outputs when I ask it a question. And precisely what that difference may or may not mean.

These are all important questions to keep asking as artificial intelligence, or anything that carries that label, continues to expand in usage.

Like other subjects, if asking the question gets a person in a lot of trouble, that tells you something. And the something that it tells you does not bode well for the outcome, with the exception of temporary revenue.
 
Mostly I don't see that it comes up with new things. It offers nothing in the way of real creativity. Everything it produces consists of stuff that is already available in some form elsewhere.

What it seems to have down is combining that old stuff into novel strings (or pixels or beeps). That's the essence of language and, by extension, coding - a performative utterance of sorts. So it's good at coding and putting words together. Yet its record on producing new information is spotty as hell: specifically, it likes to just make up whatever sounds good whether it's true or not.

And that would make me hesitant to trust it in providing ROI-positive decorating advice. Can it suggest a color for the wall? Maybe. But does it have anything backing up that color choice or did 'rainforest tangerine' just make a nice-sounding string of words?
What would it have to demonstrate for you to believe it is creative? When it writes a poem, for example, are you saying that the poem is not creative? Or if it comes up with 20 different product names for something, is that not creative? Are these generated product names, or a new poem, not examples of new information?

The colour choice for a wall would be an example of emergent properties that these Large Language Models provide, the property being intelligence. The corpus of human knowledge gets distilled into the recommendation.

Edit: Incidentally, the fact it gets into the top 10% of people taking the bar exam suggests that what backs up its answers is in most cases accurate. Or you could say, it is more accurate than most humans. It won't be long before it is more accurate than all humans.
 
Great video on more big AI announcements that have happened in just the last two days, including Large Language Models for drug discovery, Google's competing Bard system, and some amazing image creation, such as text-to-video and image manipulation.

 
