I'm in Love with ChatGPT

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 08:50
Joined
Jul 9, 2003
Messages
16,304
I might fall out with ChatGPT over this:-


Keep your eyes open for a van with a peculiar-looking aerial sticking out of the top of it. I know what to do! Fit a camera in your vehicle with an AI watching out for this novel speed-trap vehicle!
 

Steve R.

Retired
Local time
Today, 03:50
Joined
Jul 5, 2006
Messages
4,705
The underlying study for the article above: How Is ChatGPT’s Behavior Changing over Time?
How do GPT-4 and GPT-3.5’s math solving skills evolve over time? As a canonical study, we explore the drifts in these LLMs’ ability to figure out whether a given integer is prime.
One would think that solving math problems would be an easy task for an AI, especially since computers are routinely used to solve math problems. So why has the ability of the AI to correctly identify a prime number declined?
Why was there such a large difference? One possible explanation is the drift of chain-of-thought's effects. Figure 2(b) gives an illustrative example. To determine whether 17077 is a prime number, GPT-4's March version followed the chain-of-thought instruction very well. It first decomposed the task into four steps: checking if 17077 is even, finding 17077's square root, obtaining all prime numbers less than it, and checking if 17077 is divisible by any of these numbers. Then it executed each step and finally reached the correct answer that 17077 is indeed a prime number. However, the chain-of-thought did not work for the June version: the service did not generate any intermediate steps and simply produced "No". Chain-of-thought's effects had a different drift pattern for GPT-3.5. In March, GPT-3.5 was inclined to generate the answer "No" first and then perform the reasoning steps. Thus, even though the steps and final conclusion ("17077 is a prime number") were correct, its nominal answer was still wrong. On the other hand, the June update seemed to fix this issue: it started by writing the reasoning steps and finally generated the answer "Yes", which was correct. This interesting phenomenon indicates that the same prompting approach, even those as widely adopted as chain-of-thought, could lead to substantially different performance due to LLM drift.
Superficially, it would appear that giving an AI "freedom" to consider how to solve a problem introduces logical ambiguities that can result in incorrect responses.
As a secondary question, why didn't the AI use a subroutine to test if a number was prime? I suspect that was beyond the scope of this study. The apparent focus of this study was to evaluate how an AI's "freedom" to think through a problem could be derailed.
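As it happens, the four chain-of-thought steps the March version followed are exactly a textbook trial-division primality test. A quick Python sketch of such a "subroutine" (my own illustration, not code from the study):

Code:
def is_prime(n: int) -> bool:
    """Trial-division primality test mirroring the four chain-of-thought steps."""
    if n < 2:
        return False
    # Step 1: check whether n is even (2 is the only even prime).
    if n % 2 == 0:
        return n == 2
    # Step 2: find the square root; no divisor larger than it needs checking.
    limit = int(n ** 0.5)
    # Steps 3 and 4: check divisibility by each odd candidate up to the limit
    # (testing every odd number is a superset of testing the primes below it).
    for d in range(3, limit + 1, 2):
        if n % d == 0:
            return False
    return True

print(is_prime(17077))  # True -- 17077 is prime, as in the paper's example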
“It’s sort of like when we're teaching human students,” Zuo says. “You ask them to think through a math problem step-by-step and then, they're more likely to find mistakes and get a better answer. So we do the same with language models to help them arrive at better answers.” (Yahoo! article)
 

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 08:50
Joined
Jul 9, 2003
Messages
16,304
This lecture by Michio Kaku is entertaining, informative, and well worth a watch. You don't have to watch, just listen!

 

jdraw

Super Moderator
Staff member
Local time
Today, 03:50
Joined
Jan 23, 2006
Messages
15,393
I thought the relevance of super quantum capacity/speed vs current security approaches was interesting. Put quantum computing with evolving AI and you'll never know who or what to believe. (Not that we have it under control now)

Update: Tony, I am pretty sure that is not the video I watched. Sorry for the bad link. Not sure what is going on; that's the only one in my history. It was about the speed and research of quantum computers/computing, and how the security/encryption/decryption methods we have would be totally vulnerable. It was more about the potential for chaos if any and all privileged/secure networks/systems/files were easily hacked and newer/faster AI could end up in the hands of hackers, malcontents, and just plain bad guys.

I did just watch the video with Kaku in #148 -- very good. (y)
 
Last edited:

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 08:50
Joined
Jul 9, 2003
Messages
16,304
It amazes me that they don't really know how ChatGPT works!


There seems to be some evidence that it works by analogy. Now if that's true, then I need to design some ChatGPT prompts that would promote the analogy aspect. Any idea what I could try?
 

jdraw

Super Moderator
Staff member
Local time
Today, 03:50
Joined
Jan 23, 2006
Messages
15,393
I don't know what you had in mind, so I took a simple prompt and used the word "analogy" to get a generated response.
This shows ChatGPT has described the process in terms of my term (farming), but it certainly doesn't address how my question was broken into tokens, "matched/weighted statistical analysis or magic", and delivered.

Can you describe the assembly of an automobile using farming as an analogy?

Sure! Let's imagine building an automobile using farming as an analogy:
  1. Planning the Crop (Designing the Automobile): Just as a farmer plans the type of crop to grow, engineers and designers carefully plan the automobile. They decide on the type of vehicle (sedan, SUV, truck, etc.), the features it should have, the materials to be used, and the overall aesthetics.
  2. Preparing the Soil (Preparation of Materials): Before planting, a farmer prepares the soil by tilling and fertilizing it. Similarly, in the automobile assembly, the raw materials like steel, aluminum, plastic, glass, and electronics need to be prepared and processed for use.
  3. Sowing the Seeds (Manufacturing Components): In farming, seeds are sown to grow the crops. In the automobile assembly, components like the engine, chassis, body panels, seats, and various parts are manufactured individually in factories.
  4. Growing the Crop (Assembly Line): As the crop grows, it needs nurturing and care. Similarly, the automobile components are brought together on the assembly line. Robots and skilled workers work in harmony, attaching various parts to the vehicle's frame, step-by-step, until it takes its shape.
  5. Harvesting (Quality Control): When the crops are ready, farmers harvest them. For the automobile, once the assembly is complete, quality control checks are performed to ensure that everything meets safety and performance standards.
  6. Transporting and Packaging (Distribution): After harvesting, the farmer transports and packages the crops for distribution. For automobiles, once they pass quality checks, they are transported to dealerships to be made available to consumers.
  7. Selling and Delivery (Sales): Just like farmers sell their crops to consumers, automobile dealerships sell cars to customers. The buyers take their new vehicles home, just as consumers take home their freshly harvested produce.
  8. Regular Maintenance (Service and Repairs): In farming, crops need regular care like watering and weeding. Automobiles also require maintenance, like oil changes, tire rotations, and occasional repairs, to keep them running smoothly.
  9. Life Cycle (Lifecycle of the Product): Both farming and automobile manufacturing involve a life cycle. For crops, it's the cycle from planting to harvesting, while for automobiles, it's from initial design to the end of the vehicle's usable life and eventual recycling.
  10. Environment Impact (Environmental Considerations): Farming has environmental considerations like sustainable practices and land preservation. Similarly, the automobile industry focuses on reducing emissions, developing electric vehicles, and recycling materials to minimize its environmental impact.
By using this analogy, we can see how the process of building an automobile mirrors the various stages and considerations involved in farming, making it easier to understand and appreciate the complexity of automobile assembly.
 

Jon

Access World Site Owner
Staff member
Local time
Today, 08:50
Joined
Sep 28, 1999
Messages
7,425
It amazes me that they don't really know how ChatGPT works!
I have my own theory.

Language was constructed by humans to convey information. Inherent in that language is human logic. Any sentence has some sort of logical meaning. LLMs are based on probabilities of what word comes next, as I understand it. Correct me if I am wrong. So, LLMs are presenting probabilities of what a human would say, or a human's logic. I may be way off the mark, but I wonder if the aggregation of the probabilities of what comes next is like the aggregation of all human thought. And so you get intelligence as an emergent property, because the structure of language is a distillation of human intelligence. I hope that made sense.

Example: The dog chased the ___. Most of us would say cat. So, the LLM would have a high probability that cat comes next. See, it has intelligence because the probabilities represent what is probably true, when you take an average of what most people would say!
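To put numbers on that, here's a toy sketch. The probabilities are invented purely to illustrate the idea; a real LLM computes scores like these over its entire vocabulary:

Code:
# Invented next-word probabilities for the context "The dog chased the ..."
next_word = {"cat": 0.55, "ball": 0.20, "car": 0.12, "postman": 0.08, "idea": 0.05}

# Take the most probable continuation -- the "average" of what most people would say.
best = max(next_word, key=next_word.get)
print(f"The dog chased the {best}.")  # -> The dog chased the cat.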
 

jdraw

Super Moderator
Staff member
Local time
Today, 03:50
Joined
Jan 23, 2006
Messages
15,393
Subsequent to my automobile assembly in terms of farming, I asked ChatGPT as follows:

I recognize that you break my question into tokens and perform analysis on them, but how exactly do you organize your best fit/completion to my tokens?




As an AI language model, my primary function is to generate text based on the input I receive. When you provide a question or a prompt, I break it down into smaller units called tokens. Tokens can be words, punctuation marks, or other characters, and they help me understand the context of the input. These tokens are then processed and analyzed by the underlying language model (GPT-3.5) to generate a response.
The process of generating the best fit completion to your tokens involves a few steps:

  1. Tokenization: The input text is tokenized, which means breaking it down into individual tokens. Each token has an associated ID, and these tokens are passed as input to the language model.
  2. Language Model Processing: The language model processes the input tokens to understand the context and meaning of the prompt. It uses its deep learning architecture to analyze the sequence of tokens and identify patterns, relationships, and correlations within the text.
  3. Context and Probability: Based on the input tokens, the language model calculates the probabilities of various possible completions or responses. It considers the context provided by the tokens and generates responses that are more likely and relevant in that context.
  4. Sampling and Beam Search: There are different techniques used to generate responses. One common approach is called "sampling," where the language model randomly selects tokens based on their probabilities to create a response. Another approach is "beam search," where the model considers the top-ranked tokens at each step and explores multiple possible paths to find the most likely completion.
  5. Best Fit Completion: After generating several possible completions, the language model evaluates the quality of each completion based on various factors such as fluency, coherence, relevance, and context. It then selects the most appropriate completion, which is considered the "best fit" based on these criteria.
  6. Output: The selected completion, which is the best fit response, is then presented as the output to your original question or prompt.
It's important to note that the process is probabilistic, which means that the language model doesn't have a single fixed answer for any given input. It generates responses based on statistical patterns learned during training on a vast dataset. As a result, the best fit completion may vary if you provide slightly different variations of the same prompt.
Additionally, the model's responses are influenced by its training data, which includes a wide range of sources from the internet. It does not have access to any external databases or real-time information beyond its training data, which has a cutoff date of September 2021.
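As an aside, step 1 is easy to try for yourself, assuming the open-source tiktoken package (the tokenizer OpenAI publishes for its models -- my addition, not part of ChatGPT's answer above):

Code:
# Step 1 (tokenization) in isolation. Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-3.5/GPT-4

token_ids = enc.encode("The dog chased the cat!")
print(token_ids)                             # a short list of integer token IDs
print([enc.decode([t]) for t in token_ids])  # the text piece each ID stands for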
 

jdraw

Super Moderator
Staff member
Local time
Today, 03:50
Joined
Jan 23, 2006
Messages
15,393
Jon/Tony,

As a test

Can you provide more details on the process? Perhaps an example with some intermediary steps using "The dog chased the ____!"




Certainly! Let's go through an example using the prompt "The dog chased the ____!" and break down the intermediary steps in generating a completion for this prompt:
  1. Tokenization: The prompt "The dog chased the ____!" is tokenized into individual tokens:
    [ "The", "dog", "chased", "the", "[MASK]", "!" ]
  2. Language Model Processing: The language model processes these tokens and analyzes the context of the input. It recognizes that there is a missing word represented by "[MASK]", and the sentence is about a dog chasing something.
  3. Context and Probability: The language model calculates probabilities for potential words that can fill the blank. Based on its training data, it considers words that are commonly associated with a dog chasing something, such as "cat," "ball," "frisbee," "rabbit," "squirrel," etc. Each word is assigned a probability score.
  4. Sampling and Beam Search: Let's assume we use sampling for this example. The model randomly selects a word based on the probabilities assigned to each candidate word. For instance, it might select "cat" as the first token to fill the blank.
  5. Best Fit Completion: The model then evaluates the generated completion, "The dog chased the cat!" based on various factors like coherence and context. It may also take into account the probabilities of other words that could have been chosen.
  6. Output: The output is the best fit completion, which is "The dog chased the cat!" in this case.
Keep in mind that if we use beam search instead of sampling, the model will explore multiple possibilities by selecting the top-ranked words at each step. This can lead to a more deterministic output, but it may not be as diverse in some cases.
It's important to note that the actual probabilities and outcomes may vary depending on the specific language model's architecture and the dataset it was trained on. Additionally, the AI language model doesn't have access to any external context beyond the provided prompt and the knowledge within its training data, which goes up until September 2021.
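Steps 4 through 6 can be mimicked in a few lines of Python. The candidate words and probabilities below are invented for illustration, since a real model scores its entire vocabulary:

Code:
import random

# Invented probabilities for the "[MASK]" slot in "The dog chased the [MASK]!"
candidates = {"cat": 0.45, "ball": 0.25, "squirrel": 0.15, "rabbit": 0.10, "frisbee": 0.05}

# Sampling: draw a word at random, weighted by probability, so the
# completion can differ from run to run.
word = random.choices(list(candidates), weights=list(candidates.values()), k=1)[0]
print(f"Sampled: The dog chased the {word}!")

# Beam search with a beam width of 1 reduces to greedy selection:
# always take the top-ranked word, giving a deterministic completion.
print(f"Greedy:  The dog chased the {max(candidates, key=candidates.get)}!")  # always "cat"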
 

Jon

Access World Site Owner
Staff member
Local time
Today, 08:50
Joined
Sep 28, 1999
Messages
7,425
I think that was kinda what I was saying. Although that is what it is doing, the intelligence itself is based on the structure of language. Certain things go together because they represent a common meaning.

e.g. In the UK, it always ____. Yes, you got it, rains! So the context of the sentence has implicit probabilities of what is likely. When we speak a sentence, we use our intelligence to construct it. When an LLM predicts the next word, it has in effect tapped into our own intelligence to come up with what is likely, since it knows what intelligent humans would have said.

Where is my Nobel Prize?
 

jdraw

Super Moderator
Staff member
Local time
Today, 03:50
Joined
Jan 23, 2006
Messages
15,393
I agree, Jon. The probabilities can be arranged based on all the source materials used to train the LLM. It can't just be brute force (see this Mike Wolfe article). Since it works in multiple languages, some of which have different sentence constructions, there's a lot behind the kimono, so to speak.

As for your Nobel Prize, see this.
 

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 08:50
Joined
Jul 9, 2003
Messages
16,304
Language was constructed by humans to convey information. Inherent in that language is human logic.

I was thinking the same. I was making a cup of tea earlier, and I was constructing a sentence in my head. I couldn't think of the next word, but then several words made themselves apparent. I still couldn't find the word I was looking for, but one of the other words would do.

Then I thought, 'Is that what these large language models do?' I know that computer programs, software, have analyzed language and organized the words into these LLMs, weighting them by meaning, calculating how close they are to other words. I wondered: does my mind do the same thing?

I just finished that sentence, 'the same thing,' and I came to an abrupt halt because I didn't know what to say next. It's not like there is something lined up ready to say, and it's not like what I say appears out of thin air, although in a way, it does. As I am saying this now, I can feel a certain amount of emotion attached to certain words/sentences as I construct them. I don't think it's the construction of the sentence which is important to me, it's the emotion that's attached to it.

Now, don't get me wrong, I'm not getting all emotional and tearful talking about my mate ChatGPT, although I'd be very sad if he went mad, which I think is a distinct possibility. I'm waffling on now, things randomly occurring to me. Shall I speak them, or do you think it's time I shut up?
 

Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 08:50
Joined
Jul 9, 2003
Messages
16,304
intelligence itself is based on the structure of language

Could it be that the only reason humans are intelligent is because we constructed language, and that formed intelligence? I mean, it's hard to think that ChatGPT isn't intelligent. Look at what it wrote there for Jack - absolutely brilliant! So, is that just the difference between man and animals - we have a large language vocabulary which promotes intelligence? I mean, you can see it in the LLM.
 

Steve R.

Retired
Local time
Today, 03:50
Joined
Jul 5, 2006
Messages
4,705
Could it be that the only reason humans are intelligent is because we constructed language, and that formed intelligence?
No. Intelligence is the ability to solve a problem. But language of course helps, as Jon noted:
Language was constructed by humans to convey information.
And following up on that: humans have the ability to pass on knowledge from one generation to the next and to share that information. That allows for incremental "intelligence", as we are able to build upon past scientific accomplishments to make new scientific progress. Contrast this with many animals, who could be intelligent and discover something over the course of their lifespan but are unable to pass that discovery to their offspring or to share it with others of their species. So it is quite possible that an animal could be just as intelligent as a human, and the only reason we are "advanced" is our ability to share knowledge, versus animals having to "reinvent the wheel" each generation.

As a side observation, there is the issue of biology: are we "advanced" because we have hands?
Animals that could be as intelligent as humans may not be able to progress due to their biological constraints. Whales and dolphins live in water and don't have manipulative appendages, which makes writing as a source of knowledge a bit difficult.

For some humor.
[two humorous images attached]
 
