I'm in Love with ChatGPT

It amazes me that they don't really know how ChatGPT works!


There seems to be some evidence that it works by analogy. Now if that's true, then I need to design some ChatGPT prompts that would promote the analogy aspect. Any idea what I could try?
 
I don't know what you had in mind, so I took a simple prompt and used the word "analogy" to get a generated response.
This shows that ChatGPT has described the process in terms of my chosen domain (farming), but it certainly doesn't address how my question was broken into tokens, matched/weighted by statistical analysis (or magic), and delivered.

Can you describe the assembly of an automobile using farming as an analogy?

Sure! Let's imagine building an automobile using farming as an analogy:
  1. Planning the Crop (Designing the Automobile): Just as a farmer plans the type of crop to grow, engineers and designers carefully plan the automobile. They decide on the type of vehicle (sedan, SUV, truck, etc.), the features it should have, the materials to be used, and the overall aesthetics.
  2. Preparing the Soil (Preparation of Materials): Before planting, a farmer prepares the soil by tilling and fertilizing it. Similarly, in the automobile assembly, the raw materials like steel, aluminum, plastic, glass, and electronics need to be prepared and processed for use.
  3. Sowing the Seeds (Manufacturing Components): In farming, seeds are sown to grow the crops. In the automobile assembly, components like the engine, chassis, body panels, seats, and various parts are manufactured individually in factories.
  4. Growing the Crop (Assembly Line): As the crop grows, it needs nurturing and care. Similarly, the automobile components are brought together on the assembly line. Robots and skilled workers work in harmony, attaching various parts to the vehicle's frame, step-by-step, until it takes its shape.
  5. Harvesting (Quality Control): When the crops are ready, farmers harvest them. For the automobile, once the assembly is complete, quality control checks are performed to ensure that everything meets safety and performance standards.
  6. Transporting and Packaging (Distribution): After harvesting, the farmer transports and packages the crops for distribution. For automobiles, once they pass quality checks, they are transported to dealerships to be made available to consumers.
  7. Selling and Delivery (Sales): Just like farmers sell their crops to consumers, automobile dealerships sell cars to customers. The buyers take their new vehicles home, just as consumers take home their freshly harvested produce.
  8. Regular Maintenance (Service and Repairs): In farming, crops need regular care like watering and weeding. Automobiles also require maintenance, like oil changes, tire rotations, and occasional repairs, to keep them running smoothly.
  9. Life Cycle (Lifecycle of the Product): Both farming and automobile manufacturing involve a life cycle. For crops, it's the cycle from planting to harvesting, while for automobiles, it's from initial design to the end of the vehicle's usable life and eventual recycling.
  10. Environment Impact (Environmental Considerations): Farming has environmental considerations like sustainable practices and land preservation. Similarly, the automobile industry focuses on reducing emissions, developing electric vehicles, and recycling materials to minimize its environmental impact.
By using this analogy, we can see how the process of building an automobile mirrors the various stages and considerations involved in farming, making it easier to understand and appreciate the complexity of automobile assembly.
 
It amazes me that they don't really know how ChatGPT works!
I have my own theory.

Language was constructed by humans to convey information. Inherent in that language is human logic. Any sentence has some sort of logical meaning. The LLMs are based on probabilities of what word comes next, as I understand it. Correct me if I am wrong. So, the LLMs are presenting probabilities of what a human would say, or, a human's logic. I may be way off the mark, but I wonder if the aggregation of the probabilities of what comes next is like the aggregation of all human thought. And so you get intelligence as an emergent property, because the structure of language is a distillation of human intelligence. I hope that made sense.

Example: The dog chased the ___. Most of us would say cat. So, the LLM would have a high probability that cat comes next. See, it has intelligence because the probabilities represent what is probably true, when you take an average of what most people would say!
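The point can be sketched in a few lines of Python. Everything here is invented for illustration: a toy list of observed continuations stands in for "what most people would say", and the "prediction" is simply the highest-frequency word.

```python
from collections import Counter

# Toy corpus: continuations people supposedly wrote after
# "The dog chased the". These words are invented for illustration.
corpus = ["cat", "cat", "cat", "ball", "cat", "squirrel", "ball", "cat"]

counts = Counter(corpus)
total = sum(counts.values())

# Probability of each next word = its share of the observations.
probs = {word: n / total for word, n in counts.items()}

# The "prediction" is just the most probable continuation.
prediction = max(probs, key=probs.get)
print(prediction)              # cat
print(round(probs["cat"], 3))  # 0.625
```

In this miniature form, the "intelligence" is nothing but an aggregate of what people actually wrote, which is exactly the intuition above.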
 
Subsequent to my automobile assembly in terms of farming, I asked ChatGPT as follows:

I recognize that you break my question into tokens and perform analysis on them, but how exactly do you organize your best fit/completion to my tokens?




As an AI language model, my primary function is to generate text based on the input I receive. When you provide a question or a prompt, I break it down into smaller units called tokens. Tokens can be words, punctuation marks, or other characters, and they help me understand the context of the input. These tokens are then processed and analyzed by the underlying language model (GPT-3.5) to generate a response.
The process of generating the best fit completion to your tokens involves a few steps:

  1. Tokenization: The input text is tokenized, which means breaking it down into individual tokens. Each token has an associated ID, and these tokens are passed as input to the language model.
  2. Language Model Processing: The language model processes the input tokens to understand the context and meaning of the prompt. It uses its deep learning architecture to analyze the sequence of tokens and identify patterns, relationships, and correlations within the text.
  3. Context and Probability: Based on the input tokens, the language model calculates the probabilities of various possible completions or responses. It considers the context provided by the tokens and generates responses that are more likely and relevant in that context.
  4. Sampling and Beam Search: There are different techniques used to generate responses. One common approach is called "sampling," where the language model randomly selects tokens based on their probabilities to create a response. Another approach is "beam search," where the model considers the top-ranked tokens at each step and explores multiple possible paths to find the most likely completion.
  5. Best Fit Completion: After generating several possible completions, the language model evaluates the quality of each completion based on various factors such as fluency, coherence, relevance, and context. It then selects the most appropriate completion, which is considered the "best fit" based on these criteria.
  6. Output: The selected completion, which is the best fit response, is then presented as the output to your original question or prompt.
It's important to note that the process is probabilistic, which means that the language model doesn't have a single fixed answer for any given input. It generates responses based on statistical patterns learned during training on a vast dataset. As a result, the best fit completion may vary if you provide slightly different variations of the same prompt.
Additionally, the model's responses are influenced by its training data, which includes a wide range of sources from the internet. It does not have access to any external databases or real-time information beyond its training data, which has a cutoff date of September 2021.
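The steps ChatGPT lists above can be sketched in miniature. This is a toy model, not ChatGPT's actual pipeline: the tokenizer is a crude regex (real models use learned subword vocabularies such as BPE), and the next-token probabilities are invented numbers.

```python
import random
import re

def tokenize(text):
    # Crude tokenizer: words and punctuation become separate tokens.
    # Real models use learned subword schemes (e.g. BPE), but the
    # shape is the same: text -> list of tokens.
    return re.findall(r"\w+|[^\w\s]", text)

# Toy next-token distribution a "model" might assign after seeing the
# context. The words and the numbers are invented for illustration.
next_token_probs = {"cat": 0.45, "ball": 0.25, "squirrel": 0.20, "mailman": 0.10}

def greedy_pick(probs):
    # Deterministic decoding: always take the most probable token.
    return max(probs, key=probs.get)

def sampled_pick(probs, seed=0):
    # Sampling: a weighted random draw, so different seeds can differ.
    rng = random.Random(seed)
    tokens, weights = zip(*probs.items())
    return rng.choices(tokens, weights=weights)[0]

context = tokenize("The dog chased the")
print(context)                        # ['The', 'dog', 'chased', 'the']
print(greedy_pick(next_token_probs))  # cat
print(sampled_pick(next_token_probs)) # a weighted draw from the four words
```

Greedy picking corresponds to the deterministic "best fit" idea, while `sampled_pick` shows why the same prompt can produce different answers on different runs.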
 
Jon/Tony,

As a test

Can you provide more details on the process? Perhaps an example with some intermediary steps using The dog chased the ____!




Certainly! Let's go through an example using the prompt "The dog chased the ____!" and break down the intermediary steps in generating a completion for this prompt:
  1. Tokenization: The prompt "The dog chased the ____!" is tokenized into individual tokens: [ "The", "dog", "chased", "the", "[MASK]", "!" ]
  2. Language Model Processing: The language model processes these tokens and analyzes the context of the input. It recognizes that there is a missing word represented by "[MASK]", and the sentence is about a dog chasing something.
  3. Context and Probability: The language model calculates probabilities for potential words that can fill the blank. Based on its training data, it considers words that are commonly associated with a dog chasing something, such as "cat," "ball," "frisbee," "rabbit," "squirrel," etc. Each word is assigned a probability score.
  4. Sampling and Beam Search: Let's assume we use sampling for this example. The model randomly selects a word based on the probabilities assigned to each candidate word. For instance, it might select "cat" as the first token to fill the blank.
  5. Best Fit Completion: The model then evaluates the generated completion, "The dog chased the cat!" based on various factors like coherence and context. It may also take into account the probabilities of other words that could have been chosen.
  6. Output: The output is the best fit completion, which is "The dog chased the cat!" in this case.
Keep in mind that if we use beam search instead of sampling, the model will explore multiple possibilities by selecting the top-ranked words at each step. This can lead to a more deterministic output, but it may not be as diverse in some cases.
It's important to note that the actual probabilities and outcomes may vary depending on the specific language model's architecture and the dataset it was trained on. Additionally, the AI language model doesn't have access to any external context beyond the provided prompt and the knowledge within its training data, which goes up until September 2021.
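To make the "beam search" alternative mentioned above concrete, here is a minimal sketch. The two-step conditional probabilities are invented for illustration; a real model scores tens of thousands of candidate tokens at every step.

```python
def beam_search(step_probs, width=2, steps=2):
    # Each beam is a (token_list, cumulative_probability) pair.
    beams = [([], 1.0)]
    for _ in range(steps):
        candidates = []
        for tokens, p in beams:
            last = tokens[-1] if tokens else ""
            # Extend each surviving beam with every possible next token.
            for tok, q in step_probs.get(last, {}).items():
                candidates.append((tokens + [tok], p * q))
        # Keep only the `width` most probable partial sequences.
        beams = sorted(candidates, key=lambda b: b[1], reverse=True)[:width]
    return beams

# Toy conditional probabilities: key = previous token ("" = start),
# value = distribution over next tokens. All numbers are invented.
step_probs = {
    "":    {"cat": 0.5, "red": 0.3, "ball": 0.2},
    "cat": {"!": 0.9, "toy": 0.1},
    "red": {"ball": 0.8, "cat": 0.2},
}

best = beam_search(step_probs)
print(best[0][0])  # ['cat', '!']
```

Unlike sampling, running this twice always yields the same top sequence, which is the "more deterministic but less diverse" trade-off the answer describes.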
 
I think that was kinda what I was saying. Although that is what it is doing, the intelligence itself is based on the structure of language. Certain things go together because they represent a common meaning.

e.g. In the UK, it always ____. Yes, you got it, rains! So the context of the sentence has implicit probabilities of what is likely. When we speak a sentence, we use our intelligence to construct it. When an LLM predicts the next word, it has in effect tapped into our own intelligence to come up with what is likely, since it knows what the intelligence of humans would have said.

Where is my Nobel Prize?
 
I agree, Jon. The probabilities can be arranged based on all the source material used to train the LLM. It can't just be brute force (see this Mike Wolfe article). Since it works in multiple languages, some of which have different sentence constructs, there's a lot behind the kimono, so to speak.

As for your Nobel Prize, see this.
 
Language was constructed by humans to convey information. Inherent in that language is human logic.

I was thinking the same. I was making a cup of tea earlier, and I was constructing a sentence in my head. I couldn't think of the next word, but then several words made themselves apparent. I still couldn't find the word I was looking for, but one of the other words would do.

Then I thought, 'Is that what these large language models do?' I know that computer programs have analyzed language and organized the words into these LLMs, weighing them by meaning, calculating how close they are to other words. I wondered, does my mind do the same thing?

I just finished that sentence, 'the same thing,' and I came to an abrupt halt because I didn't know what to say next. It's not like there is something lined up ready to say, and it's not like what I say appears out of thin air, although in a way, it does. As I am saying this now, I can feel a certain amount of emotion attached to certain words/sentences as I construct them. I don't think it's the construction of the sentence which is important to me, it's the emotion that's attached to it.

Now, don't get me wrong, I'm not getting all emotional and tearful talking about my mate ChatGPT, although I'd be very sad if he went mad, which I think is a distinct possibility. I'm waffling on now, things randomly occurring to me. Shall I speak them, or do you think it's time I shut up?
 
intelligence itself is based on the structure of language

Could it be that the only reason humans are intelligent is because we constructed language, and that formed intelligence? I mean, it's hard to think that ChatGPT isn't intelligent. Look at what it wrote there for Jack - absolutely brilliant! So, is that just the difference between Man and animals - we have a large language vocabulary which promotes intelligence? I mean, you can see it in the LLM.
 
Could it be that the only reason humans are intelligent is because we constructed language, and that formed intelligence?
No. Intelligence is the ability to solve a problem. But language of course helps, as Jon noted:
Language was constructed by humans to convey information.
And following up on that. Humans have the ability to pass on knowledge from one generation to the next and to share that information too. That allows for incremental "intelligence" as we are able to build upon past scientific accomplishments to make new scientific progress. Contrast this with many animals who could be intelligent and discover something over the course of their lifespan but are unable to pass that discovery to their offspring or to share it with others of their species. So it is quite possible that an animal could be just as intelligent as a human, but the only reason we are "advanced" is due to the ability to share knowledge versus animals having to "reinvent the wheel" each generation.

As a side observation there is the issue of biology: are we "advanced" because we have hands?
Animals that could be as intelligent as humans also may not be able to progress due to their biological constraints. Whales and dolphins live in water and don't have manipulative appendages. That makes writing as a source of knowledge a bit difficult.

For some humor.
 
Could it be that the only reason humans are intelligent is because we constructed language, and that formed intelligence? I mean, it's hard to think that ChatGPT isn't intelligent. Look at what it wrote there for Jack - absolutely brilliant! So, is that just the difference between Man and animals - we have a large language vocabulary which promotes intelligence? I mean, you can see it in the LLM.
Animals have different levels of intelligence. For me, I think the fact we have language means that AI can hitch a ride on it. It crunches the numbers of probabilities. But if a species doesn't have language, then AI could perhaps mimic their behaviour, and so display intelligence that way. But no speakie? Well, animals don't speak! Not like a master orator anyway. I did watch a Crow on YouTube the other day keep repeating the word, "What!", but that was the extent of it.
 
Tony,
I just finished that sentence, 'the same thing,' and I came to an abrupt halt because I didn't know what to say next.
I don't think it happens at the sentence level. I think initially there is a concept/idea (much like an entity that has attributes). Depending on the tokenization and ChatGPT's associated attributes, my guess is that's when sentences occur.
You add the next sentence in terms of your thoughts/views of the concept. ChatGPT adds sentences based on its statistical priorities until it feels the concept has been "covered".

From a previous video discussion, I was impressed with Kaku's comment that only humans have the ability to conceptualize time. Yesterday, today, tomorrow, next week... that seemed to be his mark of intelligence, at least to some degree. He also said current AI has the intelligence of a dumb cockroach. Animals have some instincts.
 
Keep your eyes open for a van with a peculiar-looking aerial sticking out of the top of it. I know what to do! Fit a camera in your vehicle with an AI watching out for this novel speed-trap vehicle!
No need to look out for vans, Uncle.
Almost every car made in the last five years, and every single electric car, is a networked computer. It knows where you are, where you've been, and when. Not only that, Musk and others can probably predict where you are going to go. He and others are logging every journey you make and what your maximum and average speeds were. The authorities will pay for that data whenever they need it. Cameras will eventually be abandoned as unnecessary as soon as there are so few of the older cars left as not to matter. Or left in place just to catch those in the older cars.

Your car will advise the police when and where you were speeding, making an illegal turn, going over a double white line, or illegally stopping to drop someone off. Then, when you get home, the fine is on your card. Your insurance company will increase your premium by analysing the data it buys from car companies or the authorities. You are about to be charged into submission.

Even now, in one way or another, you are supplying so much information about yourself, and the frightening thing is that there are people out there who want to know these trivialities. Nothing is private anymore and hasn't been for some time. Some of the increases in taxation are so the government can buy your data.

It's all about money, loads of money, money for nothing.
 
That's a brilliant explanation, thank you Jack.

I note the following:-

Extract:-
Traditional software is designed to operate on data that’s unambiguous.

And

People resolve ambiguities like this based on context, but there are no simple or deterministic rules for doing this.

I heard mention that Elon Musk has adapted the language-model idea for his self-driving software. With regard to the above, I can see it would be a relatively simple process! (Tongue-in-cheek)... But there's no doubt in my mind that we will shortly have full self-driving, and not long after that it will spread across the world. Then there will be massive legal battles, and eventually the right to drive on the roads will be taken away, because it will be deemed unsafe.
 
If there is a major surge in electric vehicle usage, then we can expect a corresponding series of brown-outs/black-outs.
Without electricity, possibly because of climate lockdowns, we'd have limited use of the internet and ChatGPT. Sounds like a vicious circle.
 
