I'm in Love with ChatGPT

The AI "war" is on and getting "hotter". I don't think the Chinese will feel any ethical constraints regarding the usage of AI for their benefit.
There is a side issue to this. In our arrogance, the US complains how the Chinese are "stealing" US technologies. What they forget is that the Chinese are here in the US learning and they are fully capable of creating their own so-called "intellectual property". Given current trends, the US may soon find itself being accused of "stealing" Chinese "intellectual property". After all, Asian students totally outperform non-Asian students in colleges.
 
I don't think the Chinese will feel any ethical constraints about using AI for their benefit.
...nor any of the nation-less hackers who raise havoc from time to time.
 
My Journey with ChatGPT: An Update
A few years ago, I embarked on a grand project: creating a Flutter application that would allow people to communicate via icons, essentially an iconic language. However, due to my inexperience with Flutter and the complexity of developing sets of icons, this project soon petered out.

Reviving the Project with ChatGPT
As I gained an understanding of the capabilities of ChatGPT, I decided to dust off this project and give it another go. I chose to sidestep Flutter at this stage and iron out the problems with an MS Access version instead.

One realization that came to me was the need to simplify the task by reducing the number of required icons. I understood that reasonable communication could be achieved with around 2,000 words. And guess what? I found a list of these 2,000 words on the internet. The word list also identified the nouns, of which there were around 500. This gave me a good start, as nouns should be easier to convert into icons, images, pictures, or GIFs.

The Program Concept
The idea behind the program is simple: press a button displaying an icon. That "press" changes all the images on the buttons to represent the next layer. Each layer should consist of 12 images. To have any chance of this working, I needed to categorize the icons so that I have a category underneath each button.
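To make that concrete, the button-press logic in Access VBA might look something like the sketch below. The table and field names (tblWords, WordText, Category, IconPath) are placeholders of my own for whatever the final design uses, not anything ChatGPT generated:
Code:
' Sketch of the relayering idea: clicking an icon button loads the
' 12 icons of the category beneath it. Lives in the form's module.
Private Sub LoadLayer(categoryName As String)
    Dim rs As DAO.Recordset
    Dim i As Integer
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT WordText, IconPath FROM tblWords " & _
        "WHERE Category = '" & categoryName & "'", dbOpenSnapshot)
    For i = 1 To 12
        If Not rs.EOF Then
            Me("btnIcon" & i).Picture = Nz(rs!IconPath, "")  ' swap the image
            Me("btnIcon" & i).Tag = Nz(rs!WordText, "")      ' remember the word
            rs.MoveNext
        Else
            Me("btnIcon" & i).Picture = ""                   ' blank out unused buttons
        End If
    Next i
    rs.Close
End Sub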

The question then arose: how am I going to do that? Enter ChatGPT. My trusty companion, ChatGPT, began categorizing and sorting the words. Now, I must admit, it's not perfect at this. Some of the choices are questionable, but I'm heading in the right direction. I need to create a demo; perfection at this stage is a luxury I can't afford. I need something I can show people.

Utilizing ChatGPT for VBA Code
I needed a table to store the words in, so I asked ChatGPT. It came up with a suggestion, and on a whim, I asked, "Can you write VBA code that will create that table?" To my surprise, it did. I didn't even have to build the table myself.
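What it produced was along these lines. This is my reconstruction with illustrative field names, not ChatGPT's exact output:
Code:
Sub CreateWordTable()
    ' Builds the word/icon lookup table via DDL.
    ' Field names are placeholders for illustration.
    Dim sql As String
    sql = "CREATE TABLE tblWords (" & _
          "WordID AUTOINCREMENT PRIMARY KEY, " & _
          "WordText TEXT(50), " & _
          "Category TEXT(50), " & _
          "IsNoun YESNO, " & _
          "IconPath TEXT(255))"
    CurrentDb.Execute sql, dbFailOnError
End Sub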

Following this discovery, I realized that ChatGPT could also build a form with VBA automatically. I asked ChatGPT to create a form with a text box at the top, 12 buttons set out in a three-by-four pattern, and a couple of other buttons underneath. After a few attempts, it produced a very nice form with 12 buttons on it. The buttons weren't quite the right size, and the spacing between them was overly large.

I could have asked ChatGPT to correct this, but I didn't. Instead, I went into the code and changed the settings myself. However, I realized that if I were less experienced, I could have asked ChatGPT to place comments in the code indicating where I should change the width, the height, and the positioning of the various buttons. This would have made the process much easier.
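To show what I mean, a builder routine of roughly this shape, with the kind of comments marking the adjustable settings (again a sketch of my own, not ChatGPT's actual code):
Code:
Sub BuildIconForm()
    ' Sketch of a generated form: a text box on top plus 12 command
    ' buttons in a 3-row by 4-column grid. Positions are in twips
    ' (1440 twips = 1 inch).
    Const TW As Long = 1440
    Dim frm As Form
    Dim ctl As Control
    Dim r As Integer, c As Integer
    Dim btnSize As Long, gap As Long
    Dim leftPos As Long, topPos As Long

    btnSize = TW          ' <-- change the button width/height here
    gap = TW \ 8          ' <-- change the spacing between buttons here

    Set frm = CreateForm  ' opens a blank form in design view

    ' Text box across the top for the sentence being built
    Set ctl = CreateControl(frm.Name, acTextBox, acDetail, , , _
                            gap, gap, 4 * (btnSize + gap) - gap, TW \ 2)
    ctl.Name = "txtSentence"

    ' The 12 icon buttons
    For r = 0 To 2
        For c = 0 To 3
            leftPos = gap + c * (btnSize + gap)              ' <-- horizontal position
            topPos = TW \ 2 + 2 * gap + r * (btnSize + gap)  ' <-- vertical position
            Set ctl = CreateControl(frm.Name, acCommandButton, acDetail, , , _
                                    leftPos, topPos, btnSize, btnSize)
            ctl.Name = "btnIcon" & (r * 4 + c + 1)
        Next c
    Next r
End Sub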

Progress and Limitations
I had a really productive day yesterday. I've made immense progress on my project. I used ChatGPT so much that it blocked me for a few hours, stating that I'd gone over my limits, even though I'm on a paid subscription.

That's the end of today's ChatGPT update. I might have more things to share later in the week.
I thought ChatGPT-4 was limited but 3.5 was unlimited?
 
Will ChatGPT survive if the plaintiffs win the class action lawsuit against it? Can AI un-learn things?
 
This is an absurd investigation. On the surface, an AI has no capability to evaluate the validity of the information it is fed. Consider John Kerry's recent claim that he owns no private jets, made to imply that he does not use them; the misleading part is that he may simply have borrowed a private jet from someone else. Given these conflicting "facts", an AI would be caught in a conundrum. To make matters worse, those feeding the AI information could be feeding it (exclusively) one-sided information.

What is the FTC's objective? :unsure:
 
Personally, I probably don't believe most articles and photos now...

I was taken in by the Pope puffer-jacket photo, and I was disappointed by Bellingcat posting fake Trump photos, laughing at them, and in effect saying "this is terrible when others do it, but isn't it funny when we do it"...
 
Healthy skepticism should be the norm. I assume we were being manipulated long before the internet and AI.
 
I think I've perfected Google searches; I'm really good at finding this or that half-remembered bit of something. And then ChatGPT comes along and does it at least five times better.
 
I can't describe my reaction; it was sort of like a sickening incredulousness... My first direct contact with wokeness! It has got to be stopped...
I hate all the wokeness and over-sanitisation of the AI responses. However, it won't be long before models get released without all that bullshit. They are most likely to be open-source models, in my view.

Let's face it, nearly all movies on Netflix involve death, violence, horror and so on. It is what makes it interesting! We would all fall asleep watching a happy clappy bunch of people smiling and talking about how wonderful everything is. We want corruption, intrigue, thrills!

Aside from all that, it seems that you have had some interesting debates with the AI. I was quite impressed with its argumentation, even though I don't agree with its perspective. At least it gave examples like citing Gandhi etc.

A more sinister thought is Netflix ending up providing a never-ending stream of movies where everything is so positive. I think they tried that already and had a backlash over it, if I recall correctly. Or was the backlash about employees only wanting woke content? Perhaps that. They would lose most of their viewership.
 
They would have to ban all historical dramas, war films, serial killer movies, Hitchcock... the list goes on. Ironically, I am sure that if you asked how many people died during WW2, it would provide the answer, instead of saying, "It is negative to talk about such things. Let's talk about more positive things, such as changing the genders of young children through surgery."
 
I agree with most of what you have written, except one part which I will get to. On my daily walk to the local coffee shop, I was contemplating advances in medicine and what factors will go into improving life expectancy. First, there is the explosion in AI, which will advance research massively. Then we have quantum computing, which will eventually help us model cells; classical computing is currently far too slow to do a great job at this. AI will help advance quantum computing, with a synergistic loop going on between the two. As quantum computers get faster, so will the AI models that run on them, and you get runaway intelligence.

Now to the part I disagree with.

We are on the first step of the seriously steep part of the singularity graph. It is a truly exponential step as described by Ray Kurzweil.
Kurzweil has been misleading us when he refers to the "knee of the curve", i.e. where the graph gets steeper. It is as though this steep part of the graph has a higher rate of growth than the beginning of the curve. But this is false! The rate of growth is always the same. If you graph something that grows at, say, 20% per year, the graph always shows a gradually steepening curve, yet the growth is still only 20% per year.

This seems a bit confusing to most. How steep the curve looks depends entirely on how far back the starting point is. If you start 15 years back on a 20-year graph, the last 5 years always appear steep. Yet if you redraw that graph starting 1 year ago, the part of the curve you are currently on looks shallow. The rate of change appears steep only when viewed from the starting point 20 years back; viewed from today, it appears far slower, because you are comparing it to what you are used to. I hope all that made sense!

If you dispute any of the above, just do a little experiment in Excel to find out what the graphs look like based on my examples.
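To save you the typing, here is a little macro that sets up the experiment. Just a sketch; paste it into a standard Excel VBA module, and note that the starting value of 100 is arbitrary:
Code:
Sub GrowthCurveDemo()
    ' Fills the active sheet with 20 years of 20%-per-year growth.
    ' Column C shows that the year-over-year growth never changes,
    ' however steep the charted curve looks.
    Dim yr As Integer
    Dim v As Double
    v = 100#                                   ' arbitrary starting value
    Range("A1:C1").Value = Array("Year", "Value", "YoY growth")
    For yr = 1 To 20
        Cells(yr + 1, 1).Value = yr
        Cells(yr + 1, 2).Value = v
        If yr > 1 Then
            Cells(yr + 1, 3).Value = v / Cells(yr, 2).Value - 1
        End If
        v = v * 1.2                            ' constant 20% growth
    Next yr
    ' Chart column B: the right-hand end always looks like the "knee",
    ' whichever window you plot, while column C sits at a constant 0.2.
End Sub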
 
The underlying study for the article above: How Is ChatGPT's Behavior Changing over Time?
"How do GPT-4 and GPT-3.5's math solving skills evolve over time? As a canonical study, we explore the drifts in these LLMs' ability to figure out whether a given integer is prime."
One would think that solving math problems would be an easy task for an AI, especially since computers are routinely used to solve math problems. So why has the ability of AI to correctly identify a prime number declined?
From the paper:
Why was there such a large difference? One possible explanation is drift in the effects of chain-of-thought prompting. Figure 2(b) gives an illustrative example. To determine whether 17077 is a prime number, GPT-4's March version followed the chain-of-thought instruction very well. It first decomposed the task into four steps: checking if 17077 is even, finding 17077's square root, obtaining all prime numbers less than it, and checking if 17077 is divisible by any of these numbers. Then it executed each step and finally reached the correct answer that 17077 is indeed a prime number. However, the chain-of-thought did not work for the June version: the service did not generate any intermediate steps and simply produced "No". Chain-of-thought's effects had a different drift pattern for GPT-3.5. In March, GPT-3.5 was inclined to generate the answer "No" first and then perform the reasoning steps. Thus, even though the steps and final conclusion ("17077 is a prime number") were correct, its nominal answer was still wrong. On the other hand, the June update seemed to fix this issue: it started by writing the reasoning steps and finally generated the answer "Yes", which was correct. This interesting phenomenon indicates that the same prompting approach, even one as widely adopted as chain-of-thought, could lead to substantially different performance due to LLM drift.
Superficially, it would appear that giving an AI "freedom" to consider how to solve a problem introduces logical ambiguities that can result in inappropriate responses.
As a secondary question, why didn't the AI use a subroutine to test whether a number was prime? I suspect that was beyond the scope of this study. The apparent focus of this study was to evaluate how an AI's "freedom" to think through a problem could be derailed.
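Just to show how trivial such a subroutine would be, here is a sketch in VBA using plain trial division (my own illustration; the study itself involves no such code):
Code:
Function IsPrime(n As Long) As Boolean
    ' Deterministic trial division up to Sqr(n) -- the check the
    ' chain-of-thought steps in the study approximate in prose.
    Dim d As Long
    If n < 2 Then Exit Function
    If n Mod 2 = 0 Then IsPrime = (n = 2): Exit Function
    For d = 3 To CLng(Sqr(n)) Step 2
        If n Mod d = 0 Then Exit Function
    Next d
    IsPrime = True
End Function

' Usage: Debug.Print IsPrime(17077)  ' prints True -- 17077 is prime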
“It’s sort of like when we're teaching human students,” Zuo says. “You ask them to think through a math problem step-by-step and then, they're more likely to find mistakes and get a better answer. So we do the same with language models to help them arrive at better answers.” (Yahoo! article)
 
I thought the discussion of quantum capacity/speed versus current security approaches was interesting. Put quantum computing together with evolving AI and you'll never know who or what to believe. (Not that we have it under control now.)

Update: Tony, I am pretty sure that is not the video I watched. Sorry for the bad link; not sure what is going on, as that's the only one in my history. It was about the speed and pace of quantum computing research, and how the security/encryption/decryption methods we have would be totally vulnerable. It was more about the chaos potential if any and all privileged/secure networks, systems, and files were easily hacked, and newer/faster AI could end up in the hands of hackers, malcontents, and just plain bad guys.

I did just watch the video with Kaku in #148 -- very good. (y)
 