benefit of GPT

Jelari · New member · Joined: Sep 15, 2024 · Messages: 2
I'm new to AI and don't understand how GPT can benefit me.
 
It makes it possible for you to pretend to be smarter without actually learning anything.
 
AI has nothing to do with Access.
ChatGPT solved a lot of my problems in Access. Just ask a question and the answer is on the screen. I also admit that it made some mistakes. Just like a lot of the experts here. I frequently see "I stand corrected". So what's the difference?

It makes it possible for you to pretend to be smarter without actually learning anything.
It's actually true. But what's wrong with that? If the job gets done, does it hurt if I look smart?
I'm an engineer and my main job is design. AI has helped me so many times that I can't even count them. The job is done, the customer is happy, and I feel smart. I don't see anything wrong here, though others may.
We are living in an era where knowledge doesn't count the way it used to. There was a time when people's worth was judged by how much they knew. Not anymore. How many of us read books? And do you think the "book" will still exist in the near future?

Think of it this way. There was a time when the PC had not been invented. Everything had to be done by our own calculations. But today we don't learn a lot of things, because a computer does that part of the job for us. Is it wrong to use a PC to help us do what we cannot?
Does that mean I'm pretending to be smarter because the hard part of the job is on the PC's shoulders and I don't need to learn that part?

What would you be without your PC? What can you do without your phone? Everything here exists to help us get the job done.
Even Access runs on a PC. To me, saying "AI helps us look smarter without learning" is the same as saying "Access helps us manage a database without knowing how".
What's the difference? You use Access without learning how it works behind the scenes. I use AI to help me make Access work better.
AI makes mistakes; Access has its bugs.

At present, a lot of the hospitals where I live are using AI to find the cause of what a patient is suffering from. They have data from different hospitals worldwide and can compare it with the results of the medical tests passed to them, to find the disease and the cure. Do you think using AI in medical science is wrong because it helps the doctors look smart without learning?

Just my two cents.
 
AI, as it appears to be currently implemented, is a statistical process for which proof and/or correctness of the result are sometimes lacking. The current forms of AI will give you what appears to be the statistically preferred answer to your question. If the question is simple, the odds favor your answer being at least close to accurate. If it is not so simple, or if there is room for opinion, your margin of error increases. For political questions (if you can even get an AI to answer them), the margin for error is very high. For medical questions, the more urgent the problem, the less inclined I would be to ask an AI anything.

We have seen AI answers on this forum where the proposed answer SHOULD have been in VBA, but statistically VBA is close to VB6 and several of the popular scripting languages, so a little cross-contamination has been known to creep in.

If you use the AI judiciously as a way to see how a problem MIGHT be approached and just recognize that you may have gotten an answer in some dialect of VB other than "pure" VBA, then you will be less disappointed.
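As a hypothetical illustration of that cross-contamination (SafeDivide is an invented example, not taken from any post here), these are the kinds of VB.NET idioms that often turn up in AI-generated "VBA", next to a pure-VBA sketch of the same idea:
Code:
' VB.NET idioms an AI may mix into a "VBA" answer -- neither compiles in VBA:
'   Dim total As Long = 0            ' inline initializers are VB.NET, not VBA
'   Try ... Catch ex As Exception    ' VBA has no Try/Catch
' A pure-VBA sketch of the same idea:
Public Function SafeDivide(ByVal Numerator As Double, ByVal Denominator As Double) As Double
    On Error GoTo ErrHandler        ' VBA error handling uses On Error, not Try/Catch
    SafeDivide = Numerator / Denominator
    Exit Function
ErrHandler:
    SafeDivide = 0                  ' fall back to 0 on error 11 (division by zero)
End Function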
 
I think it's a matter of blending your knowledge and experience with the AI results. If the AI helps you solve a problem, so much the better.

As you say, AI has helped you, which isn't the same as saying AI has replaced you. Hopefully we don't all get replaced by AI. :D
 
I see AI (specifically ChatGPT and GitHub CoPilot) as tools that help me complete routine programming tasks faster.
It doesn't do the thinking for me, but CoPilot makes pretty good suggestions about what I need next when writing code - not complete methods - but if you use good names that describe specific tasks, it can happen that the suggested code needs little to no change.
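For example (a hypothetical sketch, not actual CoPilot output): given a descriptive name and signature like the one below, the body a tool suggests is often close to what you would have written yourself.
Code:
' Hypothetical example of a self-describing signature; with a name like this,
' the suggested body often needs little or no editing.
Public Function RemoveNonNumericCharacters(ByVal InputText As String) As String
    Dim i As Long
    Dim ch As String
    For i = 1 To Len(InputText)
        ch = Mid$(InputText, i, 1)
        If ch Like "[0-9]" Then
            RemoveNonNumericCharacters = RemoveNonNumericCharacters & ch
        End If
    Next i
End Function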

It is more complicated in Access VBA. I do use an add-in that can insert code, but since the code does not come directly via IntelliSense, I rarely use it. However, it helps with code blocks that I rarely need (such as HTTP requests), as it suggests the code to me and I don't have to think about what I need to write. Still, it is always necessary to check the code for correctness.
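For context, an HTTP request is exactly the kind of rarely-needed boilerplate meant here. A minimal sketch in VBA (assuming MSXML2 is available, as it is on stock Windows; HttpGet is an invented name) might look like this:
Code:
' Minimal sketch of a synchronous HTTP GET in VBA via late binding.
Public Function HttpGet(ByVal Url As String) As String
    Dim http As Object
    Set http = CreateObject("MSXML2.XMLHTTP.6.0")
    http.Open "GET", Url, False          ' False = synchronous request
    http.Send
    If http.Status = 200 Then
        HttpGet = http.responseText      ' response body as text on success
    Else
        Err.Raise vbObjectError + 513, "HttpGet", _
                  "HTTP request failed with status " & http.Status
    End If
End Function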

Another example for VBA:
I am currently working on extending my unit-test add-in with AI so that it creates test procedures, including test values.
That works reasonably well, but there is still room for improvement. The prompt that is sent to the ChatGPT API still needs some improvement on my part.

Code example:
I let ChatGPT create a test method with test parameters for the following interface:
Code:
Public Function BuildCriteria(ByVal FieldName As String, ByVal FieldDataType As SqlFieldDataType, _
                              ByVal RelationalOperator As SqlRelationalOperators, _
                              ByVal FilterValue As Variant, _
                     Optional ByVal FilterValue2 As Variant = Null, _
                     Optional ByVal IgnoreValue As Variant, _
                     Optional ByVal DisableIgnoreNullValue As Boolean = False) As String
    ' ... (Note: The entire procedure code was transmitted via the API)
End Function
The result:
Code:
'AccUnit:Row("CustomerID", SQL_Text, SQL_Equal, "1001", Null, Null, "CustomerID = '1001'").Name("TestEqualText")
'AccUnit:Row("OrderAmount", SQL_Numeric, SQL_GreaterThan, 500, Null, Null, "OrderAmount > 500").Name("TestGreaterThanNumeric")
'AccUnit:Row("OrderDate", SQL_Date, SQL_Between, #1/1/2020#, #12/31/2020#, Null, "OrderDate BETWEEN #1/1/2020# AND #12/31/2020#").Name("TestBetweenDates")
'AccUnit:Row("ProductName", SQL_Text, SQL_Like, "%milk%", Null, Null, "ProductName LIKE '%milk%'").Name("TestLikeText")
Public Sub TestBuildCriteria(ByVal FieldName As String, ByVal FieldDataType As SqlFieldDataType, _
                             ByVal RelationalOperator As SqlRelationalOperators, ByVal FilterValue As Variant, _
                             ByVal FilterValue2 As Variant, ByVal IgnoreValue As Variant, ByVal Expected As String)
    ' Arrange
    Dim sqlTools As New SqlTools
 
    ' Act
    Dim result As String
    result = sqlTools.BuildCriteria(FieldName, FieldDataType, RelationalOperator, FilterValue, FilterValue2, IgnoreValue)
 
    ' Assert
    Assert.That(result, Iz.EqualTo(Expected))
End Sub
This result is good, but that is not always the case.
In my opinion, the problem is that GPT-4o (which I use) probably has little training data for unit tests in VBA and therefore tends to imitate other languages.
It will probably be similar for other, more complex tasks.

AI + Access beginners:
I can't say whether AI helps an Access beginner with little programming experience. I suspect a suboptimal result will often be reached quickly, because the question for ChatGPT & Co. is probably not formulated precisely enough.
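As a hypothetical illustration of what "precisely enough" means (both prompts are invented for this example):
Code:
Vague:   "My form doesn't work. Fix it."
Precise: "In Access VBA, write a BeforeUpdate event procedure for the form
          frmOrders that cancels the update and shows a message when the
          control txtQuantity is Null or less than 1."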
 
While my riposte was, at least in part, made in jest, there is a kernel of truth to it. I used the term "pretend" to convey the idea that many people do seem to think that actually knowing something is irrelevant. All that matters is that there is an answer which can be copied and pasted into some other location.

I agree that using tools differentiates mankind from almost every other animal, and the sophistication of our tools constantly increases. The danger of AI, in my opinion, is that some foolish people mistake the AI-generated script for something it is not. An AI-scripted response can appear to be technically valid yet be subtly flawed. That's why I think it is wrong to pretend to be smart because you used a tool.

The analogy I would use to explain this point is opening a toolbox, extracting a wrench, and using it to drive in a nail. Sure, the tool has a flat, poundy surface that seems to get the job done. Unfortunately, any understanding of the process is tossed aside. The tool is misused, and the user knows no better.

Use AI as a tool if you wish, but please don't pretend it makes anyone really any smarter; it is nothing more than a tool. And the key to using any tool is to understand what the tool is designed for, how to use it, and above all, how not to use it.
 
After I got my doctorate, I found that people had all sorts of expectations that caused them to ask me some of the damnedest questions I had ever heard. But I also learned the solution to that problem. It doesn't matter if you actually know a particular answer to a peculiar question as long as you can say, "I don't know the answer but I know where to find it for you." Then, of course, you have to actually FIND their farshlugginer answer. But the key is knowing how to look up things. Which, oddly enough, IS a skill I had to learn when doing my doctoral research. Sometimes the skill isn't knowing the answer but knowing the right question.

In light of that explanation, ChatGPT may help you find the right answer by asking the right question.
 
I think we humans are afraid of new things.
There have been many inventions and technologies throughout history that were initially met with skepticism, fear, or resistance but are now accepted as part of daily life. People were against electricity because of its potential dangers, against Graham Bell because of concerns about privacy, against Edward Jenner because of his smallpox vaccination method, against railroads and trains because of their speed, against credit cards because of the possibility of identity theft, against the microwave oven because of radiation, and against organ transplants because of the risk of rejection, the long-term survival of patients, and the moral implications of taking organs from one person to save another.

They were even against computers for being too complicated for the average person to use, and for making people dependent on machines for tasks that were traditionally done manually.
And here we are. None of us can live a day without a computer, yet we all think we are smart.

When I was in college, we had to learn hundreds of formulas for running flaw simulations and destructive tests against our designs, and we spent hours with calculators to do a single test.
Today, no one needs to remember all those physics formulas, because a simple PC program does it in a matter of seconds. Should I consider myself smarter than today's graduates because I know how to calculate? And believe they are not smart because they rely on a PC (a tool)?

Use AI as a tool if you wish, but please don't pretend it makes anyone really any smarter;
Unfortunately I do. My dictionary says smart = clever, and clever means mentally quick and resourceful. If I have the answer to the problem, no matter whether I knew it or searched for it, I consider myself smart, and I can't help it. Because I actually solved the problem.
 
I remember when the most reasonable response to a certain kind of question was, "Here, let me Google that for you."

The equivalent today is "Here, let me ask ChatGPT about that for you."

You can go right ahead and consider yourself clever because you know how to use ChatGPT.
 
I frequently see "I stand corrected". So what's the difference?
Because the AI is a computer, people tend to believe it without question. When you are asking a programming question, what can go wrong? Aside from wasting hours going down a rabbit hole, probably nothing. If you actually test the code you are given very carefully, and do not just assume it to be correct, you will be OK. The real danger with AI is when you do not have the background or subject-matter knowledge to recognize a flaw in the answer. It is especially dangerous when the bias of the programmers or designers comes through. When you ask why you should vote for Kamala, you get wine and flowers and violins, and they mention her "experience" as a VP. When you ask about Trump, it used to say that it couldn't weigh in on political questions. Now it gives some tepid response, but it doesn't mention his experience as President. So, Kamala's do-nothing VP job gets more importance than Trump's actual experience as President for four years, making life-and-death decisions every single day. And Kamala's failure at everything, including keeping her staff (94% attrition rate over 3.5 years), is lauded.
 
It is especially dangerous when the bias of the programmers or designers comes through.

Your post proves that point perfectly. That is your bias.
 
Your post proves that point perfectly. That is your bias.

Yes, but Colin... Pat wasn't wrong.

You have to remember that the management of AI facilities more often than not is liberal in orientation, so their pet project will be liberal in orientation. It's a matter of what they choose to use when they feed/train the AI. It makes its responses based on the statistics of what it was fed. Therefore, if it is fed a biased pile of ... source material, it will produce statistically biased answers. That is the nature of this beast.
 
Yes, but Colin... Pat wasn't wrong.
That may be your opinion, but it isn't fact.
You have to remember that the management of AI facilities more often than not is liberal in orientation, so their pet project will be liberal in orientation.
That is also based on your bias / opinion. What evidence do you have for that assertion?
Bringing politics into a discussion of this type is unhelpful.
 
When you ask why you should vote for Kamala, you get wine and flowers and violins, and they mention her "experience" as a VP. When you ask about Trump, it used to say that it couldn't weigh in on political questions. Now it gives some tepid response, but it doesn't mention his experience as President. So, Kamala's do-nothing VP job gets more importance than Trump's actual experience as President for four years, making life-and-death decisions every single day.

Chatty used to be very woke... It's a lot better now! It still defaults to being woke, but you can argue your case.

I argued with Chatty about transgender men as women infiltrating women's sports... It was adamant that it was for the best.

But I eventually made some good points as to why men should not be allowed in women's spaces, and it acknowledged there were issues....

Why does the corpus of data it's trained on make it woke?
 
Your post proves that point perfectly. That is your bias
Wow, so you think it is OK for the AI to talk about Kamala as if she were the second coming (she isn't) but say it was not allowed to say anything about Trump? And when that hit the fan and they "corrected" their algorithm, they chose to completely ignore four years of on-the-job training and say that 3.5 years of failure was important? You probably don't understand American politics, which is fine. You don't have to. But the VP is almost always treated as the "spare"; we don't have an heir. He/she/it is supposed to keep a low profile and especially not to criticize the President. Biden actually hated Kamala, though, so he kept giving her public tasks that he knew she would fail at. It was pretty amusing to watch. Biden had a great sense of humor, and some of it still slips out.

I gave you a specific example of bias. And sadly, it is political, but that is where the true danger of AI rests. This is like "race": to some people, everything they disagree with is "racist"; to others, every statement that even mentions politicians is "political". I suppose bad code could make planes fall out of the sky if a stupid programmer took some code as correct and didn't bother to test it. But political bias run amok leads us to Hitler. And since Trump wasn't Hitler the first time, there is no reason to keep saying that he will be Hitler the second time.

AIs are controlled by biased people. I would not approve of an AI having a conservative bias, so you may accuse me of bias, but at least I understand what bias is and I understand that we must combat it every single day. You cannot just believe what the talking heads tell you 24/7. Repeating a lie doesn't make it true.
That is also based on your bias / opinion. What evidence do you have for that assertion?
What evidence do you have that AIs don't acquire the bias of their developers?
 
@Pat Hartman @The_Doc_Man
For god's sake, stop making every post political. You have a lot of political threads, and you can say anything you want there.
Here, we are talking about AI in programming and science. The thread is in the Access forum, not in the water cooler. You know the difference.

Please stop inserting your political issues into each and every post.

You are admins here; how come you break the rules?
After a year or two here, it's really sad to see this site heading toward being a political site rather than a technical one.

so their pet project will be liberal in orientation.
Wow, so you think it is OK for the AI to talk about Kamala as if she were the second coming (she isn't) but say they were not allowed to say anything about Trump.
 
