VBA and ChatGPT

We had an interesting conversation last night following John's presentation on VBA and ChatGPT.

For me, the main take-away at this point, subject to having time to review the recording, was the importance of developing a strategy for creating effective prompts to guide the AI assistant you acquire. That is followed closely by the importance of being persistent and diligent about testing and validating responses prior to deployment.

The role of the AI-assisted developer needs to be clearly defined and understood to be effective, and the role of the AI assistant must also be clearly defined and understood.

I generally like the idea of incorporating a responsible AI assistant as a key role in most development projects.
 
I am using the new ChatGPT 5 and boy is it not an improvement. It takes three times as long to process my admittedly complicated requests, but then it flat-out ignores my prompts.

The way I worked in the past was to upload several documents to the chat to "hydrate" it (chat's terminology) with the material it needs to understand what we are doing: a metadata dump of the tables and fields from SQL Server, an ODT document describing what the project is and specifically what we are doing, a coding standards document, and so on. All of this before even entering my first prompt. And so I did that. However, I wanted to do some housekeeping, so my prompt was to have chat do XYZ for me.
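As an aside, for anyone curious what that metadata dump step looks like, here is a minimal sketch using late-bound ADO against the INFORMATION_SCHEMA views. The server name, database name, and output path are placeholders:

Code:
Public Sub DumpSqlMetadata()
    ' Pull table and column definitions from SQL Server and write them to a
    ' tab-delimited text file that can then be uploaded to the chat.
    Dim cn As Object, rs As Object, f As Integer

    Set cn = CreateObject("ADODB.Connection")   ' late bound, no reference needed
    cn.Open "Provider=MSOLEDBSQL;Data Source=MyServer;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"

    Set rs = cn.Execute( _
        "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, IS_NULLABLE " & _
        "FROM INFORMATION_SCHEMA.COLUMNS " & _
        "ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION")

    f = FreeFile
    Open "C:\Temp\MetadataDump.txt" For Output As #f
    Do Until rs.EOF
        Print #f, rs("TABLE_SCHEMA") & "." & rs("TABLE_NAME") & vbTab & _
                  rs("COLUMN_NAME") & vbTab & rs("DATA_TYPE") & vbTab & rs("IS_NULLABLE")
        rs.MoveNext
    Loop
    Close #f

    rs.Close
    cn.Close
End Sub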

Yea... no!!! It was as if it hadn't even read the prompt. It launched into how cool what we had done was (true), and how it could improve this thing and that thing and... several pages of that. Not a word in there anywhere about the prompt I just entered.

Anyway it took me a half hour to wrestle it back onto what I wanted. I actually had to enter prompts specifically saying that it had not answered my previous prompt and THEN it answered that prompt. WEIRD!!! But when it asked if I wanted it to do something related to what I had asked for, and I said yes, it launched back into what it could do to change this thing or that thing. Totally ignored my latest prompt.

This is continuing. I am slowly getting stuff done but only by steering it pointedly back to my last input prompt.

HUGE BUG I THINK

And no way to roll back to ChatGPT 4.
 
100% agree. I've been waiting for nearly 3 1/2 hours for a report. It is very extensive and ambitious, I know, but ChatGPT 5 has told me "another 15-30 minutes" for 2 hours now. I'm not sure if this particular request is really all that complex, or if it's just gone brain dead in the transition.
 
And... the new limitations are potentially huge. 80 exchanges in 3 hours for the $20/month version. I am spending all my prompts just trying to get it to do something useful.
 
Just clarified: For the GPT-5 model on ChatGPT Plus, OpenAI does not publish an explicit message-per-day cap. Instead, they use rolling windows—typically allowing several dozen messages per a few-hour period based on the model and current load.

So:
  • No fixed daily limit on GPT-5 usage.
  • Instead, there’s a rolling hourly/3-hour cap that refreshes over time.
  • Voice conversation uses the same message budget—and is also constrained by session duration (around 15 minutes).
These caps can adjust if usage surges occur.
 
The day of your presentation I was trying to add two commands to my multiselect treeview form: first to show only those nodes with the check box checked, then to show all nodes again. The Microsoft Common Controls treeview's nodes have a Visible property that can be set True or False, making it look easy, except the property has no effect on the tree. It required deleting the unchecked nodes and then restoring them. Saving nodes to a collection before deleting handled the hide, but I was having trouble restoring them in the same order. The solution was to copy the entire tree to a collection and then restore the tree from that when needed.
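Here is a minimal sketch of that copy-to-a-Collection approach (not the exact code I ended up with). It assumes an MSComctlLib.TreeView with Checkboxes = True and unique, non-empty node keys:

Code:
Private mSnapshot As Collection   ' each item: Array(Key, Text, ParentKey, Checked)

Private Sub SaveTree(tvw As MSComctlLib.TreeView)
    ' Snapshot every node in Nodes-collection order
    ' (parents added top-down come before their children).
    Dim nd As MSComctlLib.Node
    Set mSnapshot = New Collection
    For Each nd In tvw.Nodes
        mSnapshot.Add Array(nd.Key, nd.Text, ParentKeyOf(nd), nd.Checked)
    Next nd
End Sub

Private Sub RestoreTree(tvw As MSComctlLib.TreeView, ByVal checkedOnly As Boolean)
    ' Rebuild the tree from the snapshot, either in full or checked nodes only.
    Dim item As Variant
    Dim nd As MSComctlLib.Node
    tvw.Nodes.Clear
    For Each item In mSnapshot
        If item(3) Or Not checkedOnly Then
            If Len(item(2)) > 0 And NodeExists(tvw, CStr(item(2))) Then
                ' Parent already restored: re-attach as its child.
                Set nd = tvw.Nodes.Add(item(2), tvwChild, item(0), item(1))
            Else
                ' Root node, or its parent was filtered out: add at top level.
                Set nd = tvw.Nodes.Add(, , item(0), item(1))
            End If
            nd.Checked = item(3)
        End If
    Next item
End Sub

Private Function ParentKeyOf(nd As MSComctlLib.Node) As String
    On Error Resume Next            ' root nodes have no parent
    ParentKeyOf = nd.Parent.Key
    On Error GoTo 0
End Function

Private Function NodeExists(tvw As MSComctlLib.TreeView, ByVal key As String) As Boolean
    Dim nd As MSComctlLib.Node
    On Error Resume Next
    Set nd = tvw.Nodes(key)
    NodeExists = Not nd Is Nothing
    On Error GoTo 0
End Function

Call SaveTree once while the full tree is loaded, then RestoreTree tvw, True to show only the checked nodes and RestoreTree tvw, False to bring everything back.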

After the presentation I tried Copilot. It took a few attempts with debugging to get a working solution. I posted it in sample databases. My takeaway is that Copilot code will often reference properties that don't exist or won't compile, but after informing Copilot of the error and the line of code, it helped me get to a working function, which I then cleaned up and added error trapping to.

Since I had some time, I also used Copilot to help me with positioning the form. It gave me functions that return the height of the tool bar and the border of the parent form.
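Not the functions Copilot produced, but here is a minimal sketch of one common way to measure a parent window's non-client sizes with the Windows API. It assumes VBA7 (PtrSafe declares) and that you pass in the parent form's hWnd:

Code:
Private Type RECT
    Left As Long
    Top As Long
    Right As Long
    Bottom As Long
End Type

Private Declare PtrSafe Function GetWindowRect Lib "user32" _
    (ByVal hWnd As LongPtr, lpRect As RECT) As Long
Private Declare PtrSafe Function GetClientRect Lib "user32" _
    (ByVal hWnd As LongPtr, lpRect As RECT) As Long

' The difference between the window rectangle and the client rectangle gives
' the width of one resizing border and the height of the caption area (title
' bar plus any menu bar), both in pixels.
Public Sub NonClientSizes(ByVal hWnd As LongPtr, _
                          ByRef captionHeight As Long, ByRef borderWidth As Long)
    Dim win As RECT, cli As RECT
    GetWindowRect hWnd, win
    GetClientRect hWnd, cli
    borderWidth = ((win.Right - win.Left) - (cli.Right - cli.Left)) \ 2
    captionHeight = (win.Bottom - win.Top) - (cli.Bottom - cli.Top) - (2 * borderWidth)
End Sub

Note that a tool bar docked inside the client area isn't included in these numbers; that part still needs its own measurement.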
 
The thing about AI is that while it knows "everything," it often knows things that aren't real. :D :eek:
 
100% agree. I've been waiting for nearly 3 1/2 hours for a report. It is very extensive and ambitious, I know, but ChatGPT 5 has told me "another 15-30 minutes" for 2 hours now. I'm not sure if this particular request is really all that complex, or if it's just gone brain dead in the transition.
Did you ever get your report?
 
I have to shrink my project list. ;)
I purchased a 4 TB SSD. I had two 1 TB SSDs, with my dev D: drive on the first one. It became too large, and my C: drive was running out of room as well. And the saga begins...

In no particular order

I replaced the boot drive with my new SSD. THAT didn't work!!!
Got that squared away. I bought a little program to help me with messing with the partitions since the Windows utility is so useless.
I migrated the D: drive over to the new SSD.
Deleted the D: partition on the boot drive.
Expanded the C: partition to fill the larger space.
Could NOT get the OS to see the expanded space in C:.
Messed around. Messed around some more.
And some more.
Then I realized that I had encrypted the C: drive (VeraCrypt) and it was now a 450 GB encrypted partition that Windows THOUGHT was a real partition. But it was NOT going to let me see the rest of the space.
Decrypted the boot drive. Used the partition manager thingie to expand the C: drive to fill the disk.
Rebooted and voila, I have a 900 GB C: drive. Re-encrypted the drive with VeraCrypt (because I didn't have Windows Pro).
Decided I really needed to upgrade to Pro so I could use BitLocker to encrypt the D: drive.
Bought the upgrade to Pro.
Tried to use BitLocker on the D: drive, only to be informed that BitLocker would NOT encrypt D: unless C: was also encrypted using BitLocker.
Decrypted C: (VeraCrypt).
Started BitLocker encrypting the C: drive.
Started trying to encrypt the D: drive.
Had a hidden modal dialog box re BitLocker which (being hidden) I didn't realize was locking BitLocker up.
Messed around for another hour before deciding to reboot to see if that helped.
It did help because in the process of shutting everything down to reboot I discovered the hidden modal bitlocker dialog.
It has been that kinda day.

In the end, many many hours later... BitLocker is doing its thing "behind the scenes"
and I am slowly regaining my sanity.
 
100% agree. I've been waiting for nearly 3 1/2 hours for a report. It is very extensive and ambitious, I know, but ChatGPT 5 has told me "another 15-30 minutes" for 2 hours now. I'm not sure if this particular request is really all that complex, or if it's just gone brain dead in the transition.
I tried to get it to write a Class Module for me and the code looked impressive enough until I looked at it.

Even with my MEAGER knowledge of Class Modules I was able to see it wouldn't work, and when I mentioned this, it said "Yeah, you're RIGHT! Try this instead..." I'm sorry, I thought Chatty was supposed to be telling ME, not the other way around.

Finally got to where I couldn't see any issues, and after I got a clean compile, tried to run it and got runtime errors on initialization. I finally said screw it and decided to try this the old-fashioned way...

Not only is 5.0 a liar, it SUCKS!!!
 
Not only is 5.0 a liar, it SUCKS!!!

I had a completely different experience with OpenAI's GPT-5...


I fed these instructions into both Grok and OpenAI's GPT-5. Grok could not fix the error, Gemini had already given up, and GPT-5 came up with the solution! I was well impressed.


Although I have noticed it is not working as well lately! It was a few days ago that it cured the above-mentioned problem...

I might have to switch back to Claude Opus; it's expensive, but it seems to get the job done!
 
I started a request early Saturday, the 9th. ChatGPT 5.0 promised to complete it but offered no timeline. After 2+ hours I asked for a progress report. It replied along the lines of "I'm working on it."

Over the next two days, I got a variety of firm deadlines (2-4 hours, 4:00 pm, and others). Each time I checked in, it acted surprised and wanted to know whether it should start right now.

Today, I got snippy and told it to just do the work. It now estimates Thursday, the 14th. I'll not hold my breath.

The pattern has been: solemn promise, failure to respond, surprise that I'm still waiting, a new promise to start right away and finish this time for sure. I submitted a significant request, I know. But this performance is disheartening. It suggests that lying and temporizing are now built into the model.
 
Although I have noticed it is not working as well lately! It was a few days ago that it cured the above-mentioned problem...

I have an account with Grok, which I am using more and more for coding. I asked it to examine the X platform and report on people's views on GPT-5; here is a summary:

Overall Confirmation of Your Feelings
Yes, the X sentiment strongly confirms your and your colleagues' views: GPT-5 is seen as a single "flash in the pan" amid broader letdown, with inconsistent results, poor routing, and no game-changing features justifying the hype. It's "sometimes good" in niches like coding or reduced errors, but "mostly bad" for general use, leading many to stick with or revert to o4/GPT-4o. This mirrors your workflow shift after one brilliant solve, followed by disappointment. If OpenAI iterates on feedback (e.g., better routing or access to older models), it could improve, but current discourse is skeptical. If you'd like me to dive deeper into specific posts or search for more (e.g., Reddit or forums), let me know!
 
