VBA and ChatGPT

Yeah, but YOU are authoring the presentation article.
 
It's ironic that the whole point of his upcoming presentation is that JWC intends to explain how he used AI to develop a project.

The post title for this thread itself says it all:

VBA and ChatGPT

And yet here we are, insisting that he MUST acknowledge he used AI to complete the project, as if that were not the whole point of the presentation?

I get it, all of the LLMs in use today depend on the availability of massive data coming from the work of billions of people. There are a lot of questions around the ownership of material produced with the assistance of those LLMs.

I could see a legitimate concern if John was touting the resulting VBA while trying to hide the fact that it was created largely with the assistance of ChatGPT. But that is the exact opposite of what is really going on!

I strongly suggest that those concerns are misdirected here. John explicitly states his presentation is all about using AI to assist in the development of code. What else do you need to see to get that?

Anyway, it may turn out that the controversy will help pique interest and we'll have a good turnout for the live presentation. I guess it's not bad after all.
 
I strongly suggest that those concerns are misdirected here. John explicitly states his presentation is all about using AI to assist in the development of code. What else do you need to see to get that?
Apologies, that was overthinking on my part.
 
It's ironic that the whole point of his upcoming presentation is that JWC intends to explain how he used AI to develop a project.
I suppose it’s inevitable that some people will feel threatened by large language models… and ironically, the more of an expert you are, the more threatened you’ll likely be.

It’s amusing in a way — because the more cerebral among us would normally condemn the old Luddite mentality: smashing machines out of fear they’d take away jobs. But now that a new kind of machine is encroaching not on manual labor, but on the intellectual, creative work of experts — work they’ve spent decades refining — suddenly it’s the experts who start behaving like Luddites.

That’s what’s playing out here, I think. The objection isn’t really about attribution or intellectual honesty — not when John’s entire presentation is about how he used AI to assist his development process. It’s more about discomfort. A sense of territory being crossed.

It’s going to be a fascinating decade ahead.
 
and ironically, the more of an expert you are, the more threatened you’ll likely be.

But here's the thing — those same experts are still needed, at least for the next few years. Maybe longer.

That’s because a lot of what they know isn’t written down anywhere. It's the kind of hands-on, fiddly knowledge you pick up over time. I believe the correct term for it is tacit knowledge, and it applies especially to something like Microsoft Access.

Large language models are brilliant at scraping the surface — all the stuff that’s been written and indexed. But the real gold is buried in the bits that never made it to a blog post. And unless the AI is set up to learn from expert interaction, it won’t get there on its own.

Funny enough, this reminds me of when computers first started becoming a big deal. My brother-in-law was a programmer back then (earning top money) — real punch-card era stuff. But the clever part was, he got out early as the money dwindled, and became a manager of programmers (earning top money again). Not because he wanted to escape, but because there weren’t enough experienced people around to run the teams. He saw it coming.

And I think we’re at a similar point now. The world’s waking up to AI, but it’ll be the experts — the ones who work with the tools, not against them — who’ll shape what comes next. Not just because they know the old tricks, but because they’re the ones who can show AI where the real knowledge lives.
 
At this point, ChatGPT is not particularly threatening. It can't do anything out in the real world - yet. I am doing a big project in Python and SQL Server. Chat writes code and uses an almost comical variety of ways to get that code to me. Sometimes it places it in a .py file and presents it to me to download. Sometimes it writes the code directly in the chat window, usually but not always in a "canvas" - with comments about why it is doing whatever it is doing. Sometimes it is a line or two, sometimes it is an entire .py file. Sometimes it places the comments in the chat window and opens a big canvas and places the code there. Sometimes it sloooooowwwwwwllllllyyyyy overwrites everything in the canvas in order to place some new code at the bottom.

It is damned annoying, actually. Some of the time it asks if I want it to place it all into a .py file and present it.

I have written a "coding standards" document which I used to paste into the chat before every session. I then discovered that, because it is short, it can just be placed into the project instructions window. I also had Chat generate metadata stored procedures which I can run to get table / field / data type / index information.

This, by the way, is absolutely critical to give back to Chat. Even though Chat (mostly) provides me the scripts to build out these things, once a chat session ends, it "forgets" the names of things. It then starts "making up" table and field names, which mostly look correct, until I try to run the next piece of the puzzle - often a Python script trying to use these tables - whereupon its made-up names cause errors. I have learned to just run the metadata SP, get (scrape) the metadata out of SQL Server, and paste it into the chat, and voila... Chat acknowledges its errors and starts using the metadata.
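
For what it's worth, here is a minimal sketch of that scrape-and-paste step. The thread never shows what sp_GetMetadataWithPurpose contains, so a plain INFORMATION_SCHEMA query stands in for it here; the connection string is a placeholder:

```python
# Minimal sketch of "scrape the metadata, paste it into the chat".
# sp_GetMetadataWithPurpose is not shown in the thread, so INFORMATION_SCHEMA
# stands in here. The connection string is a placeholder.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyProject;Trusted_Connection=yes;"
)

def dump_schema_for_chat() -> str:
    """Return table/column/type info as plain text, ready to paste into a chat."""
    sql = """
        SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE,
               COALESCE(CHARACTER_MAXIMUM_LENGTH, 0) AS MAX_LEN
        FROM INFORMATION_SCHEMA.COLUMNS
        ORDER BY TABLE_NAME, ORDINAL_POSITION
    """
    lines = []
    with pyodbc.connect(CONN_STR) as conn:
        for table, column, dtype, max_len in conn.cursor().execute(sql):
            if max_len == -1:
                suffix = "(max)"          # varchar(max) and friends report -1
            elif max_len:
                suffix = f"({max_len})"
            else:
                suffix = ""
            lines.append(f"{table}.{column}: {dtype}{suffix}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Paste the output into the chat so the model stops inventing names.
    print(dump_schema_for_chat())
```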

It is so bad that this is part of my coding standards instructions:

Python Coding Standards

  • At the top of every .py script, include a full explanation of what the script does.
  • Add print() calls at the top, at the very bottom, and inside every loop until we are fully debugged.
  • Wrap all database access in try/except blocks with full traceback (see the sketch after this list).
  • Hard-stop on core logic errors using sys.exit(1) or raise SystemExit.
  • Catch and skip expected errors, like duplicate key violations, using error codes.
  • Batch-process records when possible instead of going row by row.
  • Use joblib or pickle to save/load model files in a dedicated models directory.
  • Use logging for status, errors, and progress, not just print().
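
For what it's worth, here is roughly what several of those rules look like in one place. This is a minimal sketch assuming SQL Server over pyodbc; the table, columns, and connection string are invented, and 2627 is SQL Server's error number for a duplicate key:

```python
# Sketch combining several of the standards above: logging instead of bare
# print(), try/except with full traceback around database access, skipping
# duplicate-key rows by error code, batching, and a hard stop on core errors.
# The table, columns, and connection string are illustrative only.
import logging
import sys
import traceback

import pyodbc

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger(__name__)

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=MyProject;Trusted_Connection=yes;")
BATCH_SIZE = 500

def insert_rows(rows: list) -> None:
    """Insert rows in batches, skipping duplicate keys (SQL Server error 2627)."""
    try:
        with pyodbc.connect(CONN_STR) as conn:
            cur = conn.cursor()
            for i in range(0, len(rows), BATCH_SIZE):
                batch = rows[i:i + BATCH_SIZE]
                for row in batch:
                    try:
                        cur.execute(
                            "INSERT INTO dbo.Widget (WidgetID, Name) VALUES (?, ?)",
                            row)
                    except pyodbc.IntegrityError as e:
                        if "2627" in str(e):   # expected: duplicate key, skip it
                            log.info("Skipping duplicate key: %s", row[0])
                        else:
                            raise              # anything else is a real problem
                conn.commit()
                log.info("Committed batch ending at row %d", i + len(batch))
    except pyodbc.Error:
        log.error("Database access failed:\n%s", traceback.format_exc())
        sys.exit(1)                            # hard stop on core logic errors
```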

SQL Server Standards

Never do anything without knowing the metadata:

  • Run sp_GetMetadataWithPurpose to get the existing table metadata.
  • Never guess: verify actual table and field names before writing queries.
  • Use sp_addextendedproperty to document tables, fields, and indexes. If you build something, DOCUMENT IT, right then and there: add a Purpose property to anything new, whether table, field, or index (see the sketch after this list).
  • Always include a table-level description using SQL extended properties.
  • Views and stored procedures must be commented before the AS block.
  • Use clear, consistent naming conventions for all objects.
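
And for the "DOCUMENT IT" rule, a minimal sketch of attaching a Purpose property. sp_addextendedproperty is SQL Server's built-in procedure for this; the object names and descriptions below are invented, and it is wrapped in pyodbc just to stay in the same language as the rest of the project:

```python
# Sketch of "DOCUMENT IT, right then and there": attach a Purpose extended
# property to a new table and to one of its columns. sp_addextendedproperty
# is SQL Server's built-in mechanism; the object names here are illustrative.
import pyodbc

CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=localhost;DATABASE=MyProject;Trusted_Connection=yes;")

DOCUMENT_TABLE = """
EXEC sp_addextendedproperty
     @name = N'Purpose', @value = N'Staging table for nightly widget imports',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'WidgetStaging';
"""

DOCUMENT_COLUMN = """
EXEC sp_addextendedproperty
     @name = N'Purpose', @value = N'Natural key from the upstream feed',
     @level0type = N'SCHEMA', @level0name = N'dbo',
     @level1type = N'TABLE',  @level1name = N'WidgetStaging',
     @level2type = N'COLUMN', @level2name = N'FeedKey';
"""

with pyodbc.connect(CONN_STR) as conn:
    cur = conn.cursor()
    cur.execute(DOCUMENT_TABLE)   # document the table itself
    cur.execute(DOCUMENT_COLUMN)  # document a field on it
    conn.commit()
```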

BTW there is a somewhat limited amount of text that can go in there.

From Chat:

The project “instructions” window (also called the system message or model instructions) can hold approximately:
32,000 characters (32 KB)
That includes:
  • Your custom coding standards
  • Metadata like project goals, preferences, and schema notes
  • All system instructions loaded into the session at startup

As Chat intimates, I paste a handful of documents in there - the contents, not the documents themselves. The problem is that the documents change, and I occasionally have to go in and edit that memory area. But once they are in there, Chat (mostly) actually uses them. And when it doesn't, I can simply ask "Did you use the metadata?" or something similar.
 
