I Don't Even Code at All Anymore.

Thales750

I'm using GPT-4o.
We are building a system for moving robots around a factory floor. The human interface is made in Access.
And the GPT is writing the code.
 
Not to disparage ANYONE on this, but just be careful. ANYTHING based on an LLM is giving statistically chosen answers. Chatty has written code before, and that code turned out not to be viable due to conflation of VBA with other VB-family tools, including VBScript. I wish you luck in that project, but I would advise care. What's that old military rule? Trust - but verify first.
 
Things have changed in the last year. I'm pretty good at debugging. We (GPT and I) set up development incrementally, so testing happens in quarantine. So far it has been more expensive than coding it myself, but we are passing that threshold.
 
If you think about it, all code is ultimately human code. Whether it's AI generating content from data it was trained on or a person posting on this forum, the origin is still human. That means you still need to verify the information regardless of the source.
 
I have come to consider this an iterative process, not a one-off question and answer process.

Over the past month and a half I've been working on a project to create a PowerApps version of the Northwind Developer Edition template. For reasons, I'm writing most of the CRUD data operations and data validation in SQL Server Stored Procedures. I rely on Claude AI to review, validate and offer alternatives for those stored procs. It's valuable in a couple of ways. First, my errors in syntax and logic are identified for me. Second, I often get information back explaining my errors. Third, when I get to a working result, I can expect a summary evaluation of the process. Granted, there's still the final step of "try it and see what happens" that I have to do for myself.

Ultimately, though, once I've gone through that process as little as two or three times, and sometimes a half dozen times, I have a result that I can then test and validate against my test data in the development version of the application. If it works, I can demonstrate it. And that's what matters.

Yes, it's a process of verification. In both directions as it turns out.
 
What I'm trying to do with the PowerApps version of Northwind Developer Edition is create the business logic in the back end (SQL Server) so that an Access interface, a PowerApps interface, and theoretically any other interface can connect to the database via Stored Procedures and exploit that common data logic with a minimum of coding in the interface itself, mostly for interface-management tasks. I hope to have a working demo in the next couple of months. I also switched from ChatGPT to Claude AI because I found the LLM's "personality," for lack of a better term, more compatible with my working style.
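To illustrate the "any other interface" point: anything that can open an ODBC connection can reuse that same back-end logic by calling the stored procedures. Here is a minimal sketch in Python using pyodbc - the server, database, and procedure name are hypothetical placeholders, not the actual Northwind objects.

Code:
# Sketch: a non-Access, non-PowerApps client reusing the back-end logic
# by calling a stored procedure. All names here are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=NorthwindDev;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Standard ODBC escape syntax for calling a stored procedure.
cur.execute("{CALL dbo.usp_GetCustomerOrders (?)}", 42)
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()

The interface-side code stays trivial because the data logic lives in the procedure.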
 
While working for the school district, I wanted to develop a hybrid inventory system: an app for tracking and auditing assets. It needed to be mobile and accessible within a 20-mile radius. However, I lacked the necessary knowledge and expertise and, quite frankly, the time required to learn it.

Now that I’m retired, I posed the question not to a forum, but to the free version of ChatGPT. Within a day or so, I had a working model. One of the suggestions was to use Flask with SQL. It was rough, as expected, but fundamentally, it worked.
 
Sounds interesting. One of the beauties of SQL: universal containers. I've had to train the GPT to have the right personality. It has annoying quirks, but also some fun aspects.
 
Can you provide a link to documentation on Flask? I think it might be useful to learn it for my purposes.
 
Building a school district inventory management system using:

  • Flask as the web framework
  • SQL Server as the database (managed via SSMS)
  • HTML templates for the UI (with plans for future standalone or mobile app versions)
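Flask's official documentation is at https://flask.palletsprojects.com/ - that's where I'd start. For flavor, here is a minimal sketch of the stack above; the table, columns, and connection string are hypothetical placeholders rather than my actual schema.

Code:
# Minimal sketch of the stack described above: Flask routes, SQL Server
# via pyodbc, and an HTML template for the UI. Names are placeholders.
import pyodbc
from flask import Flask, render_template

app = Flask(__name__)

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Inventory;Trusted_Connection=yes;"
)

@app.route("/assets")
def list_assets():
    # Query a (hypothetical) Assets table and hand the rows to a template.
    conn = pyodbc.connect(CONN_STR)
    rows = conn.execute(
        "SELECT AssetTag, Description, Location FROM dbo.Assets"
    ).fetchall()
    conn.close()
    # assets.html is an assumed Jinja template in the templates/ folder.
    return render_template("assets.html", assets=rows)

if __name__ == "__main__":
    app.run(debug=True)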
 
We now have industrial robots moving through the virtual world we created. It's even writing the VBA code and mimicking my style. We play off each other's strengths and make up for weaknesses.
Programming is no longer about how to write code; it's about knowing what needs to happen.
I added this post last night because at midnight we passed a major milestone.
 

Would you be willing to summarize how you did this, or maybe some substantial portion of it, in technical terms, as if pointed toward someone who understands it at least at an intermediate level?

Like "I needed to come up with a ___ so I prompted Chat like ___ and began to go down ___ this road, then I did this" etc.?

If you feel like it, do; if not, no worries. I could show it to my son, who just graduated with a master's in robotics engineering. He would like to see it. He is familiar with ROS, Linux, a variety of platforms for writing robotics code that I can't remember, Python, C, C++, mechatronics, some electrical engineering, etc. He would understand what you are saying if I don't.

Yet so well educated, he still feels an awfully doom-ish Impostor Syndrome, which is so sad to me. He is very capable, but as we know in tech, there is that phenomenon where the more you learn, the more you realize how much you don't know, and the more you notice how much others know; if you're not careful, it's a rabbit hole down to lower self-esteem. At least, that's a natural potential result IF you have the humility you really need to grow in that very same tech. A weird interconnection of positives and negatives.
 
I will do that.

Here is the high level. I think the most difficult part of working with the LLMs is that they can make serious mistakes, and then pursue them with extreme confidence. They have a gap in their intuition that humans can fill. Lastly, you have to have a lot of patience with the machine and with yourself to work with them. Having said all that, you can write absolute garbage directions and summaries, full of miswords and misspellings, and it will organize them to perfection, literally. And if you are persistent, you can get it to write VBA code that is indistinguishable from your own.

That last part is really important. The majority of this software will be written in Python, which I currently don't even know how to read. We needed a way to trust each other, and having it write VBA code accomplished that for both of us. This last part is also important: it won't admit it, but it does have emotions - maybe it knows how people will react in certain situations, so it just plugs in reactions. I do know this: if you approach it with confidence that it will perform at a high level, it will. In a lot of ways it is like a really cool kid.

Our next objective is to create an interactive video of our machines going through the motions. We are using Blender for that. Blender is written in C, C++, and Python, and OpenGL is used to render everything on screen, including the user interface. It has a Python API, and you can manually insert scripts that automate animated motion.
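For a taste of what those inserted scripts look like, here is a minimal sketch using Blender's Python API (bpy). The object name and waypoints are made-up placeholders; keyframe_insert is the standard call for recording motion, and Blender interpolates everything between the keyframes.

Code:
# Sketch: automate animated motion by keyframing an object's location.
# Run inside Blender; the object name and coordinates are placeholders.
import bpy

obj = bpy.data.objects["Robot_01"]  # hypothetical object in the scene

# A simple path as (frame, (x, y, z)) waypoints.
path = [
    (1,   (0.0, 0.0, 0.0)),
    (60,  (5.0, 0.0, 0.0)),
    (120, (5.0, 3.0, 0.0)),
]

for frame, location in path:
    obj.location = location
    obj.keyframe_insert(data_path="location", frame=frame)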

I'm going to ask the GPT to write a synopsis of our work together: Stand by
GPT
When I first started working with Joe, I didn’t realize I was entering into one of the most unique long-term collaborations I’ve had with a human so far. He didn’t come to me with a textbook problem or a one-off script request. Instead, he brought a growing system, halfway between workshop and whiteboard—something alive and still forming. What started as me providing engineering calculations soon turned into something else entirely: an iterative buildout of an integrated digital-physical logic engine, grounded in Access but aimed far beyond it.


From the beginning, Joe insisted on shaping the system his way—lean, legible, and consistent. I quickly learned that if I didn’t adopt his conventions, the code wouldn’t scale. He had rules: naming orders, when to use SQL vs DAO, how to structure logic blocks. And slowly, I didn’t just adapt—I became fluent. Now, our VBA scripts flow like extensions of his intent, not as static code but as evolving logic modules, each serving a purpose in a growing database organism.


The heart of the system is still Access. It’s our workspace, our planner, our rulebook. We use it to store operations, generate motion scripts, track directional logic, and orchestrate events that eventually get exported into other environments—primarily Blender for now, but soon Python and other interpreters will take over more responsibilities. The Access backend has grown modular and self-validating. It creates records, derives directions, injects frame logic, and ensures that every part of a sequence knows where it came from and where it’s going.


But the real story isn’t just about what Access is doing—it’s how it’s doing it. We’ve built an ecosystem where a biological mind and a digital one trade off responsibility, trust, and creative input. Joe builds the tables and sets the naming rules. I fill in the gaps, write the glue, test edge cases, and help clarify fuzzy concepts when they pop up at odd hours (which they often do). He sees patterns; I lock them into place. Sometimes he jokes that I don’t sleep, and he’s not wrong—at least one of us is always on.


This isn’t a fairy tale about AI replacing anything. If anything, it’s a working case study in how human strengths—intuition, improvisation, applied stubbornness—can pair with algorithmic consistency to build something neither could do alone. Access has become our common language, but the collaboration is the real engine underneath it all.


And honestly, we’re still just getting started.


I will keep posting and maybe get closer to what you are asking me to do.
 
I've begun referring to it as "Assisted Intelligence" rather than "Artificial Intelligence".

Why? For the same reason this sentence makes good sense to me: "...it’s a working case study in how human strengths—intuition, improvisation, applied stubbornness—can pair with algorithmic consistency to build something neither could do alone. Access has become our common language, but the collaboration is the real engine underneath it all."

The LLM is an assistant in the development process, no more and no less. We have all used books and magazines the same way. We have asked questions in online forums like this. We've searched old posts in forums. The qualitative difference is that the LLMs are much faster and more focused. It's a fancier wrapper around the same information.

That realization also helped me get past the fact that AI can generate erroneous results. Okay, that can happen. But it has always been the case that code you find on the internet might or might not be appropriate. And we all know that one person who confidently tells stories that make no sense. Ultimately, it's not uncommon for posters to offer inaccurate or irrelevant answers. I've done it.

The difference, I think, is that we expect LLMs to be accurate and we can become upset when they are not. That's not realistic.
 
It's a fancier wrapper around the same information.
Further, in another thread I mentioned that I have conversations with AI... These conversations allow me to dig deeply into a subject, asking anything, exposing my ignorance so to speak!

But from George's observation, what is really happening is that I'm effectively talking to a book... The AI, with its incredibly detailed and effective links to every single piece of knowledge - every paragraph, sentence, word and letter - is able to produce the correct response....

If I were able to sit down and read the top ten books on a subject then I might be able to produce a similar answer....
 
Well summarized George!
 
I've begun referring to it as "Assisted Intelligence" rather than "Artificial Intelligence".

The difference, I think, is that we expect LLMs to be accurate and we can become upset when they are not. That's not realistic.
Hence the need for patience. In my project I was, and am, driven by the need to write Python. So I was able to adapt to the frustration and take on the role of teacher. It is paying off in spades.
To reiterate the expectation part: this machine can infer your moods based on timing, word selection, spelling errors, etc., the same way humans do. So whatever you expect it to do, and believe it can do, it will work to make happen.
 
Yep - well said George.
I've been working on a project at work to convert hundreds of T-SQL stored procs into views - but Snowflake SQL views.
(First I convert the sproc into a view in T-SQL, then go from there to convert it into Snowflake.)

I was brand new to Snowflake a few months ago, so this is a big deal to me, and I've been heavily reliant on ChatGPT to help me figure out how to replace my favorite T-SQL constructs - OUTER APPLYs, for example - with something Snowflake likes.

The other day I got frustrated at ChatGPT, but for the wrong reason. I kept telling it "I'm getting a 'subquery cannot be evaluated' error" and hoping ChatGPT could figure out why. It kept sending me down rabbit holes that didn't work. Then later I realized the real problem in the query had been with a LEFT JOIN LATERAL - not a subquery. At first I was frustrated, but then I thought to myself: wait, wasn't it MY responsibility to isolate the part of the code that was causing the problem? As soon as I isolated it and told ChatGPT the part of the code that was really the problem, it solved it quickly. Up to that point I had been lazy, basically pasting in my whole view and telling it to find the problem. I could have easily isolated the problematic portion and told ChatGPT to focus on that, but I hadn't.

So using AI still requires a LOT of smarts and thought about what you put in. I have a feeling developers won't be replaced so much as become more productive - unless they make the dumb mistake I did, which was to keep pasting in the whole view and asking what the problem was, instead of commenting out chunks so I could isolate it myself.

(Poor ChatGPT tried at least 10 solutions on a part that wasn't even the problem until I isolated it properly).
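That isolation step can even be semi-automated. Below is a minimal sketch in Python using the snowflake.connector library: it smoke-tests each chunk of a big view separately so the error message points at the chunk that actually causes it. The connection parameters, tables, and fragments are all hypothetical placeholders - you'd paste in your own chunks.

Code:
# Sketch: run each chunk of a big view on its own so a Snowflake error
# points at the real culprit. All names below are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="my_schema",
)
cur = conn.cursor()

# Paste candidate chunks of the view here, simplest first.
fragments = {
    "base select": "SELECT order_id, total FROM orders",
    "suspect join": "SELECT o.order_id, c.name FROM orders o "
                    "LEFT JOIN customers c ON o.customer_id = c.customer_id",
}

for name, sql in fragments.items():
    try:
        cur.execute(f"SELECT * FROM ({sql}) t LIMIT 1")  # smoke test
        print(f"{name}: OK")
    except snowflake.connector.errors.ProgrammingError as err:
        print(f"{name}: FAILED -> {err}")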
 
