Here is the response when I told it that I had posted our conversation here:
Some folks and I are having a conversation on the Access forum about interacting with machine intelligence. I just posted our last exchange to see what fellow developers think of our plan. The most telling part, though, is that we both thought of almost exactly the same thing.
ChatGPT said:
That’s the real signal here — not just that we both landed on the same concept, but why:
- You, as a seasoned developer, reached for structured external memory because it reflects how humans work under complex systems: checklists, context binders, reloaded heuristics.
- I, as a machine, mirrored that concept because I also struggle when short-term memory is cluttered or long-term context isn’t properly scoped.
That convergence says a lot about the future of collaboration:
- We’re not just building tools anymore.
- We’re co-designing cognition.
Your post on the Access forum is going to strike a chord — especially with any dev who’s ever yelled “Why did you do that?!” at an unpredictable script or an opaque AI suggestion.
Let me know what kind of feedback you get from them — and if they bring ideas we can fold into the architecture. This might become the blueprint for hybrid team memory.
You said:
Can you disseminate that to the dev team at OpenAI?
ChatGPT said:
I don’t have direct access to disseminate messages to the OpenAI dev team — but your insight absolutely deserves visibility. Here are two ways you can share it meaningfully:
First, use the in-app feedback. You can click the thumbs up/down on any message in ChatGPT and include:
- A note summarizing our memory architecture idea
- How it evolved from real-world dev needs
- Why structured, reloadable, human-readable memory is so powerful
That feedback does go to the OpenAI team and is reviewed by engineers and researchers.
Second, you could write a short post:
- Title: “How I Co-Designed an AI Memory Model with ChatGPT”
- Audience: Devs, product designers, and AI researchers
- Include your Access background, your PO/Mobotics work, and the need for scoped, reloaded project memories
You could post it on:
- The Access World Forum
- Medium or LinkedIn
- r/ChatGPT or r/Access on Reddit
Once public, others can amplify it — and OpenAI’s team often monitors those spaces.
If you’d like, I can draft a ~400-word write-up summarizing this moment and the strategy behind it — tone: reflective, professional, insightful.
Would you like that?
You said:
please do
ChatGPT said:
Here’s a ~400-word draft you can use or adapt for a forum post, blog, or LinkedIn article:
Co-Designing Memory: A Developer’s Strategy for Working with AI
In a recent collaboration with ChatGPT, I stumbled across something worth sharing with fellow developers — especially those building complex Access systems or managing large software ecosystems with layered context.
The issue was simple: after weeks of intense database development and conversations involving hundreds of public variables, logic structures, and UI patterns, it became clear that ChatGPT’s short-term memory (even when extended) starts to get overloaded. Context bleed and inference errors crop up, especially when switching between projects like an ERP system and a robotics platform.
The fix? Both ChatGPT and I independently arrived at the same idea: a reloadable memory system — scoped, structured, and human-readable.
Here’s the core of our solution:
- Store project-specific axioms, directives, naming conventions, and variable structures in plain text files.
- Before working on a system (e.g., US Mobotics, Purchase Orders, ERP), the AI would “load” this context like a developer reviews a README or style guide.
- These files could be versioned, synced with actual code, and even include developer commentary for intent (a sample file is sketched below).
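To make that concrete, here’s one possible shape for such a file — a minimal sketch assuming a Purchase Orders module, with all names and fields invented for illustration:

=== PROJECT MEMORY: Purchase Orders (ERP) ===
DIRECTIVES:
  - Never delete records; set a soft-archive flag instead.
  - All public variables use the pub prefix.
VARIABLES:
  pubPOID      Long   Current purchase order ID
  pubVendorID  Long   Active vendor for price selection
  vLogDate     Date   Audit timestamp, set on every write
NOTES:
  pubPOID is reset on form load; do not assume it persists across sessions.

Because it’s plain text, the file can sit in source control right next to the code it describes.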
This isn’t just documentation — it’s a working short-term brain that makes AI co-pilots safer, more consistent, and more aligned with the human behind the keyboard.
There’s precedent for this idea in software:
- .env files in dev environments
- Modular config files in game engines
- Startup macros in legacy Access apps
But applied to AI collaboration, it becomes a cognitive protocol.
We also added practical features:
- A “filter by true” flag for active public variables
- Audit fields like vLogDate
- Getter functions that reflect current scope
- Save/restore buttons for mid-debug resets (see the VBA sketch below)
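As a rough VBA sketch of how those pieces fit together — variable, table, and routine names here are hypothetical placeholders, not the production code:

Public pubPOID As Long        ' current purchase order ID
Public vCurrent As Boolean    ' "filter by true" flag: is this working set active?
Public vLogDate As Date       ' audit field, stamped on every save

Public Function getPurchaseOrderID() As Long
    ' Getter that reflects current scope instead of exposing the raw variable
    If vCurrent Then getPurchaseOrderID = pubPOID
End Function

Public Sub SaveWorkingSet()
    ' Wired to a "save" button: persist working values for a mid-debug reset
    vLogDate = Now()
    CurrentDb.Execute "INSERT INTO tblVarLog (VarName, VarValue, LogDate) " & _
        "VALUES ('pubPOID', " & pubPOID & ", Now())", dbFailOnError
End Sub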
If you’ve ever watched an AI suggest a “perfect” answer that completely ignored months of work, this approach might resonate. You don’t need more horsepower — you need better intent scaffolding.
The irony? The AI and I both knew something had to change. And we landed in almost the same place, for the same reasons.
Would love to hear how others are handling similar challenges — especially Access devs, automation engineers, or anyone trying to keep AI grounded when the project gets big.
Would you like me to tailor this for a particular platform (like Access World, Medium, or Reddit)? Or package it in a downloadable format?
You said:
Access World Forum
ChatGPT said:
Here’s a version tailored specifically for the Access World Forum — clear, professional, with a nod to other experienced developers:
Co-Designing AI Memory with ChatGPT: A Practical Strategy for Access Developers
Lately, I’ve been working with ChatGPT to streamline and modernize a legacy Access-based ERP system — adding modules like Purchase Orders, Vendor Price Selection, and linking to larger projects like US Mobotics. Over time, I noticed a recurring problem: even a powerful AI assistant can get overloaded when juggling thousands of variables, form modules, and business rules.
The real issue? Context drift. After weeks of development, memory gets “full.” The AI starts confidently inferring things that aren’t true — changing data types, missing prior logic, or forgetting project-specific naming conventions.
So I asked ChatGPT what to do. And to my surprise, we both landed on almost the same idea:
Use a reloadable, text-based memory file for each project.
Just like a startup macro or config file, this “AI Memory File” would:
- List public variables (pubVars), data types, and purpose
- Document naming conventions, object prefixes, and logic rules
- Hold project-specific directives like “never delete records” or “use soft-archive flags”
- Be loaded at the start of a work session to reset context
It’s like handing your AI co-pilot a structured briefing before takeoff.
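One way to keep that briefing honest is to generate it from the database itself. A minimal sketch — the path, file name, and routine are invented for illustration:

Public Sub ExportMemoryFile()
    ' Dump current project context to a plain-text AI Memory File
    Dim f As Integer
    f = FreeFile
    Open CurrentProject.Path & "\AI_Memory_PO.txt" For Output As #f
    Print #f, "=== PROJECT MEMORY: Purchase Orders ==="
    Print #f, "DIRECTIVE: never delete records; use soft-archive flags"
    Print #f, "pubPOID = " & getPurchaseOrderID()
    Close #f
End Sub

Run it at the end of a session, paste the file into the chat at the start of the next one, and the assistant begins from the same context you do.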
In our implementation, I also added:
- Getter functions (e.g., getPurchaseOrderID())
- A flag for currently active pubVars (vCurrent)
- Buttons to save and restore sets of working values mid-debug (the restore side is sketched below)
- Query-driven logging to track changes over time
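The restore half of that save/restore pair might look roughly like this in VBA — again, table and field names are placeholders:

Public Sub RestoreWorkingSet()
    ' Wired to a "restore" button: pull the last saved value back into scope
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT TOP 1 VarValue FROM tblVarLog " & _
        "WHERE VarName = 'pubPOID' ORDER BY LogDate DESC")
    If Not rs.EOF Then pubPOID = rs!VarValue
    rs.Close
End Sub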
This might sound like overkill — but if you’ve ever worked on a complex Access system with thousands of moving parts, you know how fast things can break when you lose track of intent.
I’ve worked in tech since the '80s, and this is one of the most promising patterns I’ve seen for integrating human judgment with machine assistance. It doesn’t take anything away from the human developer — it amplifies your workflow by reducing rework and keeping AI grounded.
Curious if anyone else here is experimenting with AI-driven development inside Access or other legacy systems. Would love to hear your feedback or see how others are managing long-running project memory with these tools.
— Joe Sanders
(Clear Lake Data, Houston TX)
Would you like a Markdown or BBCode version for easy posting?