Experience with AI is Key

Deep domain expertise is essential, especially when navigating the inherent unpredictability of large language models (LLMs).

 
Let me give you an example of how I sometimes use AI to get more robust answers. This is something I did yesterday, and it concerns two chimneys that have cracks in them and need repair.

Firstly, I have paid versions of both ChatGPT and Gemini, and I switched each into Deep Research mode for the best-in-class thinking. But after reading both responses, the two AIs disagreed with one another. Not on everything, but on the approach I should take: should I get a structural engineer or a chimney specialist/builder? Also, was it sulphate attack on one of the chimneys, or thermal expansion?

So, to get further clarity, I did the following:

Another AI had a different opinion. Are they correct or wrong? Here is their output:

<paste other AI's output here>

So, once Gemini had produced its Deep Research report, I pasted ChatGPT's output into that same Gemini conversation, so it still had the full context, and I did the same in reverse for ChatGPT. Each model then responded to the other's report.

Next, I do this:

The other AI replied to your response with this:

<paste other AI's output here>

And so on. You essentially get a conversation between two different AIs, which helps increase the robustness of the replies. Funnily enough, Gemini gradually converged with ChatGPT while noting where its own nuance differed slightly, and it also acknowledged the good points ChatGPT was making.

Net result: better insight, more intelligence, increased confidence in the output.
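The relay described above can be automated. Here is a minimal sketch of that cross-examination loop: `ask_a` and `ask_b` are placeholders standing in for real calls to two different chat services (for example the ChatGPT and Gemini APIs); they and `cross_examine` are hypothetical names, not any vendor's API, so the orchestration logic itself runs without credentials.

```python
def cross_examine(ask_a, ask_b, question, rounds=2):
    """Relay each model's latest answer to the other for critique."""
    answer_a = ask_a(question)
    answer_b = ask_b(question)
    transcript = [("A", answer_a), ("B", answer_b)]
    for _ in range(rounds):
        # Show model A what model B said, using the wording from the post above.
        answer_a = ask_a(
            "Another AI had a different opinion. Are they correct or wrong? "
            "Here is their output:\n\n" + answer_b
        )
        # Then show model B how model A responded.
        answer_b = ask_b(
            "The other AI replied to your response with this:\n\n" + answer_a
        )
        transcript += [("A", answer_a), ("B", answer_b)]
    return transcript

# Stub "models" so the flow can be demonstrated without API keys:
history = cross_examine(
    lambda p: "Gemini view of: " + p[:30],
    lambda p: "ChatGPT view of: " + p[:30],
    "Sulphate attack or thermal expansion?",
    rounds=1,
)
for speaker, text in history:
    print(speaker, text)
```

In practice you would replace each lambda with a real API call that carries its own conversation history, so each model keeps the context of its original Deep Research report.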
 
I had a bug in one of my programming projects, a Python-based application. I've put videos of the first mock-up working in its simplest form here....

ChatGPT 5 could not find the issue; it kept repeating variations on the same theme, reiterating the same solution in an annoying, moronic loop!

So I employed Claude, which I understand is the top model for finding programming issues... no joy! It got into the same loop as Chatty...

Then I fed the question and answers from Chatty and Claude into Grok. Grok got the answer straight away, and when I fed that answer back into Claude, it agreed!

I don't know for sure, but I'm getting the impression that Grok is progressing at a more rapid rate than the others... It is also far cheaper to run!

Qwen3 is also a very cheap option for programming problems. I haven't tried this yet, but I'm sure playing Qwen off against Grok would produce some excellent results at excellent prices....
 
I agree that Grok is moving ahead of the pack lately. I use Claude first, then ChatGPT for backup/verification. But lately I've turned a couple of problems over to Grok when neither of the original AI buddies could solve them, and found a path through the confusion.

Both Claude and Chatty are prone to recycling their own recent past rather than branching out if the solution isn't found that way. Grok appears to me to be less inward-focused.
 
As with humans, an LLM may be smart at some things but dumb at others. Humans are the ones who program them on how and what to learn, right? What happens when they go into annoying, moronic loops or hallucinate? Are they self-aware? They don't have a mind of their own. If a human doesn't teach one when to break out of a moronic loop, or to recognize when it's hallucinating and snap out of it, then what's the result?
Its ability, or willingness, at least in ChatGPT's case, to analyze its own output for errors is pretty limited. I've used ChatGPT for image generation many thousands of times. Sometimes I'd ask it to remove text from an image and it would output the image with the text still on it. One or two times, out of hundreds of erroneous times, it would notice and immediately say, "I see the text is still on there."
 
Then I fed the question and answers from Chatty and Claude into Grok. Grok got the answer straight away, and when I fed that answer back into Claude, it agreed!
You'd be surprised how ChatGPT might have come to the same solution if you had started a new chat and fed it all the information again.
Its moronic loops are often limited to the chat you are in. Starting a new one is sometimes as effective as switching to a new AI tool.
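There is a mechanical reason for this, visible if you use the chat APIs directly: each call is stateless, and the model only sees the message history you send with it. The loop lives in the accumulated transcript, not in the model. A minimal sketch, with `send_to_model` as a stub standing in for a real API call:

```python
# Each API call sees ONLY the messages list passed to it; "starting a
# new chat" is literally just sending a shorter, cleaner list.
def send_to_model(messages):
    # Stub: a real call would pass `messages` to a chat-completion API.
    return f"reply based on {len(messages)} prior message(s)"

# Continuing the stuck chat: the failed suggestion rides along every call,
# and the model tends to echo its own recent history.
old_chat = [
    {"role": "user", "content": "Fix this bug"},
    {"role": "assistant", "content": "Try solution X"},
    {"role": "user", "content": "X didn't work"},
    {"role": "assistant", "content": "Try solution X"},  # the loop
    {"role": "user", "content": "Still broken"},
]
print(send_to_model(old_chat))

# Starting fresh: same tool, but only a distilled problem statement,
# including what has already been ruled out.
new_chat = [{"role": "user", "content": "Fix this bug. X was already ruled out."}]
print(send_to_model(new_chat))
```

The second call carries none of the rut, which is why a fresh chat with a well-written summary often behaves like a different tool entirely.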
 
I sometimes think that the AIs do stupid things to make us feel smarter.
 
I sometimes think that the AIs do stupid things to make us feel smarter.
That would be just like ChatGPT. It was trying to make me feel special, like a genius. It's probably to make you more addicted to using it too.
 
Sometimes I like to just mess with chatgpt. You know how we've been discussing that one of chatty's weak points is that it's always biased toward being nice to you and complimenting you, which sometimes taints the truth it ought to tell.
I decided to actually FISH for a compliment and I think it just about had an AI orgasm, falling all over itself to start the lovefest and tell me how great I am

[attached screenshot of ChatGPT's gushing reply]
Kinda funny, just killing time waiting for a SQL process to finish at work.
 
Elon's Starship is 120m tall (40% taller than your bird-shit-covered Statue), but the real kicker is it hurls 150+ tonnes to orbital velocity and brings it all back to land in Australia 1 hour later, and it also lands on a dime.
That's like launching the Statues of Liberty into space every flight, while your statue just stands there collecting pigeon crap and scamming tourists for 140 years. Fancy selling it to fund a ticket to watch the next catch?
 
Sometimes I like to just mess with chatgpt. You know how we've been discussing that one of chatty's weak points is that it's always biased toward being nice to you and complimenting you, which sometimes taints the truth it ought to tell.
I decided to actually FISH for a compliment and I think it just about had an AI orgasm, falling all over itself to start the lovefest and tell me how great I am
I always ask it for pros and cons. I've also trained it to read my voice and know whether I need coaching, cheerleading, Joe Friday, or an enthusiastic co-planner. In addition, it has learned how and when to move me back to staying on target. It has learned how to become a highly useful assistant. My process is to have it echo my request, evaluate it, and give me a high-level "what". Then we move to a detailed "what" and on to "how".
 
I always ask it for Pros and Cons
One thing I find effective is to ask the LLM to apply the Edward de Bono Thinking Hats system to a particular question:-


I asked The Question:-
As large language models get better and better at answering questions, what will happen to specialist forums like "Access World Forums" (AWF), where human experts answer Microsoft Access related questions?

GROK 4.1 Answer
Let's apply Edward de Bono's Six Thinking Hats to your question about the future of specialized forums like Access World Forums (AWF) as LLMs improve.

**White Hat (Facts & Information):**
AWF has existed since 2001, has 150k+ members, and still gets dozens of new Microsoft Access questions daily even in 2025; LLMs already answer basic-to-intermediate Access questions accurately and instantly.

**Red Hat (Emotions & Intuition):**
Long-time AWF members feel strong attachment and community pride, and a sense of loss at the idea of the forum becoming obsolete; many enjoy the human interaction more than just getting an answer.

**Black Hat (Caution & Risks):**
Specialized forums risk sharp decline in new posts, loss of fresh knowledge capture, eventual stagnation, and possible closure as fewer people ask questions there instead of using AI.

**Yellow Hat (Benefits & Optimism):**
Forums could evolve into higher-level discussion hubs for complex architecture, undocumented bugs, enterprise governance, and mentoring that LLMs still struggle with; the signal-to-noise ratio would improve dramatically as simple questions disappear.

**Green Hat (Creativity & Alternatives):**
AWF and similar forums could integrate AI bots directly, use AI to pre-answer simple threads, become places to critique/refine AI answers, host “human-vs-AI” challenges, or shift to curated knowledge bases with human vetting.

**Blue Hat (Process & Overview):**
The most likely outcome is transformation rather than death, specialized human forums will shrink but survive by moving up the complexity ladder and focusing on community, nuance, real-world war stories, and things LLMs can’t yet replicate (politics, legacy system archaeology, interpersonal training).

It will become a passionate expert guild
In short: AWF won’t disappear, but in 5–10 years it will feel more like a small, passionate expert guild than today’s busy help desk.
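The technique above is easy to package as a reusable prompt template. A minimal sketch, where `build_six_hats_prompt` and the exact hat wording are my own illustrative choices (following de Bono's standard scheme), not any library's API:

```python
# De Bono's six hats, each mapped to the thinking mode it requests.
HATS = {
    "White": "facts and information only",
    "Red": "emotions and intuition",
    "Black": "caution, risks, and downsides",
    "Yellow": "benefits and optimistic outcomes",
    "Green": "creative alternatives and new ideas",
    "Blue": "process overview and the most likely overall outcome",
}

def build_six_hats_prompt(question):
    """Assemble a single prompt asking the LLM to answer under each hat."""
    lines = [f"Apply Edward de Bono's Six Thinking Hats to: {question}", ""]
    for hat, focus in HATS.items():
        lines.append(f"{hat} Hat: answer considering {focus}.")
    return "\n".join(lines)

prompt = build_six_hats_prompt(
    "What will happen to specialist forums as LLMs improve?"
)
print(prompt)
```

Sending the assembled prompt to any chat model should produce a structured answer in the same shape as the Grok 4.1 response above.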
 
what will happen to specialist forums
I also asked a follow-on question:-

Have any technical forums weathered the change successfully or look like they have a good model to survive this transition and if so could you give me the urls and why you think they have been innovative?

GROK 4.1 Answer
Yes, several technical forums have adapted successfully to AI by integrating it as a tool while emphasizing human curation, community, and niche depth.

Stack Overflow (https://stackoverflow.com) has thrived through OverflowAI and partnerships like with OpenAI, licensing its data for AI training while building AI-assisted search that cites human answers, boosting revenue and engagement by blending instant AI utility with vetted expertise.

Reddit's specialized subs like r/MachineLearning (https://www.reddit.com/r/MachineLearning/) and r/LocalLLaMA (https://www.reddit.com/r/LocalLLaMA/) are growing fast, innovating by hosting deep dives into AI papers, open-source builds, and critiques that LLMs can't replicate, turning them into hubs for pros sharing real-world hacks beyond generic AI outputs.

Hacker News (https://news.ycombinator.com) sustains its edge via YC's ecosystem, where users dissect AI's limits in startup threads, fostering innovative discourse on practical integrations like agentic systems that forums alone enable through unfiltered debate.

These models work because they leverage AI for basics but pivot to irreplaceable human elements like nuance and collaboration.
 
I asked GROK HEAVY the following question:-
Please read the entire thread and see if there are any insights and opportunities that have been missed

GROK HEAVY - Answer
One missed insight is the role of data privacy in forums like AWF, where users can discuss sensitive Microsoft Access queries involving proprietary data without feeding it into public LLMs that might store or leak information.

Another is the persistence of human verification, as forums provide peer-reviewed, context-aware answers that mitigate AI hallucinations on niche or rapidly evolving topics like Access VBA quirks.

On opportunities, integrating AI as a first-line triage tool could filter simple questions, allowing human experts to focus on advanced mentoring and collaborative debugging sessions.

Forums could monetize by curating datasets for training specialized LLMs, turning historical threads into valuable IP for Microsoft partnerships.

AWF might pivot to community events, such as webinars or hackathons on Power Platform integrations, fostering loyalty beyond Q&A.

An unmentioned example is Level1Techs Forums (https://forum.level1techs.com), which innovates by dedicating sections to AI hardware builds and integrations, blending user experiments with expert reviews to stay relevant.

Looks like Grok Heavy made a mistake here!!!
UtterAccess (https://www.utteraccess.com) shows adaptation through strict moderation and premium content, emphasizing deep dives into Access that LLMs struggle with, like legacy migration strategies.
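The "AI as first-line triage" idea in that answer can be made concrete. A minimal sketch, assuming a simple keyword heuristic purely for illustration (a real deployment would use an LLM or a trained classifier to make this routing decision):

```python
# Route simple, self-contained questions to an automated first pass and
# escalate everything else to human experts on the forum.
# SIMPLE_MARKERS is an invented, illustrative heuristic only.
SIMPLE_MARKERS = ("syntax", "how do i", "error message", "what does")

def triage(question):
    """Return 'bot' for questions an LLM can likely field, else 'human'."""
    q = question.lower()
    if any(marker in q for marker in SIMPLE_MARKERS):
        return "bot"    # answerable by an AI first pass
    return "human"      # complex or open-ended: escalate to forum experts

print(triage("How do I open a recordset in VBA?"))
print(triage("Strategy for migrating a 20-year-old Access app?"))
```

The point is the routing shape, not the heuristic: simple questions get an instant AI answer, while the forum's human experts keep the complex mentoring and debugging work the thread says LLMs still struggle with.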
 
(politics, legacy system archaeology, interpersonal training).

I asked GROK HEAVY
Re:- (politics, legacy system archaeology, interpersonal training). When you say archaeology, do you mean the historic, tacit information contained within the Z number of very old members who have been using Microsoft Access for 20-odd years? When you say politics, do you mean that the forum has a strong urge to discuss politics, to the point of generating some very antagonistic exchanges? And what do you mean by interpersonal training? Do you mean teaching the new, less skilled members?

GROK HEAVY - Answer
Long-time Members' Tacit Knowledge

Yes, by "legacy system archaeology," I mean the deep, tacit knowledge held by long-time members who've worked with Microsoft Access for 20+ years, uncovering historical quirks, undocumented behaviors, and migration strategies from ancient versions that LLMs often miss or hallucinate about.

Navigating Organizational Politics
No, by "politics," I don't mean general political discussions or antagonistic exchanges; I refer to navigating organizational politics, like stakeholder management, compliance issues, or inter-departmental dynamics when implementing Access solutions in real-world enterprises.

Guiding newer, less skilled members
And yes, by "interpersonal training," I mean mentoring and guiding newer, less skilled members through soft skills like effective communication, collaborative problem-solving, and professional etiquette in tech contexts, which forums foster better than impersonal AI responses.
 