LLMs Are Nothing but Hackers!!!

Uncle Gizmo

I am building my own MCP Server (Model Context Protocol) for use with AI... so I am completely out of my wheelhouse!!! I might even be a little bit out of my skull...
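To give an idea of the sort of thing the LLM has been writing for me, here is a minimal sketch (not my actual server, just an illustration assuming the official Python SDK's FastMCP helper; the tool name and logic are made up):

```python
# Minimal MCP server sketch (assumes the official Python SDK: pip install mcp).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")  # hypothetical server name


@mcp.tool()
def add_numbers(a: int, b: int) -> int:
    """Example tool: add two integers and return the result."""
    return a + b


if __name__ == "__main__":
    # Runs over stdio so an MCP-capable client can discover and call the tool.
    mcp.run()
```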

I think I've made a Discovery... mainly due to my experience with the sort of stupid results you can get back from a large language model when dealing with Microsoft Access VBA...

Bearing in mind that the large language models are not intelligent but they are basically reading the internet like a book and presenting you with the information...

I have errors in my MCP Server, just small errors, but I noticed that the large language model immediately comes up with some sort of hack.... I didn't know it was a hack, because like I said, I don't know anything about this MCP server, the large language model is building it for me.... but I directly challenged the large language model. I said, this feels like a hack! Please could you tell me the correct procedure, the industry-standard way of doing this, and it came back with a completely different idea!!!

Ask yourself why??? --- it's because it reads everything people say on the internet, and when people get backed into a corner with a coding problem, they start to do all sorts of stupid hacks instead of sorting out the code properly, in a workmanlike manner... so the large language model reports all of the hacks and workarounds everyone is using... When you fix your programming with hacks and workarounds because you can't be bothered to do it properly, you just introduce more and more problems, until you get complete and utter failure!!

I have no idea if I'm barking up the wrong tree... but at the moment I'm sat here wagging my tail... you could say I'm a dog with two tails!!!

I'm posting this here in the hope that anyone else who has experience with large language models can confirm my hypothesis or come up with a better description of what's happening !!!
 
Bearing in mind that the large language models are not intelligent but they are basically reading the internet like a book and presenting you with the information...
Not sure this is how you should think about it. You should not assume it's reading the internet like a book. It has been trained to respond a certain way based on its parameters, and the responses are always different, which means it's all statistics at play.

This is why you should always try multiple variations of the same prompt, so you can evaluate whether the answers are consistent enough to trust, while always making sure there is a human-written solution out there you can check against.
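To make that concrete, here's a rough sketch of the idea - ask_llm() is a hypothetical stand-in for whichever API or client you actually use, and the example prompts are invented for illustration:

```python
# Sketch: ask the same question several ways and see whether the answers agree.
from collections import Counter


def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder - swap in a real API call (OpenAI, Claude, a local model, etc.).
    raise NotImplementedError


variants = [
    "How do I get the new autonumber after an INSERT in Access VBA?",
    "In Microsoft Access VBA, what is the reliable way to read the autonumber generated by an insert?",
    "After inserting a record in Access VBA, how should I retrieve its generated ID?",
]

answers = [ask_llm(v) for v in variants]
top_answer, votes = Counter(answers).most_common(1)[0]
print(f"{votes}/{len(variants)} variants agreed")
# Low agreement = low confidence: go and find a human-written answer before trusting it.
```

In practice you'd compare the gist of the answers rather than the exact strings; the tally is just to illustrate the idea.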

If you're just vibe coding without trying to understand what is going on, you'll be presented with a lot of baked-in assumptions, and when one of them isn't true, you'll simply break things and won't realize it until you pass everything back as context for the LLM to check whether it can find the issue.

Now, given that it's so forgiving, it will take your code as truth to an extent and then do all sorts of weird things.

TL;DR
AI slop exists; that's why every LLM says "hey, this thing can make up stuff, be careful".
 
I'm sat here wagging my tail...
Today I have been "metaprogramming" ---- that is, writing code to update code --- I needed to change some UNC paths from relative to absolute....
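As a rough illustration of that kind of metaprogramming (the folder, file extension, and UNC root below are made up; a real script would obviously use your own paths):

```python
# Sketch: scan exported code files and rewrite a relative path fragment as an absolute UNC path.
from pathlib import Path

UNC_ROOT = r"\\server\share\projects"   # assumed UNC root, purely illustrative
SOURCE_DIR = Path("exported_modules")   # assumed folder of exported VBA modules

for module in SOURCE_DIR.glob("*.bas"):
    text = module.read_text()
    updated = text.replace(r"..\data", UNC_ROOT + r"\data")  # naive find-and-replace
    if updated != text:
        module.write_text(updated)
        print(f"updated {module.name}")
```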
 

If I could magically press a button and change/improve ONE thing about AI such as ChatGPT, it would be the following - it would be smart enough to insist on getting more context prior to answering some questions. It's good at asking for more context after answering a question, but in my mind what I would change is, it would know when to insist on getting more context before even answering the first time.

Rather than assuming the question is perfect and trying desperately to answer regardless of the risk, it would spot bad prompts lacking context and insist on fixing them up before trying to posit an answer.
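You can approximate a bit of that behaviour today with a thin wrapper around the model - a rough sketch, with ask_llm() again standing in for whatever client you actually use:

```python
# Sketch: one preliminary call decides whether the prompt has enough context to answer safely.
def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder - replace with a real API call.
    raise NotImplementedError


def answer_with_clarification(question: str) -> str:
    check = ask_llm(
        "Reply READY if the following question contains enough context to answer reliably; "
        "otherwise list the missing details as questions:\n\n" + question
    )
    if check.strip().upper().startswith("READY"):
        return ask_llm(question)
    return "I need more context first:\n" + check
```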
 
Be careful what you wish for. 😉
It's bad enough when someone here asks for more context before answering a question. I am not sure I would want to be badgered for details by Chatty.

Seriously, though, I agree with the general approach. I often go through at least three iterations for prompts. Sometimes, part of a response triggers an additional consideration I hadn't thought of originally. That requires an additional prompt back to address the new consideration and so on.
 
Recent tests on getting LLMs to solve simple puzzles have found that the LLMs still pattern match on their training data instead of doing actual reasoning:

 
I don't think that's too surprising. My basic understanding of how LLMs work was that they use pattern matching on the data available to them. I think, in fact, that what is called "training AI" is simply accumulating sufficiently large data sets to support the algorithms needed to identify patterns in them.

In one sense, I guess, AI is "fake" in that it doesn't do human reasoning. Yet there's nothing fake about the fact that I can feed it the contents of multiple Stored Procedures and have it find and offer fixes for common errors in those SPs. I don't particularly care if it "reasoned" its way to the fixes, or if it used pattern matching between my SPs and thousands of other SPs in its dataset to identify potential errors in my code.

Nothing fake about watching a PowerApps screen send a request to a stored proc to validate data input and getting back the appropriate validation. When that happens after only 3 or 4 minutes of having an LLM provide suggested fixes to the previously non-working code, I have the same satisfaction as I would spending 20 minutes going line by line through it myself, trying to spot what went wrong.
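For what it's worth, that workflow is easy enough to script as well - a sketch, where the folder name and the ask_llm() helper are assumptions rather than anything real:

```python
# Sketch: gather stored procedure scripts and ask a model to flag common errors.
from pathlib import Path


def ask_llm(prompt: str) -> str:
    # Hypothetical placeholder - replace with a real API call.
    raise NotImplementedError


procs = "\n\n".join(p.read_text() for p in Path("stored_procs").glob("*.sql"))
review = ask_llm(
    "Review these SQL Server stored procedures for common errors "
    "(unhandled NULLs, wrong joins, missing SET NOCOUNT ON) and suggest fixes:\n\n" + procs
)
print(review)
```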
 
I mentioned this some years ago, but it seems relevant now in this thread.

I used to watch a show called Numb3rs that was about an FBI officer and his genius mathematician brother Charlie who consults for the L.A. office of the FBI. It was a guilty pleasure that was worth a laugh about 3/4ths of the time because of applying statistics to samples too small for statistics to have any value. But they DID research the methods they were using enough to actually make valid statements about what those methods were intended to do.

Where this show fits in is that in March of 2009, season 5, episode 17, the suspect in a murder is a seemingly sentient AI named Baley. Charlie and his computer-literate lady-friend Amita are called in to assist in the murder investigation. After a few dramatic moments, they discover two important things:

(1) The real murderer is an insider on that project who hacked into the AI and forced its logs to contain some incriminating entries to cover up embezzlement of federal grant funds. They used legit "sniffing" methods to isolate and detect another hacker (hired by the murderer to erase Baley's memory), whose interference revealed the entry point of the attack that resulted in the death in question.

(2) The "AI" is not called an LLM because that term wasn't used in 2009. "LLM" didn't become a common term until 2017 with the advent of GPT-1. However, Charlie's explanation of what Baley (the AI) does is EXACTLY what an LLM does. He and Amita explain that it is NOT sentient; it is nothing more than a statistical harvester of things it can find via the Internet. Baley's one ability was to come SO close to passing the Turing test that to most people it would appear that it HAD passed the test. But it was a fraud scheme to bilk DARPA out of millions of dollars.

This episode was one example of why I thought Numb3rs was SLIGHTLY more literate than some of the police procedurals floating around in the first decade of the 21st century.
 
"...I thought Numb3rs was SLIGHTLY A LOT more literate..."

Fixed it for you.
 
