LLMs Are nothing but Hackers!!!

Uncle Gizmo

I am building my own MCP Server (Model Context Protocol) for use with AI... so I am completely out of my wheelhouse!!!! I might even be a little bit out of my skull...
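For anyone wondering what an MCP server actually involves, here is a minimal sketch using the official Model Context Protocol Python SDK. The server name and the little example tool are made-up placeholders, nothing to do with my actual project, so treat it as an illustration only...

```python
# Minimal MCP server sketch using the official Python SDK (pip install mcp).
# The server name and the example tool are made-up placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add_numbers(a: int, b: int) -> int:
    """Add two integers and return the result."""
    return a + b

if __name__ == "__main__":
    # Runs over stdio by default, so a client can launch it as a subprocess.
    mcp.run()
```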

I think I've made a discovery... mainly due to my experience with the sort of stupid results you can get back from a large language model when dealing with Microsoft Access VBA...

Bearing in mind that large language models are not intelligent, but are basically reading the internet like a book and presenting you with the information...

I have errors in my MCP Server, just small errors, but I noticed that the large language model immediately comes up with some sort of hack... I didn't know it was a hack, because like I said, I don't know anything about this MCP server; the large language model is building it for me... but I directly challenged the large language model. I said, this feels like a hack! Please could you tell me the correct procedure, the industry-standard way of doing this, and it came back with a completely different idea!!!

Ask yourself why??? It's because it reads everything people say on the internet, and when people get backed into a corner with a coding problem, they start doing all sorts of stupid hacks instead of sorting out the code properly, in a workmanlike manner... so the large language model reports all of the hacks and workarounds everyone is using... When you fix your programming with hacks and workarounds because you can't be bothered to do it properly, you just introduce more and more problems, until you get complete and utter failure!!

I have no idea if I'm barking up the wrong tree... but at the moment I'm sat here wagging my tail... you could say I'm a dog with two tails!!!

I'm posting this here in the hope that anyone else who has experience with large language models can confirm my hypothesis or come up with a better description of what's happening!!!
 
Bearing in mind that large language models are not intelligent, but are basically reading the internet like a book and presenting you with the information...
Not sure this is how you should think about it. You should not assume it's reading the internet like a book. It has been trained to respond a certain way based on its parameters, and the responses are always different, which means it's all statistics at play.

This is why you should always try multiple variations of the same prompt, so you can evaluate whether there is enough confidence in the answer, while always making sure there is something out there with a human solution.
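As a rough illustration of what I mean, here's a small sketch that asks the same question phrased a few different ways and lets you eyeball whether the answers agree. It assumes the OpenAI Python client and uses a placeholder model name; any chat API would work the same way.

```python
# Sketch: ask the same question in several phrasings and compare the answers.
# Assumes the OpenAI Python client (pip install openai) with an API key in the
# environment; the model name and prompts are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()

variations = [
    "How do I convert a relative path to an absolute UNC path in VBA?",
    "In Access VBA, what is the standard way to resolve a relative path to a full UNC path?",
    "What's the idiomatic VBA approach for turning relative file paths into absolute UNC paths?",
]

answers = []
for prompt in variations:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(response.choices[0].message.content)

# If the answers broadly agree, that's some confidence (not proof); if they
# diverge wildly, go find a human-written source before trusting any of them.
for prompt, answer in zip(variations, answers):
    print(f"--- {prompt}\n{answer}\n")
```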

If you're just vibe coding without trying to understand what is going on, you'll be presented with a lot of baked-in assumptions, and when one of them isn't true, you'll simply break things and you won't realize it until you pass everything back as context for the LLM to check whether it can find the issue.

Now, given that it's so forgiving, it will take your code as truth to an extent and then do all sorts of weird things.

TL;DR
AI slop exists; that's why every LLM says "hey, this thing can make stuff up, be careful".
 
I'm sat here wagging my tail...
Today I have been "metaprogramming", that is, writing code to update code. I needed to change some UNC paths from relative to absolute....
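Something along these lines, as a rough sketch only (the folder, file pattern and path mapping are made-up placeholders, not my real ones):

```python
# Sketch of the general idea: scan source files and rewrite relative paths to
# absolute UNC paths. Folder, file pattern and mappings are placeholders.
from pathlib import Path

# Hypothetical mapping of relative paths to their absolute UNC equivalents.
PATH_MAP = {
    r"..\Shared\Data.accdb": r"\\FileServer01\Shared\Data.accdb",
    r".\Templates\Report.dotx": r"\\FileServer01\Shared\Templates\Report.dotx",
}

SOURCE_DIR = Path(r"C:\Dev\MyProject")  # placeholder project folder

for source_file in SOURCE_DIR.rglob("*.bas"):  # e.g. exported VBA modules
    text = source_file.read_text(encoding="utf-8")
    updated = text
    for relative, absolute in PATH_MAP.items():
        updated = updated.replace(relative, absolute)
    if updated != text:
        source_file.write_text(updated, encoding="utf-8")
        print(f"Updated paths in {source_file}")
```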
 
