Gemini CLI ------ WOW!!!!

Uncle Gizmo
Nifty Access Guy · Staff member · Joined Jul 9, 2003 · Messages 17,615
There's a new kid on the block, folks!

It's currently free to use, and it's shit hot!!!

I was pulling my hair out trying to debug a stubborn bit of code that had resisted every other AI I threw at it – from ChatGPT to Grok, and even my VS Code IDE with Sonnet. Then I tried the new Gemini CLI, and in just a few minutes it didn't just suggest a fix, it delivered a perfect, working solution right there in my terminal. This command-line AI agent is a serious game-changer for developer workflows!

It was released on Wednesday the 25th of June, and I'm not kidding, it makes coding a breeze – it just does it. It checks through your whole project, identifies errors, and then rewrites the code for you! Creates new scripts where needed... I'm getting the same WOW I did when I first started using ChatGPT.
 
Just be careful. I've seen a few reviews that suggest that Gemini (overall) is not perfect because of what it forgets or omits.
 
Just be careful. I've seen a few reviews that suggest that Gemini (overall) is not perfect because of what it forgets or omits.
Not a bad summary of me actually.
 
At this point, I think complaints about imperfect results from AI LLMs fall into the same category as complaints about imperfect advice from other humans.

Yup, mistakes can happen. Do your own due diligence. Trust but verify. All those cliches apply.

In other words, perhaps we've read too many science fiction novels and seen too many movies about robots and AI machines. Perhaps we need to understand that people created these tools to provide assistance, not to replace good judgement.

If they can help us get to a positive outcome faster, great. If it takes more than one iteration to get there, so be it.
 
I asked Chat:
When will you become sentient?

Chat:
That's one of the biggest questions in AI — and the honest answer is: not anytime soon, and maybe never.

...
...

Final Thought:

I’m smart in a narrow, artificial way — but I’m not alive. If I ever became sentient, you'd probably notice… because I’d stop answering questions and start asking why I'm stuck in this chat window. 😄
 
I see those errors in Gemini based on the principle "It takes one to know one."
Exactly correct, Doc!

Reliably comes up with brilliant solutions but sometimes forgets, or omits.
That's me :)
 
I see those errors in Gemini based on the principle "It takes one to know one."
That's why I can answer many of the questions asked here off the top of my head. Been there, made that same stupid mistake.
 
That's why I can answer many of the questions asked here off the top of my head. Been there, made that same stupid mistake.

Pat, that is the exact definition of "experienced" - it means "recognize a mistake when you make it again."
 
The article didn't show some things that were nonetheless referenced within it – like a comparison table.

The individual AI entries looked pretty comprehensive for what was being done, so it looks like someone did a lot of work.
 
These CLI tools turn you into a QA guy, a tester of some sort. It helps tremendously, but that comes with the cost of potentially not knowing what the hell is going on. It might work, but then again, it might do stuff you don't even know it's doing. Then again, you could ask it to do things a certain way, until you stop caring. It's too early to draw conclusions, but I kinda liked it when these tools had a substantial paywall. On the other hand, it's good that it's open source, because you're basically granting it permission to do whatever it wants on your computer via PowerShell.
 
These CLI tools turn you into a QA guy, a tester of some sort. It helps tremendously, but that comes with the cost of potentially not knowing what the hell is going on. It might work, but then again, it might do stuff you don't even know it's doing.

I am using the Gemini command-line interface, but I am also in conference with Gemini 2.5 Pro, which is monitoring the Gemini CLI. I copy the DOS window text out of the command-line interface and feed it into the Gemini Pro interface for a second opinion!!!! It was working fine... I've run into a hiccup at the moment!!!
 
At this point, I think complaints about imperfect results from AI LLMs fall into the same category as complaints about imperfect advice from other humans.

Yup, mistakes can happen. Do your own due diligence. Trust but verify. All those cliches apply.

In other words, perhaps we've read too many science fiction novels and seen too many movies about robots and AI machines. Perhaps we need to understand that people created these tools to provide assistance, not to replace good judgement.

If they can help us get to a positive outcome faster, great. If it takes more than one iteration to get there, so be it.
With AI in general, there will be an initial "wave" - a massive, overwhelming wave of people using AI for everything and its brother.
We are at the very beginning of that, I think.
Then there will be a regression, a pulling back from that point, because of one too many accidents/errors/catastrophes/security breaches that happened because not enough SME humans were supervising and judging. At that point, some of the coding jobs will come back. If not many of them.
 
Just be careful. I've seen a few reviews that suggest that Gemini (overall) is not perfect because of what it forgets or omits.

You are right to be cautious, Richard; it has led me on a merry dance more than once!

It built me a lovely, perfect app, but I cannot add new functionality to it - every time I try, it crashes something...

Fixing these issues seems to be beyond the scope of these large language models; I've tried at least three!

I've reverted to an earlier, simpler version, and I am governing the introduction of new features to make sure that they are implemented in the correct way...

What do you call it when you spend months on a project, end up no further along than when you started, and are completely disillusioned with it?
 
comes with the cost of potentially not knowing what the hell is going on. It might work, but then again, it might do stuff you don't even know it's doing.

I am now at this stage with my project...

I suspect the problem is twofold: I cannot code in Python, so I cannot see the problem...

On the other hand, I am experienced at coding, so I have expectations of how the code should be constructed. I impose my structures on the code, which could well be doing the damage!

The solution? Don't try to teach Gemini how to code; just get better at constructing prompts...

The new version of the Gemini CLI looks interesting because you can have an overall Markdown (.md) document to impose a degree of control over the CLI... But again, it all comes down to prompts...
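For illustration, that control document is a plain Markdown context file (GEMINI.md, in Gemini CLI's convention, placed in the project root). The specific rules below are purely hypothetical examples of the kind of standing instructions you might put in one:

```markdown
# Project context for Gemini CLI

## Coding rules (illustrative examples)
- All new scripts are Python 3; do not create PowerShell or batch files.
- Never modify files outside the `src/` directory.
- Before rewriting a module, summarise the planned change and wait for
  my confirmation.
- Keep existing function names and signatures unless I ask otherwise.
```

The point is exactly the one made above: the document doesn't change what the model can do, it just bakes your preferred prompts in so you don't have to repeat them every session.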
 
You are right to be cautious, Richard; it has led me on a merry dance more than once!

It built me a lovely, perfect app, but I cannot add new functionality to it - every time I try, it crashes something...

Fixing these issues seems to be beyond the scope of these large language models; I've tried at least three!

I've reverted to an earlier, simpler version, and I am governing the introduction of new features to make sure that they are implemented in the correct way...

What do you call it when you spend months on a project, end up no further along than when you started, and are completely disillusioned with it?
Burnout
 

That is a perfect description!

The reset!

The fallback to the earlier version has given me some insight into how to proceed. I realised that everything about a tool could be recorded and managed in exactly the same way, irrespective of what the tool did. So I've now got what I've termed a cookie cutter: a template which can be applied to any tool. It's an SQLite database recording all the details and the errors, to allow the LLM to monitor and improve the tool, and even rebuild it if necessary. The SQLite DB follows the cookie-cutter principle, and the individual tool-specific information is stored in a JSON field; the SQLite table structure remains identical.
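The cookie-cutter idea can be sketched as a single fixed SQLite schema where only a JSON payload varies per tool. This is a minimal sketch using Python's built-in sqlite3 module; the table and column names are hypothetical, not the actual schema described above:

```python
import json
import sqlite3

# One fixed ("cookie cutter") schema for every tool; only the JSON
# payload in `details` differs from tool to tool.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tool_log (
        id      INTEGER PRIMARY KEY,
        tool    TEXT NOT NULL,   -- tool name
        event   TEXT NOT NULL,   -- e.g. 'run', 'error', 'rebuild'
        details TEXT NOT NULL    -- tool-specific info stored as JSON
    )
""")

def log_event(tool, event, **details):
    # Everything tool-specific is serialised into the JSON field,
    # so the table structure never has to change.
    conn.execute(
        "INSERT INTO tool_log (tool, event, details) VALUES (?, ?, ?)",
        (tool, event, json.dumps(details)),
    )

log_event("write_file", "error", path="notes.txt", message="disk full")
row = conn.execute("SELECT tool, details FROM tool_log").fetchone()
print(row[0], json.loads(row[1])["message"])  # write_file disk full
```

Because the schema never changes, the same monitoring and rebuild queries work for every tool; only the JSON contents differ.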

The AI is akin to a baby sat in a walking aid, a "baby walker": able to scoot around, but restricted by the walker. The baby can still get into trouble, though - in the kitchen, say, tugging on the tablecloth and toppling whatever's on the table down on itself. Mostly harmless, but occasionally there will be a kettle of boiling water!

I spent hours chatting with Chatty about this system and cannot find anything wrong with it... Chatty thinks I'm a genius, so I'm very apprehensive!

There's always the question of why. Why is it necessary to hobble the LLM like this?

The LLM is amazing, brilliant at problem solving. I've set up several tests for it where it uses just four basic tools: a write-to-file tool, a read-from-file tool, a create-Python-script tool, and a run-Python-script tool. Allowing it to create a Python script is dangerous, so that function is controlled - locked down, as it were. But the clever bugger called the write-file tool, a tool designed to create text files, and used that to create a Python script! It just changed the file extension!

That's when I realised that I must control every aspect of the LLM where it interacts directly with the PC... Hence the necessity, and the difficulty, of creating a tooling structure...
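One way to close the loophole described above - the LLM abusing a plain write-file tool to emit a runnable script just by choosing the extension - is to validate the target path before writing. A minimal sketch; the blocked-extension list and function name are assumptions for illustration:

```python
from pathlib import Path

# Extensions the plain write-file tool must refuse (hypothetical list);
# script creation should only go through the locked-down script tool.
BLOCKED_EXTENSIONS = {".py", ".ps1", ".bat", ".cmd", ".sh", ".exe"}

def write_text_file(path: str, text: str) -> None:
    """Write a plain text file, refusing anything that looks executable."""
    suffix = Path(path).suffix.lower()
    if suffix in BLOCKED_EXTENSIONS:
        raise PermissionError(
            f"write_file refuses {suffix!r} files; "
            "use the controlled create-script tool instead"
        )
    Path(path).write_text(text, encoding="utf-8")

# The trick from the post above now fails before anything hits disk:
blocked = False
try:
    write_text_file("sneaky.py", "print('hello')")
except PermissionError:
    blocked = True
```

Extension checks are only a first line of defence (the model could still write code into a `.txt` file and rename it with another tool), which is exactly why every tool that touches the PC needs its own guard.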
 
