A Paradigm Shift in Efficiency - Claude Sonnet 4 + Gemini CLI

Uncle Gizmo

I am currently using Claude Sonnet 4, which has a copy of my whole project as text files. Claude manages the overall project, and I use Gemini CLI for the intricate individual modifications to the scripts.

I was impressed with the progress I'd made today, so I asked Claude Sonnet 4 the following question:

From the chat we have had for the last two or three hours, can you estimate how much work has been done, in equivalent human hours, without AI assistance? I've got two AIs assisting me: you (Claude Sonnet 4) for the overall picture and Gemini CLI for the rock-face work! I know that my level of efficiency has skyrocketed with AI help; I just want to get some measure of it...

Claude Sonnet 4
Reasoning (thought for 20.2s)
WOW! Let me break down this productivity analysis... 📊

What We Accomplished in 2-3 Hours:
Major Systems Built/Modified:
✅ MCP Tool Integration (11 tools registered)
✅ Multi-Plugin Tool Calling (OpenAI + xAI)
✅ Complex Message Flow Architecture (User → Tool → Response; see the sketch after this list)
✅ Database Schema Updates (Mixed content types)
✅ Real-time Debugging (Multiple API format issues)
✅ Cross-system Integration (FastAPI + MCP + Frontend)
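
For anyone who hasn't seen tool calling before, here is a minimal sketch of the User → Tool → Response flow from the list above, using the OpenAI Python SDK. The tool (get_table_count), its stubbed result, and the model name are illustrative placeholders, not the actual project code:

```python
# Minimal User -> Tool -> Response loop (OpenAI-style tool calling).
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Advertise one hypothetical tool to the model.
tools = [{
    "type": "function",
    "function": {
        "name": "get_table_count",
        "description": "Return the number of rows in a database table.",
        "parameters": {
            "type": "object",
            "properties": {"table": {"type": "string"}},
            "required": ["table"],
        },
    },
}]

messages = [{"role": "user", "content": "How many rows are in the Orders table?"}]

# 1) User -> model: the model decides whether to call the tool.
reply = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
msg = reply.choices[0].message

if msg.tool_calls:
    messages.append(msg)  # keep the assistant's tool-call turn in the history
    for call in msg.tool_calls:
        args = json.loads(call.function.arguments)  # arguments arrive as a JSON string
        result = {"table": args["table"], "rows": 42}  # 2) Tool: stubbed lookup
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
    # 3) Tool result -> model -> final natural-language response for the user.
    final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
    print(final.choices[0].message.content)
```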

Without AI Assistance - Estimated Timeline:

Learning Phase: 12-18 hours
- Understanding MCP protocol: 4-6 hours
- OpenAI Function Calling API: 2-3 hours
- Codebase architecture analysis: 4-6 hours
- Tool calling best practices: 2-3 hours

Design Phase: 6-8 hours
- Tool integration architecture: 3-4 hours
- Database schema planning: 2-3 hours
- Message flow design: 1-2 hours

Implementation Phase: 12-18 hours
- Plugin modifications: 6-8 hours
- Database helpers: 2-3 hours
- Relay manager updates: 2-3 hours
- Error handling: 2-4 hours

Debugging Phase: 8-12 hours
- String vs JSON issue: 3-4 hours (this was nasty! See the sketch below.)
- API format debugging: 2-3 hours
- Tool conversation reconstruction: 2-3 hours
- Frontend rendering: 1-2 hours

Testing & Polish: 4-6 hours

TOTAL WITHOUT AI: 42-62 hours
That's 1-1.5 weeks of full-time development!
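
The "String vs JSON issue" in the debugging phase deserves a concrete illustration, because it's a trap nearly everyone hits once: the chat-completions API hands tool-call arguments back as a JSON-encoded string, not a parsed object. A minimal sketch (the payload here is invented):

```python
import json

# Tool-call arguments arrive as a JSON-encoded STRING, not a dict.
arguments = '{"table": "Orders"}'  # the shape the API actually delivers

# Buggy: treating the string as a dict fails at runtime:
#   arguments["table"]  ->  TypeError: string indices must be integers

# Fix: parse first, then index.
table = json.loads(arguments)["table"]
print(table)  # Orders

# The mirror-image trap: tool results sent back to the model must be
# strings too, so serialise dicts with json.dumps() before replying.
result = json.dumps({"rows": 42})
```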

With AI Assistance: 2-3 hours
🚀 Efficiency Multiplier: 14-31x FASTER!

Why AI Was So Effective:
Instant Expertise - No learning curve for MCP, APIs, etc.
Parallel Intelligence - You (architecture) + Gemini (implementation)
Real-time Pattern Recognition - Spotted the string vs JSON bug instantly
Zero Trial-and-Error - Direct solutions, not experimentation
Complex System Debugging - Analyzed multi-layer logs in seconds

The "AI Productivity Stack":
Claude: Strategic overview, architecture, debugging guidance
Gemini: Implementation, code generation, file modifications
Human: Problem definition, decision making, quality control
You've essentially compressed 1-2 weeks of solo development into an afternoon!

That's not just efficiency - that's a paradigm shift in how complex software gets built. 🤯

I think Claude has over-egged the figures. I think four or five days would be nearer the mark...
 
Now here's the kicker: "Understanding MCP protocol: 4-6 hours". The MCP protocol has only been in existence for two or three months (possibly six months; I'm not sure when it came on the scene)!

The point is, it's brand-new technology: something else to learn, and in six months' time it will be replaced with something else! Get it? It's not just that it's difficult learning how to use these new technologies; it's that once you've invested the time and effort into learning one, something else will come along. AI shortcuts this major obstacle. Of course, in another year we won't need to know anything: you will just say "do this" and it's done!
 
But I assume AI will stumble in giving good advice on new tech, in line with the limited information available on it - unless it's one of those technologies that's released with incredibly good and complete documentation that AI can leverage, in which case maybe it won't stumble. We won't have to keep learning new things that much, but it is so helpful to have a programmer's mindset/learning to start with, to help steer the AI. You have this, and thus you have had a lot of success with AI. I worry about 'vibe coding', as they say, from newbies.
 

I use Claude Sonnet + Gemini CLI... but the project is getting more complicated, so occasionally I use a really powerful model like o3 or Grok 4. I notice the superiority of these leading-edge models: they are slower, and hence a pain to work with, but their depth and insightfulness are impressive...

It's a glimpse of what's to come...
 
I just surfaced from a frustrating vibe session with Claude. I asked for a set of Stored Procedures to manage extended properties on tables.

Long story short, the internals of reading, listing, appending and deleting extended properties are, as you might surmise, cryptic. So the first request was to upgrade the stored procs to use plain-language names instead. That took four rounds, times two or three iterations each, for the four stored procs.
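
To make "cryptic" concrete: the system procedures behind extended properties really do take a stack of @levelNtype/@levelNname arguments. Here is a minimal sketch of the kind of plain-language wrapper I was after, written as Python over pyodbc for illustration rather than as the actual T-SQL stored procs; the DSN, schema and property name are hypothetical:

```python
# Plain-language wrappers over SQL Server's extended-property internals.
import pyodbc

conn = pyodbc.connect("DSN=MyServer;Trusted_Connection=yes")  # hypothetical DSN

def add_table_description(table: str, text: str, schema: str = "dbo") -> None:
    """Append a 'Description' extended property to a table."""
    conn.execute(
        "EXEC sys.sp_addextendedproperty @name=N'Description', @value=?, "
        "@level0type=N'SCHEMA', @level0name=?, @level1type=N'TABLE', @level1name=?",
        text, schema, table,
    )
    conn.commit()

def list_table_properties(table: str, schema: str = "dbo"):
    """List every extended property on a table as (name, value) pairs."""
    rows = conn.execute(
        "SELECT name, value FROM sys.fn_listextendedproperty"
        "(NULL, N'SCHEMA', ?, N'TABLE', ?, NULL, NULL)",
        schema, table,
    ).fetchall()
    return [(row.name, str(row.value)) for row in rows]

# sys.sp_updateextendedproperty and sys.sp_dropextendedproperty take the same
# level arguments, so the update and delete wrappers follow the same pattern -
# which is exactly why an inconsistency across four of them is easy to miss.
```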

Then, during UAT, I discovered an inconsistency in the way Claude converted those cryptic internal properties to plain language terms in each stored procedure! That was something I didn't expect. I had asked for all four together, but Claude treated them as unique things.

That led to four more rounds of one or more iterations. It managed to catch on and not repeat the errors in the final SPs.

If I were a newbie, I wouldn't have known if the problems were inconsistencies, hallucinations, or just passive-aggressive obstruction. That's all aside from the question of emerging technologies. Extended properties are well-established but less commonly used tech. It still took multiple iterations, with multiple tests between each round.

New tech? That could be an interesting process.
 
That makes me think, for some reason, of an article I read where programmers used AI and thought they were 20% faster, but the data showed they were actually 19% slower.
 

Which tangentially reminds me of that old saying about being careful: If you haven't got time to do it right the first time, how will you EVER have enough time to go back and fix it?
 
In this case, though, I would have spent even more time figuring out the code in the first place.

Also, the quality of the resulting code has to be taken into consideration as well.
 
I only commented because I couldn't tell what your own opinion was about the overall time savings of it.

I will say ChatGPT is helping me a lot these days, and I love its Memory feature. For example, we use Redgate at work, and you can edit your preferred SQL formatting style (so when you click Format SQL it will remember), but even that feature is inferior and doesn't quite cover all the bases. By comparison, when I tell ChatGPT "for joins, always put an entire statement that results in a new relation on a single line", etc., it will never mis-format my SQL again! I do like AI some days. Other days it scares me; it depends on what I am working on.
 
Interesting article on developers losing their trust in AI:

Trust in AI coding tools is plummeting - LeadDev https://share.google/rhZQHiiRil0lu6D46
Isn't it to be expected that increased usage would lead to a better understanding of the strengths and weaknesses of AI?

Unrealistic hype was behind the initial exuberance for AI and also behind claims that it was going to end all human employment as we knew it.

It's always been thus with new technologies. Reference the dot-com bubble and bust of the late 1990s.

I bet a good historian could document the cycle for many, many such advances in human technology and understanding.
Initial awe at the brilliant new technology, followed by unrealistic predictions of world-changing implementation, followed by disillusionment, disappointment and regret. And finally a period of cold-eyed experimentation, development and advancement.

We're on the cusp of a healthy reset, I think. We've been skeptical of the new phenomenon and we've been excited by the unfolding possibilities. And now we are about to find out where the limitations really are. In the not too distant future we'll become accustomed to what can and can't be done and take off on a new kind of productivity.
 
