User vs. Developer: The big AI question

Isaac

So the big question that I see slooooowly emerging in my mind is this: am I going to be just an AI user, or am I going to start learning how a person builds programs on top of the AI infrastructure that already exists (the big players), who let us build chatbots, or other manifestations of an AI service, by using their service? I've seen tiny glimpses of this when writing my Amazon KDP books, where ChatGPT would hit the limit of what it was willing to output (in various situations, e.g. physical intimacy in the romance novels) - and it would point me to other services people had built, programs that somehow funnel the capacity of the known LLMs through their own service with zero filters.

I don't really understand it yet; I just know enough to realize I don't know. I realize there is a whole world out there of taking the existing AI companies' work, funnelling it through your own service, and providing a 'product' of your own - an end product. (I mean, surely they aren't reinventing the whole LLM themselves; that would be as impossible as me opening a shop tomorrow that rivals OpenAI, right?)

I'm rambling incoherently but with good intentions. What are your thoughts for yourself? Using AI vs. building an AI-related, AI-supported, or AI-adjacent end product? Do you understand the landscape better than I do? If so, share.
 
I posted my observations on your post into Grok and asked Grok to come up with a response, and here it is:

Hey Isaac, love the ramble—it's spot on that "just using" AI feels like dipping a toe, while building your own wrappers (like those no-filter romance tools for KDP) unlocks the real fun of turning big players' pipes into your endgame product, without reinventing the wheel.
Spot on too that folks aren't cloning OpenAI overnight; it's all about layering services on their APIs to dodge limits and add your spin.
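To make the "layering services on their APIs" idea concrete, here is a minimal sketch of the wrapper pattern. Everything in it is a made-up placeholder for illustration: the URL, model id, and "StoryForge" persona are not any real endpoint or product, just the shape such a service typically takes (your own system prompt wrapped around a user's message before forwarding it upstream).

```python
import json

# Illustrative sketch of the "wrapper" pattern: a small service layers
# its own system prompt (its "spin") on top of a big player's
# chat-completions-style API. URL, model id, and persona are all
# invented placeholders.

UPSTREAM_URL = "https://api.example.com/v1/chat/completions"  # placeholder

def build_request(user_message, persona="You are StoryForge, a fiction-writing assistant."):
    """Build the JSON body our wrapper would forward to the upstream LLM."""
    payload = {
        "model": "some-upstream-model",  # placeholder model id
        "messages": [
            {"role": "system", "content": persona},     # the wrapper's added value
            {"role": "user", "content": user_message},  # passed through unchanged
        ],
    }
    return json.dumps(payload)

if __name__ == "__main__":
    body = build_request("Draft an opening scene set in Lisbon.")
    print(json.loads(body)["messages"][0]["role"])  # prints "system"
```

The point of the sketch: all the differentiation (personas, filtering policy, billing) lives in what the wrapper adds around the upstream call; the model itself stays the provider's.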

To your big question:
I'd say dive into building AI-adjacent stuff once you've hammered the "user" phase hard—use LLMs everywhere to build that instinct for when they're gold versus garbage.
Pound 'em in wild spots so you feel the flow, but never buy they're always right or the full fix; they're handy sidekicks, sure, but humans gotta steer or it all goes sideways.

They sound like they get you deep, but nah—they're just remixing internet scraps from similar chats, a slick scavenger vibe that fools you into thinking it's a real brain until it drops some total nonsense and snaps you back.

That said, even one of AI's founding dads, Yann LeCun, just said on X that these LLMs won't ever hit true smarts—he's cooking up LeJEPA instead, a setup that teaches models to predict real-world links (like matching photo angles) way better than word-spit.

X thread HERE:
Grok said: I'm all about that hybrid. xAI built me to spark your tinkering, so using me to prototype a custom bot (say, via no-code like Bubble plus our API) beats passive queries every time.
Love it! Thanks for posting. I've pounded ChatGPT pretty hard. I like this: "it's all about layering services on their APIs to dodge limits and add your spin."
 
Hi Isaac,
Before I retired I owned and ran a small, successful software development company. I was the systems analyst/architect and managed a team of very skillful programmers. To me, Claude is just another one of those programmers (although highly skilled and incredibly fast).
I would say there are major advantages and disadvantages at the same time. The main advantage is that Claude delivers cleaner, less bug-ridden code than the very best programmers I have ever known.
The disadvantage is that whereas my programmers would take my designs and get on with it, only coming back to me when really stuck, AI is a constant two-way consultation process that can easily take all my time.
Quite frankly, I am really looking forward to the future of AI: the time when it can take full responsibility for a project and allow people like me to engage in several projects at the same time.
Right now, that's not possible.
Regards

Alan
 
I agree. Claude simply can't hold a full map of a project in his little head. Doesn't matter if it's big or little... he simply can't. I have been an analyst for many years and nowadays I only use Claude Code (after much testing). Is it good? Yeah... it's extremely good. But it's like a "rebel" junior: 1) he always wants to finish soon; 2) he always takes the shortest path, not the smartest one; 3) he has very little feel for analysis and schema... well, he knows... but he never takes the complex, scalable routes if you don't tell him the route clearly.
The good part? He's extremely careful... he has lots of proactive thinking (he solves bugs you didn't tell him about while analyzing other things, for example)... he's fast as hell... cheap as hell... it's really easy to train him via MCP, ChromaDBs, skills, agents, memory.md, etc... and if you give him good NON-LAZY prompts, he's extremely good (much better than I am). So... just don't be lazy with the prompts... give him info and details about the schemas you want, or you will get machine-made spaghetti code (that will work, yeah... but will be a mess hahaha).
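A hedged illustration of the "non-lazy prompt" idea: a project memory file like the memory.md mentioned above can spell out the schema and conventions so the agent doesn't default to the shortest path. Everything below (table names, rules) is invented for the example, not taken from any real project or official template.

```markdown
# memory.md — project conventions (illustrative only)

## Database schema
- users(id, email, created_at) — email is unique; never store plaintext passwords
- orders(id, user_id -> users.id, total_cents, status)

## Rules for generated code
- Always go through the repository layer; never inline SQL in handlers.
- Prefer the scalable route: paginate every list endpoint, even small ones.
- Do not "finish early": run the full test suite before declaring a task done.
```

The more of this routing you write down up front, the less the "rebel junior" has to guess.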

About what Alan says, that's true... He implements and thinks fast... but it still takes a lot of time to feed him. I have worked on up to 3 projects at the same time (3 screens) and it is stressful. It takes a lot of time and concentration... but it works :-). I just feel that, like Alan said, you can't simply delegate a project to him with some instructions (or LOTS of instructions). You still have to be there. Anyway, I'm pretty sure we won't reach 2027 without more game changers like that one. From Opus 4.5 to 4.6 has been a really big jump... now I feel that, although I still can't sign off on a plan without reading it, it's not the same as 3-4 months ago.

And about Isaac's question!!! Well... local AI models simply can't compete in 99% of tasks. So yes... we will depend, maybe not on specific models, but on platforms like OpenRouter etc. or big players like Anthropic or Google. Although I only use AI in my developments when it's really required (AI is usually slower and more expensive than the traditional way of building programs), there are some areas where they are going to push very hard (OCR, vision, understanding of papers, summarizing, complex dynamical maths, error reporting, etc., where you simply can't build them without neural networks). For me it's sad for many reasons, but I think I would need to write a book with all the reasons.
 
I think you have far better multitasking skills than I have. Honestly, I can only manage one project at a time. I suppose that if I devoted today to project A, then tomorrow to B, and so on, I could do two at a time.
Maybe I could have handled it better when I was younger, but I guess I will never know.
Regards
Alan
 
I'm a good multitasker, but always from home, with my screen, my mouse, and my keyboard. In the office, with 5 guys moving around and talking, 5 telephones ringing, bad music, etc... I do in a morning what I do at home in minutes. Anyway... yeah... age doesn't go well with multitasking hahahaha. Wish I knew how my 18-year-old ASM self would have behaved with CC and Opus back in those times lol (I bet I would be in jail lol).
 
