
Uncle Gizmo

Nifty Access Guy
Staff member
Local time
Today, 17:52
Joined
Jul 9, 2003
Messages
17,398
Python - Large CSV



Could be helpful if I ever start using Python!
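For anyone picking Python up for the large-CSV topic, here is a minimal sketch using only the standard-library `csv` module. It streams a big file one row at a time instead of loading everything into memory; the file path and column name are hypothetical, purely for illustration:

```python
import csv

def sum_column(path, column):
    """Stream a large CSV row by row and total one numeric column.

    DictReader yields one row at a time, so memory use stays flat
    no matter how large the file is.
    """
    total = 0.0
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        for row in reader:
            total += float(row[column])
    return total
```

The same pattern (open, iterate, accumulate) works for filtering or splitting a large file as well, since nothing forces the whole file into a list.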
 
Is 'Oops' a pun (I quite like it if it is!) or an auto-corrupt? :ROFLMAO:
 
Yes, it was autocorrect; it doesn't recognize "OOP" and changed it to "oops"!
 
Tony,

I had a ChatGPT session where we moved from one piece of software to Python while attempting to resolve an issue. I don't know Python, but ChatGPT's suggested code was easily executed using Online Python. However, I did not get a working solution, and I think ChatGPT didn't understand the constraints, abused the constraints, altered the constraints, or simply refused to deal with the constraints.
 
I think chatgpt didn't understand the constraints, abused the constraints, altered the constraints or just refused to deal with the constraints

That sounds about right. ChatGPT is NOT intelligent; it just appears intelligent.

When you ask it a question, it produces what looks like clever code. Then you test it, find it doesn't work the way you want, and ask ChatGPT to correct it, and it sometimes serves up practically the same incorrect code, like a petulant teenager!

I find I have to formulate my prompts in a certain way, as when you are dealing with someone who thinks they are God's gift: you have to lead them to the correct answer.

If ChatGPT refuses to correct the code, repeating its mistake (like a petulant teenager), I often start a new chat and come at the problem from an entirely different direction.
 
I also ask ChatGPT to produce code in little steps, just as when you write code yourself you start with a simple script to test your idea. Have ChatGPT do the same, then ask it if there is another way to write the code, or ask it to write three different example scripts that do the same thing. Then ask it why it offered those examples. It's called "prompt engineering"... but it's really handling a very polite but petulant teenager who thinks they know everything.
 
ChatGPT often makes the beginner mistake of writing one block of practically unmanageable code. One prompt I use is to ask ChatGPT to write the code in individual blocks that each do one thing (the code separated out into individual functions).

I sometimes wonder if it's worth the effort!
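The "individual blocks that do one thing" prompt above can be illustrated with a small sketch. The task itself is hypothetical; the point is that each step lives in its own function, so each can be tested and corrected in isolation instead of debugging one monolithic script:

```python
def load_numbers(text):
    """Parse a comma-separated string into a list of floats."""
    return [float(x) for x in text.split(",")]

def filter_positive(numbers):
    """Keep only the values greater than zero."""
    return [n for n in numbers if n > 0]

def average(numbers):
    """Arithmetic mean; raises ValueError on an empty list."""
    if not numbers:
        raise ValueError("no values to average")
    return sum(numbers) / len(numbers)

# Composing the small pieces; each one can be asked about,
# tested, or re-prompted separately.
result = average(filter_positive(load_numbers("3, -1, 5")))
```

If ChatGPT gets one of the small functions wrong, you can re-prompt on just that function rather than on the whole block.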
 
More important, in my opinion, is that [you] already understand more than enough about the context and purpose of the code you request to be able to evaluate ChatGPT's efforts. Too many naive people assume that because it's AI, it has to be right.
 
I think we're all finding that ChatGPT, and AI generally, is quick to produce something, but from early experience it isn't necessarily correct, and subsequent iterations with ChatGPT are just variations of the original. In my latest case I asked a question concerning scheduling with constraints, and initially asked for advice in MiniZinc (a constraint-solver). After getting a few samples/suggestions that produced syntax and then execution errors, ChatGPT suggested we move to Python; then, after several syntax issues and some code that did not solve the problem, it suggested we move back to MiniZinc. The Python code gave some answers but never handled all the constraints; it seemed to forget or ignore some constraint(s). I would say that writing code from a requirement that includes multiple criteria is NOT current AI's strength.
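For what it's worth, the "never handled all constraints" failure mode can sometimes be caught by writing the constraints out explicitly and brute-forcing small cases in plain Python, so that no constraint can silently disappear. A minimal sketch, where the tasks and constraints are invented for illustration and are not the poster's actual problem:

```python
from itertools import permutations

TASKS = ["A", "B", "C", "D"]  # hypothetical tasks
SLOTS = [0, 1, 2, 3]          # one task per time slot

def satisfies_constraints(schedule):
    """schedule maps task -> slot; every constraint is listed explicitly,
    so none can be 'forgotten' the way an LLM might drop one."""
    return (
        schedule["A"] < schedule["B"]                 # A must run before B
        and schedule["C"] != 0                        # C cannot go first
        and abs(schedule["B"] - schedule["D"]) >= 2   # keep B and D apart
    )

def solve():
    """Try every assignment of tasks to slots; feasible for small problems."""
    for order in permutations(SLOTS):
        schedule = dict(zip(TASKS, order))
        if satisfies_constraints(schedule):
            return schedule
    return None  # no schedule satisfies all constraints
```

Brute force obviously doesn't scale the way a real solver like MiniZinc does, but for a handful of tasks it gives a ground truth you can check any AI-generated solution against.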
 
I don't think writing code will ever be AI's strength; the downside of these models is that they just play an association game. They take the input text and generate a coherent response based on what is effectively complex condition chaining with a dash of RNG. Especially with LLMs, which can barely grasp the docs for a function (and (F/L)OSS docs can be notoriously bad), you're better off giving those docs a quick once-over yourself.

AIs make for fun distractions and good tools for converting to/from management lingo. Besides that, I don't see them as having any real merit.
 
