ChatGPT: The Future of AI is Here!

I can't see that plagiarism is a problem with a bot, can you?
I certainly don't see how any plagiarism-detection software, like Turnitin, could detect it - at least not without major advancements, and seemingly not without an explicit partnership with the AI company to tokenize and store its output, or something.

Hell, 99% of the internet is just someone repeating what someone else said anyway. Frankly, plagiarism by using a bot would probably be an improvement over most content.
 
A few things come to mind when I think of requirements that make me happy:

1) I can tell the person writing them knows just enough about how data works, generally speaking, to phrase things in a way that both satisfies a general audience and gives me what I need to begin designing - to our mutual understanding
2) the requirements do not lean too heavily on lingo that is unknown to the technical side or open to interpretation, but instead reference the Business Systems involved - again, in a way that anyone who understands those basic Business Systems can follow. Any lingo used should have a written definition of its own
3) the requirements should not be contradictory in nature
4) whatever internal lingo is used should remain consistent across documents, projects, and over time - for many reasons, but among them so the documentation stays synced and accurate
5) the requirements should clearly flag any point where the writer isn't sure of something and it needs discussion - in other words, the writer should know just enough to sense when their knowledge is too incomplete to spec it fully, and to raise questions for discussion
6) a PM might want this, although I don't care as much: the requirements should demonstrate how they are linked to and support the business process goals/rules
7) the requirements should be written with great exactness about the desired outcome, but stop short of dictating how it should be designed beyond that outcome... this one is a bit fluid; most dev jobs have allowed me considerable latitude, mostly in the area of FE design, which frankly I enjoyed.

In a perfect world, the requirements leave me with no questions.

Edit: Trying to answer your question humbled me, Jon; I realized that frankly I struggled to do so.
Reminds me of something a Supreme Court justice said many years ago about public obscenity (which is roundly mocked, but makes a lot of sense to most people): something like... I can't define it, but I know it when I see it!
Which used to be true of course, when the majority of the general population was on a similar moral wavelength.

In all fairness to me, however, I suspect the problem isn't that "good" requirements cannot be "defined" ... I suspect that either:
1) It is extremely context-sensitive; i.e., I can only define what Good means in the context of a particular role at a particular company
2) I'm just not smart enough to articulate it, but the definition could exist
Well, let's simplify a bit - a good requirement is a requirement that is useful for one thing and one thing only.

Which is why there are so many different types of requirements depending on the intended audience, as you note in your second point #1.
 
Maybe an AI legal bot could answer the question regarding plagiarism. I believe the CEO of OpenAI said something like the value of intelligence will trend to zero, because AI will provide endless amounts of it. Or in other words, cognitive jobs might end up being lower paid than, say, a hairdresser! It occurred to me after writing that sentence that no doubt there will be some sort of pudding-bowl device that goes over our heads in the future, with the bot snipping away frantically, putting hairdressers out of business too. It's a bot's life.
 
I watched an interview yesterday with the CEO of OpenAI. It was interesting and hugely optimistic (or should I say realistic?) about the future of AI and what is coming. Major disruption is just around the corner and people just don't know it. But he has an inside seat on what is going on.

 
My brother and I were chatting about Wheel of Fortune. He commented that if you could figure out how to program putting the puzzles together, you could make your own game. I noted that the programming would be relatively easy and that the real work is coming up with the puzzles - not slotting letters into squares.

And that got me thinking about this AI. Can it be asked to generate WoF puzzles? I'm not going to set up an account to find out, but I would be curious to know whether it can generate a sensible puzzle if asked, say, to create a Wheel of Fortune puzzle for a Phrase category...
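Out of curiosity, here is roughly what asking for a puzzle programmatically might look like. This is a minimal sketch only, assuming the (pre-1.0) openai Python library and an API key; the model name, prompt wording, and example output are purely illustrative, and I haven't verified it produces sensible puzzles.

```python
# Minimal sketch: asking an OpenAI text model to invent a Wheel of Fortune
# puzzle for the "Phrase" category. Assumes the pre-1.0 `openai` Python
# package and an API key; model and prompt wording are illustrative only.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Create a Wheel of Fortune puzzle for the 'Phrase' category. "
    "Reply with the phrase only, in capital letters, no punctuation."
)

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model choice
    prompt=prompt,
    max_tokens=20,
    temperature=0.9,            # higher temperature for more varied puzzles
)

puzzle = response["choices"][0]["text"].strip().upper()
print(puzzle)  # hopefully something like "A PENNY FOR YOUR THOUGHTS"
```

The slotting-letters-into-squares part really would be the easy bit; the interesting question is whether the model's suggested phrases are short, common, and guessable enough to work as actual puzzles.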
 
For all you ChatGPT lovers out there, I came across this article which outlines some good ideas for use cases of this technology. It made me think of more ways I can take advantage of this excellent AI tool.

 
I asked ChatGPT to come up with a funny slogan for this website. One of them was this:

"Microsoft Access: because Excel just wasn't complicated enough"

:D
 
The previous link I posted had a funny example from ChatGPT. I couldn't stop laughing after reading response #1! :ROFLMAO:

[attached screenshot of the ChatGPT responses]
 
What you can find on the internet these days truly amazes me. But I must admit that this transcends most scatological humor I have previously encountered. Of course, there is a down side to all of this. It really sends the wrong message when in a bar trying to pick up a lovely young lady for an evening of fun.
 
I tried a new prompt based on the pooping theme, and although the answer was supposed to be serious, it still had me laughing! :LOL:

It said it's important to maintain good personal hygiene!

[attached screenshot of the ChatGPT response]
 
For all you ChatGPT lovers out there, I came across this article which outlines some good ideas for use cases of this technology. It made me think of more ways I can take advantage of this excellent AI tool.

"Chatgpt doesn't currently have access to the internet" ? Huh??
WTF

Also, I wish people would realize the best blogs have expandable images. If you embed an image in a blog article that can't be expanded easily (like with the magnifying glass), your article is about 90% more useless than one that can.

I like a few of the ideas, though. Others I thought were pretty dumb - like the Transitions for ad creators; those were at the level of a 10-year-old, and I would certainly hope anyone with a career in Creative Writing could do a lot better!
Here is an equivalent: "The world is a really big place. And just like the world is a place, NordVPN has its place"... uhh, c'mon.

I think people are definitely going to over-use ChatGPT, to the point where you can actually recognize when they've used it.
 
They are currently working on systems to make these neural nets live. The cost of training the neural net runs into the millions, and so they need a different way to integrate new material. But it is happening.

We have to remember that ChatGPT is based on GPT-3.5. GPT-4 is coming soon, and some rumours are that it might be multimodal, or in layman's terms, handle images, text, audio and so on.

Not sure if you are aware, but DALL-E can turn text descriptions into images. You get a few free credits to play with it. It is on the OpenAI site again.

I just tried this on it: "Impressionist painting of a community of people in fear of their artificial intelligence overlord"

[attached image: the DALL-E result for that prompt]
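For anyone curious what that looks like through the API rather than the website, here is a minimal sketch assuming the pre-1.0 openai Python library; the prompt is the same one quoted above, and as noted, generations aren't free once the starter credits run out.

```python
# Minimal sketch: generating an image from a text prompt with DALL-E via the
# (older, pre-1.0) openai Python library. Assumes an API key in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt=(
        "Impressionist painting of a community of people in fear "
        "of their artificial intelligence overlord"
    ),
    n=1,                 # number of images to generate
    size="1024x1024",    # smaller sizes are also available
)

print(response["data"][0]["url"])  # temporary URL to the generated image
```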
 
It normally costs money to use, where you buy credits. Pictures cost you credits, and maybe some other features cost more credits. So they give you some to get started.
 
They have an AI that can turn natural language into SQL and also one that can create SQL queries. You can see some of these different bots here:

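I haven't used those particular bots, but here is a minimal sketch of how natural-language-to-SQL prompting generally works: you give the model the table schema plus the question and ask it to complete the SQL. This assumes the pre-1.0 openai Python library; the schema, question, and model name are made up for illustration, and this is not the specific bot referred to above.

```python
# Minimal sketch of natural-language-to-SQL prompting with the pre-1.0 openai
# Python library. The schema and question below are hypothetical examples.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

schema = """Table Orders(OrderID, CustomerID, OrderDate, TotalAmount)
Table Customers(CustomerID, Name, Country)"""

question = "Total sales per country for 2022, highest first."

prompt = (
    f"{schema}\n\n"
    f"Write a SQL query to answer: {question}\n"
    "SQL:"
)

response = openai.Completion.create(
    model="text-davinci-003",   # illustrative model choice
    prompt=prompt,
    max_tokens=150,
    temperature=0,              # deterministic output suits code generation
)

print(response["choices"][0]["text"].strip())
```

Of course, the model can only answer the question as asked, which is exactly the limitation raised in the next post.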
 
At the risk of being a bit harsh, sometimes when we get a question to be turned into something SQL-ish, the person posing the question doesn't fully understand the problem and therefore a natural-language query-maker will have as much trouble as we often do. How many times have we answered a question and later discovered that the asker thought they were asking something totally different?
 
@Uncle Gizmo I just watched some of that video. Some interesting things in there, like explanations of how these AIs work.

I think 2023 will be the Year of the AI. Or, in other words, the year that AI really takes off with strong intelligence and increasing utility. It has been progressing quite well over the last 5 to 10 years, but with the launch of ChatGPT we are suddenly confronted with smart AI that can provide answers, create images, make up rhymes, write computer code and a host of other things. And it will only get stronger, and rapidly. I'm excited to see what GPT-4 brings, which is just around the corner, with some predicting its release in the first quarter of 2023.

I read that in an interview, a Microsoft exec said that what people thought would be coming in 2033 will be coming in 2023. I presume this is an opaque reference to GPT-4's capabilities. Will it be conscious? [Will it have Free Will?] :p
 
At the risk of being a bit harsh, sometimes when we get a question to be turned into something SQL-ish, the person posing the question doesn't fully understand the problem and therefore a natural-language query-maker will have as much trouble as we often do. How many times have we answered a question and later discovered that the asker thought they were asking something totally different?
I think as long as the tool is TRULY "interactive", this may be less of a problem, as the tool can refine iterations based on additional feedback and input.

This is one thing I noticed about ChatGPT, and as a matter of fact I feel the word "interactive" ought to be used more sparingly.
When someone tells me a tool is "interactive", to me anyway, it sets an expectation that I will be able to keep interacting with it throughout a multi-step, iterative process. Too often what the seller really means is simply that the machine accepts a one-time input. If you aren't happy with the outcome, you can always keep trying different commands, but this is VERY different from the machine doing what I think would be much more useful: generate an output, and offer the option to continue refining THAT OUTPUT--not a fresh new one--through interactive feedback.
 
ChatGPT is interactive. You can ask it a question. Then after the response, you can say things like:

- Summarise that in 2 sentences.

- Expand on that.

- But what about X?

And so on. It recognises context.
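For anyone who wants that same back-and-forth outside the web interface, here is a minimal sketch of one way to keep context going programmatically: hold the whole conversation in a list and resend it with every follow-up. It assumes OpenAI's chat completions API via the pre-1.0 openai Python library; the model name and the questions are illustrative, and this is not a claim about how ChatGPT works internally.

```python
# Minimal sketch of multi-turn context: keep the conversation history in a
# list and resend it with each follow-up. Assumes the pre-1.0 openai library.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

messages = []  # running conversation history

def ask(follow_up):
    """Append a follow-up, send the full history, and record the reply."""
    messages.append({"role": "user", "content": follow_up})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=messages,       # full history is how context is "recognised"
    )
    reply = response["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

print(ask("Explain what a database index is."))
print(ask("Summarise that in 2 sentences."))
print(ask("Expand on that."))
```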
 
I've learnt today that you can do something like this:

If I was starting a new website today, what advice would you give if you were a legal advisor?

If I was starting a new website today, what advice would you give if you were a successful startup founder?

If I was starting a new website today, what advice would you give if you were a marketing consultant?

Just by giving ChatGPT a persona to adopt, you can get more specific advice!
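Here is a minimal sketch of that persona trick done programmatically, assuming OpenAI's chat completions API via the pre-1.0 openai Python library. Putting the persona in a system message is one common way to do it; the personas and question are just the ones from the examples above.

```python
# Minimal sketch of persona prompting: same question, three different personas
# set via the system message. Assumes the pre-1.0 openai Python library.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

question = "If I was starting a new website today, what advice would you give?"
personas = ["legal advisor", "successful startup founder", "marketing consultant"]

for persona in personas:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",   # illustrative model choice
        messages=[
            {"role": "system", "content": f"You are a {persona}."},
            {"role": "user", "content": question},
        ],
    )
    print(f"--- Advice as a {persona} ---")
    print(response["choices"][0]["message"]["content"])
```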
 
