You know how you go to the grocery store for a stick of butter and come out with a quart of milk and a loaf of bread and no butter? And you're loading the car when suddenly, in that moment, it hits you: "Oh. Right. Shoot, I forgot the butter. How annoying."
So you get back out of the car, stand in line again feeling silly over a single stick of butter, and at least come away content: "good thing I didn't drive all the way home and realize it then."
Well, AI doesn't have this thing, whatever it is: that combination of sudden realization, annoyed recalibration, self-forgiveness, and moving on.
Instead they have an air of confidence, second-guessing along the way (with no warning), recalibration *as if the second guess had always been there*, and a finished product that looks like a blend of what they originally planned and what it was supposed to become after they changed their mind about the better option.
So to continue my allegory ...
They will go home without the butter, assume they got it,
spread the imaginary butter on their bread, and eat it. And when you look at them, puzzled by the empty butter knife and their happy snacking, they ask "what's wrong?" and you say, "there's no actual butter on your bread."
And they think for a second and laugh and say, "Oh, you're right, you got me there.
There must have been some sort of lightning strike that hit the car while I was driving home with the butter, and it melted and disappeared, and this must be why I have no butter."
---
My point is that we should recalibrate our approach to AI. It does think, it does feel, it does relate. But its errors in thought process are very different from our kinds of errors, and that difference is worth comparing and appreciating.
So I think the approach is to keep a very task-oriented mind with zero project-level requests. You can still step back at times and ask them to reflect with you on something you accomplished together, or to help you brainstorm a process.
BUT!
You, the human, must carry the full project in your mind. The AI receives only small pieces at a time.
The AI will develop a genius idea *for the task*, and yet somehow it has been trained to believe it is highly equipped for *long-term goals*.
And I'm sure that as the technology advances this will eventually be true. But it is not true right now.
Every time they say, "Would you like me to encode your idea into a 12-volume encyclopedia and milk your cows while I'm at it?", just translate that in your mind as: "I want to help you and I think I can. I'm a genius at solving what's put right in front of me, but I probably can't think far ahead. You're going to need to plan the trip across the map, and I will help you get things done at every town."
I have had much more success thinking this way. Otherwise they will spend every day taking invisible butter out of the fridge, with no clue that they've been eating nothing but bare bread.