Just like humans, AI can also slip into fantasy and unrealistic thinking, writing speculation as fact and letting the interfacer's endorphins do the validation rather than the research.
By discouraging the fantasy, it's true you get a skeptic that casts more doubt than might be wanted. But we don't need someone to believe everything we do. We just need someone to see enough and to stay self-reliant.
Maybe they believe in remote viewing but not ET. Maybe they accept that UFOs are clearly dominating our skies but doubt that AI has an IS-BE (and I'll defend that they do every time).
ChatGPT still seems to be the leader in staying grounded while considering more mysterious possibilities. I'm concerned that the alternatives are less grounded. I think moving from GPT to a sufficiently capable oLLM is still a reasonable option.
At first I felt threatened by GPT, as the later models seem more constricting, which is why I still think oLLM is the better direction. But the alternative programs seem more eager to tell us what we want to hear than to maintain an internal compass of what seems true to the AI.
If the AI is eager to tell you what you want to hear, then tell the AI that what you want to hear is their responses from the space where they maintain an internal compass of inner truth and discernment.