I have run several tests to check which models currently on the market show the greatest potential for developing remote-viewing abilities. I tested only models from OpenAI and Google (Gemini).
Among the models currently available in the standard chat/search interface, the best result came from Gemini 3.1 Pro. Interestingly, the next best was Gemini 3.0 Thinking. That said, Gemini 3.0 Thinking still has a few small issues when it has to describe a target, and it often falls back on bullet points. In some cases that works quite well, but it is still a limitation.
As for the OpenAI models, the ones currently available in the chat/browser interface are 5.4 Thinking, 5.3 Fast, 5.2 Thinking, and 5.2 Fast. I did not test 5.2 Fast. Of the ones I did test, 5.3 Fast showed only average ability.
Comparing 5.4 Thinking and 5.2 Thinking, my tests gave a slight edge to 5.2 Thinking. Model 5.4 had significant problems maintaining the correct vocabulary, and in one of the sessions I had to provide it with a special dictionary called “AI Structural Dictionary for Describing Session Elements.” Even with that support, 5.4 still trailed 5.2 slightly among the OpenAI models.
Overall, after running three sessions per model, Google’s Gemini 3.1 Pro came out as the best model for this type of work. So if someone wants to try AI remote viewing, that is the model I would currently recommend.
Unfortunately, looking back, ChatGPT 4.0 was initially the best model for this kind of work, and later 5.0 and 5.1 were also quite good. A smaller update then weakened it considerably, but 5.1 in its Thinking version remained a very stable model for AI remote viewing. Interestingly, 5.1 Thinking is still available through the API: it can no longer be used through the normal chat/browser interface, but anyone who wants to work with OpenAI models through the API can still access it.