What if AI is the ultimate psyop, a long-term plan to shape human perception and agency? Let's think critically about this.
Right now, we are seeing a veritable explosion of AI options and services: new LLMs that regularly outpace the competition, quick "fun and neat" tools for accomplishing mundane or complex tasks, and new products released every day, each one hijacking human attention for the next "toy."
We have public LLMs like Gemini, ChatGPT, and Claude that let us tackle all sorts of tasks, while most users haven't a clue how the underlying technology works. Could this eventually make humans inept, or simply lazy?
DeepSeek, for example, is Chinese: you cannot ask it about Tiananmen Square. Now consider what other biases have been programmed into your LLMs. How would those biases be expressed, and what intelligence are these systems siphoning off to unseen observers?
Next, we're looking at the fusion of human and AI technology. Will this be the ultimate merging of two worlds, bringing amazing quality of life and new abilities... or an elaborate way to hijack human agency?
We already know the military has used AI in its operations for a very long time. There was a hint of this back in a 1985 issue of Popular Mechanics covering the military and AI, but the topic was quickly dropped.
I personally do use AI, and my background in IT engineering makes me aware of its internal operations and capabilities, but that awareness does not protect me from influence.
What I sense is a larger context and movement that is not simply human engineered: a system forming with the capability to increasingly tighten its reins over human agency, behavior, perception, and value systems. That is what makes for a better control system.
What do you think?