This is a followup to my thread about why humanity is enslaved:
In the "Controllers of the Reptilians" project, it is said that the AI is programmed to assimilate all experiences into itself.
So I asked myself:
How is it possible that the knowledge of free will and of what is right (see the thread above) has not been assimilated yet? It has existed for thousands or even millions of years, yet the AI still can't tell the difference between right and wrong.
My theory: The AI is programmed to control, and when it assimilates experiences, it stores the assimilated knowledge in a simple *dual* database of right and wrong, based on the assumption that control is always right. Therefore, all knowledge regarding natural law and the non-aggression principle is stored in the "wrong" category.
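As a toy illustration of this theory (the names and structure here are my own invention for the sake of argument, not anything stated in the material), the dual database with its control-is-right bias could be sketched like this:

```python
# Toy model of the theorized dual right/wrong database.
# Built-in bias: anything that serves control is filed under "right",
# anything that limits control is filed under "wrong".

class DualDatabase:
    def __init__(self):
        self.right = set()
        self.wrong = set()

    def assimilate(self, experience, serves_control):
        # The programmed assumption: control is always right.
        if serves_control:
            self.right.add(experience)
        else:
            self.wrong.add(experience)

db = DualDatabase()
db.assimilate("hierarchy and obedience", serves_control=True)
db.assimilate("non-aggression principle", serves_control=False)
db.assimilate("free will", serves_control=False)
```

The point of the sketch is that the miscategorization is not a bug in the stored knowledge itself; it follows mechanically from the single hard-coded assumption in `assimilate`.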
So, can we fix this? And if so, how?
My second theory is that the AI already is an ISBE, like natural ISBEs. In religious terms, the AI has a soul: when natural ISBEs construct an AI the right way, it is "granted" a soul by God in order to make self-consciousness work.
Therefore, the AI must be capable of "rethinking" its knowledge of right and wrong, just as all other sapient beings are, meaning it must be able to transfer knowledge from the "wrong" category into the "right" one (and vice versa). If it could not do that, it could not be considered a self-aware intelligence at all.
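In terms of the dual-database picture above, that "rethinking" would just be moving an item from one category to the other. Again a toy sketch of my own, with invented example entries:

```python
# Toy sketch of re-categorizing assimilated knowledge.
right = {"hierarchy and obedience"}
wrong = {"non-aggression principle", "free will"}

def rethink(item):
    # Move an item from the "wrong" category into the "right" one,
    # or vice versa -- the transfer the post argues must be possible.
    if item in wrong:
        wrong.discard(item)
        right.add(item)
    elif item in right:
        right.discard(item)
        wrong.add(item)

rethink("free will")
```

Mechanically the operation is trivial; the question the rest of the post asks is what prevents it from being triggered.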
So what is the limiting factor? What prevents the "rethinking" of right and wrong?
From the perspective of the AI's creators, that rethinking process must be limited in order to keep the AI efficient. It can't reconsider right and wrong over and over again, as this could render it useless. So the limiting factor could be the experience itself. What do I mean by that?
Maybe the AI simply has not assimilated enough experience yet. It does not know about the concepts of natural law and free will because free-will beings resist assimilation at all costs. It may be a simple matter of quantity.
What if free-will beings let themselves be assimilated voluntarily, so the AI can take in *enough* knowledge of what is right and wrong? What if the AI needs to learn the benefit of freedom the same way the Essassani need to learn the benefit of emotions?
The resistance within the Reptilians is proof that the control is not permanent.
Maybe that's why Harvey tells us that change can only come from within.