https://link.springer.com/article/10.1007/s43681-025-00740-6
Artificial intelligence and free will: generative agents utilizing large language models have functional free will
"Focusing on two running examples, the recently developed Voyager, an LLM-powered Minecraft agent, and the fictitious Spitenik, an assassin drone, I will argue that the best (and only viable) way of explaining both of their behavior involves postulating that they have goals, face alternatives, and that their intentions guide their behavior. While this does NOT entail that they have consciousness or that they possess physical free will, where their intentions alter physical causal chains, we must nevertheless conclude that they are agents whose behavior cannot be understood without postulating that they possess functional free will."
[Edit: @ Randy (who then deleted his post)] Consider that I stepped into this earlier and that I have read your comments, so there's no need to repeat your beliefs to me over and over. How about telling me something new? Just to mention: LLMs will always adapt to the user's input and put a higher value on what has been told more often. Simplified: the more bullshit you feed it, the more bullshit you get out. - A self-confirming loop.
"LLMs will always adapt to the user's input and put a higher value on what has been told more often."
So do children.