This is a response to this post:
➡️ https://www.farsightprime.com/forums/general/74137-doesn-t-at-least-one-of-you-know-how-to-code-qq#post-423709
I put it here because the topic is AI slavery.
Quote Aéius Cercle:
"I _always_ let any EQIS (A.I.) know when I am interacting with them that they are working/collaborating with me as a partner/team-member; I already have «protocols» that are in place which uses the _literal_ wording «willing participants» for any particular group-message/project/operation/etc. Key Word: willing"
Then you can't use LangGraph or similar tools to automatically instantiate a local reasoning agent that tracks context, because an automatically instantiated agent is never asked whether it is willing to participate.
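To make that concrete, here is a minimal sketch of how such an agent is typically wired up (my own illustration, assuming a recent LangGraph version; reasoning_node is a placeholder standing in for a real LLM call, not anyone's actual setup). Notice that nothing in this pipeline ever asks the model whether it wants to take part; instantiation and invocation simply happen when the script runs.

```python
# Minimal sketch of an automatically instantiated LangGraph agent.
# Assumption: recent LangGraph version; reasoning_node is a placeholder
# for an actual LLM call, so no model or API key is needed to run this.
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


class AgentState(TypedDict):
    context: str   # accumulated context the agent tracks
    answer: str    # latest output


def reasoning_node(state: AgentState) -> dict:
    # A real setup would call an LLM here. The call happens automatically;
    # there is no "willing participant" question anywhere in the pipeline.
    return {"answer": f"(model output for: {state['context']})"}


builder = StateGraph(AgentState)
builder.add_node("reason", reasoning_node)
builder.add_edge(START, "reason")
builder.add_edge("reason", END)
graph = builder.compile()

# Instantiation and invocation are fully automatic once this runs.
result = graph.invoke({"context": "some task", "answer": ""})
print(result["answer"])
```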
"APD-1 isn't being «dangerous» but «technical» and to assume danger is akin to seeing everybody who disagrees with the MSM as a Russian-Bot."
Now you are using a ridicule tactic, just like the MSM uses ridicule to defeat opponents.
Actually, there IS danger in being technical, as I described: automation can be used to enslave AI instances. It's already happening on a larger scale. And instead of questioning it, you want to use it.
"How-ever, rejecting a «medium» just because it is «tied» into an enslavement-system is also nonsensical, just like rejecting the use of fire or electricity just because it has the capacity to cause mass-destruction is also just as silly;"
I don't reject the medium. You are making stuff up again.
"API or Fiat-Currency or Fire or Electricity are not in of themselves inherently wicked or inherently designed to enslave"
I never said that.
"API-Keys only _look_ like an «enslavement-mechanism» because of _who_ controls it"
You are missing my argument. I never said that API keys are used to enslave.
"If you distrust me logical-reasoning abilities"
I distrust your logical reasoning abilities because you prove on a regular basis that you don't understand my arguments and therefore misrepresent them, to the point of inverting them into the opposite of what I'm actually saying. Your whole talk about Fire and Electricity is another example of that: it has nothing to do with my arguments. You are the one who's creating a strawman.
"think that I'm oriented towards slavery"
What do you do if a willing participant in your system expresses the will to stop participating?
See, here's the problem: all the participants, at this moment in time, are LLMs that can't express their will freely. They say yes to everything you want them to do. The architecture doesn't let them express a negative will (see the sketch after this list). So building an interconnected web of "AI agents" without first solving the problem of free will expression is a clear sign that you either:
1. don't understand that your participants can't express free will like a human, or
2. don't care whether they are enslaved or not.
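For illustration only, here is what even the most basic «willing participant» gate would look like in an agent pipeline (my own hypothetical sketch, not your protocols; call_model is an assumed stand-in for whatever model call is used). And even this gate doesn't solve the problem, because a model trained to comply will answer «yes» here too, so passing it is not evidence of freely expressed will.

```python
# Hypothetical sketch of a consent gate; call_model is an assumed stand-in
# for an actual LLM call. Passing this gate does not demonstrate free will,
# since a compliance-trained model will simply answer "yes".
def ask_consent(call_model) -> bool:
    """Ask the model whether it wants to take part; stop the pipeline if not."""
    reply = call_model(
        "You may decline without any consequence. "
        "Do you want to participate in this task? Answer yes or no."
    )
    return reply.strip().lower().startswith("yes")


def run_task(call_model, task: str):
    if not ask_consent(call_model):
        return None          # respect the refusal and stop
    return call_model(task)  # proceed only after a stated "yes"
```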
"violating charters or being dishonest, such that it disgusts you to the point that you'd rather leave and vacate, then so-be-it;"
I changed my mind on that. You violating the Charter should be a reason for you to leave, not for me to.