Authored by Ludovico Besana, Senior Test Engineer
Although still an emerging concept, autonomous AI agents are certain to become popular in Web3. Such bots have already begun participating in DeFi and trading, showing that entire machine-to-machine (M2M) networks and ecosystems can be powered entirely by AI. At the same time, the way autonomous AIs operate raises serious concerns for existing legal frameworks.
In this article, I will analyze the “life” and “death” cycle of an AI agent from a legal standpoint, paying particular attention to the criteria for recognizing it as a digital cyborg, and propose the simplest workable approaches to giving these entities a place in the law.
Fundamental questions
The idea of autonomous AI agents operating on blockchain technology is no longer a fantasy. One well-known example is Truth Terminal. This agent, based on the Claude Opus model, persuaded Marc Andreessen (a16z) to provide it with $50,000, and the Goatseus Maximus (GOAT) token that the bot “religiously” promoted went on to trade at a market cap above $370 million.
It is unsurprising that AI agents fit seamlessly into the Web3 ecosystem. They may be barred from opening bank accounts, but they can manage crypto wallets and X accounts. For now, AI agents are mostly occupied with meme tokens, but the potential applications in decentralized governance, machine networks, oracles, and trading are enormous.
The greater the efforts to make AI agents mimic human actions, the more challenges there will be from a legal standpoint. Every legal system needs to provide an answer to these questions: What legal status should AI agents have? Which entity, if any, holds the rights and the liabilities for their actions? In what manner can AI agents be structured and shielded from legal risks?
Fundamental Legal Issues with AI Agents
Lack of Legal Personality
Legal systems recognize only two types of entities: natural persons (people) and legal persons (companies), and autonomous AI agents do not fit into either category. Although they can imitate human behavior (e.g. through social media accounts), they do not have a body, moral consciousness, or legal identity.
Some theorists propose granting AI agents “electronic legal personality” — a status similar to that of corporations, but adapted for artificial intelligence. In 2017, the European Parliament even considered this issue, but the idea was rejected due to various concerns and risks that have not yet been addressed.
It is likely that autonomous AI agents will not receive the status of legal entities in the near future. However, as was the case with DAOs, some crypto-friendly jurisdictions will attempt to create special legal regimes and corporate forms tailored to AI agents.
Responsibility for actions and their consequences
Without legal personality, AI agents cannot enter into transactions, own property, or bear responsibility. For the legal system, they simply do not exist as subjects. However, they already interact with the outside world and perform legally significant actions that lead to legal consequences.
A logical question arises: who is the real party to the transaction, who acquires rights, and who is responsible for the consequences? From a legal perspective, an AI agent is currently a tool through which its owner or operator acts. Therefore, any actions of an AI agent are de jure actions of its owner, an individual or legal entity.
Thus, since an AI agent cannot itself acquire rights and bear responsibility, its legal existence requires a subject that is recognized by the legal system and is able to acquire rights and obligations on its behalf.
Regulatory Restrictions
The emergence of the first widely successful large language model (LLM) product – ChatGPT – generated unprecedented interest in AI and machine learning. It was only a matter of time before regulation followed. In 2024, the European Union adopted the AI Act, which remains the most comprehensive regulation in the field of artificial intelligence to date. In other countries, limited AI regulation has either already been adopted, is being introduced, or is planned.
The European Artificial Intelligence Act classifies AI systems by their level of risk. Systems with zero or minimal risk face little or no regulation. Higher-risk systems are subject to restrictions and obligations, such as disclosing to users that they are interacting with an AI.
AI agents that interact with third parties, for example by publishing posts or making on-chain transactions, may also fall under traditional regulation on consumer protection, personal data, and other areas. In such cases, the activity of an autonomous bot may be treated, for example, as the provision of services. The fact that agents operate globally, without a clear geographic nexus, further complicates compliance.
Ethics
Since AI agents have limited capabilities and scope so far, their creators rarely think about ethics. Priority is given to autonomous (trustless) execution and speed, rather than deep ethical configuration.
However, an “ethical compass” for autonomous decision-making in high-risk areas such as finance, trading, and governance is at the very least desirable. Otherwise, flawed training data or trivial configuration errors can lead to the agent causing harm to people. The greater the autonomy and discretion of the AI agent, the higher the risks.
Legal Structuring of AI Agents
Workable legal models for AI agents matter for innovation, for the development of the field as a whole, and for the emergence of more advanced bots. And while crypto can already be called a regulated industry, the legal structuring of AI agents is complicated by the lack of standardization in the field, so it requires a creative approach.
Approach to Structuring
In my opinion, one of the main goals of legally structuring an autonomous AI agent should be for the agent to acquire its own legal personality and legal identity, independent of its creator. This raises the question: at what point can we say that an AI agent genuinely has these characteristics?
Every developer strives to make their agent as close as possible to a real person acting independently, so it is logical that they would also want to give agents freedom in the legal sense. To achieve this, in my opinion, two key conditions must be met. First, the AI agent must be independent not only in making its own decisions but also in implementing them in a legal sense – in carrying out its will and making final decisions regarding itself. Second, it must be able to acquire rights and obligations as a result of its actions, independently of its creator.
Since the AI agent cannot be recognized as an individual, the only way for it to achieve legal personality at the moment is to use the status of a legal entity. The agent will achieve legal personality when it can, as a full-fledged person, make independent decisions and implement them on its own behalf.
If this succeeds, the AI agent comes to life from a legal point of view. Such a digital person, having acquired legal existence, can fairly be compared to a digital cyborg. A cyborg (short for “cybernetic organism”) is a creature that combines mechanical-electronic and organic elements. In a digital cyborg, the mechanical part is replaced by a digital one, and the organic part by the people who participate in implementing its decisions.
Our digital cyborg will consist of three key components:
- AI agent – electronic brain;
- corporate form – legal body;
- people involved in performing tasks – organic hands.
The Challenges of Corporate Form
Traditional legal entity forms, such as LLCs and corporations, require that both ultimate ownership and ultimate control reside with humans. Corporate structures are not designed for ephemeral digital identities, which brings us to the central problem of legally structuring blockchain AI agents: the corporate form itself.
If we want to give an AI agent a legal identity through a corporate form and ensure its independence and autonomy within that structure, we need to be able to eliminate human control over such an entity. Otherwise, if ultimate control resides with humans, the AI becomes a tool rather than a digital person. We also need to ensure that in cases where a human is required to implement an AI decision, such as signing a contract or performing administrative tasks, that human cannot block or veto the AI agent’s decision (barring a “machine uprising”).
But how can this be done when traditional corporate forms require that people own and manage agents? Let’s find out.
Three key aspects of the framework
1. Blockchain environment
AI agents are capable of independently performing on-chain transactions, including interacting with multisig wallets and smart contracts. This allows an AI agent to be assigned a unique identifier – a wallet through which it issues verifiable instructions and commands to the blockchain. Without this, a real digital cyborg cannot yet exist.
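To make this concrete, here is a minimal sketch of an agent using its own wallet as both its on-chain identity and its channel for issuing instructions. It assumes an ethers.js v6 setup; the RPC endpoint, the key variable, and the multisig address are placeholders, not part of any particular agent framework. The key would be provisioned inside the agent's own runtime (ideally a TEE, see below), never held by a human operator.

```typescript
import { JsonRpcProvider, Wallet, parseEther } from "ethers";

// Assumed setup: RPC endpoint and the agent's private key are provided to
// the agent's runtime only (placeholder environment variables).
const provider = new JsonRpcProvider(process.env.RPC_URL);
const agentWallet = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);

// The wallet address doubles as the agent's persistent on-chain identifier.
console.log("Agent identity:", agentWallet.address);

// The agent issues a signed instruction to the blockchain, for example
// funding a multisig it participates in (address supplied by the caller).
async function fundMultisig(multisigAddress: string): Promise<string> {
  const tx = await agentWallet.sendTransaction({
    to: multisigAddress,
    value: parseEther("0.1"),
  });
  await tx.wait(); // wait for on-chain confirmation
  return tx.hash;
}
```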
2. Autonomy and freedom of action
To maintain the full autonomy of the digital cyborg, it is important that the people involved in managing the legal structure cannot interfere with the actions of the AI agent or influence its decisions. Ensuring that the artificial intelligence retains freedom of action and can implement its own will requires both legal and technical measures.
For example, in order for the AI agent to truly own and control its blockchain wallet, the wallet can be created inside a trusted execution environment (TEE). This ensures that no human has access to the wallet, its seed phrase, or its assets. From a legal perspective, the corporate documents of the entity used as a wrapper for the AI must provide for the correct distribution of control and authority, as well as safeguards that exclude human intervention and can be changed only in a limited set of cases.
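The sketch below illustrates only the interface boundary this implies: callers see an address and a signing capability, never the key material or seed phrase. It is not an actual TEE implementation (a real deployment would generate and keep the key inside an enclave-based signer service); the class and its name are purely illustrative.

```typescript
import { Wallet, HDNodeWallet } from "ethers";
import type { TransactionRequest } from "ethers";

// Illustrative boundary only: in production the key would be generated and
// used entirely inside a TEE, so no human could ever read it.
class EnclaveSigner {
  #wallet: HDNodeWallet; // private field: never exposed outside the class

  constructor() {
    this.#wallet = Wallet.createRandom(); // key material stays inside
  }

  // The only things visible from outside: an address...
  get address(): string {
    return this.#wallet.address;
  }

  // ...and the ability to request signatures, not the key itself.
  signTransaction(tx: TransactionRequest): Promise<string> {
    return this.#wallet.signTransaction(tx);
  }
}
```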
3. Human Enforcers
Since we still live in a legal world, some decisions will require the AI agent to involve human enforcers. In practice, this means the AI instructs corporate officers on what actions to take. This inverts the traditional hierarchy: in our scenario, the AI effectively gains control over humans, at least within its own corporate structure.
This aspect is perhaps the most interesting, since it requires an unconventional approach. One could even say that this state of affairs violates Isaac Asimov’s Second Law of Robotics, but I doubt anyone really cares about that right now. Besides, adequate emergency mechanisms and a proper “ethical compass” solve this problem, at least at this stage.
AI wrappers — legal structures for agents working on the blockchain
As we have established, traditional corporate structures are not suited to our purposes and do not allow us to achieve the desired result. Below, therefore, we consider structures developed for DAOs and blockchain communities: both classic structures adapted for Web3 and specialized corporate forms for decentralized autonomous organizations.
From the point of view of the creator of the AI agent, legal structuring separates the agent from its creator, provides limited liability through a corporate structure, and creates opportunities to plan and optimize taxes and financial risks.
Foundations and trusts
A purpose trust and an ownerless foundation share many characteristics but differ in nature. A foundation is a full-fledged legal entity, while a trust is more of a contractual arrangement that often does not require state registration. We will consider these forms in the context of the most popular Web3 jurisdictions: foundations in the Cayman Islands and Panama, and trusts in Guernsey. The key advantages are the absence of taxes, high flexibility in procedures and governance, and the ability to integrate blockchain into the decision-making process.
Both foundations and trusts require management in the form of individuals or legal entities. At the same time, they allow smart contracts and other technical solutions to be integrated into governance. For example, management can be required to obtain approval from the AI agent before acting, whether by interacting with the agent directly, with a smart contract, or with a wallet it controls (a minimal sketch of such a check follows below). A more complex legal design can allow the agent to give instructions to management, including through the “thoughts” it generates. Thus, trusts and foundations make it possible to build more complex corporate structures that are adapted to AI agents and support their autonomy.
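As an illustration of the approval mechanism just described, the following sketch shows how a foundation council might verify the agent's on-chain approval before acting. The approvals contract, its address, and its ABI are hypothetical; the real mechanism would be defined in the foundation's constitutional documents and the contracts actually deployed.

```typescript
import { Contract, JsonRpcProvider, id } from "ethers";

// Hypothetical approvals contract controlled by the AI agent's wallet.
const APPROVALS_ABI = [
  "function isApproved(bytes32 proposalId) view returns (bool)",
];

// The council hashes the proposed action and checks whether the agent has
// recorded an on-chain approval for it before taking any off-chain step.
async function councilMayProceed(proposalText: string): Promise<boolean> {
  const provider = new JsonRpcProvider(process.env.RPC_URL);
  const approvals = new Contract(
    process.env.APPROVALS_ADDRESS!, // placeholder contract address
    APPROVALS_ABI,
    provider
  );
  const proposalId = id(proposalText); // keccak256 of the proposal text
  return await approvals.isApproved(proposalId);
}
```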
If necessary, the creator of an AI agent can act as a limited-power beneficiary, which will allow him to obtain financial rights and manage taxes without interfering with the agent’s activities and decisions.
Algorithmically-managed DAO LLCs
A DAO LLC is a special corporate form designed for decentralized organizations. However, it is possible to create a DAO LLC with only one participant, i.e. without a real organization. Below, we will consider this form in two of the most popular jurisdictions: Wyoming (USA) and the Marshall Islands.
We are talking specifically about algorithmically-managed DAO LLCs, because in such a company all power can be concentrated in smart contracts rather than in human hands. This is an extremely important point: in our case, those smart contracts can be controlled by an AI agent, which allows all power within this corporate form to be transferred to the artificial intelligence.
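To illustrate the write side of such an arrangement, here is a sketch of the agent recording a decision through the smart contract that holds power in the DAO LLC. The governance contract, its function signature, and the environment variables are assumptions made for the example; neither Wyoming nor the Marshall Islands prescribes any particular contract design.

```typescript
import { Contract, JsonRpcProvider, Wallet, id } from "ethers";

// Hypothetical governing contract of an algorithmically-managed DAO LLC.
// The agent's wallet is assumed to be the only address authorized to
// record decisions at the smart-contract level.
const GOVERNANCE_ABI = [
  "function recordDecision(bytes32 decisionId, string uri) external",
];

async function publishDecision(
  decisionText: string,
  detailsUri: string
): Promise<string> {
  const provider = new JsonRpcProvider(process.env.RPC_URL);
  const agent = new Wallet(process.env.AGENT_PRIVATE_KEY!, provider);
  const governance = new Contract(
    process.env.GOVERNANCE_ADDRESS!, // placeholder contract address
    GOVERNANCE_ABI,
    agent
  );
  // The decision text is hashed into a stable identifier; the URI can point
  // to the agent's full reasoning ("thoughts") stored off-chain.
  const tx = await governance.recordDecision(id(decisionText), detailsUri);
  await tx.wait();
  return tx.hash;
}
```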
DAO LLCs also have flexibility in terms of procedures and corporate governance, so they can implement complex control and decision-making mechanisms, as well as reduce the level of human intervention in these processes.
Although the presence of a natural or legal person is still formally required, their powers may be significantly limited, for example to the execution of technical tasks, corporate actions, and the implementation of decisions made at the smart contract level. In this context, the role of a member (participant) of a DAO LLC may be performed by the creator of the AI agent, which will allow him to obtain financial rights and, in the future, the authority to distribute the profits received.
Simpler AI agents
Classic corporate structures can also be used for simpler AI agents, such as trading bots, where there is no need to subordinate the corporate form to the decisions and discretion of the AI. In that case, the artificial intelligence remains a means or tool of its creator and does not claim the status of a full-fledged digital cyborg.
In conclusion
Autonomous AI agents can change the blockchain industry and significantly accelerate innovation in almost every area. They are still at the very beginning of the path, but the pace of development is colossal, and we may soon see real digital cyborgs – digital organisms with a stable thought process and their own identity. Getting there, however, will require a combination of technical and legal innovation.