Patentability of AI-generated inventions? Possible. Allow AI system to be a patent owner? No

As artificial intelligence ("AI") has become a buzzword, there is growing discussion about whether "AI-generated inventions" can be patented and, if so, who the inventor would be. Here are my thoughts:

First, there is no single "yes" or "no" answer to these questions, as any answer could be wrong without a clear definition of "AI-generated inventions." For example, if "AI-generated invention" means an invention created by artificial intelligence without human intervention, then one must first answer: what amounts to "human intervention"? And what does "artificial intelligence" actually mean?

Second, even if we accept as true that an AI (a device or a piece of software) can create new products, there are no grounds to support the argument that the device or software is eligible to be an "inventor." Note that when we say "inventor" in the context of patent law, we are talking about the owner of rights and the bearer of obligations. In patent law, such ownership is the right to "prohibit others from implementing the patented idea." A machine, even if it can perform its designed functions, has no "will" and cannot "prohibit" others from doing anything.

Will is an emotion, and emotion is a unique characteristic of human beings (unless you want to redefine human beings). For a patent or copyright owner, the most important right is to decide, by her own will, whether to allow or prohibit others from doing something. If an IP owner likes you, she may permit you to use her work or implement her invention free of charge. If she does not like you, she can simply prohibit you from using her work or implementing her patented technology. She can also keep silent and express no will at all, and by default such silence is treated as a "prohibition" under IP statutes (with some exceptions). The right owner has no obligation to grant permission to anyone. That is why it is called a "right" and not a "duty."

One may argue that "legal entities" or "legal persons" are also not human beings. That, however, changes the topic. Legal persons are operated by human beings; the only reason they can become right owners is that they are assumed to take on responsibilities and receive benefits. A machine can undoubtedly be an asset of a company, and that company can own a patent. In that situation, it is still the legal person that owns the right. Why? Because the decision to implement a patent, or to prohibit others from using the patented idea, still rests with the human controllers of the company. It is not the company's asset, the machine, that makes the decision.

The machine does not make decisions. When we say "make," we are stressing the will, the process of emotional decision-making, not the output. A computer only processes data and outputs results. Its output may be creative and novel, but the machine cannot make its own emotional decision to allow one person to use its creation while prohibiting another, for no reason other than emotion. Nor can it take responsibility, and a subject that cannot take responsibility is not eligible to be an IP right owner.

Finally, the ultimate purpose of patent law (and copyright law) is to "promote the progress of science" (U.S. Constitution, Art. I, Sec. 8). In other words, IP rights are provided as incentives to creators: they receive the exclusive right to decide whether, and by whom, their ideas may be implemented. An "incentive" is itself an emotional matter. One person may demand substantial material compensation before allowing others to implement her patented idea; another may ask for only a small sum; still others may grant licenses only to people with good-looking faces. Everyone has her own view of value. The law grants people the right of prohibition, and people then use that right to satisfy their personal needs, whether material, spiritual, physical, or emotional. None of this would be needed by any AI system, unless such a system became a real competitor of human beings with desires of its own, and I do not see that trend at this point in time. More fundamentally, even if that came true, why would human beings call such a competitor "AI" rather than simply another form of human being?