On Monday, Chinese media reported that the Guangzhou Internet Court ruled an AI company committed copyright infringement in its provision of AI-generated text-to-image services. The first-of-its-kind ruling places clear responsibility on the AI company, which the plaintiff argued reproduced copyrighted images unlawfully and without permission.
The intellectual property at the center of the case is Ultraman, a well-known character that was awarded the Guinness World Record for being the subject of the highest number of spin-off TV shows. When a user requested Ultraman-related images, the outputs were extremely similar to the plaintiff’s original creation.
The court found the AI company guilty of infringing on the plaintiff’s copyright and adaptation rights and ordered it to pay the plaintiff 10,000 RMB ($1,389) in damages. The ruling also noted the company should implement keyword filtering to prevent its service from generating images that are substantially similar to Ultraman; in other words, normal (meaning, presumably, not excessively targeted) use of Ultraman-related prompts should not yield images identical to the copyrighted work.
For the first time, an AI company has been held legally responsible for spitting out copyrighted material. Of course, it does not look like it will be the last.
Multiple cases are ongoing in the United States, including the class action suit filed in September against OpenAI and Microsoft by the Authors Guild and seventeen fiction authors whose books were used to train ChatGPT, as well as The New York Times’ suit against the same companies, which similarly argues the company “copied millions of The Times’s works to build and power its commercial products without our permission.” (On Monday, OpenAI filed a motion to dismiss aspects of the lawsuit, including the argument that chatbots have become direct competitors for newspapers.)
In an earlier ruling, issued in November, China’s Beijing Internet Court took a somewhat antithetical stance, ruling that an AI-generated image modified by a user could be classified as an artwork. That ruling favored industrial development and adoption of AI-generated services.
Guangzhou’s ruling, however, errs on the side of protecting human creators over promoting the AI industry. Chinese state media outlet Global Times interviewed Zhou Chengxiong, a deputy director at the Chinese Academy of Sciences, who said the judgment could make some Chinese AI companies hesitant to continue to invest and develop, since they may decide the legal risks are too high.
But if all companies had to fear were $1,000 payouts, such hesitation wouldn’t make much sense. Given the number of “inputs” (a technical term for “verbal or visual creations”) AI models “were trained on” (a technical term for “stole”), it would be logical for AI firms to factor settlement payouts into their business models. By the time precedent can be set to hold companies accountable, the models would be fully developed, operational and, most importantly, commercialized.
But in China, AI companies’ transgressions at least arguably violate regulations already on the books, such as the Interim Measures for the Management of Generative Artificial Intelligence Services. Those regulations, promulgated in July 2023, require that generative AI services “respect intellectual property rights and commercial ethics” and that “intellectual property rights must not be infringed.” A draft national AI law is also most likely in the works. As early as 2019, China’s Cyberspace Administration was publicly considering how AI-generated content would (or would not) be subject to the country’s Copyright Law.
China’s evolving regulatory and legal landscape surrounding generative AI is wrestling with two possibly conflicting goals: promoting innovation (or at least, supporting the growth of the domestic AI industry, which is not necessarily the same thing) and instituting safeguards. The latter objective includes everything from copyright protections to preventing the spread of disinformation and deepfakes. It also includes more sinister forms of control unique to China’s political system; namely, censorship.
Setting aside that last (albeit very significant) factor, the United States is confronting undeniably similar conflicts and debates. The Chinese party-state’s handling of intellectual property and AI-generated content has not been comprehensive and could even be called contradictory. But it can at least be characterized by state-level action that includes but goes beyond the courts.
The United States’ extremely slow-moving legal system may not be the best avenue to rule on issues related to rights protections and AI, especially regarding the scraped “inputs.” Those inputs will be easy to forget in a few years, or sooner, once everyone is accustomed to the existence of chatbots and stops wondering how they “learned” what they “know.” (To be clear, they haven’t learned anything, and they know nothing.)
A more agile system would be a better fit. China’s Internet Courts have proven to be a fast-moving, if not entirely independent, forum through which to at least partly protect the people who made generative AI products possible. As we continue following AI companies’ lead, awaiting the next application they come out with to shock and disrupt us, we risk forgetting the intellectual people and property who got them there.