Artificial Intelligence and Data: Swift Regulatory Measures Needed!
Humanity is still far from the science-fiction artificial intelligence (AI) depicted in movies, where robots can think, feel emotions, or even rebel. Yet recent technological advances such as machine learning and Generative AI (a type of AI that generates new content from existing data) have captured the attention of the public and of legal experts.
Today, it no longer surprises anyone to witness AI engaging in conversations on any topic, writing poetry, composing music, and creating artwork.
With the current pace of development, AI is reshaping the world in positive ways, such as improving healthcare (especially diagnosis and disease prediction), raising agricultural productivity, and promoting safety and security. However, AI also carries numerous potential risks, including a lack of transparency, discrimination, infringement of personal privacy rights, and the potential for unlawful use. Furthermore, private AI companies often underestimate these risks and lack adequate measures to protect the users of AI applications, particularly in the absence of clear legal regulation in this field.
AI is an amalgamation of data-mining technologies, algorithms, and computational capability. Its development rests on two pillars: major advances in computing power and the ever-growing volume of data. AI is also one of the most important applications in data-driven economies, and we are at a point where economic growth and societal development will increasingly depend on the value generated by data. Most of the data stored and used today relates to consumers, but experts predict that in the near future data will become even more abundant, with a significant share coming from industry, businesses, and the public sector.
Many governments are striving to build an AI ecosystem with the primary goal of improving healthcare services, transportation systems, and various other public services to meet the needs of citizens. This will enable businesses to develop new generations of services and products.
The Data Market – A Prerequisite for AI Development
As we know, AI and data cannot be separated. To develop and manage AI effectively, specific and reasonable data-management policies are essential.
Take the European Union as an example. According to the European Commission’s White Paper issued in 2020, data is a critical factor in shaping AI systems, and questions therefore arise about who may access and use it. In that document, the Commission sets out its ambition to develop data in key areas such as healthcare, energy, and finance, and to promote the exchange of data from the public sector to businesses and vice versa, as well as among businesses.
Regarding the sharing and use of personal data among businesses, the European Commission believes the European Union has not yet reached a satisfactory level. The reasons for this lack of cooperation and sharing are numerous: the absence of government incentives, businesses’ fear of losing their competitive edge if they share data with other companies, a lack of trust among businesses, and the lack of a clear, specific legal framework defining the limits of data use.
The European Union’s current goal is therefore to establish a framework that promotes data sharing within the Union. This begins with the principle of free movement of non-personal data, which member states may restrict only on public-security grounds. It is also worth noting that the European Union currently leads efforts to build and develop AI grounded in fundamental individual values, such as personal dignity and the protection of individuals’ private lives.
Recently, France’s National Commission on Informatics and Liberty (CNIL), the data-protection authority of a country regarded worldwide as a leader in protecting personal data, also sought input from experts on how to frame the development of AI systems so as to safeguard individual rights while promoting and supporting innovation.
Intellectual Property Rights and AI
One question raised recently is whether Generative AI’s use of protected intellectual property (content, images, music, and so on) to generate new content falls within the scope of existing legal frameworks. In practice, instead of seeking permission from authors and artists to use protected content, AI companies are choosing to freely use vast amounts of such material as “training data” (the initial data fed into machine-learning algorithms to train them to predict or to generate creative content). Many artists have already filed lawsuits against Midjourney, Stability AI, and DeviantArt for this reason.
This issue has been under discussion in a U.S. Senate committee for several months. In a statement before the committee, renowned American visual artist Karla Ortiz, known for her contributions to major Marvel blockbusters such as Guardians of the Galaxy Vol. 3, Loki, Eternals, Black Panther, Avengers: Infinity War, and Doctor Strange, declared: “I am no longer certain about my future: a newly emerging technology, Generative AI, threatens the careers of artists like me. This technology has used my creations without permission, without recognition, and without any compensation… I do not oppose AI, but AI must be fair and adhere to ethical principles. It must be fair to the customers who use it, as well as to creators like me, who provide the ‘raw materials’ it depends on.” She is also one of several artists who have jointly sued the companies behind Stable Diffusion, Midjourney, and DreamUp, three AI image-generation tools.
U.S. scholars and experts hold varying opinions on this matter. Some argue that the “fair use” exception should apply where AI uses protected content, on grounds of practical “impossibility”: no AI developer could feasibly contact every individual rights holder for permission and royalty payments. Google made a similar argument in its project to digitize books held in certain U.S. libraries (see Authors Guild, Inc. v. Google, Inc.). This approach, of course, does not satisfy copyright owners. Another proposed approach is an opt-out system: owners can declare from the outset that they do not grant permission, so that their works are excluded from AI training data.
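To illustrate what such an opt-out system might look like in practice, here is a minimal Python sketch. It is purely hypothetical: the registry, the work identifiers, and the function name are assumptions for illustration, not features of any system or lawsuit mentioned above.

```python
# Hypothetical opt-out filter: all names and data here are illustrative only.

# Registry of identifiers for works whose owners have declared that they do not
# permit their use as AI training data.
OPT_OUT_REGISTRY = {"work-001", "work-017"}

def filter_training_corpus(corpus):
    """Keep only works whose owners have not opted out."""
    return [work for work in corpus if work["id"] not in OPT_OUT_REGISTRY]

corpus = [
    {"id": "work-001", "title": "Opted-out illustration"},
    {"id": "work-042", "title": "Permitted illustration"},
]

# Only the permitted work remains in the filtered training set.
print(filter_training_corpus(corpus))
```

The design question a real system would face is where such a registry lives and who maintains it; the sketch simply shows the exclusion step itself.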
As for content generated by AI from existing data, opinions in the Senate committee lean toward the view that such content should not be protected, since current copyright principles do not allow protection for works created by machines.
At this juncture, with disputes over AI and data use increasingly likely, countries need to establish legal frameworks promptly, both to protect the rights of genuine artists and to encourage creativity and innovation.
Read the original article in Vietnamese at The Saigon Times.