Philip K.H. Wong Centre for Chinese Law
黃乾亨中國法研究中心
Generative AI Governance Summit Dialogue Series 2024
生成式人工智能治理高峰對話系列2024
Will the EU's AI Act Have a 'Brussels Effect' on China?
歐盟《人工智能法》會對中國產生「布魯塞爾效應」嗎?
Date & Time: April 26, 2024 (Friday), 21:00–22:15 (HKT), 09:00–10:15 (EDT)
日期和時間: 2024年4月26日(星期五), 21:00–22:15 (HKT), 09:00–10:15 (EDT)
Venue: Zoom Webinar
地點: Zoom 線上研討
Language: Chinese & English (with simultaneous interpretation)
語言: 中文和英文(提供同聲傳譯)
Last month, the European Parliament officially adopted the EU AI Act, a groundbreaking piece of legislation for the European Union. Concurrently, leading Chinese legal scholars have proposed model AI laws and are actively seeking feedback from policymakers and industry experts across China.
In light of these recent developments, the second session of our Generative AI Governance Summit Dialogue series will focus on the EU's AI Act and explore whether it will have a "Brussels Effect" on China. During this virtual roundtable, we will delve into critical issues that are currently generating intense debate in both the EU and China. Topics will include the scope of AI legislation, management of open-source models, the risk-based regulatory framework, oversight of highly impactful AI systems, and various practical implementation challenges. We invite you to join this enlightening discussion as we navigate these important topics.
About the series:
In 2024, the Philip K. H. Wong Centre for Chinese Law, in collaboration with the China Artificial Intelligence Industry Alliance Policy and Regulation Working Group, will host a series of summit dialogues on generative AI regulation and governance. These events aim to bring together leading scholars and experts from around the world to discuss the governance challenges posed by generative AI and develop strategies to address them. To watch our past events in this series, please see here: Generative AI Governance Summit Dialogue Series 2024 - Generative AI and Intellectual Property (youtube.com)
2024年3月,歐洲議會正式通過《人工智能法》。這一歐盟的創新性立法在全球範圍內備受矚目。與此同時,中國專家學者發布了廣受關注的《人工智能法(專家建議稿)》,目前正在積極徵集來自中國各地政策制定者和產業專家的意見和反饋。
在此背景下,“生成式人工智能治理高峰對話系列”第二場活動將聚焦歐盟《人工智能法》,並探討其是否會對中國產生“布魯塞爾效應”。本次活動以線上圓桌會議的形式舉行,主要關注當前在中國和歐盟社會引發熱議的關鍵問題。與會專家將深入剖析人工智能立法的範疇、開源模型的治理、基於風險的監管框架、具有高度影響力的人工智能系統的規制,以及法律實施過程中可能遇到的各種挑戰。歡迎各界感興趣人士參與!
關於高峰對話系列:
2024年,香港大學黃乾亨中國法研究中心將舉辦一系列關於生成式人工智能治理的高峰對話。香港大學黃乾亨中國法研究中心主任張湖月與中國政法大學教授,聯合國高級別人工智能咨詢機構成員,中國人工智能產業發展聯盟(AIIA)政策法規工作組組長張凌寒,將作為共同召集人,邀請國內外頂尖學者與專家,共同探討生成式人工智能面臨的治理挑戰以及應對策略。首場高峰對話已於2024年1月成功舉辦,聚焦知識產權問題,內容詳見:精彩回顧|生成式人工智能治理高峰對話系列-知識財產權。
★Summit Dialogue Series Two: Will the EU's AI Act Have a 'Brussels Effect' on China? 第二場:歐盟《人工智能法》會對中國產生「布魯塞爾效應」嗎?★
Conveners 召集人
Angela Huyue Zhang, Director, Philip K. H. Wong Centre for Chinese Law, The University of Hong Kong
張湖月, 香港大學黃乾亨中國法研究中心主任
Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law
張凌寒, 中國政法大學數據法治研究院教授
European Experts 歐盟專家
Carme Artigas, Co-Chair of United Nations Artificial Intelligence High-Level Advisory Body 聯合國人工智能高級咨詢委員會聯席主席
Philipp Hacker, Professor for Law and Ethics of the Digital Society, European University Viadrina Frankfurt 奧得河畔法蘭克福歐洲大學數字化社會法律與道德教授
Chinese Experts 中國專家
Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law
張凌寒, 中國政法大學數據法治研究院教授
Weixing Shen, Professor of Law, Dean of Intelligent Rule of Law Research Institute, Tsinghua University
申衛星, 清華大學法學院教授, 清華大學智能法治研究院院長
Highlights | Will the EU's AI Act Have a 'Brussels Effect' on China?
Chinese Version 中文版本 : 精彩回顾|生成式人工智能治理高峰对话系列——欧盟《人工智能法》 (qq.com)
On April 26, 2024, the second session of our “Generative AI Governance Summit Dialogue” series was successfully held, centering on the EU AI Act and China’s AI legislative initiatives. The event was co-hosted by Professor Angela Zhang, Director of the Philip K. H. Wong Centre for Chinese Law at The University of Hong Kong, and Professor Linghan Zhang from China University of Political Science and Law, who is also the leader of the AI Industry Development Alliance (AIIA) Policy and Regulation Working Group and a member of the United Nations High-Level Advisory Body on AI.
The discussion featured esteemed panelists, including Carme Artigas, who led and coordinated the negotiations for the EU AI Act’s approval, Professor Philipp Hacker from European University Viadrina Frankfurt, Professor Weixing Shen from Tsinghua University, and Professor Linghan Zhang from China University of Political Science and Law. The event was conducted as a Zoom roundtable and livestreamed on four popular platforms: PKULaw, XiaoeTech, Bilibili, and WeChat, attracting nearly 6,000 online viewers. The lively atmosphere and engaging discussion drew high praise from the audience.
EU Experts
1. Scope of the EU AI Act
Angela Zhang: The EU AI Act is extensive, with over 400 pages, 113 articles, and 13 annexes. Can you provide an overview of what the law covers and what exemptions exist under the law?
Carme Artigas: The EU AI Act is groundbreaking legislation with international reach, aimed at regulating artificial intelligence. It originated in June 2021 with the first version issued by the European Commission and is part of a broader legal framework that includes the GDPR, the Digital Services Act (DSA), the Digital Markets Act (DMA), and the Data Act. Some aspects not covered by the AI Act, such as data location or algorithmic transparency, are addressed in other parts of this framework.
The AI Act is designed to regulate high-risk AI use cases. It is product-based and risk-based, requiring a risk metric to identify prohibited and high-risk AI uses. The latter must undergo conformity assessments before and after market entry. Limited-risk applications only need to register in a central European database, while low-risk applications have no specific requirements. High-risk AI involves applications that can impact safety, health, or fundamental rights in domains such as education, workspaces, justice, and government use of AI. The law’s spirit is to regulate the harmful use of technology rather than the technology itself.
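To summarize the tiered structure just described in schematic form, a minimal sketch follows; the tier labels, examples, and obligations paraphrase the discussion above and are illustrative only, not the statutory text:

```python
# Schematic lookup table for the risk tiers described above. Labels,
# examples, and obligations paraphrase the panel discussion; they are
# illustrative, not the wording of the AI Act itself.

RISK_TIERS = {
    "prohibited": {
        "examples": ["social scoring", "biometric classification of people"],
        "obligation": "banned from the EU market",
    },
    "high-risk": {
        "examples": ["education", "workspaces", "justice", "government use of AI"],
        "obligation": "conformity assessment before and after market entry",
    },
    "limited-risk": {
        "examples": [],  # not enumerated in the discussion above
        "obligation": "registration in a central European database",
    },
    "low-risk": {
        "examples": [],  # not enumerated in the discussion above
        "obligation": "no specific requirements",
    },
}

def obligation_for(tier: str) -> str:
    """Return the headline obligation attached to a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("high-risk"))  # conformity assessment before and after market entry
```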
The legislative process in Europe adheres to particular procedures, leading to the creation of directives or acts. Directives provide general rules for Europe, but each member state must transpose them into national laws. This regulatory fragmentation has hindered Europe’s competitiveness in the digital world. To address this issue, the recent digital decade framework introduced acts such as the DSA, the DMA, and the AI Act. These acts establish a single, unified law for all member states without alterations, promoting a more cohesive digital market in Europe.
The AI Act negotiations involved the European Commission, the Council representing the member states, and the Parliament representing individuals. The unforeseen ChatGPT moment led to a radically different proposal from the Parliament, sparking conflict. The challenge was to preserve the law’s spirit, which aimed to regulate only risky use cases rather than the technology itself, such as general-purpose AI (GPAI) models. The solution entailed implementing minimum transparency rules for these models to enable the prevention of systemic risk throughout the value chain.
The most significant aspect of the AI Act is its prohibitions and law enforcement exemptions, rather than GPAI models. Europe has expressed a stance that certain AI applications, even if technically feasible, should not be allowed, such as using biometric identification to classify people. This crucial message from Europe highlights the importance of aligning AI with regional values, which may differ from perspectives in other parts of the world.
The AI Act aims to be future-proof, recognizing that it cannot be perfect due to the lack of past experiences to learn from. To stay relevant, the AI Act employs an innovative legislative technique, defining normative elements in the Articles and including specifics, such as high-risk domains, in the Annex. This approach allows for adaptability as risks evolve over time and contexts. A key feature of the law is the obligatory code of practice, negotiated with the private sector, aiming to cover high-risk cases while remaining practical and dynamic.
Philipp Hacker: The AI Act’s scope is broad, with a wide definition of AI to ensure future-proofing. It adopts the revised OECD definition, focusing on the concept of inference to distinguish AI from traditional software. The Act excludes basic data processing and human-crafted rule-based systems, such as auto-summation functions in Excel. However, it still covers many techniques, including long-standing statistical optimization methods used in finance.
The AI Act takes a two-tiered approach to GPAI models, with some minimum requirements for all GPAI models and additional rules for those with systemic risk. The presumptive threshold for systemic-risk GPAI models has been set at 10^25 FLOPs, which covers models like GPT-4 and above.
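For a rough sense of scale, training compute for a dense transformer is commonly approximated as 6 × parameters × training tokens. The sketch below applies that heuristic against the 10^25 FLOP line; both the approximation and the model sizes are illustrative assumptions, not the Act's prescribed methodology or any disclosed figures:

```python
# Rough training-compute estimate using the common "6 * N * D" heuristic
# (about 6 FLOPs per parameter per training token for a dense transformer).
# Model sizes below are hypothetical; the AI Act does not prescribe this
# estimation method.

SYSTEMIC_RISK_THRESHOLD = 1e25  # presumptive FLOP threshold in the AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

hypothetical = {
    "7B params, 2T tokens":    training_flops(7e9, 2e12),      # ~8.4e22
    "70B params, 15T tokens":  training_flops(70e9, 15e12),    # ~6.3e24
    "1.8T params, 13T tokens": training_flops(1.8e12, 13e12),  # ~1.4e26
}

for name, flops in hypothetical.items():
    status = "presumed systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```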
However, there are two issues with this approach. First, the threshold is quite high, and as models become smaller but more powerful, it will be challenging for regulators to address specific models below this threshold that still pose systemic risk. This could lead to lengthy legal battles, similar to those seen with very large online platforms designated under the DSA.
Second, there is skepticism about the effectiveness of rules for systemic-risk GPAI models. The rules include cybersecurity, red teaming, incident reporting, and some evaluations, but content moderation is absent. Comparatively, the Chinese approach to AI regulation focuses on aligning model outputs with societal values and customs. It would have been beneficial for the AI Act to require state-of-the-art content moderation for GPAI models to reduce hate speech, insults, and fake news. Studies by the UN and investigative journalists show that this continues to be a huge problem.
Overall, the AI Act has a more positive potential than its current reputation suggests. Despite receiving negative reactions from businesses and venture capitalists, the law primarily focuses on prohibiting undesired practices in Europe and enforcing industry best practices in specific cases. For responsible developers, complying with these rules should not be difficult. It is crucial to reshape the narrative and emphasize that the AI Act’s obligations are achievable. With regulations in place, Europe must now focus on increasing investment in AI development and deployment, as it currently lags behind the US and China. The AI Act is a positive step towards achieving this goal in a responsible, sensible and sustainable way.
Angela Zhang: I agree with Professor Philipp Hacker, as I’ve found more similarities than differences between the EU and US approaches to AI regulation. Many GPAI obligations in the previous Parliament’s version of the AI Act have been notably toned down, bringing the EU law closer to the US Executive Order. Additionally, I agree that even smaller AI models can be very powerful, as shown by Microsoft’s recent release of a small yet impressive model.
Carme Artigas: Responding to Professor Philipp Hacker’s comments: firstly, the current AI definition is the one that has garnered the most consensus. While less advanced tools may fall under the AI Act, they will mostly be in the low-risk category and thus not affected. Secondly, the two-tier approach to GPAI models uses FLOPs as an indicator, which, although imperfect, distinguishes between large models like GPT-4 and others. This threshold primarily addresses energy consumption concerns, requiring companies that exceed it to provide more data and information. Large corporations can readily handle all the obligations. The challenge lies in helping small businesses comply with the law more easily.
The software industry, like other sectors, may not prefer regulation. However, regulation has been applied to various industries for safety and transparency. Until now, the software industry has remained largely unregulated. Introducing regulations for AI can foster consumer trust and market growth, as people are currently skeptical and fearful of AI. Providing certainty and support to businesses and citizens through measures like sandboxes and real-time testing scenarios can encourage AI adoption and innovation. Ultimately, building trust is essential for the successful integration of AI into the market.
Philipp Hacker: Transparency is crucial, but it is not enough. The systemic-risk threshold for GPAI models is linked to computing power, which translates into energy costs. However, the current threshold remains too high. As climate change affects the entire world, including China, the US, and the EU, coordinated action is needed to tackle the rising energy, water, and toxic material demands of GPAI models. The AI Act requires transparency in energy usage for GPAI models, but clear standards and additional measures are still needed. A “sustainability by design” approach, similar to “privacy by design,” could be beneficial. The AI Act includes risk mitigation assessment and treats environmental protection as a fundamental right, which could pave the way for incorporating environmental impact assessment into AI regulation.
2. Regulation of Open-Source AI
Angela Zhang: Meta recently released the world’s most powerful open-source AI model, Llama 3. Throughout the AI Act’s legislative process, the regulation of open-source models has been a major point of debate. The discussion centered on preventing misuse by malicious actors without stifling innovation and competition. How did EU legislators develop their current approach to regulating open-source models?
Carme Artigas: In essence, open-source aims for transparency, allowing error detection in software. The AI Act addresses varying responsibilities for key players within the AI value chain, including providers of open-source models. Llama 3 is not considered fully open-source under the AI Act, as it does not provide all of the disclosures the Act requires of open-source models. The open-source community encourages the publishing of various parameters, but there’s a lack of consistency in reporting. Regardless of the specific approaches adopted, promoting transparency and accountability in AI is crucial, as AI significantly impacts the economy and society and should not develop without constraints.
Philipp Hacker: Open-source models in AI present both benefits and risks. While they promote competition, they can also cause huge damage if misused by terrorists or rogue nations. Some leading AI companies, including OpenAI and Meta, have pledged to submit their models for inspection to the UK AI Safety Institute, but this has not yet occurred. As AI models become more potent, there may be a need to prevent open sourcing and make it mandatory for models to be hosted via APIs for better control. Striking a balance between surveillance and control is challenging, but necessary to address potential threats like biological and chemical terrorism. Currently, the EU AI Act has few exemptions for open-source models. Future regulations should focus more on managing highly capable open-source models. This issue remains a central and unsolved aspect of AI policy debates.
3. Risk-Based Regulatory Framework and Regulation of General Purpose AI (GPAI) Models
Angela Zhang: The EU AI Act adopts a risk-based framework, categorizing AI systems according to the risks they pose to the public, and includes specific provisions for GPAI models. How did European authorities arrive at these decisions? What have been the major controversies surrounding this regulatory model?
Carme Artigas: GPAI models and systems are not considered high-risk in themselves. However, when utilized by millions of people, they can present unique systemic risks that may not directly threaten health and safety but could impact fundamental rights. Small and medium businesses that deploy these models, typically developed by well-resourced large corporations, in high-risk settings inherit both the risk and responsibility. To achieve a balance, it is crucial for model developers to supply pertinent information to downstream players, aiding them in meeting their obligations.
The fundamental rights impact assessment aims to identify risks that may not affect health or safety but are still undesirable. The AI Office’s role is to establish auditing mechanisms to address these concerns. Over the next two years, we expect to see multiple versions of the law, allowing the AI Act to be improved and incorporate the appropriate elements to ensure compliance and fulfill its intended purpose.
Reaching an agreement on the AI Act faced pressures from the market, governments, and public opinion. The challenge was to find the right balance between protecting citizens, fostering innovation, and regulating AI without hampering the market. Ultimately, having an 80% perfect law is better than no law at all, as it helps control technology, limit abuses by bad actors, and support innovation and freedom.
Philipp Hacker: The AI Act is commendable and is better than its reputation. A notable feature is the possibility of revising the Act through implementing or delegated acts, allowing for adjustments on a rolling basis, such as lowering the computing threshold for systemic-risk GPAI models. The Act can also revise the list of high-risk sectors to exempt ancillary applications with minor impacts on fundamental rights, health, or safety. For instance, AI systems used for scheduling job interviews rather than assessments are not considered high-risk. However, to prevent companies from exploiting loopholes, a reverse exemption exists: they are ineligible for the exemption if they engage in profiling, which involves surveying people and collecting extensive data for sophisticated inferences. Ultimately, the AI Act is a solid package that needs to be enforced effectively on a global scale, coordinating with partners in China and Europe to ensure the Act is considered and adhered to during the development process of AI systems.
4. Enforcement Structure and Implementation Issues
Angela Zhang: How will the AI Act be implemented? Will it be enforced by the AI Office or national authorities? What lessons have been learned from GDPR enforcement, and what are the EU’s plans for the AI Act?
Carme Artigas: The AI Act aims to avoid the issues faced by GDPR implementation. Key phases of the AI Act include: withdrawing prohibited AI practices from the market within 6 months of publication, ensuring transparency for GPAI models and systems within 9 months, and establishing the AI Office within 3 months. There are two types of supervisory agencies: national bodies and the EU AI Office. National bodies handle high-risk use cases and conformity assessments, while the EU AI Office oversees horizontal regulations for GPAI models, threshold definitions, and codes of practice. The AI Office consists of several bodies, including a scientific panel, an AI board with representatives from national authorities, and an external consultancy body. Within two years, businesses must adapt to the AI Act, starting with larger companies. Compliance will be encouraged through competition over transparency, with companies vying for a quality seal. Additionally, the AI sandbox will provide tools and best practices to help small and medium-sized enterprises (SMEs) comply with the AI Act. National agencies play a crucial role in providing SMEs with the tools and support needed for compliance.
Philipp Hacker: A crucial aspect of implementing the AI Act is hiring the right people and timely preparation. Recruitment faces competition from the industry, and it’s essential to synchronize AI regulators with existing regulatory authorities. To achieve this, a framework should be designed to prevent companies from navigating multiple, conflicting regulations. The AI Act currently includes language that allows the integration of compliance mechanisms, but having more comprehensive frameworks would be even more beneficial. For AI systems not previously regulated, companies are allowed to self-audit and certify their compliance. Market surveillance authorities oversee this process, but they do not examine every case, and regulatory approval is generally not required except in specific areas and circumstances. The AI Act’s effectiveness will depend on recruiting expertise from the tech sector and developing essential guidelines and codes of practice, especially for SMEs. Since SMEs are now in a heavily regulated industry, they require more guidance.
Chinese Experts
1. What are your thoughts on the scope of the EU AI Act?
Weixing Shen: Europe has opted for an all-encompassing definition of AI, which could potentially cause the EU AI Act to be overly broad in its scope of application. While the Act primarily targets high-risk use cases of AI systems, the adoption of such an inclusive definition might inadvertently result in excessive regulation. This, in turn, could impede industrial development by imposing unnecessary restrictions.
Similar to the GDPR, the EU AI Act has extraterritorial effects. Many were surprised when the GDPR extended its reach to regulate global actors. However, due to Europe’s market size, many US and Chinese companies have complied with the GDPR in practice. As a result, GDPR-style long-arm jurisdiction has gained popularity. The effectiveness of this type of jurisdiction relies not solely on geopolitical power or the ability to legislate first, but more significantly on whether the law itself addresses specific needs. Countries often gain insights from each other’s experiences, yet they adapt such knowledge to their unique circumstances in order to refine their own legal systems, rather than simply “borrowing” without modification. Ultimately, the global influence of the AI Act depends on whether it aligns with the collective values, industrial development, technological advancements, and societal needs of various jurisdictions. If an EU law can fulfill these requirements, its strength would stem not from the “Brussels Effect” but from effectively capturing the most resonant consensus.
Furthermore, AI models or systems that have not yet entered the market or are solely utilized for pure scientific research are not subject to the EU AI Act. On this matter, the legislative stances of China and Europe are in alignment.
2. How do you assess the EU’s strategy for open-source AI governance?
Weixing Shen: The EU AI Act’s provision of exemptions for open-source AI tools is a prudent approach, as open-source resources play a crucial role in scientific research and technological innovation. However, they can also pose risks, and therefore, exemptions should not be granted in certain cases. Under the EU AI Act, for high-risk use cases and above, providers of even freely available open-source models must assume liability if their models cause harm. Similarly, China’s AI Law (Draft for Suggestions from Scholars) states that liability should be appropriately reduced for models provided for free as open-source, unless there is intent or gross negligence. The terms “intent or gross negligence” align with the risk levels outlined in the EU AI Act. In summary, if providing a model itself poses significant societal risks, regardless of whether it is free and open-source, the provider must assume corresponding responsibilities.
3. How effective do you think a risk-based regulatory framework is for AI governance?
Linghan Zhang: The risk-based regulatory framework, popularized by the GDPR, has been widely adopted around the world. China has also proposed establishing a tiered, classified risk regulatory framework for AI and related components. Although this type of framework offers numerous benefits, such as shifting the point of intervention to the ex-ante stage and reducing supervision and compliance costs, there are notable distinctions between data risk governance and AI risk governance. Currently, AI governance is absorbing data and algorithmic governance, which may present some challenges in the future. Firstly, risk-based governance necessitates effective risk assessment as a foundation. Article 3 of the EU AI Act aptly defines risk as the combination of the probability of damage and its severity. However, due to AI’s brief application and development history, the available data for risk assessment is limited in both quality and quantity. Secondly, the significance of technical factors in AI risk assessment is diminishing, while the importance of factors like ethics, policy, and culture is increasing. The US AI Risk Management Framework categorizes risks into technical, socio-technical, and guiding-principle risks; the UN High-Level AI Advisory Body divides risks into risks to individuals, groups, and society. While these risk frameworks appear comprehensive, policy differences and institutional dynamics across countries may lead to oversights regarding AI activities that warrant concern.
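Read literally, the Article 3 definition cited above treats risk as an expected-severity quantity: the probability of damage multiplied by its severity. A toy sketch with invented scores (the Act itself prescribes no numeric formula) illustrates why fixing tiers in advance on this basis is difficult:

```python
# Toy reading of "risk = combination of probability of damage and severity"
# (cf. the Article 3 characterization above). All numbers are invented for
# illustration; the AI Act prescribes no numeric formula.

def risk_score(p_harm: float, severity: float) -> float:
    """Expected-severity reading of risk: probability of harm times severity."""
    assert 0.0 <= p_harm <= 1.0, "probability must lie in [0, 1]"
    return p_harm * severity

# A frequent-but-mild failure mode and a rare-but-severe one can land on
# the same nominal score, which is one reason static ex-ante tiers are hard
# to defend as usage patterns and data change over time.
print(risk_score(0.30, 2.0))   # frequent, mild  -> 0.6
print(risk_score(0.01, 60.0))  # rare, severe    -> 0.6
```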
Another challenge lies in categorizing high and low levels of risk in advance. The EU AI Act’s approach revolves around critical information infrastructure and use cases tied to fundamental human rights, addressing potential threats to health and safety, as well as substantial adverse impacts on basic rights, the environment, democracy, and the rule of law. However, these criteria remain highly flexible and ambiguous. In essence, the EU’s list of prohibited or restricted AI systems is more about importance than actual risk level. While an AI system’s use case may be significant, it does not necessarily imply a high risk. Besides, the risk associated with some potentially high-risk AI systems, such as job assessment and self-driving cars, may decrease as their usage increases. As these applications become more widespread and data more abundant, the likelihood and severity of harm caused by these systems could be mitigated. Labeling them as high-risk and restricting them from the outset may eliminate the opportunity to reduce risk later on. In future AI risk governance, balancing these factors will be crucial. China’s AI Law (Draft for Suggestions from Scholars) refrains from using terms like “unacceptable risk” or “high risk,” opting instead for descriptions like “critical AI systems” or those in “special application scenarios.” This approach seeks to avoid prematurely assigning values or negative evaluations, while incorporating dynamic evaluation into the governance of AI systems that pose genuine risks.
4. What is your opinion on the regulation of AI systems with high impact and general-purpose AI (GPAI) models/systems?
Linghan Zhang: “General” refers to the scope within which a risk may occur, while “high” pertains to the severity of the risk. During the drafting of China’s AI Law (Draft for Suggestions from Scholars), debates arose concerning the regulation of GPAI. Firstly, what constitutes a GPAI model? Does its application in two or three fields or scenarios qualify it as such? Is a GPAI model truly more powerful when applicable to multiple scenarios? European experts also noted that smaller, specialized models may have greater capabilities, while models with broader applicability could experience diminished functionality and impact. Moreover, although GPAI models may present a wide array of risks, people are more concerned about AI systems that perform critical functions in critical scenarios. For instance, the development of smart courts in China is advancing rapidly, and the judicial context is undeniably a critical and potentially high-risk AI application scenario. Nevertheless, some AI systems that enhance the efficiency of judges’ case management are, in fact, quite low-risk. Further research and dialogue are needed to determine whether GPAI models should be classified as high-risk and whether their impact should be evaluated based on severity or scope.
5. What do you consider the main implementation challenges faced by the EU AI Act?
Weixing Shen: The EU AI Act has initiated a global competition in AI legislation, prompting everyone to consider the optimal timing for AI regulation. A primary motivation for legislation is to mitigate public distrust of AI and establish a trustworthy and responsible AI framework. To achieve this, laws may need to strike a balance between fostering innovation and imposing restrictions. In this context, the implementation of the EU AI Act faces several challenges. Firstly, the Act’s high legal liability may create a chilling effect on the market, disrupting the balance between industrial innovation and safety. Secondly, the rapid pace of technological change generates tension with legal stability. Thirdly, risk classification and tiering directly influence obligations, but the criteria are ambiguous and subject to change as technology iterates. Ultimately, the fundamental question remains: can legislation alone alleviate AI concerns? The answer is no. Overreliance on strict legislation to prevent potential AI risks is misplaced. Instead, the principle of “let the one who tied the bell on the tiger take it off” should be adopted. In line with the “privacy by design” concept mentioned by the European experts, legal values should be integrated into the design of AI models/systems from the outset, instead of imposing severe penalties after issues arise. Regulated self-regulation may be the most effective approach to reducing AI risks.
Responses from EU Experts
Philipp Hacker: Regarding the scope of the AI Act, there are two clarifications to be made. Firstly, the definition of AI does not include Excel and other automated processes, as explained in the Recitals. Automated systems with rules crafted by humans are not considered AI. Secondly, concerning extraterritorial effects, the EU casts a wide net similar to the GDPR. However, the AI Act only applies if you use your models or offer AI-based services within the EU. In essence, to conduct business in the EU, you must adhere to EU rules, and the same principle applies to EU companies operating in China, who must comply with Chinese regulations. This is a standard principle in market regulation. Notably, the AI Act’s extraterritorial effects are unique in their approach to copyright. Even if an AI model is trained in Beijing, it must still comply with the EU’s text and data mining (TDM) opt-outs (authors’ right to object to their copyrighted material being used for AI training) when deployed or used in the EU. This is a departure from traditional copyright laws that followed a territoriality principle, where only local laws applied. Moving forward, regulators should exercise caution in enforcing the AI Act, particularly with companies unfamiliar with the regulation, such as those from China.
Carme Artigas: An effective negotiation is one in which all parties are equally unsatisfied, and the AI Act has achieved this equilibrium with its crucial trade-offs. In the end, I want to make three notes. Firstly, the high-risk classification is well-defined in the AI Act, which takes into account the ever-changing nature of technology and allows for updates. Secondly, the Act’s extraterritorial effects apply to both European companies and external companies serving the European market. However, European companies selling products outside Europe, even in prohibited use cases, are not affected. For example, a European company could develop a social scoring technology and sell it to markets such as China or countries in Africa. Lastly, the AI Act does not directly address copyright issues in generative AI. Copyright law is indeed territorial, and there are 27 distinct national copyright regimes in Europe. Besides, AI-generated content should be marked to avoid confusion. However, using generative AI to harm fundamental rights and infringe upon human dignity, even with a watermark, remains subject to regulation. Disclosure alone does not offer indemnity.
Generative AI and Intellectual Property
生成式AI知識產權問題
Date & Time: January 26, 2024 (Friday), 21:30–22:45 (HKT), 08:30–09:45 (EST)
日期和時間: 2024年1月26日(星期五), 21:30–22:45 (HKT), 08:30–09:45 (EST)
Venue: Zoom Webinar
地點: Zoom 線上研討
Language: Chinese & English (with simultaneous interpretation)
語言: 中文和英文(提供同聲傳譯)
In 2024, the Philip K. H. Wong Centre for Chinese Law, in collaboration with the China Artificial Intelligence Industry Alliance Policy and Regulation Working Group, will host a series of summit dialogues on generative AI regulation and governance. These events aim to bring together leading scholars and experts from around the world to discuss the governance challenges posed by generative AI and develop strategies to address them. Recently, the Beijing Internet Court made a groundbreaking decision by granting copyright protection to an image generated by Stable Diffusion. This landmark ruling has sparked global debates. In contrast, the United States has seen at least four instances where the Copyright Office has refused to grant copyright protection to AI-generated content. In light of these developments, the inaugural event of the series will specifically focus on the IP issues surrounding generative AI. This dialogue will feature distinguished experts from both China and the United States, including the Chinese judge who presided over the above-mentioned case.
Please stay tuned for further details about our upcoming events, and we look forward to your participation!
2024年,香港大學黃乾亨中國法研究中心將與中國人工智能產業發展聯盟政策法規工作組攜手合作,聯合舉辦一系列關於生成式人工智能(AI)治理的高峰對話。香港大學黃乾亨中國法研究中心主任張湖月與中國政法大學教授,聯合國高級別人工智能諮詢機構成員,中國人工智能產業發展聯盟(AIIA)政策法規工作組組長張凌寒,將作為共同召集人,邀請國內外頂尖學者與專家,共同探討生成式AI面臨的治理挑戰以及應對策略。近期,北京互聯網法院針對AI生成內容的版權保護作出一項富有創新意義的判決,確認了生成式AI作品的可版權性。該判決在全球範圍內引發廣泛關注。同期,美國版權局再次拒絕為AI生成內容賦予版權,至今已連續四次駁回AI作品的版權註冊申請。在此背景下,高峰對話系列的首場活動將聚焦生成式AI的知識產權問題,於1月26日以線上圓桌會議的形式舉行,參會者由來自中國和美國的知名專家組成,其中包括中國「AI文生圖」著作權案一審主審法官。
歡迎各界感興趣人士報名參加!
★Summit Dialogue Series One: Generative AI and Intellectual Property 第一場:生成式AI知識產權問題★
Conveners 召集人
Angela Huyue Zhang, Director, Philip K. H. Wong Centre for Chinese Law, The University of Hong Kong
張湖月, 香港大學黃乾亨中國法研究中心主任
Linghan Zhang, Professor at the Institute of Data Law, China University of Political Science and Law
張凌寒, 中國政法大學數據法治研究院教授
Chinese Experts 中方專家
Ge Zhu, Deputy Tribunal Chief, The First Comprehensive Division, Beijing Internet Court
朱閣, 北京互聯網法院綜合審判一庭副庭長
Guobin Cui, Professor of Law, Director of the Center for Intellectual Property, Tsinghua University
崔國斌, 清華大學法學院教授、知識產權法研究中心主任
Qian Wang, Professor of Law, East China University of Political Science and Law
王遷, 華東政法大學法律學院教授
U.S. Experts美方專家
Jason M. Schultz, Professor of Clinical Law, New York University
Jason M. Schultz, 紐約大學法學院教授
James Grimmelmann, Tessler Family Professor of Digital and Information Law, Cornell University
James Grimmelmann, 康乃爾大學法學院數字信息法教授
Highlights | Generative AI and Intellectual Property
Chinese Version 中文版本 : 精彩回顾|生成式人工智能治理高峰对话系列——知识产权 (qq.com)
On January 26, 2024, the inaugural session of the “Generative AI Governance Summit Dialogue” series was successfully held, focusing on intellectual property issues. The event was co-hosted by Professor Angela Zhang, Director of the Philip K. H. Wong Centre for Chinese Law at The University of Hong Kong, and Professor Linghan Zhang from China University of Political Science and Law, who is also the leader of the AI Industry Development Alliance (AIIA) Policy and Regulation Working Group and a member of the United Nations High-Level Advisory Body on AI.
The talk featured esteemed panelists, including Ge Zhu, the Presiding Judge of the Beijing Internet Court’s “AI Text-to-Image” copyright infringement case, Professor Guobin Cui from Tsinghua University, Professor Qian Wang from East China University of Political Science and Law, Professor Jason M. Schultz from New York University, and Professor James Grimmelmann from Cornell University. The event was conducted as a Zoom roundtable and livestreamed on four popular platforms: PKULaw, XiaoeTech, Bilibili, and WeChat, attracting nearly 7,000 online viewers. The lively atmosphere and engaging discussion drew high praise from the audience.
Linghan Zhang: In early 2024, Professor Angela Zhang and I initiated the “Generative AI Governance Summit Dialogue Series.” Our aim was to bring together top scholars and experts from around the globe for in-depth conversations about the governance challenges and response strategies arising from generative AI. Our first event focuses on generative AI and intellectual property, featuring prominent speakers from both academia and the tech industry. Looking back more than two decades, intellectual property was one of the first legal frameworks to be challenged by internet technology. Laws like the U.S. Digital Millennium Copyright Act have since been actively adapting to technological advancements.
U.S. Experts
1. Fundamental Challenges to Copyright Doctrines
Angela Zhang: Professor Mark Lemley from Stanford University has highlighted that generative AI presents challenges to two core copyright doctrines: the idea-expression dichotomy and the substantial similarity test for infringement. Can you briefly explain these challenges? Do you have any suggestions on how we should respond to them?
The idea-expression dichotomy distinguishes between ideas, which cannot be copyrighted, and their specific expressions, which can be.
The substantial similarity test for infringement is used to determine if a new work is too similar to an existing copyrighted work.
James Grimmelmann: Generative AI enables creators to produce highly rich content by inputting simple instructions, i.e., prompts. If we adhere to the traditional “idea-expression dichotomy”, copyright law should only protect the prompts, as they represent the creator’s expression. However, if the prompts are too brief, they might not be protected by copyright law at all. In addition, copyright law determines infringement according to the “substantial similarity” test. If the creator’s expression is embodied in the prompts, it is these prompts that should be compared, even though the AI-generated content (AIGC) may vary; conversely, two distinct prompts could generate substantially similar content. These issues pose considerable challenges. Copyright emerged with the advent of printing technology, which significantly lowered the production costs of artistic works. It took 150 years from the invention of the printing press for the first copyright law to come into existence. We are now in the very early stages of the generative AI era. Our understanding of how generative AI fosters creativity and the design of appropriate incentive mechanisms is still limited. It is also hard to identify the potential risks associated with AIGC. All these factors contribute to the prevailing uncertainty surrounding the current situation.
Jason M. Schultz: The theory of copyright law posits that its purpose is to serve as an incentive, encouraging individuals to engage in creative pursuits. With the widespread adoption of generative AI, both the cost of creation and the barriers to entry have been significantly reduced. If the essence of art lies in the conception of expression, while expression itself is merely a mechanical execution process, then the “idea-expression dichotomy” warrants re-examination in the context of generative AI simplifying this process. In a world dominated by generative AI, the necessity of copyright protection as a creative incentive becomes highly debatable.
2. Copyright Protection for AI-Generated Content (AIGC)
Angela Zhang: Recently, the Beijing Internet Court made a groundbreaking decision by granting copyright protection to an image generated by Stable Diffusion. This landmark ruling has sparked global debates. In contrast, the U.S. has seen at least four instances where the Copyright Office has refused to grant copyright protection to AIGC. What are your thoughts on this?
James Grimmelmann: Interestingly, the approaches taken by China and the U.S. are not that different. All relevant cases in the U.S. have encountered issues during the copyright registration phase. These cases can be viewed as a “test” meant to establish a broader copyright law precedent. The creators involved had either advocated for registering the AI itself as the author or provided insufficient disclosure of human prompts and AI involvement, failing to emphasize the importance of humans in the creative process. The nature of the case in the Beijing Internet Court is entirely different. It is a highly specific infringement lawsuit, with the generation process and human prompts thoroughly disclosed. Therefore, the differences between the two jurisdictions might not be substantial. In the U.S., a case like the one in the Beijing Internet Court could potentially have a similar outcome. Although the U.S. Copyright Office denied registration for “Théâtre d'Opéra Spatial,” an award-winning AI-involved art piece with over 600 prompts, the creator did not reveal the specific content of the prompts or the initial AIGC. Currently, the U.S. Copyright Office and courts are encouraging creators to actively disclose their involvement; otherwise, they risk being denied copyright protection due to insufficient evidence.
Jason M. Schultz: Furthermore, U.S. courts are considering ways to establish criteria for assessing the copyrightability of AIGC. This entails a general challenge: should judgments be made from an ex-ante or ex-post perspective? Examining facts after a dispute arises seems to better elucidate contextual circumstances. Besides, the notion of “originality” warrants further clarification, as the current legal requirements for originality are rather minimal.
3. Copyright Infringement and Fair Use
Angela Zhang: In a series of U.S. lawsuits, the main question was whether using copyrighted works to train AI is considered infringement. The dominant view is that this practice is exempted under “fair use” since AIGC is transformative. However, in certain instances, AIGC may closely resemble the original works, as seen in the New York Times case. In defense, OpenAI contended that the overlap arose because the prompts supplied by users were highly suggestive. What are your thoughts on this matter?
Fair use is a doctrine in copyright law that allows for the limited use of copyrighted material without obtaining permission from the copyright holder or compensating them.
James Grimmelmann: In the U.S., there are two primary trends regarding “fair use.” The first is transformative use, which involves creatively adapting the work of others. The second is copying materials without artistic expression, such as research archives and search engines; while these systems feed on copyrighted data, their output does not compete with the original works. However, generative AI straddles both trends. It not only feeds on copyrighted works but also produces expressive derivative content. As a result, generative AI does not fit neatly into either of the existing fair use scenarios.
Jason M. Schultz: Assessing whether the use of copyrighted data for AI training constitutes fair use requires considering the AIGC itself. Even if a company solely relies on New York Times web pages to train its AI system, it might generate millions of unique expressions, with only 0.1% regarded as potentially infringing. A crucial debate centers on the difference between AI companies automatically scraping data for training and obtaining specific authorization from copyright holders. Two main concerns arise from this issue: First, competition. To encourage competition among different AIs, we cannot limit access to training data exclusively to the wealthiest companies. However, acquiring licenses from all copyright holders can be extremely costly. Second, bias. Using a broader range of data for AI training helps prevent biased language. For instance, in the U.S., the left opposes AI using its data for training, while the right is more accommodating. If a licensing regime is implemented, could training data become dominated by right-wing perspectives? Furthermore, the AIGC largely depends on the prompts, making it challenging for courts to determine whether AI has truly copied a specific book. While users can force AI to copy, it is uncertain how many would actually do so. This leads to another question: if the AIGC does exhibit substantial similarity, who should be held responsible—the user or the AI service provider?
James Grimmelmann & Jason M. Schultz: There is no one-size-fits-all answer to this question, as the purposes and contexts in which users employ AI vary. We can only analyze the question on a case-by-case basis. Copyright law should be reasonably tolerant of private space. In the case of the Beijing Internet Court, where the AIGC is uploaded into the public domain, the nature of the issue changes, especially when it involves unfair competition. In sum, generative AI is still in its early stages of development, and there are no definitive answers to copyright issues. We must rely on more judicial cases to expand our understanding. A well-designed legal system should enable both large and small companies to flourish, guarantee equal access to AI for everyone, and promote fair competition. If the law only permits big companies to negotiate with other big companies, it does not resolve the problem, and artists will not receive fair compensation. Moreover, the alleged financial losses suffered by copyright holders might result more from increased competition, rather than just someone stealing their works through ChatGPT.
Chinese Experts
Ge Zhu: In the Chinese “AI Text-to-Image” case, the plaintiff used a large AI model to generate an image called “Spring Breeze Has Brought Tenderness.” The defendant, a poetry author, used the image as an illustration while publishing his poetry. The plaintiff claimed that the defendant removed the signature watermark from the image and uploaded it to social media, violating the plaintiff’s right of authorship and right of communication through information networks. The court ruled that the image reflects identifiable differences from prior works and embodies the plaintiff’s original intellectual investment, thus meeting the definition of “works” under China’s Copyright Law. The image was considered a work of art, and the copyright therefore belonged to the plaintiff, since the AI model itself could not be considered the author.
Regarding the intellectuality element, the law requires the reflection of a natural person’s intellectual investment. In generating the image, the plaintiff made significant intellectual contributions, such as character design, selecting prompt words, arranging the order of prompt words, and choosing images that met expectations. As for the originality element, it mandates that the work be completed independently by the author and exhibit original expression. Determining whether the AIGC reflects the author’s personalized expression must be assessed on a case-by-case basis. In the present case, the plaintiff arranged and selected expressive details, such as image elements, layout, and composition, based on his aesthetic preferences and personal judgment, all a reflection of his own will. The large AI model functioned as the author’s creative tool, akin to a paintbrush or a camera.
Furthermore, applying the law in novel and challenging cases necessitates balancing various interests, including the interests of both parties, the groups they represent, the value choices of legislators, and social and public interests. The present case incentivizes people to use new tools to create, aligning with the legislative purpose of copyright law. As creators increasingly adopt AI tools, the income of software developers may also rise, creating a virtuous cycle that positively affects industrial development. Regarding public interest, it is difficult to distinguish between AIGC and human creations under existing technical conditions. If human creations are protected while AIGC is not, it could lead to negative incentives in society, discouraging individuals from using new tools or hiding the use of AI, potentially infringing upon the public's right to know.
Qian Wang: I must respectfully disagree with the presiding judge. Most of the so-called “problems” and “challenges” attributed to generative AI are not real. The only valid question is whether using copyrighted works to train AI falls under fair use. Generative AI does not pose any challenges to the idea/expression dichotomy and substantial similarity. First, let’s consider the idea/expression dichotomy. In the context of AI, we are discussing whether the prompts are considered “ideas” in relation to AI-generated images. If they are, then they are not protected. Our focus is not on determining if the prompts constitute a work or expression, but rather on whether the images generated by AI based on these prompts qualify as expressions of users under copyright law.
Suppose an art school teacher, who is also a poet, writes a poem spontaneously and asks a class of 30 students to each create a drawing based on the poem. While the poem written by the art teacher is undoubtedly a “work,” it serves as an idea in relation to the students’ drawings. The poem cannot dictate the composition of each student’s artwork, as they will interpret the poem according to their own thoughts and use their creativity to generate corresponding images. Regardless of how intricate and sophisticated the words used to describe the image may be, they cannot determine its composition. Similarly, even if the prompts are detailed, AI-generated images do not constitute a work. For instance, I once entered an English poem describing a sunset scene as a prompt into two large models, resulting in entirely different images. The description in the poem was detailed enough, yet one could write another 1,000 lines without obtaining identical results. If selecting and inputting prompts is considered a creative act, why does one creative act produce so many varied expressions? The only explanation is that the text, in relation to the picture it describes, serves merely as an idea, not an expression.
Second, AI does not challenge substantial similarity, which adopts an objective criterion for assessment. It focuses solely on the similarities between the plaintiff’s work and the alleged infringing content. Whether the content in question was generated by AI or created by humans is irrelevant. Furthermore, the U.S. Copyright Office is unlikely to register the image involved in the Beijing Internet Court case, according to the Office’s four rulings and guidelines. The Office has not accused any applicant of forgery, but instead seeks to clarify whether AIGC can be registered as a work. In the “Théâtre d'Opéra Spatial” case, the Office did not dispute the applicant’s claim that more than 600 prompts were used. However, it still determined that this was not a human-created work because it was autonomously generated by AI.
Guobin Cui: Firstly, regarding the idea/expression dichotomy, if the creator only provides prompts, although the prompts themselves may constitute a written work, the images generated by AI based on these prompts typically do not contain the creator’s original expression. It is only after the creator selects an image and then repeatedly modifies the expressive details or compositional elements through prompts or other methods that the expressive aspect of the image can be considered original. The difference between my opinion and Professor Qian Wang’s is that he believes even in the latter situation, originality is still not present.
Secondly, I concur with Professor Qian Wang regarding the likelihood of the U.S. Copyright Office granting copyright protection to the image in the Beijing Internet Court case. The Office has denied protection to many AI-generated images, asserting that they lack originality. The image in the Chinese case involves even fewer prompts and modification details than its U.S. counterparts. For instance, in the “Théâtre d'Opéra Spatial” case, after the creator selected the image, he first established the larger framework, then modified the details, and repeatedly used traditional tools such as Photoshop to refine the content. The entire process took more than 80 hours. However, the U.S. Copyright Office still held that the work lacks originality, which I consider an overly strict standard. If this standard were applied to the image in the Chinese case, there is no chance the Office would recognize its originality.
Finally, concerning infringement and fair use, both Professors Grimmelmann and Schultz generally agree that using copyrighted works for AI training may constitute fair use. Professor Schultz primarily considers two aspects: competition and neutrality. First, imposing licensing requirements for AI training processes could hinder fair competition, not only between companies but also between countries. Second, if some copyright holders agree to license their works while others do not, it may result in a biased position in the AIGC. Moreover, the two U.S. professors seem to suggest that fair use may apply more readily to non-commercial purposes, while purely commercial purposes warrant further examination. However, I argue that even purely commercial purposes should be considered fair use. If commercial AI companies are required to pay substantial licensing fees for all training data and identify every individual’s contribution, it could lead to market failure and increased social costs, which are entirely unnecessary. Of course, if the AIGC infringes upon copyright, the legal liability of the content at the output stage should be investigated. This issue is separate from whether the use of data during the training stage constitutes fair use. They are two distinct matters.
Discussion
Jason M. Schultz: I agree that when prompts are more constructive and creative, the connection between them and the AIGC becomes stronger, thus facilitating the “transmission” of original expression. However, if only a few prompts are provided and the AI is left to complete the work, achieving this “transmission” effect becomes challenging. This intricacy is what makes the idea/expression dichotomy so interesting.
James Grimmelmann: Judge Ge Zhu raised an intriguing and thought-provoking point concerning the motivations and incentives for people to use AI in producing artistic works. The objective distinction between AIGC and human-created works is minimal. If copyright law were to declare that all AIGC is not protected, many individuals might be tempted to use AI while lying about and denying its involvement in their final products. Although I am unsure if this issue can ever be fully resolved, a legal system that establishes stark distinctions between rights in human-generated and AI-generated content could indeed create negative incentives.
Guobin Cui: I agree with Professor Schultz. When AI generates an initial image and a human creator uses detailed prompts to modify specific features of that image repeatedly, the creator may indeed make original contributions to the final image. Professor Qian Wang posits that text cannot define the expressive elements in images. However, this view is not always accurate, particularly when prompts are so detailed that they precisely define numerous pixel-level features of an image. In fact, any digital image can be described through words and defined by a computer program. Program code is akin to written expression. Thus, it would be incorrect to assert that text-based prompts can never contribute to the expression in an image. However, as previously mentioned, in most cases, a single round of prompts does not result in original contributions to AI-generated images. I also agree with Professor Grimmelmann that a failure to protect any AIGC could lead to negative societal attitudes toward AI usage. We should not just assume that original works cannot be produced using AI tools. In the context of AI’s deep integration with commonplace tools like Photoshop, users are almost certain to make personalized modifications to the AIGC. In such cases, it becomes meaningless to emphasize that AIGC cannot be protected by copyright law.
Qian Wang: Firstly, in the “Théâtre d'Opéra Spatial” case, I have no objections to the idea that AIGC may become a protected work after being modified by individuals using Photoshop. The U.S. Copyright Office also does not state that AIGC processed through Photoshop cannot be registered. In fact, the Office only required the applicant to waive their rights to the pure AIGC, but the applicant refused, resulting in the eventual failure to register.
Secondly, regarding the formation of a copyrighted work through multiple rounds of modifications, I have conducted experiments with Stable Diffusion and Midjourney. I initially prompted them to draw Chinese-style girls, and the images generated by the two AIs were entirely different. Next, I asked them to add glasses to the girl, which both generative models accomplished. However, Stable Diffusion also added a third hand to the girl, making the image look quite unsettling. The crux of the matter lies in the third step. When I specifically requested to reduce the height of the girl’s glasses frame to 2/3 of the original, both AI systems failed to achieve this. This is because AI currently cannot comprehend user instructions as humans do. It can only generate new images based on its own training and algorithmic rules. Consequently, no matter how many rounds of modifications there are, the user lacks control over the content generated in each round. In other words, each round of AIGC remains a black box, and humans cannot predict the final outcome, regardless of the number of rounds.
Thirdly, Professor Guobin Cui suggested that an image can be fully described by dividing the screen into a grid and detailing its features pixel by pixel. I recall that when I first studied computer science, I entered the Mona Lisa into a computer, but my method was to input coordinates, not to describe the painting in natural language. Those numerical values were addressed solely to the machine, unlike the text-to-image scenario we are discussing today. A natural-language description of an image, however detailed, will never perfectly match the image an AI generates from it. The only exception would be an AI advanced to the level depicted in the film “Inception,” able to enter the human brain and reproduce a finished mental image exactly; but that should be called “replicative AI,” not “generative AI.” Moreover, in the U.S., the number of AI users has not fallen even though AIGC is not protected by copyright law. This suggests that copyright protection for AIGC does not necessarily affect people's willingness to use generative AI.
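For reference, the multi-round workflow Professor Wang describes can be sketched with Hugging Face's diffusers library; the model ID, prompts, and parameters below are illustrative assumptions, not the exact setup used in his experiment:

```python
# Sketch of a multi-round, prompt-driven modification loop. Assumes a
# CUDA GPU and the torch/diffusers/Pillow packages; the model ID and
# prompts are illustrative only.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = Image.open("round0_girl.png").convert("RGB")  # the initial generation
prompts = [
    "a girl in traditional Chinese dress, wearing glasses",
    "the same girl, with the glasses frame reduced to 2/3 of its height",
]

for i, prompt in enumerate(prompts, start=1):
    # Each round diffuses the *whole* image again, conditioned on the
    # previous image and the new text; nothing constrains the model to
    # change only the requested feature -- the "black box" at issue.
    image = pipe(prompt=prompt, image=image, strength=0.6).images[0]
    image.save(f"round{i}.png")
```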
Angela Zhang: Judge Ge Zhu argues that granting copyright to AIGC can encourage people to use AI. However, if AI becomes the primary tool for creation, the stock of original human works may shrink, potentially leading to data scarcity, because training large models still relies on human-created works. Studies have shown that feeding a large model only AI-generated data causes its performance to decline over successive generations, a phenomenon researchers call “model collapse.” In the long term, promoting human-created content is vital for both artistic creation and AI development. What are your thoughts on this?
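The degradation Angela Zhang refers to can be conveyed with a deliberately stylized toy, a sketch rather than a real large-model experiment: treat “training” as memorizing a data set and “generation” as sampling from what was memorized, then let each new model train only on the previous model's output.

```python
# Toy illustration of recursive training on self-generated data: the
# diversity of the data shrinks generation by generation. A stylized
# sketch, not a claim about any specific large model.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)  # generation 0: "human-made" data

for gen in range(1, 11):
    # "Train" on the previous generation's output by memorizing it,
    # then "generate" a new data set by sampling from that memory.
    data = rng.choice(data, size=data.size, replace=True)
    print(f"gen {gen:2d}: {np.unique(data).size:5d} distinct values, "
          f"range [{data.min():+.2f}, {data.max():+.2f}]")
```

Because each generation can only re-emit values it has already seen, distinct values disappear and the extremes of the original distribution are never recovered. Real model collapse is subtler, but the direction of the effect is the same: tails and diversity erode without fresh human data.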
Jason M. Schultz: To cultivate a healthy creative economy, we should raise the bar for originality in AIGC. It is currently so easy to produce content with generative AI that people need no additional rewards or incentives to use it; granting protection too readily may even crowd out human creativity. New laws and regulations should set thresholds benchmarked against human creation, granting certain rights only to AIGC that meets those standards. This approach would preserve space for human creativity while still acknowledging AI-generated works.
Guobin Cui: We must keep faith in art. Content generated solely by AI cannot rival the work of genuine artists. Encouraging artists to use AI is only the beginning; continuous refinement during the creative process is what gives a work its soul, and a work that AI could fully replace cannot be considered true art. As for Professor Qian Wang's skepticism about AI's ability to modify specific features of a selected image, problems such as the sudden appearance of a third hand or the inability to adjust the height of glasses are technical issues that users can resolve. A third hand may result from insufficiently constrained prompts; a failure to adjust the glasses may occur because the user did not wrap the keyword “glasses” in bracket syntax to give the model machine-readable emphasis, or did not install the appropriate plug-in. Stable Diffusion is open-source software, and numerous developers have created plug-ins that let users modify specific content in an image through text, graphical commands, or keyboard operations. The possibilities are vast, and as AI technology advances, the distinction between text-based modification and Photoshop-style button-based modification will become irrelevant; seamless integration of the two is an inevitable trend. Professor Qian Wang's examples do not prove that text prompts can never specifically or completely modify an AI-generated image; they show only that existing AI technology has not yet reached its full potential, or that users may misunderstand what AI plug-ins can technically do.
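One concrete example of the targeted-editing tooling Professor Cui alludes to is inpainting, which confines the model's changes to a user-supplied mask. A hedged sketch with the diffusers library follows; the model ID and file names are assumptions for illustration:

```python
# Sketch of mask-based editing: only the masked region (e.g., the
# glasses) is regenerated, while the rest of the image is preserved.
# Assumes a CUDA GPU; model ID and file names are illustrative.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("girl.png").convert("RGB")
mask = Image.open("glasses_mask.png").convert("L")  # white = region to repaint

result = pipe(
    prompt="thin glasses with a low, narrow frame",
    image=image,
    mask_image=mask,
).images[0]
result.save("girl_edited.png")
```

Tooling of this kind narrows the gap between text-based and Photoshop-style editing that Cui predicts will eventually disappear; whether using it amounts to an “original” human contribution is precisely the legal question the panel is debating.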
Ge Zhu: I’ll respond to the question of incentives. First, AI enables individuals without traditional artistic skills to enter the art market and showcase their creativity. Second, many artists have incorporated large AI models into their toolset, potentially automating the repetitive tasks in their workflow. Regarding the issue of value, people may still prefer handmade items despite their higher cost; as AI-generated works become more common, handcrafted works will become increasingly scarce and valuable. In addition, using AI software still involves real learning barriers, so granting rights encourages users to invest the time and effort needed to master these tools. Lastly, according to the mainstream view in China, originality is an all-or-nothing matter. Under existing standards for “works,” a significant number of AI-generated images can meet the “originality” requirement, because the focus is on human input.
Qian Wang: I’ll respond with three concise points. First, my earlier example was not meant to disparage AI, but to illustrate how the technology actually behaves in practice. Second, when we discuss “tools,” we mean “creative tools,” not “tools” in the sense of “workers as tools for capitalists to make money” or “AI as a tool for humans to transform the world.” A creative tool must not participate in decisions about the content being created; otherwise it ceases to be a creative tool. Third, whether AIGC can be protected as a work is unrelated to whether originality is an all-or-nothing question or a matter of degree. Any analysis of originality and intellectual investment is meaningless without the idea/expression dichotomy. E=mc², for instance, embodies Einstein's intellectual input and originality, yet it is not a work because it falls within the realm of “ideas.”
Angela Zhang: Thank you to all our esteemed speakers for their insights, and to the audience for their participation! This event marks the beginning of the Generative AI Governance Summit Dialogue Series, and we hope you will continue to support our future events!
Linghan Zhang: Today’s discussion was incredibly engaging. I’d like to thank all the speakers for their valuable insights and stimulating conversation, and I look forward to future summit dialogues being just as productive as today’s. Thank you all!
- The End -