Science and Technology – The Yale Review of International Studies
Yale's Undergraduate Global Affairs Journal | https://yris.yira.org

From Earth to the Moon: Crafting International Law for Space Resource Mining
https://yris.yira.org/column/from-earth-to-the-moon-crafting-international-law-for-space-resource-mining/ | November 17, 2025

Introduction

The exploration and utilization of space resources represent a frontier that promises to transform the global economy, fuel technological advancements, and shift geopolitical dynamics. As private corporations, such as SpaceX and Blue Origin, and spacefaring nations like the United States, China, and Luxembourg increasingly focus on mining celestial bodies, the existing international legal framework governing space exploration and resource extraction appears outdated and fragmented. The principles of peaceful use and prohibition of territorial appropriation, embedded in the Outer Space Treaty (OST), remain fundamental to space law; however, they offer little guidance on the complexities introduced by modern space mining technologies. As private ventures expand and technology advances, the current body of space law is failing to keep pace, creating both significant opportunities and risks.

My analysis asserts the need for a comprehensive, international framework to regulate the extraction of space resources. Specifically, it proposes the establishment of an International Space Mining Authority (ISMA), modeled on successful governance structures like the International Seabed Authority (ISA) under the United Nations Convention on the Law of the Sea (UNCLOS). Such an authority would ensure that space mining is regulated in a way that promotes equity, environmental sustainability, and peaceful international cooperation.

The Legal Foundation: Existing Treaties and their Limitations

At the heart of space law lies the 1967 Outer Space Treaty (OST), which asserts that “outer space, including the Moon and other celestial bodies, is not subject to national appropriation by any means.”1 The treaty envisions space as a domain for peaceful use by all, prohibiting territorial claims on celestial bodies and emphasizing cooperation among states. The 1979 Moon Agreement further elaborates on the principle that the Moon’s resources must be shared for the benefit of all nations, with particular regard to developing countries. However, both the OST and the Moon Agreement fail to address the growing challenges posed by space resource extraction.2 While these treaties provide a broad legal framework for peaceful exploration, they do not offer practical guidance on the ownership, extraction, or commercial use of space resources.

Moreover, while the OST’s principles are well-intentioned, they do not consider the rapid development of private space ventures and the potential for exploitation of extraterrestrial resources. The absence of clear legal provisions on ownership, extraction rights, and resource management has led to a patchwork of national laws. The U.S. Commercial Space Launch Competitiveness Act (CSLCA) of 2015, for instance, grants U.S. companies the right to extract and use space resources, which has raised concerns about nationalistic approaches to space mining and the lack of international coordination. Similarly, Luxembourg’s pioneering space mining laws have attracted private investments, but they raise significant concerns about equitable resource distribution and environmental protections.3

Thus, the existing framework fails to address the legal, ethical, and environmental complexities of space mining, creating potential for competition and conflict in outer space, with no effective governance mechanisms to ensure fair and responsible exploitation of celestial resources.

Ethical, Environmental, and Geopolitical Implications

Space resource mining presents profound ethical and environmental challenges that must be carefully considered in any new legal framework. Chief among these is the issue of ownership. If celestial bodies and their resources are to be considered the “common heritage of mankind,” how can we ensure that the wealth generated from their exploitation is fairly distributed among all nations, especially those less capable of accessing space?4 As it stands, the benefits of space mining are likely to be monopolized by technologically advanced nations and private corporations, exacerbating global inequalities.5 The lack of an equitable framework for resource distribution may further entrench disparities between the Global North and South, making space a domain where only the rich and powerful have access to its wealth.

From an environmental perspective, the extraction of resources from celestial bodies carries risks that are not yet fully understood. While the Outer Space Treaty prohibits harmful contamination of space environments, space mining could lead to the accumulation of space debris, alterations to celestial bodies’ physical structure, and disturbances to environments that remain largely unstudied. The potential degradation of celestial bodies—such as the Moon’s fragile surface environment—poses a long-term threat that cannot be ignored. The principle of precaution, often invoked in international law to prevent harm when scientific uncertainty exists, should be applied to the regulation of space mining.6

For example, the extraction of Helium-3 from the Moon, a resource believed to have significant potential for energy production, could destabilize the Moon’s geological and environmental balance, causing harm that could reverberate throughout the space ecosystem.7 These environmental risks further emphasize the necessity for a comprehensive governance framework that prioritizes sustainability and takes into account the unknown consequences of space resource extraction.

The Need for an International Space Mining Authority (ISMA)

To address these challenges, my analysis argues that the establishment of an International Space Mining Authority (ISMA), a governing body modeled after the International Seabed Authority (ISA) under UNCLOS, is essential. The ISA oversees the extraction of seabed resources in the deep ocean, ensuring that these resources are used for the benefit of all nations, with particular attention to developing countries. Similarly, the ISMA would regulate the extraction of space resources, ensuring that the benefits of space mining are shared equitably and that the environmental impact is minimized.

The ISMA would operate as an international body responsible for granting licenses for space mining operations, establishing environmental impact assessments (EIAs), and ensuring the responsible extraction of resources. It would also be tasked with creating a Space Resource Fund to support developing countries in accessing space resources and technologies.8 This fund could also facilitate international cooperation in space missions, enabling less developed nations to participate in space exploration and resource utilization.

In terms of environmental governance, the ISMA would be responsible for implementing strict environmental standards, including requirements for comprehensive EIAs and sustainable practices in space mining operations. The agency would have the authority to monitor and enforce compliance with these standards, ensuring that space mining does not lead to irreversible damage to celestial bodies and the space environment. Furthermore, the ISMA would oversee the development of technologies that minimize the environmental impact of mining operations, such as systems for space debris removal and the responsible management of mining byproducts.

Fostering Global Cooperation and Technological Exchange

Given the immense financial and technological challenges of space mining, no single country or corporation can effectively manage space resources alone. The creation of the ISMA would facilitate global cooperation by fostering technology-sharing agreements and joint ventures among spacefaring nations. Such collaboration would help democratize access to space resources, ensuring that developing countries are not left behind in the pursuit of space wealth.

Furthermore, the ISMA could play a critical role in overseeing the development of sustainable technologies for space mining. By promoting international partnerships, the ISMA would facilitate the sharing of technology and expertise, ensuring that space mining operations are environmentally responsible and technologically feasible.9 This exchange of knowledge and resources could significantly reduce the technological divide between wealthier and less developed nations, promoting greater equity in space exploration.

Conclusion

As the commercial exploitation of space resources accelerates, the need for a robust international legal framework has never been more urgent. The current legal landscape, with its outdated treaties and fragmented national laws, is ill-equipped to handle the complexities of space resource extraction. By establishing an International Space Mining Authority (ISMA), we can ensure that space mining is conducted in a way that is fair, sustainable, and environmentally responsible.

The ISMA would provide a platform for equitable governance, ensuring that the benefits of space resources are shared by all nations and that the space environment is protected for future generations. In this way, space exploration can serve as a tool for the collective advancement of humanity, rather than a new frontier for exploitation by a select few.

  1. United Nations, The Outer Space Treaty (1967), Article II, accessed October 13, 2025.
  2. Baker, M. A. “The Legal Regime of Space Mining: Progress and Pitfalls.” Journal of Space Law 44, no. 2 (2018): 123-142.
  3. Bryner, G. “Private Space Exploration and the Need for Regulatory Frameworks.” Space Policy Review 36, no. 3 (2020): 45-67.
  4. United Nations, Agreement Governing the Activities of States on the Moon and Other Celestial Bodies, December 5, 1979, Article 1.
  5. Sung, L. “Space Mining and Environmental Sustainability.” Journal of Space Law and Policy 47, no. 1 (2020): 15-34.
  6. Baker, M. A. “The Legal Regime of Space Mining: Progress and Pitfalls.” Journal of Space Law 44, no. 2 (2018): 123-142.
  7. Gagnon, J. “Ethical Issues in Space Resource Mining.” International Law Review 29, no. 1 (2019): 67-89.
  8. Bryner, G. “Private Space Exploration and the Need for Regulatory Frameworks.” Space Policy Review 36, no. 3 (2020): 45-67.
  9. Sung, L. “Space Mining and Environmental Sustainability.” Journal of Space Law and Policy 47, no. 1 (2020): 15-34.

Featured/Headline Image Caption and Citation: Space Mining, Image sourced from Prism Sustainability | CC License, no changes made

El Salvador’s Bitcoin Gamble: Lessons for a Digitally Financial Future
https://yris.yira.org/column/el-salvadors-bitcoin-gamble-lessons-for-a-digitally-financial-future/ | July 29, 2025

This summer, the United States made history. On July 18, the GENIUS Act, the nation’s first major piece of crypto legislation, was signed into law. What makes this moment remarkable is how quickly the nation’s sentiment on cryptocurrencies has shifted, especially within government. In 2013, former Federal Reserve Chairman Alan Greenspan expressed deep skepticism about Bitcoin, saying, “You really have to stretch your imagination to infer what the intrinsic value of Bitcoin is.” Today, by contrast, the sitting U.S. president is promising to make the United States the “crypto capital of the planet.”

This shift is not only visible in the United States. In 2024, the European Union implemented the Markets in Crypto-Assets (MiCA) framework, establishing rules for stablecoins, disclosures, and market practices across all member countries. These new legislative efforts represent a legitimization of cryptocurrencies in major economies around the world. Cryptocurrencies are no longer fringe; they are being integrated into the legal and financial infrastructure of the world’s largest economies.

But years before these efforts to regulate and integrate crypto, El Salvador made a far more radical move. In 2021, it became the first country in the world to adopt Bitcoin as legal tender. By 2024, however, this experiment was significantly scaled back through a $1.4 billion loan agreement with the International Monetary Fund (IMF), which included provisions that revoked Bitcoin’s official status for tax payments and made its use by businesses voluntary. As crypto becomes a serious policy issue for major nations, what lessons can be drawn from El Salvador’s high-risk, low-readiness experiment?

The Lead Up 

El Salvador’s young and bold president, Nayib Bukele, saw Bitcoin as a tool to modernize the economy and expand financial access. At the time, over 70 percent of Salvadorans lacked access to traditional banking services. Bitcoin was pitched as a solution to this gap, offering digital wallets and financial tools to those excluded from the banking system.

Another motivation was to lower remittance costs. In 2021, approximately 22 percent of El Salvador’s GDP came from remittances, largely sent from the United States. Bitcoin was seen as a way to reduce transfer fees, speed up transactions, and keep more money in the hands of recipients.
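The scale of this motivation can be made concrete with a back-of-envelope calculation. In the sketch below, only the roughly 22 percent remittance share comes from the article; the GDP figure and the fee rates are illustrative assumptions, not source data:

```python
# Back-of-envelope sketch of potential remittance fee savings.
# Only the 22% remittance share comes from the article; GDP and
# fee rates are illustrative assumptions.
gdp = 29e9               # rough 2021 GDP in USD (assumption)
remittance_share = 0.22  # ~22% of GDP, per the article
traditional_fee = 0.06   # typical traditional transfer fee (assumption)
crypto_fee = 0.02        # hypothetical crypto transfer fee (assumption)

remittances = gdp * remittance_share
savings = remittances * (traditional_fee - crypto_fee)
print(f"Annual remittances: ${remittances/1e9:.1f}B; "
      f"potential savings: ${savings/1e6:.0f}M")
```

Under these assumed figures, shaving four percentage points off fees on roughly $6.4 billion in annual flows would keep on the order of $255 million a year in recipients' hands, which illustrates why remittance costs loomed so large in the government's pitch.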

Beyond solving existing problems, Bukele envisioned building a tech-forward economy. He announced plans for Bitcoin City, a futuristic, tax-free city powered by geothermal energy from the Conchagua volcano. The city would attract foreign investment, support crypto startups, and issue “volcano bonds” to finance its development. 

What Went Wrong? 

The law was relatively easy to pass. Bukele’s party and its allies held a supermajority in the Legislative Assembly. The legislation was sweeping: all economic agents, unless lacking the necessary technology, were required to accept Bitcoin. Taxes could be paid in Bitcoin, and government subsidies were distributed in the cryptocurrency.

To support its rollout, the government introduced Chivo, a digital wallet app designed to make Bitcoin transactions accessible for everyday use. As an incentive, citizens who downloaded the app received $30 worth of Bitcoin. The government also installed hundreds of Bitcoin ATMs nationwide to make conversion into U.S. dollars easier. 

El Salvador experienced some early wins. In the first few months, crypto-based remittances rose, making up 4.5 percent of all inflows. However, this momentum did not last. By December 2024, cryptocurrency-linked remittances had declined to just 0.87 percent. Adoption of the Chivo wallet followed a similar trend. According to Bukele, three million people downloaded the app, roughly 46 percent of the population. But usage dropped significantly once the $30 bonus was spent. Fewer than 20 percent of users remained active, and in a typical month the average user made no Bitcoin transactions and just one U.S. dollar payment.

Among those who chose not to download Chivo, the most common reasons were a preference for cash and a lack of trust in the system. Many users expressed concern over privacy and surveillance, saying Bitcoin transactions could be tracked, unlike cash. Others reported technical problems or feared volatility. 

These concerns were visible even before implementation. A poll conducted by Universidad Centroamericana José Simeón Cañas found that nearly 68 percent of Salvadorans opposed adopting Bitcoin as legal tender. Eight in ten said they did not trust the digital currency, and nine in ten said they did not fully understand it. This distrust was compounded by basic access issues: according to the World Bank, only 62 percent of Salvadorans were using the internet in 2021. The country’s rollout strategy did little to engage the public or prepare them for such a major shift. 

The IMF Deal and Rollback 

In 2024, the Salvadoran government secured a $1.4 billion loan agreement with the IMF, which came with significant policy conditions. These included changes to the Bitcoin Law: Bitcoin could no longer be used to pay taxes, and businesses were no longer required to accept it. These rollbacks marked a shift from mandatory, aggressive implementation to more voluntary and symbolic adoption. 

Still, the government has not entirely abandoned its crypto ambitions. President Bukele announced an $83 million profit from the country’s Bitcoin holdings, bolstered by market appreciation. Although the government has announced the sale of Chivo, Bitcoin ATMs remain in place, and the vision for Bitcoin City has not been officially abandoned.

What Can be Learned? 

El Salvador’s Bitcoin experiment underscores the challenges of adopting digital currencies without adequate public readiness, infrastructure, or trust. While the motivations behind the move (financial inclusion, remittance reform, and economic innovation) were valid and even admirable, the implementation was rapid and imposed from the top down. The disconnect between policy and public sentiment ultimately limited its impact.

The United States and other major economies should take note. A 2023 Pew Research study found that 63 percent of Americans have “little to no confidence that current ways to invest in, trade, or use cryptocurrencies are reliable and safe,” and only 17 percent of adults report ever having used them. These figures suggest that public trust and understanding remain major hurdles, even in highly developed economies. 

If the United States truly aims to become the “crypto capital of the planet,” legislation alone will not be enough. Policymakers must also invest in public education, regulatory clarity, consumer protection, and infrastructure. Adoption should be careful, incremental, and democratic. 

El Salvador’s story should not be dismissed as a failure. It was a bold experiment that offered real lessons about timing, trust, and readiness. As the world moves further into the digital currency era, these lessons will only grow more relevant. 

Featured/Headline Image Caption and Citation: Bitcoin, Image sourced from The Central American Group | CC License, no changes made

Moral Authority in a Technological World: The Vatican and the Global AI Race
https://yris.yira.org/column/moral-authority-in-a-technological-world-the-vatican-and-the-global-ai-race/ | July 12, 2025

In Vatican City, the home of the Roman Catholic Church and among the holiest sites in the Christian world, representatives from the world’s leading technological, legal, academic, and business sectors gathered for the Second Annual Rome Conference on AI, Ethics, and the Future of Corporate Governance. The goal: to discuss and reflect on the ethical challenges posed by artificial intelligence and explore its implications for the future of not only business, but society as a whole. Through roundtable discussions, panels, and fireside chats, industry leaders and experts were given a chance to engage in important dialogue about the rising use of AI in governance and business. 

During this conference, the newly elected Pope, Pope Leo XIV, delivered an address framing AI as not only a technological concern but also a moral and spiritual one. He called for attendees to “consider AI within the context of the necessary intergenerational apprenticeship that will enable young people to integrate truth into their moral and spiritual life,” expressing concern that AI could hinder children’s ability to learn and to possess an “authentic wisdom” that “has more to do with recognizing the true meaning of life than with the availability of data.”

Arguably, Pope Leo XIV’s address was one of the first major public acts of his papacy to assert the moral authority of the Church on a pressing global issue. While the Vatican wields no formal political power, its ability to guide conscience and shape international norms through soft power is how it has remained one of the most influential institutions in the world. 

The 2025 Conference 

Founded in 2024 by Wilson Sonsini and Libra Legal Partners, the Rome Conference on AI, Ethics, and the Future of Corporate Governance invites some of the biggest industry names. This year’s conference was attended by leaders from Meta, Google, and OpenAI, in addition to academics from prestigious institutions such as Harvard and Stanford Law.

Key themes of this year’s conference included the transformation of corporate governance, the integration of ethical oversight into AI development, and the importance of public-private dialogue on accountability and human dignity. What differentiated this conference from similar ones was the uniquely religious setting and influence surrounding it. Pope Leo XIV explicitly expressed his and the Church’s “desire to participate in these discussions that directly affect the present and future of our human family.” While technological meetings like this generally sidestep moral dimensions, this gathering explicitly foregrounded language of conscience and purpose. 

Why the Vatican and AI? 

At first glance, it may seem unlikely that major players in the technology and AI sectors would listen to, or even care about, the opinions of the head of one of the world’s oldest religions. However, the Vatican’s ties to science and ethics have been prominent over the years, with multiple addresses and writings such as Pope John Paul II’s 1991 address to the Pontifical Academy of Sciences, in which he stressed the need for ethical guidance in genetic research. 

A particularly impactful moment came with the 2015 release of Laudato Si’, the late Pope Francis’s landmark encyclical on environmental ethics, which encouraged worldwide collaboration in creating sustainable development to face environmental challenges. Laudato Si’ was released just months before the adoption of the Paris Agreement, an international treaty on climate change in which Vatican City participated as an observer state, shaping the moral framing of the negotiations and signaling the Vatican’s active engagement in global climate discourse.

More recently, in the realm of AI, the Vatican led the creation of the Rome Call for AI Ethics in 2020 with the objective of promoting “an ethical sense of shared responsibility among international organizations, governments, institutions and the private sector.” Major corporations such as Qualcomm and Microsoft have become signatories, and although the Call is not legally binding, it provides a valuable moral framework. 

The impact of Laudato Si’ and the Rome Call for AI Ethics highlights the Vatican’s ability to influence global policy not through legal authority, but by providing an ethical framework and exerting soft power. 

The Moral Voice in the Technological World 

This year’s Rome Conference on AI, Ethics, and the Future of Corporate Governance demonstrated that the Vatican continues to wield meaningful soft power in the worlds of science and technology. But as artificial intelligence advances at a staggering pace, a pressing question remains: can the Vatican, or anyone interested in AI ethics, truly keep up? As psychologist and AI researcher David D. Luxton, PhD, observes, “AI is moving so fast that it’s difficult to grasp how significantly it’s going to change things.” When the scale and direction of change are hard to even grasp, how can we form coherent opinions, mobilize conferences, or organize ethical frameworks and guardrails in time to respond?

Though the Vatican was able to assert influence and help shape the adoption of the Paris Agreement, we have yet to see a binding international agreement for AI that matches its scope or force. Without a shared global framework, ethical commitments on AI remain fragmented, and non-binding, unenforceable commitments offer limited impact. The Vatican’s efforts underscore the urgent need to forge international cooperation before technology outpaces our ability to govern it. The question now is whether, once again, the Vatican can rise to the moment and help facilitate a formal binding agreement as it did in 2015, or whether the acceleration of AI will leave the historic institution and its influence behind.

Featured/Headline Image Caption and Citation: Pope Leo XIV, Image sourced from Heute | CC License, no changes made

Code and Control: How AI is Reshaping US Border Control
https://yris.yira.org/column/code-and-control-how-ai-is-reshaping-us-border-control/ | June 4, 2025

In recent years, artificial intelligence (AI) has become a key component of the US Department of Homeland Security’s (DHS) technological strategy, particularly in border enforcement and immigration control. From facial recognition at airports to algorithmic cargo screening and predictive threat assessments, the DHS has embraced AI under the promise of national security and operational innovation. Given strong public concerns regarding surveillance overreach and algorithmic bias, it is essential to thoroughly examine this innovation and establish necessary safeguards. The intersection of AI, border control, and human rights raises urgent questions: Who benefits from these technologies? Who is harmed? And how are systemic inequalities reinforced through digital governance?

The AI Push at DHS

According to the DHS AI Use Case Inventory, Customs and Border Protection (CBP) has the highest number of distinct AI applications among all DHS branches, with 75 identified use cases across various operational contexts. These systems are designed to scan cargo, validate identities, detect anomalies, and assess potential threats at ports of entry. Other DHS branches, such as US Citizenship and Immigration Services (USCIS) and Immigration and Customs Enforcement (ICE), also heavily rely on AI technologies to process applications, track migration trends, and support enforcement actions.

These systems are classified as being in either the deployment or pre-deployment phase. For example, 31 use cases are currently deployed within CBP, with 13 marked as potentially impacting public safety and rights. Notably, many of these uses involve the collection of biometric data, including facial recognition and facial capture technologies, which are known to have disproportionately high error rates for people of color and raise heightened privacy concerns.
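As a sketch of how inventory figures like these can be tallied, the snippet below filters a toy list of use-case records by deployment stage and rights-impact flag. The field names (`agency`, `stage`, `rights_impacting`) and the sample rows are illustrative assumptions, not the actual DHS inventory schema:

```python
# Hypothetical sketch: summarizing an AI use-case inventory export.
# Field names and sample data are illustrative, not the real DHS schema.
from collections import Counter

def summarize(inventory):
    """Return deployed-use-case counts per agency and the number
    flagged as potentially impacting safety or rights."""
    deployed = [row for row in inventory if row["stage"] == "deployed"]
    per_agency = Counter(row["agency"] for row in deployed)
    flagged = sum(1 for row in deployed if row["rights_impacting"])
    return per_agency, flagged

sample = [
    {"agency": "CBP", "stage": "deployed", "rights_impacting": True},
    {"agency": "CBP", "stage": "pre-deployment", "rights_impacting": False},
    {"agency": "ICE", "stage": "deployed", "rights_impacting": False},
]
counts, flagged = summarize(sample)
print(dict(counts), flagged)  # {'CBP': 1, 'ICE': 1} 1
```

Run against the full published inventory, the same two-step filter (stage, then rights flag) reproduces headline figures like the 31 deployed CBP cases and 13 rights-impacting ones cited above.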

AI initiatives within DHS are formally governed by frameworks such as the Office of Management and Budget’s (OMB) Memorandum M-24-10 and President Biden’s Executive Order 14110 on trustworthy AI. These policies require agencies to designate Chief AI Officers, establish governance boards, and implement risk management practices. According to its principles, DHS’s AI must be “lawful, mission-appropriate, and mission-enhancing” as well as “safe, secure, responsible, trustworthy, and human-centered.” The oversight of these AI deployments is managed in part by the DHS Privacy Office and the Office for Civil Rights and Civil Liberties (CRCL). 

Yet, despite these procedural safeguards, critics argue that the system lacks genuine human rights accountability, particularly when AI intersects with immigration enforcement, a domain already saturated with structural racism and xenophobia.

Racialized Implications of AI at the Border

Civil society organizations, including the Promise Institute for Human Rights and the Black Alliance for Just Immigration (BAJI), have voiced grave concerns about the racial justice implications of AI at the border. In a 2023 thematic hearing before the Inter-American Commission on Human Rights (IACHR), these groups highlighted how AI-driven border technologies exacerbate existing racial discrimination, particularly against Black migrants. The Promise Institute’s submission warned that AI-enabled policies of deterrence and border externalization perpetuate the same patterns of exclusion and abuse that have historically plagued US immigration systems.

The testimony argued that without a racial justice lens, the deployment of border technologies only intensifies the racial discrimination already at the heart of the US immigration system. Technologies like facial recognition, which studies have shown to be less accurate for people with darker skin tones, are particularly prone to reinforcing bias when used in enforcement scenarios. 

These concerns are not hypothetical. The use of AI in immigration contexts has already resulted in wrongful detentions, misidentifications, and an erosion of due process. Predictive algorithms used to assess risk or determine case prioritization can reflect and reproduce biased data, creating feedback loops that disproportionately target racialized communities.

Governance Gaps and Power Imbalances

While the DHS has proposed its AI Governance Board as a mechanism for oversight, this council is primarily composed of senior DHS officials, including representatives from CBP, ICE, and other enforcement agencies. Independent human rights experts, public defenders, and civil society actors are absent. This power imbalance limits the scope of internal accountability and fails to represent the perspectives of those most affected by AI policies.

The lack of external oversight is compounded by leadership volatility. The inaugural Chief AI Officer of DHS, Eric Hysen, served during the Biden administration until January 2025. His successor, David Larrimore, departed only months later in April 2025, leaving the role vacant at a critical time. This instability raises questions about the continuity of ethical governance and long-term strategic oversight of DHS’s AI strategy. Several use cases in the DHS inventory are either too new to assess for rights implications or have been prematurely deemed non-impacting despite clear potential concerns. For example, face recognition technologies, a known flashpoint for civil liberties violations, are in active use but have not consistently been flagged as rights-impacting in DHS’s internal assessments.

Legal Compliance vs. Ethical Responsibility

The DHS insists that its AI practices are lawful and compliant with privacy and civil liberties policies. But legality does not equal justice. Policies like M-24-10, while well-intentioned, focus heavily on internal procedures and documentation rather than substantive human rights protections. They do not require external review by civil society or provide meaningful mechanisms for affected individuals to seek redress. The US government’s approach to AI in border control often assumes that technological development leads to better governance. This assumption ignores the broader socio-political context in which these technologies operate, one marked by decades of exclusionary immigration laws, racial profiling, and militarization of the southern border.

International Implications and Global Trends

The United States is not alone in deploying AI for border enforcement. Countries such as the United Kingdom, Australia, and members of the European Union have similarly adopted AI technologies to monitor and manage migration. This global trend raises concerns about a growing international norm where surveillance and control are prioritized over human rights. As wealthier nations expand AI surveillance at their borders, the brunt of its harms is disproportionately felt by migrants and asylum seekers from the Global South, regions historically shaped by economic exploitation, climate disruption, and political instability. As such, the consequences of border technologies trickle far, affecting migrants and asylum seekers who are already navigating precarious conditions. Recognizing this global dimension is crucial to building AI governance frameworks that are both ethically grounded and internationally accountable.

The Need for Rights-Based AI

A genuinely responsible AI strategy for border enforcement must go beyond internal compliance checklists. It must incorporate binding human rights standards, transparency mechanisms, and participatory design processes that center the experiences of marginalized communities. The United States has an opportunity to lead by example in aligning its national security strategies with democratic values and human dignity. But that can only happen if the deployment of AI at the border is held accountable not just to bureaucratic standards, but to the people it affects most deeply.

Artificial intelligence, while often beneficial, is not neutral. When embedded in immigration enforcement, it risks reinforcing the same inequalities that civil rights movements have long fought to dismantle. As the DHS continues to expand its AI capabilities, the need for critical scrutiny and structural reform becomes more urgent. Technology should not be a tool for deepening exclusion, but a means to uphold justice. That goal demands transparency, accountability, and a fundamental rethinking of what it means to secure a border in a way that respects human rights.

Featured/Headline Image Caption and Citation: AI Code, Image sourced from European Alternatives | CC License, no changes made

#OpISIS: Hacktivism and the New Era of Counterterrorism https://yris.yira.org/column/opisis-hacktivism-and-the-new-era-of-counterterrorism/ Thu, 29 May 2025 16:38:38 +0000 https://yris.yira.org/?p=8633

The year is 2015. Following the November Paris Attacks—in which ISIS killed 129 people and wounded many more—the online “hacktivist” group Anonymous publicly declared war on the Islamic State’s online operations, triggering swift, severe retaliation and, eventually, a full-blown cyberwar between the two parties. The 2015 conflict between Anonymous and ISIS (“OpISIS”)—while widely regarded as the first prominent declaration of cyberwarfare by a non-state actor in the 21st century—is not an isolated episode. Rather, it reflects a larger shift toward vigilante counterterrorism. Not only are international terrorist groups increasingly pivoting toward social media recruitment to expand their influence in the Western world, but the retaliation they provoke points to an entirely new mode of warfare: one with the internet and social media at the forefront. 

Social media platforms, while often regarded politically as venues where candidates for public office can advertise their policy positions and build a voter base, can also become hotbeds for nefarious activity. As online political discourse has become more prevalent, so have loosely regulated outlets that allow extremist indoctrination to spread more efficiently. In his book Islamic State: The Digital Caliphate, scholar Abdel Bari Atwan explains that platforms like 4Chan and Twitter have served as “recruiting tool[s] and psychological warfare weapon[s],” shape-shifting from conventional communication channels into “advanced media machine[s].”1 Moreover, a 2018 RAND report demonstrates that this reach is not confined to these groups’ home countries. Rather, major terrorist groups such as ISIS have been able to mobilize “an estimated 40,000 foreign nationals from 100 countries” to join. 

This expansion is achieved in two key steps. First, extremist groups proliferate thousands of shell accounts designed to look like average users—typically in the Western world. Second, they couple recruitment posts with unrelated, trending topics and hashtags (for instance, #WorldCup) to widen their reach to the average viewer. Yet despite being shrouded in popular trends and hashtags, much of this content is overtly radical, leading one to question: who buys in? While the fundamentally deregulated nature of platforms like 4Chan—and to some extent, X—has allowed for the large-scale dissemination of propaganda, there is a uniquely psychological dimension that explains how this content garnered so much success in the first place.

As Tamar Mitts outlines in her book From Isolation to Radicalization: Anti-Muslim Hostility and Support for ISIS in the West, analysis of the demographic backgrounds of the nearly 30,000 foreigners who traveled to the Middle East to fight for ISIS reveals little overlap in age group, racial background, or socioeconomic status. What most had in common was a shared sense of isolation coupled with extensive time spent online. Abdulla Almutairi’s “Social Media as a Recruitment Tool for ISIS” details that such content typically appeals to “alienated youth, mostly male, who are searching for a sense of belonging and a true calling, a sense of mission and value for their disaffected lives.” 

The benefit that this demographic offers to extremist groups is twofold. First, its members are easier to recruit, given that they spend most of their time on the platforms where this content spreads most rapidly. Second, they are more likely to be devoted to the cause into which they are radicalized, and will go to great lengths (from travel to violent action) to serve the group’s larger ideological goals. The messaging in most propaganda videos at first focuses on incentives, such as the sense of “community” one can gain by joining. This initial appeal often serves not only as a pipeline to more violent content, but as an eventual impetus toward dangerous action once groupthink pushes the collective further toward radicalism. In recent years, such actions have increasingly been deterred by non-state “hacktivist” organizations. 

Anonymous was virtually unheard of before the turn of the 21st century, but by the early 2010s it had cemented its legacy as one of the largest international, non-state hacktivist collectives. Anonymous is, in essence, a creature of the internet: since its founding in 2003, it has gained notoriety for interventions ranging from attacks on Tunisian and Zimbabwean government websites in 2010 to hacking the Federal Reserve three years later. Anonymous’s distinctiveness lies in its autonomous nature, lack of defined ideology, and willingness to take significant risks—such as breaching the systems of state governments—to achieve its goals. 

While Anonymous is arguably the largest and most notorious hacktivist group, many more have cropped up since the 2015 cyberwar, suggesting that we may be entering a new era of counterterrorism specific to digital attacks. At the very least, social media is certainly being used as a tool for states to achieve their own political goals. There is also increasing evidence that non-state actors beyond ISIS have been employing it in lieu of password-protected forums because “the pool of potential recruits, supporters, or sympathizers that can be reached on social media is vastly larger.” The simultaneous rise of terrorist digital attacks and hacktivist retaliation resembles two state governments in perpetual competition—a digital arms race, with social media content and propaganda at the forefront. This raises the question of whether hacktivist groups could be mobilized in collaboration with governments such as the United States to take down targets. 

However, this does not come without risks. Groups like Anonymous, while potential tools for fostering accountability in global cyberspace, lack centralization because they hold no large-scale or uniform political or ideological commitments. Given that they pledge allegiance to no specific party, actor, or government, action may not materialize unless Anonymous feels a strong incentive—something hard to come by when there is little coherence among members and difficulty in mobilizing. But this does not mean that hacktivism as a whole lacks security value, or that states cannot adopt similar measures in their own security operations. If state governments hope to keep pace with both terrorist threats and independent actors operating in cyberspace, they must rapidly advance their digital infrastructure, because if there is one thing that Anonymous’s 2015 “OpISIS” demonstrates, it is that hacktivism is here to stay.

  1.  Abdel Bari Atwan, Islamic State: The Digital Caliphate (London: Saqi Books, 2015) ↩︎

Featured/Headline Image Caption and Citation: Social Media, Image sourced from Sprout Media Lab | CC License, no changes made

The Weaponization of Data: International Legal Responses to Digital Espionage and State-Sponsored Cyber Warfare https://yris.yira.org/column/the-weaponization-of-data-international-legal-responses-to-digital-espionage-and-state-sponsored-cyber-warfare/ Thu, 15 May 2025 22:17:07 +0000 https://yris.yira.org/?p=8616

In the 21st century, data has emerged as one of the most valuable assets, fundamentally reshaping global power dynamics and international relations. As nation-states increasingly harness data-driven strategies to advance their geopolitical interests, traditional notions of warfare and espionage are being redefined. This work examines the phenomenon of data weaponization in the digital age—specifically through state-sponsored cyber espionage, disinformation campaigns, and counter-cyber operations—and analyzes the efficacy of current international legal frameworks in mitigating these threats. Central to the discussion is the role of Open-Source Intelligence (OSINT) in enhancing attribution and accountability in cyber conflicts, a tool that has become indispensable in today’s interconnected world. 

The transformation of data into a strategic resource is unprecedented. Unlike traditional military assets, data is intangible, easily replicable, and can be transmitted across borders in seconds. Nations use it to keep tabs on other states’ actions. Sensitive information that would otherwise remain secret can be gathered through cyber espionage, enabling a nation to influence another government’s decisions or sow discord among its competitors. Access to crucial intelligence also helps a state maintain an advantage on the international stage and strengthen its national security. 

Several nations’ strategies have highlighted the strategic use of data in modern conflict. For instance, China’s cyber-espionage campaigns have targeted both governmental and private sector networks, aiming to acquire technological and economic advantages. According to recent reports, China-affiliated actors have compromised multiple telecommunications networks in an extensive and serious cyber espionage effort to steal call logs and private data pertaining to requests from U.S. law enforcement. Similarly, Russia’s sophisticated disinformation and cyber operations have been central to its hybrid warfare strategies, influencing electoral processes and destabilizing adversaries. In response, the United States has increasingly adopted counter-cyber measures to protect its critical infrastructure and respond to hostile cyber activities. Academics have even referred to data as “the new soil.” These case studies underscore the need for robust international legal responses that can address the complexities of data-driven warfare. 

The evolution of cyber warfare has prompted the adaptation of international law to address new security challenges. Two significant legal instruments in this regard are the Tallinn Manual 2.0 and the Budapest Convention. The Tallinn Manual 2.0 provides a comprehensive analysis of how existing international law applies to cyber operations, covering aspects such as state responsibility, neutrality, and the applicability of the law of armed conflict. However, while the manual offers valuable guidance, it is non-binding and its recommendations rely on voluntary adoption by states. 

The Budapest Convention, on the other hand, focuses on combating cybercrime by establishing minimum standards for criminalizing certain types of cyber activities. Although it serves as a model for national legislation and international cooperation, its scope is limited to criminal matters and does not fully address the nuances of state-sponsored cyber operations that may fall within the ambit of international conflict. 

Despite these legal instruments, significant challenges remain. One of the primary issues is the attribution problem: the difficulty of definitively identifying the origin of a cyberattack. Unlike conventional warfare, where physical evidence can link actions to a specific actor, cyber operations often involve layers of obfuscation, including the use of proxy servers and anonymization techniques. This ambiguity complicates efforts to hold states accountable under international law. 

The current international legal instruments, while foundational, fall short of comprehensively addressing the challenges posed by state-sponsored cyber warfare. The non-binding nature of the Tallinn Manual 2.0 and the limited scope of the Budapest Convention illustrate the gaps in the international legal system when it comes to cyber operations. Additionally, the persistent attribution problem and the rapid evolution of cyber technologies outpace the slow processes of treaty negotiation and international consensus-building. 

Furthermore, the traditional principles of jus ad bellum (the right to engage in war) and jus in bello (the law governing the conduct of warfare) are difficult to apply in cyberspace. The thresholds for what constitutes an act of aggression or a use of force in the digital realm are still under debate. Consequently, the current legal framework is ill-equipped to address the full spectrum of cyber operations, leaving states with a fragmented set of norms and practices. 

Open-Source Intelligence (OSINT) refers to the collection and analysis of publicly available information, typically gathered from online sources, social media, forums, and other digital platforms. In the context of cyber warfare, OSINT has become a critical tool for gathering evidence, tracking cyber actors, and attributing attacks to specific state or non-state actors. 

The advantages of OSINT are manifold. It provides an accessible means to collect vast amounts of data, often in real time, which can then be analyzed to detect patterns and anomalies associated with cyber threats. By leveraging OSINT, analysts can trace the digital footprints left by cyber operatives, correlating online activities with known threat actors and drawing connections between disparate incidents. 

For instance, during the investigation of the SolarWinds breach, OSINT techniques were instrumental in piecing together the attack’s modus operandi and linking it to sophisticated state-sponsored actors. Similarly, OSINT has been used to monitor the activities of Chinese Advanced Persistent Threat (APT) groups and Russian information warfare campaigns, providing vital intelligence that supports legal and diplomatic responses. 

While OSINT provides a wealth of information, its integration into legal frameworks poses challenges. The admissibility of OSINT-derived evidence in international courts remains a grey area, particularly given concerns about the reliability and verifiability of such data. Nevertheless, as cyber investigations increasingly rely on OSINT, there is a growing need to establish standardized protocols that ensure the integrity and accuracy of OSINT evidence. 

Legal scholars argue that incorporating OSINT into international legal frameworks could enhance attribution mechanisms and provide a more robust basis for state accountability. Establishing clear guidelines for the collection, verification, and presentation of OSINT evidence would not only bolster legal cases against state-sponsored cyber operations but also promote greater transparency and cooperation among states. Yet several obstacles stand in the way: the questionable admissibility of OSINT in court, the lack of standardized verification protocols, and the absence of a unified legal framework for addressing cyber operations.

In light of these challenges, many legal scholars and practitioners advocate for the development of a cyber-specific treaty. Such a treaty would aim to create binding obligations for states, clearly defining what constitutes an act of cyber aggression and establishing mechanisms for accountability and redress. Key elements of a cyber treaty could include: 

Clear Definitions and Thresholds: Establishing unambiguous definitions of cyber aggression, espionage, and disinformation, as well as setting clear thresholds for what constitutes a use of force in cyberspace. 

Attribution Standards: Developing international standards for cyber attribution that incorporate OSINT, digital forensics, and intelligence-sharing protocols. 

Legal Recourse and Sanctions: Creating mechanisms for imposing sanctions or other punitive measures on states that engage in unlawful cyber operations, thereby deterring malicious behavior. 

Cooperative Frameworks: Promoting international cooperation in cyber investigations and fostering an environment where states can share OSINT and other critical intelligence without compromising national security. 

Effective regulation of state-sponsored cyber activities requires unprecedented levels of international cooperation. States must overcome traditional rivalries and work collaboratively to address a threat that transcends national borders. Multilateral organizations, such as the United Nations and the International Telecommunication Union (ITU), could play pivotal roles in mediating discussions and formulating a cohesive global cyber governance framework. 

By integrating OSINT into cooperative international efforts, states can create a more transparent and accountable system. Joint investigations, shared intelligence databases, and coordinated legal responses can collectively strengthen the global response to cyber threats. This cooperative approach not only enhances security but also reinforces the rule of law in cyberspace. 

However, while cooperation and shared intelligence are critical, they are not sufficient on their own. The effectiveness of international responses also depends heavily on the ability of legal systems to keep pace with evolving threats. One of the most significant challenges facing international legal responses to cyber warfare is the rapid pace of technological change. Cyber capabilities continue to evolve, and new methods of data manipulation and cyber intrusion emerge regularly. Legal frameworks, which are inherently slower to adapt, risk becoming obsolete if they do not incorporate flexible, technology-agnostic principles. 

Legal reforms must be dynamic and forward-looking, allowing for periodic reviews and updates as technology advances. Embedding adaptability into international treaties—for instance through built-in review clauses or the establishment of specialized cyber oversight bodies—could ensure that legal instruments remain relevant in the face of technological innovation. 

Another challenge lies in balancing national sovereignty with the need for global cyber norms. States are often reluctant to cede control over their cyber policies, viewing them as critical components of national security. However, the borderless nature of cyber operations necessitates a degree of compromise and the establishment of universal standards. Finding the right balance between respecting state sovereignty and enforcing international norms will require diplomatic finesse and a willingness to engage in multilateral dialogue. The success of any legal reform will depend on the ability of states to reconcile these competing interests and work towards a mutually beneficial framework. 

In addition to state actors, the private sector and civil society play crucial roles in the digital ecosystem. Tech companies, cybersecurity firms, and academic institutions are often at the forefront of technological innovation and cyber defense. Their expertise and insights can inform the development of more effective legal and regulatory frameworks. Incorporating perspectives from non-state actors into international legal discussions can enrich the dialogue and ensure that the resulting frameworks are comprehensive and well-informed. Collaborative initiatives, such as public-private partnerships and multi-stakeholder forums, can facilitate the exchange of ideas and foster a more resilient global cyber governance structure. 

The weaponization of data represents a paradigm shift in international relations and warfare. As state-sponsored cyber operations become more prevalent, the inadequacies of existing international legal frameworks are increasingly exposed. This work explored how nations use cyber espionage, disinformation, and digital coercion as strategic tools, while highlighting the transformative role of OSINT in attributing and mitigating these actions. Current legal instruments—namely, the Tallinn Manual 2.0 and the Budapest Convention—offer some guidance but are insufficient to address the complexities of modern cyber conflict. The challenges of attribution, the rapid pace of technological evolution, and the need for enhanced international cooperation call for a cyber-specific treaty that clearly defines cyber aggression, incorporates OSINT-driven evidence protocols, and establishes binding accountability measures. 

Looking ahead, the development of dynamic, adaptable legal frameworks that balance national sovereignty with global norms is imperative. By leveraging OSINT and fostering international collaboration, the global community can build a more robust and transparent system for regulating state-sponsored cyber warfare. This, in turn, will help ensure that the digital domain remains a space where the rule of law prevails, safeguarding both national security and international stability. 

In conclusion, as nations continue to harness data as a geopolitical tool, the international legal community must evolve in tandem. The creation of comprehensive legal instruments that effectively address cyber threats is not merely an academic exercise—it is a critical step toward ensuring a secure and just digital future. The journey toward such reform will undoubtedly be complex, but it is necessary, requiring sustained dialogue, innovative legal thinking, and a commitment to bridging the gap between technology and law.

Featured/Headline Image Caption and Citation: Artificial Intelligence, Image sourced from ISPI | CC License, no changes made

Climate Change is Russia’s Biggest Strategic Weapon   https://yris.yira.org/column/climate-change-is-russias-biggest-strategic-weapon/ Thu, 15 May 2025 04:45:00 +0000 https://yris.yira.org/?p=8611

In 2015, Russia signed the Paris Agreement on climate change. In 2019, the nation formally joined. According to the Kremlin, Russia aims to significantly reduce emissions by 2030 while fostering sustainable economic development. Since then, Russia has published numerous doctrines emphasizing its lofty goals for climate-focused initiatives. The Kremlin’s updated 2023 climate doctrine vows carbon neutrality by 2060, citing numerous carbon reduction projects aimed at offsetting emissions and international research efforts to implement these measures effectively. 

Russia claims it is committed to climate awareness, but its actions tell a different story. While the Kremlin signs climate agreements and works policies into national strategy, it quietly stands to gain more from a warming planet.  

Like every country, Russia stands to be harmed by climate change in some form. Thawing permafrost in its north is disrupting infrastructure, and warmer weather and droughts are hindering agriculture, one of Russia’s notable export sectors. Reduced access to clean water and more frequent natural disasters also loom. This not only affects Russia’s economy but also lowers the quality of life of its people. In total, the G20 Climate Risk Atlas predicts that Russia’s economy, across all sectors, will lose 8.93% by 2100 if it sticks to its current climate trajectory. Yet Russia remains ill-prepared to confront these realities: top-ranking officials barely acknowledge these crises or fund the key agencies needed to foster intergovernmental cooperation. 

Despite climate agreements and codified policies, Russia is backsliding into a less climate-aware posture. The main reason is Russia’s economic cash cow: oil and gas exports. In 2024, oil and gas revenues made up 30% of the federal budget revenue. Before the UN’s COP28 climate summit, Russia spoke out against the phasing out of fossil fuels. Additionally, the updated climate doctrine makes no mention of the effects of fossil fuels on climate change, dropping a section that had appeared in the previous doctrine of 2009. At COP29 in 2024, Russia ironically sent 900 delegates with the goal of striking bilateral fossil fuel deals. 

Furthermore, Russia’s war on Ukraine has delegitimized its climate efforts. The first two years of the conflict produced over 175 million tons of carbon dioxide, accounting for over $32 billion in environmental damages. Government funding for environmental protection ceased once the conflict began, and initiatives for government agencies and private sector companies to transition toward carbon neutrality have been abandoned due to economic constraints. 

The economic and societal issues posed by climate change are abundantly clear to the Kremlin. Despite recent calls to stay in the Paris Agreement and vows to reduce its carbon footprint, Russia’s current trajectory suggests the opposite. Russia’s climate claims are all talk and little to no action. What progress the country does report rests on numbers the Kremlin has manipulated by introducing new coefficients for calculating emissions. Why is Russia not making strides to curb global warming and mitigate its projected losses? 

The answer lies in the Arctic. Russia stands to gain enormously from global warming and melting ice caps. The Arctic Circle is quickly becoming the nexus of worldwide competition and power projection for the coming decades. Russia’s territory accounts for 53% of the Arctic coastline, making it a decisive player. As the Arctic melts, Russia inches closer to massive economic opportunities. 

The Arctic holds an estimated 13% of the world’s undiscovered oil reserves and 30% of its undiscovered natural gas reserves. However, current conditions in the high Arctic make extraction extremely difficult. This has not stopped Russia, though: Moscow plans to extract and export 100 million tons of Arctic oil by 2030. As temperatures rise, conditions ease, opening up more room for exploitation. This was even highlighted in Moscow’s 2023 climate doctrine. 

But this is just the beginning. In 2020, Russia approved a development project worth over $300 billion to scale up resource extraction in preparation for warmer weather. Russia also partnered with China and Saudi Arabia on the Vostok Oil Project, which has been dubbed the largest in the “modern-day global oil industry.” These projects aim to extract 8 billion barrels of oil between now and 2060. 

However, these ambitions are contingent on one factor: the continuation of global warming. Because these projects are extremely lucrative, Russia lacks any incentive to invest in reducing its carbon footprint. 

Better mining conditions are not the only thing global warming will bring; it will also melt the ice that blocks the Northern Sea Route (NSR). Currently, the route is open only during the summer months, but once operational year-round, the NSR will cut shipping times between China and Europe by 30% to 40%. The route lies within Russia’s exclusive economic zone, and Moscow has already made significant investments to control the passage. 

Construction of two mega-ports at either end of the NSR (Murmansk and Vladivostok) is set to be completed by 2026. Two other ports along the NSR, in Sabetta and Tiksi, are also under construction. All of this aims to turn the NSR “into a new Suez.” As melting ice opens the route, an estimated 270 million tons of cargo are predicted to pass through by 2035, a tenfold increase from 2022. 

Again, the success of these ventures depends on climate change. For Russia, the economic gain and power projection these projects promise far outweigh the negative impacts of global warming. Russia’s climate strategy is ultimately rooted in calculated contradiction. While the Kremlin claims climate awareness and carbon reduction, its actions tell a different story. Russia’s words are used to save face, and its actions are masked in misinformation to advance its agenda. 

Russia does not care about the environment, no matter how much it claims to. It depends on global warming to strengthen its economy and international standing. Until Russia aligns its actions with its climate rhetoric, its environmental promises will remain hollow, a dangerous gamble for both its people and the planet.

Featured/Headline Image Caption and Citation: Ship in Ice, Image sourced from ISPI | CC License, no changes made

From Davos to Global Governance: How Youth Leaders Can Shape the Future of Ethical AI https://yris.yira.org/column/from-davos-to-global-governance-how-youth-leaders-can-shape-the-future-of-ethical-ai/ Tue, 22 Apr 2025 19:41:44 +0000 https://yris.yira.org/?p=8544

Artificial intelligence is no longer a niche issue—it is a defining force in international relations. From autonomous weapons and surveillance to AI-driven trade logistics and global health, its impact is as sweeping as it is unregulated. This pervasive influence underscores the critical significance of establishing robust frameworks for global AI governance to ensure its development and deployment align with ethical principles and societal well-being.   

Its reach is expansive, yet despite its vast implications, AI governance remains largely concentrated in the hands of senior policymakers, corporate leaders, and technocrats. The voices of the next generation—those who will inherit and advance this technology—are still notably absent from the global policy table. Youth Leaders Davos (YLD), a powerful initiative founded and led by the visionary global educator and entrepreneur April Swando Hu (Yale ’84), provided a unique platform to address this critical gap during the 2025 World Economic Forum week in Davos, Switzerland. In Davos, I joined youth leaders from across the globe to confront one of the most urgent challenges of our time: ethical and inclusive AI governance. This catalytic incubator of ideas, born of Ms. Hu’s commitment to fostering future-oriented leadership, transformed abstract interest into actionable conviction.

Participating in a specialized forum on the ethical implications of AI in healthcare diagnostics and a simulation exercise on international cooperation in regulating large language models served as critical launchpads for understanding the complexities and urgency of youth involvement in these crucial conversations. These experiences underscored how the future of global governance will be inextricably linked to the governance of artificial intelligence, demanding the inclusion of diverse perspectives to navigate its profound societal implications.

Global institutions have begun to respond: UNESCO introduced a global ethical framework for AI, the OECD offers AI policy observatories, and the EU’s AI Act marks a regulatory landmark. While these frameworks represent important progress, there remains an opportunity to integrate more diverse, youth-driven perspectives—especially from the Global South. Young people, who are both at the forefront of technological innovation and among those most impacted by its outcomes, bring vital insights that can complement the expertise of established policymakers and technocrats.   

This generational gap became clear during discussions where senior leaders contributed deep expertise on the technical and geopolitical dimensions of AI, while our cohort brought complementary perspectives rooted in ethical urgency, lived experience, and future-oriented thinking. We questioned how AI could reinforce structural inequities if left unchecked, and how algorithmic opacity might deepen divisions along socioeconomic and racial lines. Far from being idealistic, these concerns reflected a grounded understanding of how power and technology intersect.

International institutions can embed fairness, transparency, and sustainability into digital systems by adopting inclusive, human-centered design standards that prioritize equity across socioeconomic and cultural contexts. Fairness begins with representative data: AI systems must be trained on diverse, de-biased datasets that reflect global populations, not just data from dominant regions or demographics. To ensure transparency, institutions should advocate for algorithmic explainability—requiring developers to disclose decision-making processes and enabling public audits of AI tools deployed in high-stakes areas like healthcare and finance. Sustainability, meanwhile, demands regulatory alignment between technological innovation and environmental stewardship; this includes incentivizing energy-efficient AI models and embedding climate risk assessments into digital infrastructure development. Only through such principled and coordinated action can international institutions ensure that AI development aligns with the broader goals of equity, accountability, and long-term global well-being.

History offers precedents. Youth-led advocacy has long shaped global discourse—from the anti-apartheid movement to climate activism. More recently, figures like Greta Thunberg and Malala Yousafzai have challenged international bodies to rethink who gets to speak and what solutions are prioritized. Yet when it comes to emerging technologies like AI, young voices are often sidelined in favor of corporate interests or state security agendas.   

What would a more inclusive model of AI governance look like? It would involve intergenerational collaboration, yes, but also concrete policy mechanisms: youth representation in multilateral AI forums, funding for grassroots AI education in underrepresented regions, and a global youth assembly to propose digital rights frameworks. Just as the Paris Agreement was shaped by civil society and indigenous voices, the future of AI demands broad-based legitimacy grounded in ethical pluralism.

Three key principles should guide international AI governance moving forward. First, self-aware leadership. Before we can regulate machines, we must interrogate our own values. Leadership development grounded in self-reflection, empathy, and humility is critical to resisting the technocratic temptation to govern from above. Second, cross-cultural intelligence. AI systems are trained on data, but values are shaped by culture. Building ethical AI requires deep, cross-cultural engagement that prioritizes local contexts and historically marginalized communities. Third, purpose-driven innovation. Innovation must serve people, not just markets. As the tech sector becomes increasingly globalized, international bodies must align AI development with public goods—healthcare, education, and climate resilience—rather than profit maximization.   

The principles of self-awareness, cross-cultural intelligence, and purpose-driven innovation, underscored during my time with Youth Leaders Davos during the 2025 World Economic Forum week, offer a vital framework for how the next generation can and must contribute to shaping the ethical trajectory of AI on a global scale. Given the profound and multifaceted impact of AI on the international order, the inclusion of youth perspectives in its governance is not merely desirable but an imperative for a future that is both technologically advanced and ethically sound.

Artificial intelligence does not represent an inevitable trajectory, but rather a domain shaped by intentional design. The principles and values that guide its development today will fundamentally influence the international order and societal structures of tomorrow. The insights gained at Youth Leaders Davos highlight the urgent need to empower young leaders in this critical endeavor.

Featured/Headline Image Caption and Citation: Annual Meeting Davos: Aerial photograph of Davos, Image sourced from Flickr | CC License, no changes made

The Geopolitics of AI Regulation (Wed, 09 Apr 2025) https://yris.yira.org/global-issue/the-geopolitics-of-ai-regulation/

On August 2, 2024, Europe’s AI Act entered into force, becoming the world’s first comprehensive legal framework on artificial intelligence. Operative across all 27 member states, the legislation provides a holistic set of rules for all players in the AI ecosystem, from developers and importers to deployers.

Though the Act got scant press in the US, it is clearly the opening salvo in what real legal proscriptions will look like for any business engaging in generative AI. Enforcement will be punitive, with fines of up to 35 million euros or 7% of global annual revenue (whichever is higher), so even a large company operating outside the EU will likely conform to its requirements to avoid extraterritorial exposure.
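The headline penalty rule (the higher of a flat cap or a percentage of global revenue) can be sketched in a few lines. This is an illustrative sketch only: `ai_act_max_fine` is a hypothetical helper, and the Act itself tiers fines by violation category rather than applying one ceiling to everything.

```python
def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    """Illustrative upper bound on an AI Act fine for the most serious
    violations: EUR 35 million or 7% of global annual revenue, whichever
    is higher. (Hypothetical helper; the Act tiers fines by violation type.)
    """
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A firm with EUR 10 billion in revenue faces a ceiling of roughly
# EUR 700 million, while a small firm still faces the EUR 35 million cap.
print(ai_act_max_fine(10_000_000_000))
print(ai_act_max_fine(100_000_000))
```

The "whichever is higher" structure is what gives the rule extraterritorial bite: for large multinationals, the percentage term dominates, so revenue earned outside the EU still scales the exposure.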

As landmark legislation, it will cast an enduring shadow. AI will soon affect nearly every field of human endeavor. Recent studies suggest that artificial intelligence may add $2.6 trillion to $4.4 trillion annually to global economic output in the coming decade. AI is already altering the geostrategic landscape, as China deploys it in its military modernization and the US seeks to counterbalance this unprecedented new threat to the existing international security architecture.

Indeed, AI’s promise of epochal transformations portends effects that can’t yet be fully comprehended. To its credit, the EU has risen to the challenge of setting up preliminary guard rails. 

By designing a regulatory framework based on human rights and fundamental values, the EU believes it can develop an AI ecosystem that is inherently safer and less likely to cause harm. In this way, the EU aspires to be the global leader in safe AI.

However, important questions remain. How does the act define artificial intelligence? How does it build on European legal precedent in its effort to protect its citizens? How does it compare to the Chinese and American approaches to AI regulation?  

From a geopolitical perspective, does the act ultimately help the US in the long run, forging a broader consensus as the West competes globally with China for a new era of technological supremacy?   

The Act’s Definition of AI and Risk

Originating from a European Commission proposal aimed at a “human-centric” approach to artificial intelligence, the final Act, a text totaling 50,000 words, is divided into 113 Articles, 180 recitals, and 13 annexes.

It categorizes AI systems based on their potential harm, aiming to ensure scrutiny, oversight, and, in extreme cases, outright bans on products deemed dangerous.

In earlier policy iterations, the European Commission’s definition of AI was criticized for being too broad. It was eventually modified to approximate the existing OECD definition and now focuses on two key characteristics of AI systems. Article 3(1) spells it out explicitly:  

  1. An “AI system” is a machine-based system designed to operate with varying levels of autonomy.
  2. An AI system may exhibit adaptiveness after deployment and infers, from the input it receives, how to generate outputs in pursuit of explicit or implicit objectives. (These outputs can be predictions, content, recommendations, or decisions that can influence physical or virtual environments.)

This issue of inference is key. In Recital 13, the Act is explicit that it does not cover “systems that are based on the rules defined solely by natural persons to automatically execute operations.” Thus, the capacity of an applicable AI system to infer is what distinguishes it from more commonplace data processing: it enables learning and reasoning and can fashion new models on its own. The techniques that enable this type of inference while building an AI system include:

1. Machine learning mechanisms that learn from data to achieve certain objectives.

2. Logic-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved.

With this definition in mind, the AI Act then classifies these autonomous systems according to their risks to society, creating a uniform framework across all EU countries:

Banned AI: Some AIs are prohibited due to the unacceptable risks they pose. These include systems used for government or corporate “social scoring,” certain biometric systems (like those for emotion monitoring at work), or games or bots that could encourage unsafe or compulsive behavior in children. 

High-Risk AI: These include applications like medical AI tools, critical infrastructure management, credit scoring, and recruitment software. They must meet strict standards for accuracy, security, and data quality, with ongoing human oversight to guard against profiling and unwarranted personal identification.

Moderate-Risk AI: This category includes user-facing systems like chatbots and AI-generated content. Such systems must make explicit to users that they are interacting with AI, and content like deepfakes must be labeled as artificially generated. Transparency and labeling are key.

Low-Risk AI: Most AI systems (spam filters, AI-enabled video games, etc.) will face no enforcement scrutiny under the Act, though developers may voluntarily adopt specific guidelines.
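A rough way to see how the tiering works in practice is to map use cases to tiers and tiers to headline obligations. This is an illustrative sketch only: the tier assignments paraphrase the Act's examples above, and all names below are invented for clarity rather than drawn from the Act's own text.

```python
# Illustrative sketch of the AI Act's four risk tiers; not an official mapping.
RISK_TIER = {
    "government_social_scoring": "banned",
    "workplace_emotion_recognition": "banned",
    "medical_diagnostic_tool": "high",
    "recruitment_screening": "high",
    "customer_chatbot": "moderate",
    "deepfake_generator": "moderate",
    "spam_filter": "low",
    "video_game_ai": "low",
}

OBLIGATIONS = {
    "banned": "prohibited from the EU market",
    "high": "strict accuracy, security, and data-quality standards "
            "with ongoing human oversight",
    "moderate": "must disclose AI involvement and label generated content",
    "low": "no mandatory scrutiny; voluntary codes of conduct",
}

def obligations_for(use_case: str) -> str:
    """Return the headline obligation for a use case, defaulting to low risk."""
    return OBLIGATIONS[RISK_TIER.get(use_case, "low")]

print(obligations_for("recruitment_screening"))
# -> strict accuracy, security, and data-quality standards with ongoing human oversight
```

The default-to-low-risk behavior mirrors the Act's structure: most systems fall outside the enumerated banned and high-risk categories and face only voluntary guidelines.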

It lays down further conditions required to develop and deploy trusted AI systems, both for developers processing personal data during the development phase and for users who may feed personal data into a system during the deployment phase.

Its Timeline: The Ban, the Code of Practice, Harmonization

The AI Act entered into force on August 2, 2024, though most of its rules phase in at different times over the next two years. In February 2025, the ban on prohibited practices goes into effect, and by that August all regulatory bodies (the AI Office, the European AI Board, etc.) must be in place. On August 2, 2026, full enforcement arrives, with each member state having set up a regulatory agency at the national level.

In terms of the ban, certain companies have already started to modify their product rollout based on the Act. Meta will not release an advanced version of its Llama AI model in multimodal form in the EU, citing the “unpredictable” behavior of regulators.  

Likewise, on August 8th, the social media platform X agreed to pause using European user data to train its AI system, after the Irish High Court found that the personal data of millions of EU users had been fed into Grok, its AI chatbot, since spring 2024, with no opt-out option available until July.

The European Commission has launched a year-long consultation on a “Code of Practice on GPAI Models” (general-purpose AI models), with AI developers and academics invited to submit their perspectives on a final draft. This will also set the parameters of the “AI Office,” the enforcement agency that gives the AI Act its teeth.

Human Rights and Europe 

Critics have been quick to suggest that, as Europe is home to only two of the twenty top tech platform companies, its AI regulation is some form of “sour grapes” protectionism. However, this view is flippant. Yes, there are ongoing battles between the US and EU — issues of data privacy, digital taxation, and antitrust—but the Act is clearly built on some of Europe’s most defining legislation. In many ways, it continues the trajectory of the EU’s most ambitious work.  

The European Convention on Human Rights was signed in Rome on November 4, 1950, by the twelve member states of the Council of Europe. Enforced by the European Court of Human Rights in Strasbourg, the Convention was a milestone in international law.   

It was the first legal entity to give binding force to some of the rights stated in the 1948 Universal Declaration of Human Rights. It was also the first treaty to establish a supranational court to ensure that the parties fulfilled their responsibilities, and which could challenge decisions taken by their own national courts. (Any individual, group of individuals, company, or NGO can petition the Strasbourg Court, once all lower venues have been exhausted.) It has now become an urtext for EU relations. To even join the Council of Europe, a state must first sign and ratify the ECHR. 

The convention itself has sixteen protocols, with Article 8 the most pertinent here. Article 8 provides the right to respect for one’s “private and family life, his home and his correspondence”, with caveats related to public safety, morality, national security, and the economic well-being of the country. This article clearly provides a right to be free of unlawful searches, but because it protects a “private and family life,” it also clearly invites a broader interpretation.

This “right to privacy” was not in the UN’s 1948 Universal Declaration of Human Rights. The fact that it is given explicit prominence in European law is telling.

Europe’s focus on privacy has obvious touchstones in its 20th-century history. The Nazi regime abused personal data to identify and annihilate its selected out-groups. Ruthless surveillance tactics further evolved with East Germany’s Stasi and the postwar Eastern Bloc secret police in general. Governmental data collection practices have a dark past on the continent, and thus the right to data privacy is more closely tied to the issue of human dignity in Europe than it is, perhaps, in the US.

“Human-centric Digitization”

In 1981, the Council of Europe created the world’s first international treaty on data protection. This convention applied certain rules to the “automatic processing of personal data” and is arguably the foundation of the EU’s 2018 General Data Protection Regulation (GDPR).

The GDPR calls for a certain transparency in processing personal data, curtailing the quantity and restricting it to certain purposes. It designates a “privacy by design” protocol that requires companies to ingrain the GDPR rules into their initial design of services.   

The “right to be forgotten” is perhaps the most unique obligation related to the GDPR. This gives any person the right to force platforms to “delink” their name from information that is no longer valid. The Court of Justice of the EU played a key role in shaping this issue through its landmark Google Spain case (2014), in which Mario Costeja González, a Spanish citizen, requested that the search engine remove results that linked him to a bankruptcy that had been resolved 15 years prior. The court ruled that Google must honor requests to delist content proven to be invalid or out-of-date from its search results.

The GDPR is the world’s toughest data privacy law, and it has a long reach: any corporation anywhere that collects data on EU citizens can face massive penalties.

Responding to a rise in cyber breaches and cloud computing, at a time when tracking cookies were becoming pervasive, the regulation had an immediate impact. The now ubiquitous “opt-in for cookies” notification is a product of the law, as tech platforms have adhered to its aims even in the US to avoid extraterritoriality issues.

In this way, the law did succeed in creating a broader global consensus, a “shared vision of human-centric digitization.” As EU Commission VP Josep Borrell described at the time:

The 1948 Universal Declaration of Human Rights established the dignity of the individual, the right to privacy and to non-discrimination, and the freedoms of speech and belief. It is our common duty to make sure that the digital revolution lives up to that promise.

The GDPR’s enforcement arm, the European Data Protection Board, requires that each member state establish a “data protection authority” to enforce its rules. Fines can reach 20 million euros or 4% of a company’s total annual turnover, whichever is higher.

The AI Act does not modify the GDPR but builds on it.  

The Brussels Effect   

Despite massive lobbying against the GDPR before its passing, most of the big tech platforms have now embraced the regime. Meta chose to extend many GDPR protections globally to the company’s 2.8 billion Facebook users. Google revised its privacy policy based on it, and Apple now carries out OS impact assessments globally according to GDPR protocols. Microsoft has gone further, implementing the GDPR’s “privacy by design” and baking it into the early development of its products.

This appears to be yet another example of “The Brussels Effect”: these tech giants know that the size of the EU consumer market is simply too big to ignore. The second-largest economy on earth, Europe has an affluent population of 450 million and a GDP of $17.5 trillion. The American platforms have enjoyed stunning success there: Google accounts for 90% of search in the 27-member union; Apple rakes in a quarter of its global revenue there; and Meta’s Facebook has 410 million monthly active EU users.

The Brussels Effect can be clearly seen in the adoption of EU laws by foreign nations around the world. As of 2024, more than 150 countries have adopted domestic privacy laws, and most of them resemble the GDPR in some ways. It has essentially become the norm in many parts of the world as governments see it as an easy template for their own regimes.  

This can be seen on every continent. Brazil’s data privacy law of 2018 emulates the GDPR’s broad definition of personal data. Nigeria’s Data Protection Bill of 2023 often uses exact parlance in sections, though with caveats about public morality. India’s PDPB, though withdrawn in 2022, was quite similar. With so many countries now operating under GDPR-like rules, it becomes harder for nations drafting data laws to justify a marked departure from the global norm.

The effect is even seen in the corporate structure of a few firms. Meta, for example, altered its corporate structure, shifting its Africa, Asia, Australia, and Middle East divisions out of its Irish corporate entity and placing them within its US legal structure. This keeps African or Asian users from seeking legal redress under the EU’s GDPR.

Anu Bradford has made the point that the Brussels Effect of the GDPR works precisely because it targets the “inelastic” aspect of the market (consumers living in a jurisdiction) rather than fleet-footed capital. But it does work to capital’s advantage on one level: companies always prefer standardization over customization, particularly since compliance is onerous. Customizing for too many countries is unappealing, costly, and involves more legal fees for the tech giants. In some ways, the GDPR works to tech’s advantage by bringing legal clarity to a large, 27-member-state zone.

The desire for an “adequacy decision” from the EU might also explain the GDPR adoption worldwide. Those nations with privacy laws deemed “adequate” by GDPR standards can be allowed data transfers from the EU. This obviously helps with a foreign nation’s corporate competitiveness, providing more business opportunities in the zone. Canada, New Zealand, Argentina, Uruguay, and Israel are a few of the notable countries granted decisions.  

Ironically, the US doesn’t have an adequacy decision from the EU, a fact that has placed the legality of the data flows between the US and EU in contention and has been the subject of numerous lawsuits.   

AI Convention and AI Act: Velvet Glove, Iron Fist?

The theoretical basis for the AI Act appeared five years ago. In 2019, the European Commission published “The Ethics Guidelines for Trustworthy AI.” This document – which stated that “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition, or herd humans” – arguably set the course for the AI Act. It stresses the importance of a human-centric artificial intelligence, in which “natural persons” must be able to “override” algorithms when needed to protect “fundamental rights.” 

In June 2023, more than a year before the AI Act was signed, the Council of Europe unveiled its inaugural draft of the “Convention on AI and Human Rights.” Comprising 34 articles, the document, like others by the Council, aims to formulate a broader, open-ended framework of standards, not just within Europe. Its focus: data privacy, protection against discrimination, and the potential misuse of AI deployment. Like the GDPR, it aims to create a regulatory path that other nations may follow.

In these articles, we can clearly see the founding principles of the EU AI Act. However, two other articles stand out: each party must provide effective remedies for human rights violations, and each must have the ability to prohibit systems that are incompatible with the convention’s core principles.

This EU approach is focused on securing the individual and collective rights of citizens in a digital society. It proactively ensures that often opaque AI processes won’t harm a society’s democratic political culture or trammel fairness in the distribution of technology’s benefits.

The European Declaration on Digital Rights and Principles for the Digital Decade, adopted in December 2022, proclaims that “people are at the center of the digital transformation” and emphasizes “the importance of democratic functioning of the digital society and economy.” All technological solutions should:

  1. Benefit everyone and improve the lives of all people in the EU.
  2. Respect people’s rights, enable their exercise, and promote solidarity and inclusion.

This political statement is interesting in its humanist focus. It identifies “democracy, fairness, and fundamental rights” as key values guiding EU policymaking. 

Pre-eminence: China’s Approach to AI

In contrast to the EU, China has developed its own AI policy, one that is less rights-driven and focused more on sovereignty, economic development, and implementation. It follows from Beijing’s belief, clearly written in both its “Dual Circulation” and “Made in China 2025” policies, that emerging technologies and high-tech manufacturing will be key to twenty-first century dominance.

Due to state funding, powerful tech firms, and select universities like Tsinghua, the country has emerged as a major player in machine learning and AI research. Notable players in the sector include:

Huawei: AI chips and telecommunications infrastructure.

Baidu: Autonomous driving and natural language models.

Alibaba: E-commerce algorithms and cloud computing.

Tencent: AI-driven social media, medical imaging, and healthcare solutions.

01.AI: This Chinese unicorn startup is pushing the LLM envelope with its open-source model Yi-34B.

Between 2014 and 2023, China filed 38,210 AI patents, more than all other nations combined. Even the US military is playing catch-up with China’s PLA, which is developing a new type of “intelligentized” warfare built around wholly unmanned swarm combat systems and better situational awareness. The DoD’s Replicator Program is something of a “Hail Mary” effort by the US to reach the Chinese level in AI-enabled swarm drones.

Over the past several years, China has moved to implement some of the world’s toughest regulations on data and AI. In contrast to the EU’s focus on data privacy, fairness, and “human guidance,” Beijing’s policies make frequent reference to the necessary balance between “security” and “development.” For years China has been implementing public facial-recognition and “social scoring” systems of the kind now clearly outlawed by the EU AI Act. More machine learning and artificial intelligence will give these repressive measures additional teeth.

As early as 2017, China began positioning AI as a new strategic pillar of its national agenda. That year, the State Council unveiled its “New Generation Artificial Intelligence Development Plan,” with the aim of making the mainland the world’s AI leader by 2030. Like Made in China 2025, the plan is comprehensive and focused on harnessing multiple drivers: economic growth, national security, and enhanced social services. Its emphasis is on seizing the strategic initiative, speeding diffusion from theory to application across multiple spheres, and achieving dominance through innovation by 2030.

After ChatGPT exploded onto the world stage in late 2022, China was one of the first nations to issue targeted regulations on generative AI, releasing its “Interim Measures for the Management of Generative AI Services.” These set out restrictions on the training and outputs of large language models (LLMs) and require AI services “with the capacity for social mobilization” to carry out a security assessment and file pertinent algorithms with state regulators before being made public.

Because of these “Measures,” since 2023 all LLMs developed by China’s tech platforms must gain state approval before going public. In response, Apple pulled nearly a hundred apps offering AI chatbot services from its China store before the measures took effect.

China’s AI strategy, which seeks to align all development with state objectives while maintaining strict control over information, could further entrench geostrategic splits as it is exported to the Global South. According to Rutgers University Fellow Shaoyu Yun:

Even if China doesn’t outpace the U.S. in developing the latest AI models, its applications can still significantly impact the geopolitical landscape. By integrating AI into areas like biotechnology, industrial engineering, and state security, Beijing can export its controlled AI systems to other authoritarian regimes. This would not only spread China’s model of governance but also consolidate its influence in regions antagonistic to Western ideals.

In this regard, the issue of AI lies not in its novelty but in its strategic deployment in the service of state control. For Yun, China’s approach suggests a fundamental rule in international relations: for new technology to alter a balance of power, it doesn’t need to be pre-eminent or the world’s best. It just needs to be the most effectively wielded.

AI Regulation, American Style

Enforceable regulation does not yet exist in the US at the national level, but there have been developments. In mid-2023 the White House obtained a set of voluntary commitments on AI risk from fifteen big firms at the cutting edge of the industry. It also released its “Blueprint for an AI Bill of Rights” which sets out a preliminary approach to data privacy and safety.  

More prominently, on October 30, 2023, the Biden administration announced its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This order is focused on national security and misinformation: the US government must be informed of any developer tests that show a risk to national security, and the National Institute of Standards and Technology (NIST) will set standards for “red team” testing (i.e., testing to break AI models pre-launch to expose problems). It also tasks the Commerce Department with creating “watermarking” standards for AI-generated content so that Americans can recognize deepfakes and know that the communications they receive (particularly from government bodies) are authentic.

In addition, the order created a new entity, the US AI Safety Institute, which will explore AI safety across “national security, public safety, and individual rights.” Housed in NIST, with a leadership team appointed in April 2024 by the Commerce Department, the institute will not be an enforcement agency but a policy center.

Biden’s EO suggests how US policy will unfold: it will be industry-friendly, relying on a voluntary shift by business to best practices and, as happened with crypto over the past decade, on the various executive agencies to craft their own rules (with input from NIST).

Though hailed as a step forward, the EO remains more carrot than stick. The watermarking technologies the EO points to are not yet built and may be difficult to enforce. Moreover, the order does not actually require the tech platforms to use these technologies, or even require that AI companies adhere to NIST standards or testing methods. Most of the EO relies on voluntary cooperation.

Unlike the EU’s AI Act, which was passed at the highest level of government, or its AI Office, an enforcement agency operative by August 2, 2025, the US AI Safety Institute appears to be more of a policy center, one that can be marginalized or “institutionally captured” by whatever political party is in power.

This approach remains friendly to tech, emphasizes self-regulation, and carries no punitive measures. It also ignores the bigger issue of training models to minimize foreseeable harm outside of national security. According to chief ethics scientist Margaret Mitchell, this is a “whack-a-mole” approach, responding to emerging problems instead of requiring best data practices from the start: “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms.”

At present, the US is unlikely to pass any AI legislation at the national level in the foreseeable future. The 118th Congress (2023-2024), notable for its political infighting, may end up as the least productive legislative session in US history.  

Interestingly, though, there has been substantial state-level action. Sixteen states have enacted AI legislation, and more than 400 AI bills were introduced at that level in 2024, six times more than in 2023.

Colorado is the first state in the nation with an enforceable AI law on the books. The Colorado Artificial Intelligence Act is at its core anti-discrimination legislation, focusing on any bias caused by AI in the context of a “consequential decision”: specifically, any decision that can “significantly” impact an individual’s legal or economic interests, whether in employment, housing, credit, lending, educational enrollment, legal services, or insurance. (In many ways, it is stricter but also more nebulous than the EU’s restrictions on social scoring, and there is now pushback from Colorado businesses that fear its wide mandate will trigger lawsuits.)

Other major states, like California, Connecticut, New York, and Texas, are starting the process. In February 2024, the California State Legislature introduced Senate Bill 1047, which would require safety testing of AI products before they are released and would require AI developers to prevent deployers from creating derivative models that could cause harm. In 2023, Connecticut passed Senate Bill 1103, which regulates state procurement of AI tools.

This emerging patchwork of state laws could be tough for companies –even the tech titans– to manage. That is why major players like Microsoft, Google, and OpenAI have all called for regulations at the national level, feeling that this growing number of state laws will crimp the adoption of AI due to the perceived compliance burden. According to Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills: “This fragmented regulatory environment underscores the call for national laws that will provide a coherent framework for AI usage.”

Conclusion: Techno-Democracies Unite

In the regulatory discourse of both China and the EU, there is always an unspoken actor: the US. Whereas China’s process is designed for national economic success vis-à-vis a truculent American hegemon, the EU’s process is focused on protecting its unique sense of culture and rights-centered governance from overweening big-tech dominance.

Indeed, for the EU, its contests with the US about data privacy, digital taxation, and antitrust have been going on for nearly three decades. In many ways, the Europeans have been playing catch up to the “move fast and break things” libertarianism of US tech since the mid-1990s, when the opening chapter of the internet began.  

For decades, the US has urged other nations to adopt a non-regulatory, market-oriented approach to tech. The very first effort at an international consensus on digitization embodied this laissez-faire attitude. In 1997, the Clinton administration’s framework for global electronic commerce codified that “markets maximize individual choice and individual freedom,” and its 2000 EU-US Joint Statement assured that both parties agreed “that the expansion of electronic commerce will be essentially market-led and driven by private initiative.” 

However, as the scope and power of the tech platforms became central to daily life in the developed world, a governance problem arose. 

Software has now “eaten” many societal processes whole. Digital providers often replace –at least in de facto, operative ways– local governments as rule setters via their terms of service and community norms. As a result, these global tech companies often provide consumers with digital resources more effectively than some smaller nations can, a trend that becomes even more extreme with AI.  

Geopolitically, there will be growing differences in how authoritarian and democratic nations promote—or weaponize—their AI industries. Because China operates more as a state-capitalist society, its regulatory model reflects its focus on top-down control and national power. Its own historical sense of “Middle Kingdom” centrality has kept it at odds with the US-led, Bretton Woods-derived international order.   

As it seeks to become the world leader in most strategic technologies within the decade, China is pouring money into AI development. An Australian “tech competitiveness” think tank recently stated that the mainland is now the world leader in research on almost 90% of critical technologies, essentially switching places with the US in two decades due to heavy state funding. It is also making a concerted push to bring the developing world to its tech table. Via Huawei, it has been quick to outmaneuver the West and fund the digital buildouts of many regimes in the developing world. It has ambitious projects in Asia and Africa, the “100 smart cities movement” in Indonesia being a perfect example. 

Domestically, the US often operates by post-facto litigation and piecemeal actions from different states. Its political process at the national level is often buffeted by powerful lobbying. Pressure from lobbyists and the money-driven nature of US politics often means the deepest pockets will hold sway. Without proactive regulation, reckless AI initiatives clearly threaten privacy, enable covert social scoring, and risk quiet disenfranchisement. This could lead to disaffection with the Western model.  

This has seriously troubling implications for the United States and its allies. As Anu Bradford has suggested: “the delays that come with democratic rulemaking . . . allow China to quickly operationalize in the absence of democracy.”   

EU regulation may save the US from itself in many ways. Europe’s AI Act could help implement a broader consensus among Western powers and their allies. Technological cooperation among allies will be essential for geopolitical reasons, but also for better visibility and coherence at the business level. Corporations would obviously prefer to avoid the compliance costs, but the AI Act will also foster the same type of “adequacy decision” coherence that the GDPR created for value-sharing corporations hoping for business access.

Creating a broader consensus among the Western democracies and their allies is exactly what is needed as the systemic rivalry with China emerges. A rights-driven model will be more compelling to a larger swath of the world, including Japan, South Korea, Brazil, and India. Just as the European Convention on Human Rights was both a statement of values and a rebuke of what was happening behind the Iron Curtain at the time, the AI Act makes crystal clear in its values the contrast between what a 21st-century, rights-driven “techno-democracy” will look like vis-à-vis a 21st-century, state-centric “techno-autocracy.”

On August 2, 2024, Europe’s AI Act entered into force, becoming the world’s first comprehensive legal framework for artificial intelligence. Operative across all 27 EU member states, the legislation provides a holistic set of rules for all players in the AI ecosystem –from developers to importers and deployers.  

Though the Act got scant press in the US, it is clearly the opening salvo in what real legal proscriptions will look like for any business engaging in generative AI. Enforcement will be punitive –with fines of up to 35 million euros or 7% of global annual revenue (whichever is higher)– so even a large company operating outside the EU will likely conform to its requirements to avoid extraterritorial exposure. 
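
The penalty ceiling described above is simply the larger of the flat cap and the revenue-based cap. A minimal sketch (the function name is illustrative, not from the Act):

```python
def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    """Upper bound of an AI Act fine for the most serious violations:
    35 million euros or 7% of global annual revenue, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# For a firm with 1 billion euros in revenue, 7% (70M) exceeds the flat cap.
print(ai_act_max_fine(1_000_000_000))  # 70000000.0
# For a smaller firm, the flat 35M cap dominates.
print(ai_act_max_fine(100_000_000))    # 35000000.0
```

The "whichever is higher" rule means the exposure scales with company size, which is why even the largest platforms cannot treat the fine as a fixed cost of doing business.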

As landmark legislation, it will cast an enduring shadow. AI will soon affect nearly every field of human endeavor. Recent studies suggest that artificial intelligence may add $2.6 trillion to $4.4 trillion annually to global economic output in the coming decade. AI is already altering the geostrategic landscape, as China deploys it in its military modernization and the US seeks to rebalance this unprecedented new threat to the existing international security architecture.  

Indeed, AI’s promise of epochal transformations portends effects that can’t yet be fully comprehended. To its credit, the EU has risen to the challenge of setting up preliminary guard rails. 

By designing a regulatory framework based on human rights and fundamental values, the EU believes it can develop an AI ecosystem that is inherently safer and less likely to cause harm. In this way, the EU aspires to be the global leader in safe AI.  

However, important questions remain. How does the act define artificial intelligence? How does it build on European legal precedent in its effort to protect its citizens? How does it compare to the Chinese and American approaches to AI regulation?  

From a geopolitical perspective, does the act ultimately help the US in the long run, forging a broader consensus as the West competes globally with China for a new era of technological supremacy?   

The Act’s Definition of AI and Risk

Originating from a European Commission proposal aimed at a “human-centric” approach to artificial intelligence, the final Act –a text totaling 50,000 words– is divided into 113 Articles, 180 recitals, and 13 annexes.

It categorizes AI systems based on their potential harm, aiming to ensure scrutiny, oversight, and –in extreme cases– outright bans on those products deemed dangerous.  

In earlier policy iterations, the European Commission’s definition of AI was criticized for being too broad. It was eventually modified to approximate the existing OECD definition and now focuses on two key characteristics of AI systems. Article 3(1) spells it out explicitly:  

  1. An “AI system” is a machine-based system designed to operate with varying levels of autonomy.
  2. An AI system exhibits adaptiveness after deployment and can infer –based on either explicit or implicit objectives– from the input it receives how to generate outputs. (These can be predictions, content, recommendations, or decisions that can influence physical or virtual environments.) 

This issue of inference is key. In its Recital 13, the act is explicit that it does not cover “systems that are based on the rules defined solely by natural persons to automatically execute operations.” Thus, the capacity to infer is what sets an applicable AI system apart from more commonplace data processing: it enables learning and reasoning and can fashion new models on its own. The techniques that enable this type of inference while building an AI system include: 

1. machine learning mechanisms that learn from data to achieve certain objectives.

2. logic-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. 
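
The Recital 13 carve-out can be illustrated with a toy contrast: a system whose rule is fixed entirely by a person (outside the Act’s definition) versus one whose behavior is inferred from data (inside it). Everything below is a hypothetical sketch, not language from the Act:

```python
# A rule-based system: the decision logic is defined solely by a natural
# person, so it falls outside the Act's definition of an "AI system."
def rule_based_credit_check(income: float) -> bool:
    return income >= 30_000  # threshold hard-coded by a human

# A (toy) learning system: the decision threshold is *inferred* from data,
# i.e. the system derives how to generate outputs from the input it receives.
def fit_threshold(approved_incomes: list[float],
                  rejected_incomes: list[float]) -> float:
    # Midpoint between the class means: a minimal stand-in for machine learning.
    mean_a = sum(approved_incomes) / len(approved_incomes)
    mean_r = sum(rejected_incomes) / len(rejected_incomes)
    return (mean_a + mean_r) / 2

threshold = fit_threshold([50_000, 60_000], [10_000, 20_000])
learned_credit_check = lambda income: income >= threshold  # behavior learned, not authored
```

The first function’s behavior can be read off its source; the second’s depends on the data it was shown, which is exactly the property the Act’s definition of inference singles out.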

With this definition in mind, the AI Act then classifies these autonomous systems according to their risks to society, creating a uniform framework across all EU countries:

Banned AI: Some AIs are prohibited due to the unacceptable risks they pose. These include systems used for government or corporate “social scoring,” certain biometric systems (like those for emotion monitoring at work), or games or bots that could encourage unsafe or compulsive behavior in children. 

High-Risk AI: These include applications like medical AI tools, critical infrastructure, credit scoring, or recruitment software. They must meet strict standards for accuracy, security, and data quality, with ongoing human oversight to avoid profiling and personal identification. 

Moderate-Risk AI: This category includes front-facing systems like chatbots and AI-generated content. They must make explicit to users that they are interacting with AI, and content like deepfakes must be labeled as artificially generated. Transparency and labeling are key.

Low-Risk AI: Most AI systems (spam filters, AI-enabled video games, etc.) will face no enforcement scrutiny under the Act, but developers may voluntarily adopt specific guidelines.
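
The four tiers imply escalating obligations, which a hypothetical compliance helper might encode as a simple lookup (the names and mappings below are illustrative, not drawn from the Act’s text):

```python
from enum import Enum

class RiskTier(Enum):
    BANNED = "banned"        # e.g. social scoring, workplace emotion monitoring
    HIGH = "high"            # e.g. medical tools, credit scoring, recruitment
    MODERATE = "moderate"    # e.g. chatbots, AI-generated content
    LOW = "low"              # e.g. spam filters, AI-enabled video games

# Illustrative mapping of example use cases to the Act's categories.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.BANNED,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.MODERATE,
    "spam_filter": RiskTier.LOW,
}

def obligations(tier: RiskTier) -> str:
    """Summarize the regulatory burden attached to each tier."""
    return {
        RiskTier.BANNED: "prohibited outright",
        RiskTier.HIGH: "accuracy, security, data-quality standards + human oversight",
        RiskTier.MODERATE: "disclose AI interaction; label generated content",
        RiskTier.LOW: "no mandatory scrutiny; voluntary codes",
    }[tier]

print(obligations(USE_CASE_TIERS["customer_chatbot"]))
```

The point of the tiered design is exactly this kind of determinism: a deployer should be able to locate its use case in one tier and read off its obligations.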

The Act lays down further conditions required to develop and deploy trusted AI systems, both for developers who process personal data during the development phase and for users who may feed personal data into a system during the deployment phase.  

Its Timeline: The Ban, the Code of Practice, Harmonization

The AI Act entered into force on August 2, 2024, though most of its rules phase in at different times over the next two years. In February 2025, the ban on prohibited practices goes into effect, and later that August all regulatory bodies –the AI Office, European AI Board, etc.– must be in place. On August 2, 2026, full enforcement arrives, with each member state having set up a regulatory agency at the national level. 
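
The phase-in reads naturally as a date-ordered lookup of milestones. A sketch using the schedule above (the exact day of the February 2025 ban is an assumption; the August dates are as stated):

```python
from datetime import date

# Key milestones in the AI Act's phase-in, as described above.
MILESTONES = [
    (date(2024, 8, 2), "Act enters into force"),
    (date(2025, 2, 2), "ban on prohibited practices applies"),
    (date(2025, 8, 2), "governance bodies (AI Office, European AI Board) in place"),
    (date(2026, 8, 2), "full enforcement via national regulators"),
]

def milestones_reached(today: date) -> list[str]:
    """Return every obligation already in effect on a given date."""
    return [label for when, label in MILESTONES if when <= today]

print(milestones_reached(date(2025, 3, 1)))
```

A compliance team can use the same structure in reverse: given a planned launch date, look up which obligations will already be live.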

In terms of the ban, certain companies have already started to modify their product rollout based on the Act. Meta will not release an advanced version of its Llama AI model in multimodal form in the EU, citing the “unpredictable” behavior of regulators.  

Likewise, on August 8, 2024, the social media platform X agreed to pause using European user data to train its AI system, after the Irish High Court found that the personal data of millions of EU users had been fed as input into Grok, its AI chatbot, in spring 2024 without any opt-out option available until July. 

The European Commission has launched a year-long consultation on a “Code of Practice on GPAI Models”, with AI developers and academics invited to submit their perspectives on a final draft. This will also set the parameters of the “AI Office,” the enforcement agency that gives teeth to the AI Act.

Human Rights and Europe 

Critics have been quick to suggest that, as Europe is home to only two of the twenty top tech platform companies, its AI regulation is some form of “sour grapes” protectionism. However, this view is flippant. Yes, there are ongoing battles between the US and EU — issues of data privacy, digital taxation, and antitrust—but the Act is clearly built on some of Europe’s most defining legislation. In many ways, it continues the trajectory of the EU’s most ambitious work.  

The European Convention on Human Rights was signed in Rome on November 4, 1950, by the twelve member states of the Council of Europe. Enforced by the European Court of Human Rights in Strasbourg, the Convention was a milestone in international law.   

It was the first instrument to give binding force to some of the rights stated in the 1948 Universal Declaration of Human Rights. It was also the first treaty to establish a supranational court to ensure that the parties fulfilled their responsibilities –one that could challenge decisions taken by their own national courts. (Any individual, group of individuals, company, or NGO can petition the Strasbourg Court once all lower venues have been exhausted.) It has since become an urtext for European governance: to even join the Council of Europe, a state must first sign and ratify the ECHR. 

The convention itself has sixteen protocols, with Article 8 the most pertinent here. Article 8 provides the right to respect for one’s “private and family life, his home and his correspondence,” with caveats related to public safety, morality, national security, and the economic well-being of the country. This article clearly provides a right to be free of unlawful searches, but because it protects a “private and family life,” it also clearly opens the door to a broader interpretation.

The UN’s 1948 Universal Declaration of Human Rights proclaimed a right to privacy but gave it no binding force. The explicit, enforceable prominence the right is given in European law is telling.

Europe’s focus on privacy has obvious touchstones in its 20th-century history. The Nazi regime abused personal data to identify and annihilate its selected out-groups. Ruthless surveillance tactics further evolved with East Germany’s Stasi and the Eastern Bloc secret police in general. Governmental data collection has a dark past on the continent, and thus the right to data privacy is now more closely tied to the issue of human dignity in Europe than it is, perhaps, in the US. 

“Human-centric Digitization”

In 1981, the Council of Europe created the world’s first international treaty to assure data protection, known as Convention 108. It applied certain rules to the “automatic processing of personal data” and laid the foundation for the EU’s 2018 General Data Protection Regulation (GDPR).  

The GDPR calls for a certain transparency in processing personal data, curtailing the quantity collected and restricting its use to specified purposes. It designates a “privacy by design” protocol that requires companies to ingrain the GDPR rules into the initial design of their services.   
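
“Privacy by design” and purpose limitation can be ingrained at the code level, for instance by stripping every field not needed for a declared purpose. A minimal, illustrative sketch (the purposes and field lists are invented, not drawn from the GDPR’s text):

```python
# Each declared processing purpose is allowed only the fields it needs.
PURPOSE_FIELDS = {
    "order_shipping": {"name", "address"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Data minimization: drop every field not required for the purpose."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

profile = {"name": "Ada", "address": "1 Main St",
           "email": "a@x.io", "birthday": "1990-01-01"}
print(minimize(profile, "newsletter"))  # {'email': 'a@x.io'}
```

Building the filter into the pipeline itself, rather than auditing data after collection, is the spirit of the “privacy by design” protocol.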

The “right to be forgotten” is perhaps the most unique obligation related to the GDPR. This gives any person the right to force platforms to “delink” their name from information that is no longer valid. The Court of Justice of the EU played a key role in shaping this issue through its landmark Google Spain case (2014), in which Mario Costeja Gonzalez, a Spanish citizen, requested that the search engine remove results that linked him to a bankruptcy that had been resolved 15 years prior.  The court judged that Google must honor all requests to pull content proven to be invalid or out-of-date from its search algorithm. 

The GDPR is the world’s toughest data privacy law, and it has a long reach. Any corporation anywhere that collects data on EU citizens can face massive penalties. 

Arriving amid rising cyber breaches, the spread of cloud computing, and increasingly insidious tracking cookies, the regulation had an immediate impact. The now ubiquitous “opt-in for cookies” notification is a product of the law, as tech platforms have adhered to its aims even in the US to avoid extraterritoriality issues.   

In this way, the law did succeed in creating a broader global consensus, a “shared vision of human-centric digitization.” As EU Commission VP Josep Borrell described at the time:

The 1948 Universal Declaration of Human Rights established the dignity of the individual, the right to privacy and to non-discrimination, and the freedoms of speech and belief. It is our common duty to make sure that the digital revolution lives up to that promise.

The GDPR’s enforcement arm, the European Data Protection Board, requires that each member state establish a “data protection authority” to enforce its rules. Fines can reach 20 million euros or 4% of a company’s total annual turnover. 

The AI Act does not modify the GDPR but builds on it.  

The Brussels Effect   

Despite massive lobbying against the GDPR before its passing, most of the big tech platforms have now embraced the regime. Meta chose to extend many GDPR protections globally to the company’s 2.8 billion Facebook users. Google revised its privacy policy based on it, and Apple now carries out OS impact assessments globally according to GDPR protocols. Microsoft has gone further, implementing the GDPR’s “privacy by design” and baking it into the early development of its products.

This appears to be yet another example of “The Brussels Effect.” These tech giants know that the size of the EU consumer market is simply too big to ignore. The second-largest economy on earth, Europe has an affluent population of 450 million and a GDP of $17.5 trillion. The platforms enjoy stunning success there: Google commands 90% of search in the 27-member union; Apple rakes in a quarter of its global revenue there; and Meta’s Facebook has 410 million monthly active EU users.  

The Brussels Effect can be clearly seen in the adoption of EU laws by foreign nations around the world. As of 2024, more than 150 countries have adopted domestic privacy laws, and most of them resemble the GDPR in some ways. It has essentially become the norm in many parts of the world as governments see it as an easy template for their own regimes.  

This can be seen on every continent. Brazil’s data privacy law of 2018 emulates the GDPR’s broad definition of personal data. Nigeria’s Data Protection Bill of 2023 often uses exact parlance in sections, though with caveats about public morality. India’s PDPB, though withdrawn in 2022, was quite similar. With so many countries now operating with GDPR-like rules, it becomes harder for nations drafting data laws to justify a marked departure from the global norm.  

The effect is even seen in the corporate structure of a few firms. Meta, for example, altered its corporate structure –shifting its Africa, Asia, Australia, and Middle East divisions out of its Irish corporate entity and placing them within its US legal structure– thereby keeping African and Asian users from seeking legal redress under the EU’s GDPR. 

Anu Bradford has made the point that the Brussels Effect of the GDPR works precisely because it targets the “inelastic” side of the market –consumers living in a jurisdiction– and not fleet-footed capital. But it does work to capital’s advantage on one level. Companies prefer standardization over customization, particularly since compliance is onerous: customizing for too many countries is unappealing, costly, and involves more legal fees for the tech giants. In some ways, the GDPR works to tech’s advantage by bringing legal clarity to a large, 27-member-state zone.  

The desire for an “adequacy decision” from the EU might also explain the GDPR adoption worldwide. Those nations with privacy laws deemed “adequate” by GDPR standards can be allowed data transfers from the EU. This obviously helps with a foreign nation’s corporate competitiveness, providing more business opportunities in the zone. Canada, New Zealand, Argentina, Uruguay, and Israel are a few of the notable countries granted decisions.  

Ironically, the US doesn’t have an adequacy decision from the EU, a fact that has placed the legality of the data flows between the US and EU in contention and has been the subject of numerous lawsuits.   

AI Convention and AI Act: Velvet Glove, Iron Fist?

The theoretical basis for the AI Act appeared five years ago. In 2019, the European Commission published “The Ethics Guidelines for Trustworthy AI.” This document – which stated that “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition, or herd humans” – arguably set the course for the AI Act. It stresses the importance of a human-centric artificial intelligence, in which “natural persons” must be able to “override” algorithms when needed to protect “fundamental rights.” 

In June 2023, more than a year before the AI Act was signed, the Council of Europe unveiled the inaugural draft of its convention on AI and human rights. Comprising 34 articles, the document –like others by the Council– aims to formulate a broader, open-ended framework of standards, not just within Europe. Its focus: data privacy, protection against discrimination, and the potential misuse of AI deployment. Like the GDPR, it aims to create a regulatory path that other nations may follow.  

In these articles, we can clearly see the founding principles of the EU AI Act. Two further obligations are also set out: each party must provide effective remedies for human rights violations, and each must have the ability to prohibit systems that are incompatible with the convention’s core principles.

This EU approach is focused on securing the individual and collective rights of citizens in a digital society. It proactively ensures that often opaque AI processes will not harm a society’s democratic political culture or trammel fairness in the distribution of AI’s benefits.  

The European Declaration on Digital Rights and Principles for the Digital Decade, adopted in December 2022, proclaims that “people are at the center of the digital transformation” and emphasizes “the importance of democratic functioning of the digital society and economy.” All technological solutions should:

  1. Benefit everyone and improve the lives of all people in the EU.
  2. Respect people’s rights, enable their exercise, and promote solidarity and inclusion.

This political statement is interesting in its humanist focus. It identifies “democracy, fairness, and fundamental rights” as key values guiding EU policymaking. 

Pre-eminence: China’s Approach to AI

In contrast to the EU, China has developed its own AI policy, one that is less rights-driven and focused more on sovereignty, economic development, and implementation. It follows from Beijing’s belief, clearly written in both its “Dual Circulation” and “Made in China 2025” policies, that emerging technologies and high-tech manufacturing will be key to twenty-first century dominance.

Due to state funding, powerful tech firms, and select universities like Tsinghua, the country has emerged as a major player in machine learning and AI research. Notable players in the sector include:

Huawei: AI chips and telecommunications infrastructure.

Baidu: autonomous driving and natural-language models.

Alibaba: e-commerce algorithms and cloud computing.

Tencent: AI-driven social media, medical imaging, and healthcare solutions.

01.AI: this Chinese unicorn startup is pushing the LLM envelope with its open-source model Yi-34B.

Between 2014 and 2023, China filed 38,210 AI patents –more than all other nations combined. Even the US military is playing catch-up with China’s PLA on the AI front; the PLA is developing a new type of “intelligentized” warfare, looking to create wholly unmanned swarm combat systems and better situational awareness. The DoD’s Replicator Program is something of a “Hail Mary” effort by the US to reach the Chinese level in AI-enabled swarm drones. 

Over the past several years, China has moved to implement some of the world’s toughest regulations on data and AI. In contrast to the EU’s focus on data privacy, fairness, and “human guidance,” Beijing’s policies make frequent reference to the necessary balance between “security” and “development.” For years, China has been implementing the public facial recognition and “social scoring” systems that are now clearly outlawed by the EU AI Act. More machine learning and artificial intelligence will give these suppressive measures additional teeth.  

As early as 2017, China began placing AI as a new strategic pillar within its national agenda. That year, the State Council unveiled its “New Generation Artificial Intelligence Development Plan,” with the aim of making the mainland the world’s AI leader by 2030. Like Made in China 2025, this plan is comprehensive and focused on harnessing multiple drivers: economic growth, national security, and enhanced social services. Its emphasis is on seizing the strategic initiative, speeding the diffusion from theory to application across multiple spheres, and finding dominance through innovation by 2030. 

After ChatGPT exploded on the world stage in late 2022, China was one of the first nations to issue targeted regulations on generative AI, releasing its “Interim Measures for the Management of Generative AI Services.” These set out restrictions on the training and outputs of large language models (LLMs) and require AI services “with the capacity for social mobilization” to carry out a security assessment and file pertinent algorithms with state regulators before being made public.  

Because of these “Measures,” since 2023 all LLMs developed by China’s tech platforms must gain state approval before going public. In response, Apple pulled nearly a hundred apps offering AI chatbot services from its China store before the measures took effect. 

China’s AI strategy –which seeks all developments to align with state objectives while maintaining strict control over information— could further entrench geostrategic splits as it is exported to the Global South. According to Rutgers University Fellow Shaoyu Yun:

Even if China doesn’t outpace the U.S. in developing the latest AI models, its applications can still significantly impact the geopolitical landscape. By integrating AI into areas like biotechnology, industrial engineering, and state security, Beijing can export its controlled AI systems to other authoritarian regimes. This would not only spread China’s model of governance but also consolidate its influence in regions antagonistic to Western ideals.

In this regard, the issue of AI lies not in its novelty but in its strategic deployment in the service of state control. For Yun, China’s approach suggests a fundamental rule in international relations: for new technology to alter a balance of power, it doesn’t need to be pre-eminent or the world’s best. It just needs to be the most effectively wielded.

AI Regulation, American Style

Enforceable regulation does not yet exist in the US at the national level, but there have been developments. In mid-2023 the White House obtained a set of voluntary commitments on AI risk from fifteen big firms at the cutting edge of the industry. It also released its “Blueprint for an AI Bill of Rights” which sets out a preliminary approach to data privacy and safety.  

More prominently, on October 30, 2023, the Biden administration announced its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This order is focused on national security and misinformation: the US government must be informed of any developer tests that show a risk to national security, and the National Institute of Standards and Technology (NIST) will set standards for “red team” testing (i.e., testing to break AI models pre-launch to expose problems). It also tasks the Commerce Department with creating “watermarking” for AI-generated content so that Americans can recognize deepfakes and know that the communications they receive (particularly from government bodies) are authentic.   

In addition, the order created a new entity –the US AI Safety Institute– which will explore AI safety across “national security, public safety, and individual rights.” Housed in NIST, with a leadership team appointed by the Commerce Department in April 2024, this institute will not be an enforcement agency but a policy center.  

Biden’s EO suggests how US policy will unfold: it will be industry-friendly, relying on a voluntary shift by business to best practices and on the various executive agencies –as happened with crypto over the past decade– to craft their own rules (with input from NIST).

Though hailed as a step forward, the EO remains more carrot than stick. The watermarking technologies the EO points to are not yet built and may be difficult to guarantee. Nor does the order actually require the tech platforms to use these technologies, or even require that AI companies adhere to NIST standards or testing methods. Most of the EO relies on voluntary cooperation.

Unlike the EU’s AI Act, which was passed at the highest level of government, or its AI Office, which will be an enforcement agency operative by August 2, 2025, the US Safety Institute appears to be more of a policy center –one that can be marginalized or “institutionally captured” by whichever political party is in power. 

This approach remains friendly to tech, emphasizes self-regulation, and has no punitive measures. It also ignores the bigger issue of training models to minimize foreseeable harm outside of national security. According to Margaret Mitchell, chief ethics scientist at Hugging Face, this is a “whack-a-mole” approach, responding to emerging problems instead of requiring best data practices from the start: “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms.”

At present, the US is unlikely to pass any AI legislation at the national level in the foreseeable future. The 118th Congress (2023-2024), notable for its political infighting, may end up as the least productive legislative session in US history.  

Interestingly, though, there has been a lot of state-level action. Sixteen states have enacted AI legislation, with more than 400 AI bills introduced at that level in 2024 –six times more than in 2023.

Colorado is the first state in the nation with an AI law on the books that will be enforceable. The Colorado Artificial Intelligence Act is at its core anti-discrimination legislation, focusing on any bias caused by AI in the context of a “consequential decision” –specifically any decision that can “significantly” impact an individual’s legal or economic interests, whether it be employment, housing, credit, lending, educational enrollment, legal services, and insurance. (In many ways, it is stricter but also more nebulous than the EU’s restrictions on social scoring, and there is now pushback by Colorado businesses that fear its wide mandate will trigger lawsuits.)

Other major states like California, Connecticut, New York, and Texas, are starting the process.  In February, the California State Legislature introduced Senate Bill 1047, which would require safety testing of AI products before they are released. It would require AI developers to prevent deployers from creating any derivative models that could cause harm. Last year, Connecticut passed Senate Bill 1103 which regulates state procurement of AI tools. 

This emerging patchwork of state laws could be tough for companies –even the tech titans– to manage. That is why major players like Microsoft, Google, and OpenAI have all called for regulations at the national level, feeling that this growing number of state laws will crimp the adoption of AI due to the perceived compliance burden. According to Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills: “This fragmented regulatory environment underscores the call for national laws that will provide a coherent framework for AI usage.”

Conclusion:  Techno-Democracies Unite

In the regulatory discourse of both China and the EU, there is always the unspoken actor: the US. Whereas China’s process is designed for national economic success vis a vis a truculent American hegemon, the EU’s process is focused on protecting its unique sense of culture and rights-centered governance from overriding big-tech dominance. 

Indeed, for the EU, its contests with the US about data privacy, digital taxation, and antitrust have been going on for nearly three decades. In many ways, the Europeans have been playing catch up to the “move fast and break things” libertarianism of US tech since the mid-1990s, when the opening chapter of the internet began.  

For decades, the US has urged other nations to adopt a non-regulatory, market-oriented approach to tech. The very first effort at an international consensus on digitization embodied this laissez-faire attitude. In 1997, the Clinton administration’s framework for global electronic commerce codified that “markets maximize individual choice and individual freedom,” and its 2000 EU-US Joint Statement assured that both parties agreed “that the expansion of electronic commerce will be essentially market-led and driven by private initiative.”

However, as the scope and power of the tech platforms have become central to daily life in the developed world, a governance problem has arisen.

Software has now “eaten” many societal processes whole. Digital providers often replace local governments, at least in de facto, operative ways, as rule setters via their terms of service and community norms. As a result, these global tech companies often provide consumers with digital resources more effectively than some smaller nations can, a trend that becomes even more pronounced with AI.

Geopolitically, there will be growing differences in how authoritarian and democratic nations promote—or weaponize—their AI industries. Because China operates as a state-capitalist society, its regulatory model reflects its focus on top-down control and national power. Its historical sense of “Middle Kingdom” centrality has kept it at odds with the US-led, Bretton Woods-derived international order.

As it seeks to become the world leader in most strategic technologies within the decade, China is pouring money into AI development. An Australian think tank tracking tech competitiveness recently found that the mainland now leads the world in research on almost 90% of critical technologies, essentially switching places with the US over two decades thanks to heavy state funding. China is also making a concerted push to bring the developing world to its tech table. Via Huawei, it has moved quickly to outmaneuver the West and fund the digital buildouts of many regimes in the developing world, with ambitious projects in Asia and Africa; the “100 smart cities movement” in Indonesia is a perfect example.

Domestically, the US often operates through post-facto litigation and piecemeal actions by individual states. Its political process at the national level is buffeted by powerful lobbying, and the money-driven nature of US politics often means the deepest pockets hold sway. Without proactive regulation, reckless AI initiatives clearly risk privacy violations, covert social scoring, and quiet disenfranchisement, which could lead to disaffection with the Western model.

This has seriously troubling implications for the United States and its allies. As Anu Bradford has suggested, “the delays that come with democratic rulemaking . . . allow China to quickly operationalize in the absence of democracy.”

EU regulation may, in many ways, save the US from itself. Europe’s AI Act could help forge a broader consensus among Western powers and their allies. Technological cooperation among allies will be essential for geopolitical reasons, but also for better visibility and coherence at the business level. Corporations would obviously prefer to avoid the compliance costs, but the AI Act should also foster the same kind of “adequacy decision” coherence that the GDPR has created for value-sharing corporations seeking market access.

Creating a broader consensus among the Western democracies and their allies is exactly what is needed as the systemic rivalry with China emerges. A rights-driven model will be more compelling to a larger swath of the world, including Japan, South Korea, Brazil, and India. Just as the European Convention on Human Rights was both a statement of values and a rebuke of what was happening behind the Iron Curtain at the time, the AI Act makes crystal clear the contrast between a 21st-century, rights-driven “techno-democracy” and a 21st-century, state-centric “techno-autocracy.”

Featured/Headline Image Caption and Citation: Computer code, Image source from Flickr | CC License, no changes made

Navigating Liability in Autonomous Robots: Legal and Ethical Challenges in Manufacturing and Military Applications https://yris.yira.org/column/navigating-liability-in-autonomous-robots-legal-and-ethical-challenges-in-manufacturing-and-military-applications/ Thu, 06 Mar 2025 20:15:21 +0000 https://yris.yira.org/?p=8296

The increasing integration of autonomous robots into manufacturing and military operations has redefined human and machine interaction. While these systems promise precision, efficiency, and cost savings, they also present urgent legal and ethical challenges. What happens when an autonomous assembly-line robot malfunctions and injures a worker? Who bears responsibility when an AI-driven military drone independently selects a target? These questions are no longer hypothetical. Incidents have already occurred where autonomous drones allegedly engaged human targets without human command, demonstrating the profound dangers of deploying AI in high-risk environments. Meanwhile, industrial robots and autonomous delivery systems in the private sector are performing tasks once under human supervision, raising the urgency of determining legal liability when things go wrong. Product liability and negligence frameworks struggle to address the complexities of AI-driven systems, necessitating the development of new, dynamic legal structures that balance innovation with ethical responsibility in both civilian and military contexts.

Traditionally, liability frameworks have relied on well-established negligence and product liability principles, which struggle to accommodate the complexities of autonomous decision-making. Existing legal doctrines assume clear human oversight, yet AI systems operate with varying degrees of independence, making liability attribution ambiguous. The debate over whether AI-driven systems should be classified as “products” or “services” complicates the issue further, particularly as machine-learning algorithms evolve over time, rendering conventional liability rules inadequate. A comparative analysis of regulatory approaches in the United States, Europe, Japan, and China highlights key divergences in legal philosophy, risk tolerance, and enforcement strategies. Understanding these differences is essential for shaping future legal frameworks that balance technological progress with accountability.

A fundamental issue underlying AI liability is responsibility fragmentation. Unlike traditional tools that function under direct human control, AI-driven systems operate autonomously based on algorithmic decision-making. In product liability cases, manufacturers are generally held accountable for design flaws, but what happens when an AI system “learns” harmful behavior over time? Some legal scholars advocate for strict liability on manufacturers, similar to pharmaceutical industry regulations, while others propose shared responsibility models that include software developers, operators, and even end-users.

The challenges are particularly acute in military applications, where the concept of intent—critical in criminal law—becomes nearly impossible to attribute to an artificial system. Intent is fundamental to distinguishing lawful from unlawful actions in war. Legal scholars such as Marta Bo, in Meaningful Human Control over Autonomous Weapon Systems: An (International) Criminal Law Account, have examined how the absence of direct human intent complicates accountability for war crimes. The inability to assign intent to autonomous weapons presents a major challenge for international law, underscoring the necessity of new legal frameworks to ensure ethical oversight in autonomous warfare.

Different jurisdictions have responded to these challenges in varying ways, shaped by their legal traditions, economic priorities, and ethical perspectives. Europe has taken the most proactive stance, leading discussions on legal personhood for AI, mandating transparency in decision-making, and enforcing stringent consumer protection measures. The European Parliament has explored the concept of “electronic personhood” to assign liability when human accountability is difficult to determine. This approach aims to close the “responsibility gap,” though critics argue that granting legal status to AI systems may obscure accountability rather than clarify it. The European Union’s regulatory model also emphasizes explainability. Under the General Data Protection Regulation (GDPR), AI-driven decisions impacting individuals must be transparent and interpretable. While this regulation primarily targets data privacy, it indirectly affects autonomous robotics, particularly in sectors like healthcare and finance, by ensuring that AI decision-making can be scrutinized legally. Furthermore, the EU’s proposed  Regulation on Artificial Intelligence introduces a risk-based classification for AI systems, imposing strict compliance requirements on high-risk applications, including autonomous robotics.

In contrast, the United States has taken a more reactive approach, relying on case law and existing liability doctrines. American legal frameworks still primarily treat AI-driven robotics as products, holding manufacturers liable for defects but often failing to account for machine-learning models that continue to evolve post-sale. The 2018 Uber self-driving car accident exemplifies this issue, as debates arose over whether responsibility lay with Uber, the vehicle manufacturer, or the AI system itself. This case highlighted the shortcomings of U.S. liability frameworks in addressing autonomous AI. While regulatory bodies such as the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) have begun exploring AI regulation, the U.S. remains largely dependent on sector-specific guidelines rather than comprehensive federal legislation.

Japan, known for its cultural acceptance of automation and robotics, has adopted a hybrid approach that prioritizes human oversight while promoting technological progress. In industries such as autonomous vehicles, Japanese law requires the continued presence of human “drivers” or operators who bear ultimate responsibility, even as automation increases. This reflects Japan’s broader AI governance strategy, which integrates AI into society while maintaining strict human control over critical decisions.

Conversely, China has embraced AI and robotics with fewer regulatory constraints, prioritizing rapid technological advancement and economic growth. The Chinese government has made substantial investments in AI infrastructure, but its regulatory framework focuses primarily on state control, especially in surveillance and national security applications, raising ethical concerns. Unlike Japan, which promotes AI development with strong ethical oversight, China’s approach has facilitated widespread use of AI-driven mass surveillance, facial recognition, and predictive policing with minimal accountability. These applications present significant risks, including privacy violations and algorithmic biases that may reinforce social inequalities. As China continues expanding its AI capabilities, experts argue that stronger ethical safeguards and independent regulatory mechanisms are necessary to balance innovation with the protection of civil liberties.

Manufacturing environments further illustrate the complexity of liability allocation in autonomous robotics. Who is at fault if a robotic arm in an automotive factory malfunctions due to a software glitch and causes injury? For instance, in December 2023, a Tesla software engineer suffered serious injuries when a malfunctioning robot at the company’s Austin factory attacked him, digging its claws into his back and arm. Similarly, in November 2023, a South Korean worker was fatally crushed by an industrial robot that mistook him for a box of vegetables. These incidents highlight the challenges in determining fault when accidents involve complex interactions between human workers and autonomous systems. In traditional product liability, the manufacturer would be held accountable for a defective product, but what if the malfunction stems from a third-party software update rather than a hardware flaw? Courts have struggled with such scenarios, often defaulting to negligence standards that require proof of failures in design, production, or maintenance. These challenges expose the inadequacy of current liability frameworks in addressing AI autonomy. As robots increasingly make independent decisions, the conventional approach of attributing fault to a single entity—whether the manufacturer or operator—becomes insufficient. This underscores the need for new legal models that accommodate evolving AI capabilities while ensuring clear accountability.

Liability concerns are even more pressing in military applications. Autonomous drones and robotic weapon systems operate in environments where split-second decisions determine mission success or humanitarian disaster. The 2020 incident in Libya, where an AI-driven drone allegedly engaged human targets autonomously, illustrates the ethical and legal dangers of allowing AI to make lethal decisions. Although international humanitarian law governs the use of weapons, existing frameworks struggle to assign responsibility, prompting proposals for hybrid liability models or for treating AI as a legal entity. IHL principles like distinction, proportionality, and command responsibility remain difficult to enforce with autonomous systems, which lack human intent and contextual judgment. International discussions, including the Campaign to Stop Killer Robots and debates within the Convention on Certain Conventional Weapons (CCW), highlight a growing consensus on the need for regulation, with proposals ranging from preemptive bans to soft law approaches. Given the rapid evolution of military AI, establishing global accountability standards is crucial to ensuring ethical warfare and preventing the unchecked deployment of autonomous weapons.

In response to growing concerns over the militarization of AI, several major technology companies have implemented self-regulation initiatives to ensure responsible AI development. Companies like Google, Microsoft, and OpenAI have committed to ethical AI policies prohibiting their technologies from being used for autonomous lethal weapons. Google’s AI Principles explicitly reject the development of AI for “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Similarly, OpenAI has called for increased oversight and global cooperation to prevent the misuse of advanced AI in warfare. Industry-led initiatives, such as the Partnership on AI and Tech Accord for Responsible AI, seek to establish voluntary ethical guidelines for AI development, promoting transparency, accountability, and human oversight in AI-driven military applications. However, critics argue that self-regulation alone is insufficient, emphasizing the need for enforceable international laws to complement these voluntary commitments and ensure AI is developed in alignment with humanitarian principles.

As the debate over AI liability continues, several key issues must be addressed. First, legal systems must clearly define whether AI-driven systems should be treated as products or services. Many AI-driven robots rely on continuous software updates and real-time learning, making traditional product liability models insufficient. Legal frameworks must evolve to recognize this dynamic nature while ensuring accountability and consumer protection. 

Second, regulations must strike a balance between fostering innovation and safeguarding ethical and legal standards. Overregulation risks stifling technological progress in fields where AI-driven automation has transformative potential, such as healthcare, logistics, and environmental sustainability. However, insufficient oversight could lead to severe ethical and legal consequences, particularly in military applications where AI decision-making carries life-or-death implications.

Third, global cooperation is critical in establishing standardized liability norms for AI. Just as cybersecurity and data privacy require cross-border collaboration, AI regulation must transcend national boundaries. Binding international agreements—akin to treaties governing nuclear and chemical weapons—could prevent the unchecked militarization of autonomous AI. Intergovernmental organizations such as the United Nations, the OECD, and the G7 can play a central role in developing global AI safety and liability standards, ensuring that AI is deployed responsibly. Bilateral and multilateral agreements could align national AI policies and close regulatory loopholes that corporations might otherwise exploit. Industry-led initiatives, including AI ethics councils and cross-border compliance frameworks, could further promote responsible AI governance.

The integration of autonomous robots into society is inevitable, but without cohesive global policies, liability and accountability will remain unresolved. As AI systems assume greater autonomy and continue to evolve, so must our legal frameworks: the challenge is not merely to regulate these systems but to ensure that human oversight remains intact. To safeguard both innovation and accountability, governments must urgently develop legal structures that reflect the dynamic nature of AI technologies. Adapting traditional liability models alone is not enough; governments must also foster international cooperation to ensure that AI development aligns with ethical standards and human rights. A robust international legal framework that balances innovation with responsibility is essential to prevent unforeseen risks and ensure that technological advancement serves the public good without compromising safety or morality.

Featured/Headline Image Caption and Citation: A Ghost Robotics Vision 60 Prototype Provides Security, Image sourced from Picryl | CC License, no changes made
