The Geopolitics of AI Regulation 

On August 2, 2024, Europe’s AI Act entered into force, becoming the world’s first comprehensive legal framework for artificial intelligence. Operative across all 27 EU member states, the legislation provides a holistic set of rules for all players in the AI ecosystem – from developers and providers to importers and deployers.

Though the Act got scant press in the US, it is clearly the opening salvo in defining what binding legal constraints will look like for any business working with generative AI. Enforcement will be punitive – with fines of up to 35 million euros or 7% of global annual revenue (whichever is higher) – so even a large company operating outside the EU will likely conform to its requirements to avoid extraterritorial exposure.

As landmark legislation, it will cast an enduring shadow. AI will soon affect nearly every field of human endeavor. Recent studies suggest that artificial intelligence may add $2.6 trillion to $4.4 trillion annually to global economic output in the coming decade. AI is already altering the geostrategic landscape, as China deploys it in its military modernization and the US seeks to counter this unprecedented challenge to the existing international security architecture.

Indeed, AI’s promise of epochal transformations portends effects that can’t yet be fully comprehended. To its credit, the EU has risen to the challenge of setting up preliminary guard rails. 

By designing a regulatory framework based on human rights and fundamental values, the EU believes it can develop an AI ecosystem that is inherently safer and less likely to cause harm. In this way, the EU aspires to be the global leader in safe AI.

However, important questions remain. How does the Act define artificial intelligence? How does it build on European legal precedent in its effort to protect citizens? How does it compare to the Chinese and American approaches to AI regulation?

From a geopolitical perspective, does the Act ultimately help the US in the long run, forging a broader consensus as the West competes globally with China for technological supremacy in a new era?

The Act’s Definition of AI and Risk

Originating from a European Commission proposal aimed at a “human-centric” approach to artificial intelligence, the final Act –a text totaling 50,000 words– is divided into 113 Articles, 180 recitals, and 13 annexes.

It categorizes AI systems based on their potential harm, aiming to ensure scrutiny, oversight, and –in extreme cases– outright bans on those products deemed dangerous.  

In earlier policy iterations, the European Commission’s definition of AI was criticized for being too broad. It was eventually modified to approximate the existing OECD definition and now focuses on two key characteristics of AI systems. Article 3(1) spells it out explicitly:  

  1. An “AI system” is a machine-based system designed to operate with varying levels of autonomy.
  2. An AI system may exhibit adaptiveness after deployment and infers – for explicit or implicit objectives – from the input it receives how to generate outputs. (These can be predictions, content, recommendations, or decisions that can influence physical or virtual environments.)

This issue of inference is key. In Recital 13, the Act is explicit that it does not cover “systems that are based on the rules defined solely by natural persons to automatically execute operations.” Thus, the capacity of an applicable AI system to infer is what distinguishes it from more commonplace data processing: it enables learning, reasoning, and modeling that the system fashions on its own. The techniques that enable this type of inference when building an AI system include the following (a toy sketch after the list makes the distinction concrete):

1. Machine learning approaches that learn from data how to achieve certain objectives.

2. Logic-based approaches that infer from encoded knowledge or a symbolic representation of the task to be solved.
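
To make the inference criterion concrete, here is a minimal sketch. It is illustrative only – the fraud-flagging scenario and every number in it are hypothetical – and it contrasts a rule authored entirely by a person, which Recital 13 places outside the Act, with a toy system that derives its own threshold from data, the kind of inference that brings a system within scope.

```python
def rule_based_flag(amount_eur: float) -> bool:
    """A rule defined solely by a natural person: flag anything over 10,000."""
    return amount_eur > 10_000


def fit_threshold(amounts: list[float], was_fraud: list[bool]) -> float:
    """'Learns' a flagging threshold from labeled examples -- a toy stand-in
    for the machine-learning approaches the Act describes."""
    flagged = [a for a, f in zip(amounts, was_fraud) if f]
    cleared = [a for a, f in zip(amounts, was_fraud) if not f]
    # Midpoint between the largest cleared amount and the smallest flagged one.
    return (max(cleared) + min(flagged)) / 2


if __name__ == "__main__":
    history = [500.0, 2_000.0, 9_000.0, 15_000.0, 40_000.0]
    labels = [False, False, False, True, True]
    learned = fit_threshold(history, labels)   # behavior inferred from data
    print(rule_based_flag(12_500))             # True: fixed, human-authored rule
    print(12_500 > learned)                    # True: threshold (12,000) was inferred
```

Only the second system “infers, from the input it receives, how to generate outputs”; the first simply executes a rule its author wrote down.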

With this definition in mind, the AI Act then classifies these autonomous systems according to their risks to society, creating a uniform framework across all EU countries:

Banned AI: Some AIs are prohibited due to the unacceptable risks they pose. These include systems used for government or corporate “social scoring,” certain biometric systems (like those for emotion monitoring at work), or games or bots that could encourage unsafe or compulsive behavior in children. 

High-Risk AI: These include applications like medical AI tools, systems controlling critical infrastructure, credit and loan decisions, and recruitment software. They must meet strict standards for accuracy, security, and data quality, with ongoing human oversight to guard against improper profiling and personal identification.

Moderate-Risk AI: This category includes user-facing systems like chatbots and AI-generated content. They must make explicit to users that they are interacting with AI, and content like deepfakes must be labeled as artificially generated. Transparency and labeling are key.

Low-Risk AI: Most AI systems (spam filters, AI-enabled video games, etc.) will face no enforcement scrutiny under the Act, but developers may voluntarily adopt specific guidelines.

The Act lays down further conditions required to develop and deploy trusted AI systems, both for developers when processing personal data during the development phase and for users who may feed personal data into a system during the deployment phase.
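
A compliance team might keep an internal map of the four tiers above along the lines of the sketch below. It is purely illustrative: the use-case labels and their tier assignments are hypothetical, and real classification turns on the Act’s annexes and regulator guidance rather than a lookup table.

```python
from enum import Enum


class RiskTier(Enum):
    """Unofficial, simplified rendering of the Act's four tiers."""
    PROHIBITED = "banned outright"
    HIGH = "accuracy, security, data-quality and human-oversight duties"
    LIMITED = "transparency and labeling duties"
    MINIMAL = "no mandatory duties; voluntary guidelines"


# Hypothetical mapping for illustration only.
EXAMPLE_USE_CASES = {
    "workplace emotion recognition": RiskTier.PROHIBITED,
    "CV-screening recruitment tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}


def duties_for(use_case: str) -> str:
    """Look up the tier for a use case and describe its obligations."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} ({tier.value})"


if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(duties_for(case))
```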

Its Timeline: The Ban, the Code of Practice, Harmonization

The AI Act entered into force on August 2, 2024, though most of its rules phase in at different times over the next two years. In February 2025 the ban on prohibited practices goes into effect, and later that August all regulatory bodies – the AI Office, the European AI Board, etc. – must be in place. On August 2, 2026, full enforcement arrives, with each member state having set up a regulatory agency at the national level.

Certain companies have already started to modify their product rollouts in response to the Act. Meta will not release an advanced, multimodal version of its Llama AI model in the EU, citing the “unpredictable” behavior of regulators.

Likewise, on August 8, the social media platform X agreed to pause using European user data to train its AI system, after proceedings in the Irish High Court revealed that the personal data of millions of EU users had been fed into Grok, its AI chatbot, in spring 2024 without any opt-out option being available until July.

The European Commission has launched a year-long consultation on a “Code of Practice on GPAI Models”, with AI developers and academics invited to submit their perspectives on a final draft. This will also set the parameters of the “AI Office,” the enforcement agency that gives teeth to the AI Act.

Human Rights and Europe 

Critics have been quick to suggest that, as Europe is home to only two of the twenty top tech platform companies, its AI regulation is some form of “sour grapes” protectionism. However, this view is flippant. Yes, there are ongoing battles between the US and EU — issues of data privacy, digital taxation, and antitrust—but the Act is clearly built on some of Europe’s most defining legislation. In many ways, it continues the trajectory of the EU’s most ambitious work.  

The European Convention on Human Rights was signed in Rome on November 4, 1950, by the twelve member states of the Council of Europe. Enforced by the European Court of Human Rights in Strasbourg, the Convention was a milestone in international law.   

It was the first instrument to give binding force to some of the rights stated in the 1948 Universal Declaration of Human Rights. It was also the first treaty to establish a supranational court to ensure that the parties fulfilled their obligations – one that could review decisions taken by their own national courts. (Any individual, group of individuals, company, or NGO can petition the Strasbourg Court once all domestic remedies have been exhausted.) It has since become an urtext for European law and politics: to even join the Council of Europe, a state must first sign and ratify the ECHR.

The Convention itself has sixteen protocols, with Article 8 the most pertinent here. Article 8 provides the right to respect for one’s “private and family life, his home and his correspondence”, with caveats related to public safety, morality, national security, and the economic well-being of the country. This article clearly provides a right to be free of unlawful searches, but because it protects a “private and family life,” it also points toward a broader interpretation.

The UN’s 1948 Universal Declaration of Human Rights mentions privacy only briefly and without binding force. The explicit prominence the right is given in European law is telling.

Europe’s focus on privacy has obvious touchstones in its 20th-century history. The Nazi regime abused personal data to identify and annihilate its selected out-groups. Ruthless surveillance tactics further evolved with East Germany’s Stasi and the postwar Warsaw Pact secret police in general. Governmental data collection has a dark past on the continent, and the right to data privacy is thus more closely tied to the issue of human dignity in Europe than it is, perhaps, in the US.

“Human-centric Digitization”

In 1981, the Council of Europe created the world’s first binding international treaty on data protection. This convention applied rules to the “automatic processing of personal data” and is arguably the foundational basis of the EU’s General Data Protection Regulation (GDPR), which took effect in 2018.

The GDPR requires transparency in the processing of personal data, minimizing the amount collected and restricting its use to specified purposes. It designates a “privacy by design” protocol that requires companies to build the GDPR’s rules into the initial design of their services.
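
In code, “privacy by design” often shows up as mundane discipline such as data minimization and purpose limitation. The sketch below is one minimal illustration, with hypothetical field names and purposes; it is not a compliance recipe.

```python
# Hypothetical purposes and fields -- an illustration of data minimization only.
ALLOWED_FIELDS_BY_PURPOSE = {
    "order_fulfilment": {"name", "shipping_address", "email"},
    "newsletter": {"email"},
}


def collect(profile: dict, purpose: str) -> dict:
    """Keep only the fields needed for the declared purpose; drop everything else."""
    allowed = ALLOWED_FIELDS_BY_PURPOSE[purpose]
    return {field: value for field, value in profile.items() if field in allowed}


if __name__ == "__main__":
    raw = {"name": "Ada", "email": "ada@example.org",
           "shipping_address": "1 Main St", "birthdate": "1990-01-01"}
    print(collect(raw, "newsletter"))   # {'email': 'ada@example.org'}
```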

The “right to be forgotten” is perhaps the most distinctive obligation associated with the GDPR. It gives any person the right to force platforms to “delink” their name from information that is no longer relevant. The Court of Justice of the EU played a key role in shaping this issue through its landmark Google Spain case (2014), in which Mario Costeja Gonzalez, a Spanish citizen, requested that the search engine remove results linking him to a long-resolved debt proceeding. The court ruled that Google must honor requests to delist search results shown to be inaccurate, irrelevant, or out of date.

The GDPR is the world’s toughest data privacy law, and it has a long reach: any corporation anywhere that collects data on EU citizens can face massive penalties.

Arriving amid a rise in data breaches and cloud computing, when tracking cookies were becoming pervasive, the regulation had an immediate impact. The now-ubiquitous “opt-in for cookies” notification is a product of the law, as tech platforms have adhered to its aims even in the US to avoid extraterritoriality issues.

In this way, the law did succeed in creating a broader global consensus, a “shared vision of human-centric digitization.” As EU Commission VP Josep Borrell described at the time:

The 1948 Universal Declaration of Human Rights established the dignity of the individual, the right to privacy and to non-discrimination, and the freedoms of speech and belief. It is our common duty to make sure that the digital revolution lives up to that promise.

Under the GDPR, each member state must establish a “data protection authority” to enforce its rules, with the European Data Protection Board coordinating enforcement across the bloc. Fines can reach 20 million euros or 4% of a company’s global annual turnover, whichever is higher.
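
To see what “whichever is higher” means in practice, here is a quick sketch; the turnover figure is hypothetical, and it simply compares the GDPR’s ceiling with the AI Act’s steeper one.

```python
def fine_ceiling(global_turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum possible fine: the fixed cap or the turnover share, whichever is higher."""
    return max(fixed_cap_eur, global_turnover_eur * pct)


# Hypothetical company with 50 billion euros in global annual turnover.
turnover = 50e9
print(fine_ceiling(turnover, 20e6, 0.04))  # GDPR ceiling:   2.0 billion euros
print(fine_ceiling(turnover, 35e6, 0.07))  # AI Act ceiling: 3.5 billion euros
```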

The AI Act does not modify the GDPR but builds on it.  

The Brussels Effect   

Despite massive lobbying against the GDPR before its passing, most of the big tech platforms have now embraced the regime. Meta chose to extend many GDPR protections globally to the company’s 2.8 billion Facebook users. Google revised its privacy policy based on it, and Apple now carries out OS impact assessments globally according to GDPR protocols. Microsoft has gone further, implementing the GDPR’s “privacy by design” and baking it into the early development of its products.

This appears to be yet another example of “The Brussels Effect.” These tech giants know that the EU consumer market is simply too big to ignore. The second-largest economy on earth, Europe has an affluent population of 450 million and a GDP of $17.5 trillion. The platforms have enjoyed stunning success there: Google accounts for 90% of search in the 27-member union, Apple rakes in a quarter of its global revenue there, and Meta’s Facebook has 410 million monthly active EU users.

The Brussels Effect can be clearly seen in the adoption of EU laws by foreign nations around the world. As of 2024, more than 150 countries have adopted domestic privacy laws, and most of them resemble the GDPR in some ways. It has essentially become the norm in many parts of the world as governments see it as an easy template for their own regimes.  

This can be seen on every continent. Brazil’s data privacy law of 2018 emulates the GDPR’s broad definition of personal data. Nigeria’s Data Protection Act of 2023 often uses exact parlance in sections, though with caveats about public morality. India’s Personal Data Protection Bill, though later withdrawn, was quite similar. With so many countries now operating with GDPR-like rules, it becomes harder for nations drafting new data laws to justify a marked departure from the global norm.

The effect is even seen in the corporate structure of a few firms. Meta, for example, shifted its Africa, Asia, Australia, and Middle East divisions out of its Irish corporate entity and placed them within its US legal structure. This keeps African or Asian users from seeking legal redress under the EU’s GDPR.

Anu Bradford has made the point that the Brussels Effect of the GDPR works precisely because it targets the “inelastic” aspect of the market – consumers living in a jurisdiction, not fleet-footed capital. But it does work to capital’s advantage on one level. Companies generally prefer standardization over customization, particularly since compliance is onerous: customizing products for too many countries is unappealing, costly, and involves more legal fees. In some ways, the GDPR works to tech’s advantage by bringing legal clarity to a large, 27-member-state zone.

The desire for an “adequacy decision” from the EU might also explain the GDPR adoption worldwide. Those nations with privacy laws deemed “adequate” by GDPR standards can be allowed data transfers from the EU. This obviously helps with a foreign nation’s corporate competitiveness, providing more business opportunities in the zone. Canada, New Zealand, Argentina, Uruguay, and Israel are a few of the notable countries granted decisions.  

Ironically, the US has never been granted a blanket adequacy decision from the EU, a fact that has placed the legality of data flows between the US and EU in contention and made them the subject of numerous lawsuits.

AI Convention and AI Act: Velvet Glove, Iron Fist?

The theoretical basis for the AI Act appeared five years ago. In 2019, the European Commission published “The Ethics Guidelines for Trustworthy AI.” This document – which stated that “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition, or herd humans” – arguably set the course for the AI Act. It stresses the importance of a human-centric artificial intelligence, in which “natural persons” must be able to “override” algorithms when needed to protect “fundamental rights.” 

In June 2023, a year before the AI Act was signed, the Council of Europe unveiled the inaugural draft of its convention on AI and human rights. Comprising 34 articles, the document – like others by the Council – aims to formulate a broader, open-ended framework of standards, not just within Europe. Its focus: data privacy, protection against discrimination, and the potential misuse of AI deployment. Like the GDPR, it aims to create a regulatory path that other nations may follow.

In these articles, we can clearly see the founding principles of the EU AI Act. Two further obligations also stand out: each party must provide effective remedies for human rights violations, and each must be able to prohibit systems that are incompatible with the convention’s core principles.

This EU approach is focused on securing the individual and collective rights of citizens in a digital society. It proactively seeks to ensure that often-opaque AI processes will not harm a society’s democratic political culture or trammel fairness in the distribution of AI’s benefits.

The European Declaration on Digital Rights and Principles for the Digital Decade, adopted in December 2022, proclaims that “people are at the center of the digital transformation” and emphasizes “the importance of democratic functioning of the digital society and economy.” All technological solutions should:

  1. Benefit everyone and improve the lives of all people in the EU.
  2. Respect people’s rights, enable their exercise, and promote solidarity and inclusion.

This political statement is interesting in its humanist focus. It identifies “democracy, fairness, and fundamental rights” as key values guiding EU policymaking. 

Pre-eminence: China’s Approach to AI

In contrast to the EU, China has developed its own AI policy, one that is less rights-driven and focused more on sovereignty, economic development, and implementation. It follows from Beijing’s belief, clearly written in both its “Dual Circulation” and “Made in China 2025” policies, that emerging technologies and high-tech manufacturing will be key to twenty-first century dominance.

Due to state funding, powerful tech firms, and select universities like Tsinghua, the country has emerged as a major player in machine learning and AI research. Notable players in the sector include:

Huawei: AI chips and telecommunications infrastructure.

Baidu: Autonomous driving and natural language models.

Alibaba: E-commerce algorithms and cloud computing.

Tencent: AI-driven social media, medical imaging, and healthcare solutions.

01.AI: A unicorn startup pushing the LLM envelope with its open-source model Yi-34B.

Between 2014 and 2023, China filed more than 38,000 AI patents, more than all other nations combined. Even the US military is playing catch-up with China’s PLA, which is developing a new type of “intelligentized” warfare aimed at wholly unmanned swarm combat systems and better situational awareness. The DoD’s Replicator program is something of a “Hail Mary” effort by the US to reach the Chinese level in AI-enabled swarm drones.

Over the past several years, China has moved to implement some of the world’s toughest regulations on data and AI. In contrast to the EU’s focus on data privacy, fairness, and “human oversight,” Beijing’s policies make frequent reference to the necessary balance between “security” and “development.” For years China has been implementing the public facial recognition and “social scoring” systems that are now clearly outlawed by the EU AI Act. More machine learning and artificial intelligence will give these suppressive measures additional teeth.

As early as 2017, China began positioning AI as a new strategic pillar of its national agenda. That year, the State Council unveiled its “New Generation Artificial Intelligence Development Plan,” with the aim of making the mainland the world’s AI leader by 2030. Like Made in China 2025, the plan is comprehensive and focused on harnessing multiple drivers: economic growth, national security, and enhanced social services. Its emphasis is on seizing the strategic initiative, speeding the diffusion of AI from theory to application across multiple spheres, and achieving dominance through innovation by 2030.

After ChatGPT exploded onto the world stage in late 2022, China was one of the first nations to issue targeted regulations on generative AI, releasing its “Interim Measures for the Management of Generative AI Services.” These set out restrictions on the training and outputs of large language models (LLMs) and require AI services “with the capacity for social mobilization” to carry out a security assessment and file their algorithms with state regulators before being made public.

Because of these “Measures,” since 2023 all LLMs developed by China’s tech platforms must gain state approval before going public. In response, Apple pulled nearly a hundred apps offering AI chatbot services from its China App Store before the measures took effect.

China’s AI strategy – which requires all development to align with state objectives while maintaining strict control over information – could further entrench geostrategic splits as it is exported to the Global South. According to Rutgers University Fellow Shaoyu Yun:

Even if China doesn’t outpace the U.S. in developing the latest AI models, its applications can still significantly impact the geopolitical landscape. By integrating AI into areas like biotechnology, industrial engineering, and state security, Beijing can export its controlled AI systems to other authoritarian regimes. This would not only spread China’s model of governance but also consolidate its influence in regions antagonistic to Western ideals.

In this regard, the issue of AI lies not in its novelty but in its strategic deployment in the service of state control. For Yun, China’s approach suggests a fundamental rule in international relations: for new technology to alter a balance of power, it doesn’t need to be pre-eminent or the world’s best. It just needs to be the most effectively wielded.

AI Regulation, American Style

Enforceable regulation does not yet exist in the US at the national level, but there have been developments. In mid-2023 the White House obtained a set of voluntary commitments on AI risk from fifteen big firms at the cutting edge of the industry. It also released its “Blueprint for an AI Bill of Rights” which sets out a preliminary approach to data privacy and safety.  

More prominently, on October 30, 2023, the Biden administration announced its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This order is focused on national security and misinformation: the US government must be informed of any developer tests that show a risk to national security, and the National Institute of Standards and Technology (NIST) will set standards for “red team” testing (i.e., testing to break AI models pre-launch to expose problems). It also tasks the Commerce Department with developing “watermarking” guidance for AI-generated content so that Americans can recognize deepfakes and know that the communications they receive (particularly from government bodies) are authentic.
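
The EO does not prescribe how watermarking should work, and real provenance schemes (cryptographic content credentials, statistical watermarks embedded in model outputs) are far more involved. The sketch below is only meant to illustrate the underlying idea of a verifiable “AI-generated” label: a signature covering both the content and its provenance metadata, so tampering can be detected. All names and the key are hypothetical.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"hypothetical-issuer-key"  # in practice, properly managed key material


def label_content(text: str, generator: str) -> dict:
    """Attach a provenance tag whose signature covers the content and its metadata."""
    payload = {"content": text, "generator": generator, "ai_generated": True}
    signature = hmac.new(SIGNING_KEY,
                         json.dumps(payload, sort_keys=True).encode(),
                         hashlib.sha256).hexdigest()
    return {**payload, "signature": signature}


def verify_label(tagged: dict) -> bool:
    """Recompute the signature; any change to content or metadata makes it fail."""
    payload = {k: tagged[k] for k in ("content", "generator", "ai_generated")}
    expected = hmac.new(SIGNING_KEY,
                        json.dumps(payload, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tagged["signature"])


if __name__ == "__main__":
    tagged = label_content("Official notice ...", generator="agency-llm-v1")
    print(verify_label(tagged))           # True: label is intact
    tagged["content"] = "Altered notice"
    print(verify_label(tagged))           # False: tampering detected
```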

In addition, the order created a new entity – the US AI Safety Institute – which will explore AI safety across “national security, public safety, and individual rights.” Housed within NIST, with a leadership team appointed by the Commerce Department in April 2024, the institute will not be an enforcement agency but a policy center.

Biden’s EO suggests how US policy will unfold: it will be industry-friendly, relying on a voluntary shift by business toward best practices and – as happened with crypto over the past decade – on the various executive agencies to craft their own rules (with input from NIST).

Though hailed as a step forward, the EO remains more carrot than stick. The watermarking technologies it points to are not yet built and may be difficult to make reliable. Nor does the order actually require the tech platforms to use these technologies, or require that AI companies adhere to NIST standards or testing methods. Most of the EO relies on voluntary cooperation.

Unlike the EU’s AI Act, which was passed at the highest level of government, or its AI Office, which will be an enforcement agency operative by August 2, 2025, the US Safety Institute appears to be more of a policy center – one that can be marginalized or “institutionally captured” by whichever political party is in power.

This approach remains friendly to tech, emphasizes self-regulation, and has no punitive measures. It also ignores the bigger issue of training models to minimize foreseeable harm outside of national security concerns. According to Margaret Mitchell, chief ethics scientist at Hugging Face, this is a “whack-a-mole” approach, responding to emerging problems instead of requiring best data practices from the start: “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms.”

At present, the US is unlikely to pass any AI legislation at the national level in the foreseeable future. The 118th Congress (2023-2024), notable for its political infighting, may end up as the least productive legislative session in US history.  

Interestingly, though, there has been a lot of state-level action. Sixteen states have enacted AI legislation, and more than 400 AI bills were introduced at the state level in 2024 – six times more than in 2023.

Colorado is the first state in the nation with an enforceable AI law on the books. The Colorado Artificial Intelligence Act is at its core anti-discrimination legislation, focusing on any bias caused by AI in the context of a “consequential decision” – specifically, any decision that can “significantly” impact an individual’s legal or economic interests, whether in employment, housing, credit, lending, educational enrollment, legal services, or insurance. (In many ways, it is stricter but also more nebulous than the EU’s restrictions on social scoring, and there is now pushback from Colorado businesses that fear its wide mandate will trigger lawsuits.)
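
One common screening metric for the kind of algorithmic discrimination such laws target is the “four-fifths” selection-rate ratio. The Colorado act does not mandate this particular test, and the numbers below are invented; the sketch simply shows what an impact check on a “consequential decision” system might look like.

```python
from collections import Counter


def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per group, computed from (group, approved) pairs."""
    totals, approved = Counter(), Counter()
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate over the highest; below 0.8 is a conventional red flag."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    # Invented loan-approval outcomes for two demographic groups.
    outcomes = [("A", True)] * 60 + [("A", False)] * 40 \
             + [("B", True)] * 35 + [("B", False)] * 65
    rates = selection_rates(outcomes)
    print(rates)                           # {'A': 0.6, 'B': 0.35}
    print(disparate_impact_ratio(rates))   # ~0.58, below the 0.8 threshold
```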

Other major states – California, Connecticut, New York, and Texas – are starting the process. In February 2024, the California State Legislature introduced Senate Bill 1047, which would require safety testing of AI products before they are released and oblige developers to prevent deployers from creating derivative models that could cause harm. Last year, Connecticut passed Senate Bill 1103, which regulates state procurement of AI tools.

This emerging patchwork of state laws could be tough for companies –even the tech titans– to manage. That is why major players like Microsoft, Google, and OpenAI have all called for regulations at the national level, feeling that this growing number of state laws will crimp the adoption of AI due to the perceived compliance burden. According to Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills: “This fragmented regulatory environment underscores the call for national laws that will provide a coherent framework for AI usage.”

Conclusion:  Techno-Democracies Unite

In the regulatory discourse of both China and the EU, there is always an unspoken actor: the US. Whereas China’s process is designed for national economic success vis-à-vis a truculent American hegemon, the EU’s process is focused on protecting its distinct culture and rights-centered governance from overweening big-tech dominance.

Indeed, the EU’s contests with the US over data privacy, digital taxation, and antitrust have been going on for nearly three decades. In many ways, the Europeans have been playing catch-up with the “move fast and break things” libertarianism of US tech since the mid-1990s, when the opening chapter of the internet began.

For decades, the US has urged other nations to adopt a non-regulatory, market-oriented approach to tech. The very first effort at an international consensus on digitization embodied this laissez-faire attitude. In 1997, the Clinton administration’s framework for global electronic commerce declared that “markets maximize individual choice and individual freedom,” and the 2000 EU-US Joint Statement affirmed that both parties agreed “that the expansion of electronic commerce will be essentially market-led and driven by private initiative.”

However, as the scope and power of the tech platforms have become central to daily life in the developed world, a governance gap has arisen.

Software has now “eaten” many societal processes whole. Digital providers often replace – in de facto, operative ways – local governments as rule setters via their terms of service and community norms. As a result, these global tech companies often provide consumers with digital resources more effectively than some smaller nations, a trend that becomes even more pronounced with AI.

Geopolitically, there will be growing differences in how authoritarian and democratic nations promote – or weaponize – their AI industries. Because China operates as a state-capitalist society, its regulatory model reflects its focus on top-down control and national power. Its own historical sense of “Middle Kingdom” centrality has kept it at odds with the US-led, Bretton Woods-derived international order.

As it seeks to become the world leader in most strategic technologies within the decade, China is pouring money into AI development. An Australian think tank focused on tech competitiveness recently reported that the mainland is now the world leader in research on almost 90% of critical technologies, essentially switching places with the US over two decades thanks to heavy state funding. It is also making a concerted push to bring the developing world to its tech table: via Huawei, it has moved quickly to outmaneuver the West and fund digital buildouts for many regimes in the developing world, with ambitious projects in Asia and Africa – the “100 smart cities movement” in Indonesia being a perfect example.

Domestically, the US often operates by post-facto litigation and piecemeal actions from individual states. Its political process at the national level is buffeted by powerful lobbying, and the money-driven nature of US politics often means the deepest pockets hold sway. Without proactive regulation, reckless AI initiatives risk eroding privacy, enabling covert social scoring, and quietly disenfranchising citizens. This could lead to disaffection with the Western model.

This has seriously troubling implications for the United States and its allies. As Anu Bradford has suggested: “the delays that come with democratic rulemaking . . . allow China to quickly operationalize in the absence of democracy.”

EU regulation may save the US from itself in many ways. Europe’s AI Act could help forge a broader consensus among Western powers and their allies. Technological cooperation among allies will be essential for geopolitical reasons, but also for better visibility and coherence at the business level. Corporations would obviously prefer to avoid the compliance costs, but the AI Act can also foster the same kind of “adequacy decision” coherence that emerged via the GDPR, giving value-sharing corporations hoping for access to the bloc a clearer path to business.

Creating a broader consensus between the Western democracies and their allies is exactly what is needed as the systemic rivalry with China emerges. A rights-driven model will be more compelling to a larger swath of the world, including Japan, South Korea, Brazil, and India. Just as the EU’s Convention on Human Rights was both a statement of values and a rebuke of what was happening behind the Iron Curtain at the time, the AI Act makes crystal clear in its values the contrast between what a 21st-century, rights-driven “techno-democracy” will look like vis-à-vis a 21st-century, state-centric “techno-autocracy.”

Though the Act got scant press in the US, it is clearly the opening salvo on what real legal proscriptions will look like for any business engaging in generative AI. Enforcement will be punitive – with fines of up to 35 million euros or 7% of global annual revenue (whichever is higher)—so even a large company operating outside the EU will likely contour to its agenda to avoid extraterritorial issues. 

As landmark legislation, it will cast an enduring shadow. AI will soon affect nearly every field of human endeavor. Recent studies suggest that worldwide artificial intelligence may add $2.6 trillion to $4.4 trillion annually to global economic output in the coming decade. AI is already altering the geostrategic landscape, as China deploys it in its military modernization and the US seeks to rebalance this unprecedented new threat to the existing international security architecture.  

Indeed, AI’s promise of epochal transformations portends effects that can’t yet be fully comprehended. To its credit, the EU has risen to the challenge of setting up preliminary guard rails. 

By designing a regulatory framework based on human rights and fundamental values, the EU believes it can develop an AI ecosystem that is inherently safer and more likely to not harm anyone. In this way, the EU aspires to be the global leader in safe AI.  

However, important questions remain. How does the act define artificial intelligence? How does it build on European legal precedent in its effort to protect its citizens? How does it compare to the Chinese and American approaches to AI regulation?  

From a geopolitical perspective, does the act ultimately help the US in the long run, forging a broader consensus as the West competes globally with China for a new era of technological supremacy?   

The Act’s Definition of AI and Risk

Originating from a European Commission proposal aimed at a “human-centric” approach to artificial intelligence, the final Act –a text totaling 50,000 words– is divided into 113 Articles, 180 recitals, and 13 annexes.

It categorizes AI systems based on their potential harm, aiming to ensure scrutiny, oversight, and –in extreme cases– outright bans on those products deemed dangerous.  

In earlier policy iterations, the European Commission’s definition of AI was criticized for being too broad. It was eventually modified to approximate the existing OECD definition and now focuses on two key characteristics of AI systems. Article 3(1) spells it out explicitly:  

  1. An “AI system” is a machine-based system designed to operate with varying levels of autonomy.
  1. An AI system exhibits adaptiveness after deployment and can infer –based on either explicit or implicit objectives– from the input it receives, how to generate outputs.  (These can be predictions, content, recommendations, or decisions that can influence physical or virtual environments.) 

This issue of inference is key. In its Recital 13, the act is explicit in that does not cover “systems that are based on the rules defined solely by natural persons to automatically execute operations.” Thus, the capacity of an applicable AI system to infer is what takes from the more commonplace data processing. It enables learning, reasoning, and can fashion new modeling on its own. The techniques that enable this type of inference while building an AI system include: 

1. machine learning mechanisms that learn from data to achieve certain objectives.

2. logic-based approaches that infer from encoded knowledge or symbolic representation of the task to be solved. 

With this definition in mind, the AI Act then classifies these autonomous systems according to their risks to society, creating a uniform framework across all EU countries:

Banned AI: Some AIs are prohibited due to the unacceptable risks they pose. These include systems used for government or corporate “social scoring,” certain biometric systems (like those for emotion monitoring at work), or games or bots that could encourage unsafe or compulsive behavior in children. 

High-Risk AI: These include applications like medical AI tools, critical infrastructure, credit loans, or recruitment software. They must meet strict standards for accuracy, security, and data quality, with ongoing human oversight to avoid profiling and personal identification. 

Moderate-Risk AI: This category includes front-facing systems like chatbots and AI-generated content. They must make explicit to users they’re interacting with AI. Content like deepfakes should be labeled that they have been artificially made.  Transparency and labeling are key.

Low risk: Most AI systems (spam filters and AI-enabled video games, etc.) will face no enforcement scrutiny under the Act, but developers may voluntarily adopt to specific guidelines.

It lays down further conditions required to develop and deploy trusted AI systems, both for developers when processing personal data during the development phase, and for users who may seek to pour personal data into a system during the deployment phase.  

Its Timeline: The Ban, the Code of Practice, Harmonization

The AI Act entered into force on August 2, 2024, though most of its rules faze in at different times over the course of the next 2 years. In February 2025 the ban on prohibited practices goes into effect, and later that August all regulatory bodies –the AI Office, European AI Board, etc.– must be in place. On August 2, 2026, full enforcement arrives, with each member state having set up a regulatory agency at the national level. 

In terms of the ban, certain companies have already started to modify their product rollout based on the Act. Meta will not release an advanced version of its Llama AI model in multimodal form in the EU, citing the “unpredictable” behavior of regulators.  

Likewise, on August 8th, the social media platform X agreed to pause using European user data to train its AI system, after the Irish High Court found that the personal data of millions of EU users were being fed as input into Grok, its AI search tool, in Spring 2024 without any opt-out option available until July. 

The European Commission has launched a year-long consultation on a “Code of Practice on GPAI Models”, with AI developers and academics invited to submit their perspectives on a final draft. This will also set the parameters of the “AI Office,” the enforcement agency that gives teeth to the AI Act.

Human Rights and Europe 

Critics have been quick to suggest that, as Europe is home to only two of the twenty top tech platform companies, its AI regulation is some form of “sour grapes” protectionism. However, this view is flippant. Yes, there are ongoing battles between the US and EU — issues of data privacy, digital taxation, and antitrust—but the Act is clearly built on some of Europe’s most defining legislation. In many ways, it continues the trajectory of the EU’s most ambitious work.  

The European Convention on Human Rights was signed in Rome on November 4, 1950, by the twelve member states of the Council of Europe. Enforced by the European Court of Human Rights in Strasbourg, the Convention was a milestone in international law.   

It was the first legal entity to give binding force to some of the rights stated in the 1948 Universal Declaration of Human Rights. It was also the first treaty to establish a supranational court to ensure that the parties fulfilled their responsibilities, and which could challenge decisions taken by their own national courts. (Any individual, group of individuals, company, or NGO can petition the Strasbourg Court, once all lower venues have been exhausted.) It has now become an urtext for EU relations. To even join the Council of Europe, a state must first sign and ratify the ECHR. 

The convention itself has sixteen protocols, with article 8 the most pertinent here. Article 8 provides the right to one’s “private and family life, his home and his correspondence”, with caveats related to public safety, morality, national security, and the economic well-being of the country. This article clearly provides a right to be free of unlawful searches, but as it protects a “private and family life,” it also clearly provides the direction of a broader interpretation.

This “right to privacy” was not in the UN’s 1948 Universal Declaration of Human Rights. The fact that it is given explicit prominence in European law is telling.

Europe’s focus on privacy has obvious touchstones in its 20th-century history. The Nazi regime abused personal data to identify and annihilate its selected out-groups. Ruthless surveillance tactics further evolved with East Germany’s Stasi and the postwar Warsaw Bloc secret police in general. Governmental data collection practices have a dark past on the continent, and thus the right to data privacy is now closely tied to the issue of human dignity in Europe than perhaps the US. 

“Human-centric Digitization”

In 1981, the Council of Europe created the world’s first international treaty to assure data protection. This convention applied certain rules to the “automatic processing of personal data” and is probably the foundational basis of the EU’s 2018 General Data Protection Regulation (GDPR).  

The GDPR calls for a certain transparency in processing personal data, curtailing the quantity and restricting it to certain purposes. It designates a “privacy by design” protocol that requires companies to ingrain the GDPR rules into their initial design of services.   

The “right to be forgotten” is perhaps the most unique obligation related to the GDPR. This gives any person the right to force platforms to “delink” their name from information that is no longer valid. The Court of Justice of the EU played a key role in shaping this issue through its landmark Google Spain case (2014), in which Mario Costeja Gonzalez, a Spanish citizen, requested that the search engine remove results that linked him to a bankruptcy that had been resolved 15 years prior.  The court judged that Google must honor all requests to pull content proven to be invalid or out-of-date from its search algorithm. 

The GDPR is the world’s toughest data privacy law, and it has a long reach. Any corporation anywhere, if they collect data on EU citizens, can see massive penalties. 

Responding to a rise in cyber breaches and cloud computing, when tracking cookies was becoming insidious, the regulation had an immediate impact. The now ubiquitous “opt-in for cookies” notification is a product of the law, as tech platforms have adhered to its aims even in the US to avoid extraterritoriality issues.   

In this way, the law did succeed in creating a broader global consensus, a “shared vision of human-centric digitization.” As EU Commission VP Josep Borrell described at the time:

The 1948 Universal Declaration of Human Rights established the dignity of the individual, the right to privacy and to non-discrimination, and the freedoms of speech and belief. It is our common duty to make sure that the digital revolution lives up to that promise.

The GDPR’s enforcement arm, the European Data Protection Board, requires that each member state establish a “data protection authority” to enforce its rules. Fines can reach 20 million euros or 4% of a company’s total annual turnover. 

The AI Act does not modify the GDPR but builds on it.  

The Brussels Effect   

Despite massive lobbying against the GDPR before its passing, most of the big tech platforms have now embraced the regime. Meta chose to extend many GDPR protections globally to the company’s 2.8 billion Facebook users. Google revised its privacy policy based on it, and Apple now carries out OS impact assessments globally according to GDPR protocols. Microsoft has gone further, implementing the GDPR’s “privacy by design” and baking it into the early development of its products.

This appears to be yet another example of “The Brussels Effect.” These tech giants know that the size of the EU consumer market is simply too big to ignore. The second-largest economy on earth, Europe has an affluent population of 450 million and a GDP of $17.5 trillion. They enjoyed stunning success: Google is 90% of search in the 27-member union; Apple rakes in a quarter of its global revenue there; and Meta’s Facebook has 410 million monthly active EU users.  

The Brussels Effect can be clearly seen in the adoption of EU laws by foreign nations around the world. As of 2024, more than 150 countries have adopted domestic privacy laws, and most of them resemble the GDPR in some ways. It has essentially become the norm in many parts of the world as governments see it as an easy template for their own regimes.  

This can be seen on every continent. Brazil’s data privacy laws of 2018 emulate the GDPR’s broad definition of personal data. Nigeria’s Data Protection Bill of 2023 often uses exact parlance in sections, though with caveats about public morality. India’s PDPB bill, though withdrawn in 2021, was quite similar. With so many countries now operating with GDPR-like rules, it becomes harder for those nations creating data laws to justify a marked difference from the global norm. 

The effect is even seen in the corporate structure of a few firms. Meta, for example, altered its corporate structure –shifting its Africa, Asia, Australia, and Middle East divisions out of its Irish corporate entity and placing them within its US legal structure. This thus keeps African or Asian users from seeking legal addresses under the EU’s GDPR. 

Anu Bradford has made the point that the Brussels Effect of the GDPR works precisely because it targets the “inelastic” aspect of the market –consumers living in a jurisdiction, and not fleet-footed capital. But it does work to capital’s advantage on one level. Companies always prefer standardization over customization, particularly since compliance is onerous. Customization for too many countries is unappealing, costly, and involves more legal fees for the tech giants. In some way, GDPR does work to tech’s advantage in bringing legal clarity to a large, 27-member state zone.  

The desire for an “adequacy decision” from the EU might also explain the GDPR adoption worldwide. Those nations with privacy laws deemed “adequate” by GDPR standards can be allowed data transfers from the EU. This obviously helps with a foreign nation’s corporate competitiveness, providing more business opportunities in the zone. Canada, New Zealand, Argentina, Uruguay, and Israel are a few of the notable countries granted decisions.  

Ironically, the US doesn’t have an adequacy decision from the EU, a fact that has placed the legality of the data flows between the US and EU in contention and has been the subject of numerous lawsuits.   

AI Convention and AI Act: Velvet Glove, Iron Fist?

The theoretical basis for the AI Act appeared five years ago. In 2019, the European Commission published “The Ethics Guidelines for Trustworthy AI.” This document – which stated that “AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition, or herd humans” – arguably set the course for the AI Act. It stresses the importance of a human-centric artificial intelligence, in which “natural persons” must be able to “override” algorithms when needed to protect “fundamental rights.” 

In June 2023, more than a year before the AI Act was signed, the Council of Europe unveiled its inaugural draft of the “AI Convention on AI and Human Rights.” Comprising 34 articles, the document –like others by the Council– aims to formulate a broader open-ended framework of standards, not just within Europe. Its focus: data privacy, protection against discrimination, and the potential misuse of AI deployment. Like the GDPR, it aims to create a regulatory path that other nations may follow.  

In these articles, we can clearly see the founding principles of the EU AI Act. However, two other articles are also designated: each party must provide effective remedies for human rights violations, and each must have the ability to prohibit those systems that are incompatible with the convention’s core principles.

This EU approach is focused on securing the individual and collective rights of citizens in a digital society. They proactively ensure that often opaque AI processes won’t harm a society’s democratic political culture or trammel fairness in the distribution of its benefits.  

The European Declaration on Digital Rights and Principles for the Digital Decade, adapted in December 2022, proclaims that “people are at the center of the digital transformation” and emphasizes “the importance of democratic functioning of the digital society and economy.” All technological solutions should:

  1. Benefit everyone and improve the lives of all people in the EU.
  2. Technological solutions should also respect people’s rights, enable their exercise and promote solidarity and inclusion.” 

This political statement is interesting in its humanist focus. It identifies “democracy, fairness, and fundamental rights” as key values guiding EU policymaking. 

Pre-eminence: China’s Approach to AI

In contrast to the EU, China has developed its own AI policy, one that is less rights-driven and focused more on sovereignty, economic development, and implementation. It follows from Beijing’s belief, clearly written in both its “Dual Circulation” and “Made in China 2025” policies, that emerging technologies and high-tech manufacturing will be key to twenty-first century dominance.

Due to state funding, powerful tech firms, and select universities like Tsinghua, the country has emerged as a major player in machine learning and AI research. Notable players in the sector include:

Huawei:  AI chips and telecommunications infrastructure.

Baidu:  Autonomous driving / natural language models. 

Alibaba:  E-commerce algorithms / cloud computing 

Tencent:   AI-driven social media & medical imaging / healthcare solutions.

01.AI:   This Chinese unicorn startup is pushing the LLM envelope with its open-source model Yi-34B.

Between 2014 and 2023, China filed over 38,210 AI patents, more than all other nations combined. Even the US military is playing catchup with China’s PLA on the AI front, which is developing a new type of “intelligentized” warfare, looking to create wholly unmanned, swarm combat systems and better situational awareness. The DoD’s Replicator Program is something of a “Hail Mary” effort by the US to get to the Chinese level in AI-enabled swarm drones. 

Over the past several years, China has moved to implement some of the world’s toughest regulations on data and AI.  In contrast to the EU’s focus on state oversight regarding data privacy, fairness, and “human guidance,” Beijing’s policies make frequent reference to the necessary balance between “security” and “development.” For years China has been implementing the public facial recognition systems and “social scoring systems” that are now clearly outlawed by the EU AI Act. More machine learning and artificial intelligence will give these suppressive measures additional teeth.  

As early as 2017, China began placing AI as a new strategic pillar within its national agenda.  That year, the State Council unveiled its “New Generation Artificial Intelligence Development Plan,” with the aim of making the mainland the world’s AI leader by 2030. Like Made in China 2025, this act is comprehensive and focused on harnessing multiple drivers: economic growth, national security, and enhanced social services. The plan’s emphasis is on seizing the strategic initiative, creating the speedy diffusion from theory to application across multiple spheres, and finding dominance through innovation by 2030. 

After ChatGPT exploded on the world stage in late 2022, China was one of the first nations to issue targeted regulations on generative AI, releasing its “Interim Measures for the Management of Generative AI Services.” These set out restrictions on LLM (large language model) training and outputs of LLMs and require AI services “with the capacity for social mobilization” to carry out a security assessment and file pertinent algorithms with state regulators before being made public.  

Because of these “Measures,” since 2023, all LLMs developed by China’s tech platforms must gain state approval before going public. In response to this, Apple pulled nearly a hundred apps that offered AI chatbot service from its China store before the measures became enforced. 

China’s AI strategy –which seeks all developments to align with state objectives while maintaining strict control over information— could further entrench geostrategic splits as it is exported to the Global South. According to Rutgers University Fellow Shaoyu Yun:

Even if China doesn’t outpace the U.S. in developing the latest AI models, its applications can still significantly impact the geopolitical landscape. By integrating AI into areas like biotechnology, industrial engineering, and state security, Beijing can export its controlled AI systems to other authoritarian regimes. This would not only spread China’s model of governance but also consolidate its influence in regions antagonistic to Western ideals.

In this regard, the issue of AI lies not in its novelty but in its strategic deployment in the service of state control. For Yun, China’s approach suggests a fundamental rule in international relations: for new technology to alter a balance of power, it doesn’t need to be pre-eminent or the world’s best. It just needs to be the most effectively wielded.

AI Regulation, American Style

Enforceable regulation does not yet exist in the US at the national level, but there have been developments. In mid-2023 the White House obtained a set of voluntary commitments on AI risk from fifteen big firms at the cutting edge of the industry. It also released its “Blueprint for an AI Bill of Rights” which sets out a preliminary approach to data privacy and safety.  

More prominently, on October 30, 2023, the Biden administration announced its “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”  This order is focused on national security and misinformation: the US government must be informed of any developer tests that show a risk to national security, and the National Institute of Standards and Technology (NIST) will set standards for “red team” testing (i.e. testing to break the AI models pre-launch to expose problems). It also tasks the Commerce Department to create “watermarking” for AI-generated content so that Americans can recognize deepfakes and know that the communications they receive (particularly from government bodies) are authentic.   

In addition, the order created a new entity –the US AI Safety Institute—which will explore AI safety across “national security, public safety, and individual rights.” Housed in the National Institute of Standards and Technology (NIST) that has been created, with a leadership team appointed in April 2024 by the Commerce Department, this institute will not be an enforcement agency, but a policy center.  

Biden’s EO suggests how US policy will unfold: it will be industry-friendly, offering a voluntary shift by business to best practices, and rely on –like what happened with crypto over the past decade—the various executive agencies to craft their own rules (with input from NIST).

Though hailed as a step forward, the EO remains more carrot than stick. The watermarking technologies that the EO points to are not yet built and may be difficult to ensure. Also, the order does not actually require the tech platforms to use these technologies or even require that AI companies adhere to NIST standards or testing methods. Most of the EO relies only on voluntary cooperation.

Unlike the EU’s AI Act which was passed at the highest level of government, or its AI Office, which will be an enforcement agency operative by August 2, 2025, the US Safety Institute appears to be more of a policy center, one that can be marginalized or “institutionally captured” by whatever political party that is in power. 

This approach remains friendly to tech, emphasizes self-regulation, and has no punitive measures. It also ignores the bigger issue of training models to minimize foreseeable harm outside of national security issues. According to chief ethics scientist Margaret Mitchell, this is a “whack-a-mole” approach, responding to emerging problems instead of requiring best data practices for the start: “The biggest concern to me in this is it ignores a lot of work on how to train and develop models to minimize foreseeable harms.”

The US is unlikely to pass any AI legislation at the national level in the foreseeable future. The 118th Congress (2023-2024), notable for its political infighting, may end up as the least productive legislative session in US history.

Interestingly, though, there has been considerable state-level action. Sixteen states have enacted AI legislation, and more than 400 AI bills were introduced at the state level in 2024, six times more than in 2023.

Colorado is the first state in the nation with an enforceable AI law on the books. The Colorado Artificial Intelligence Act is at its core anti-discrimination legislation, focusing on bias caused by AI in the context of a “consequential decision” –specifically, any decision that can “significantly” impact an individual’s legal or economic interests, whether in employment, housing, credit, lending, educational enrollment, legal services, or insurance. (In many ways, it is stricter but also more nebulous than the EU’s restrictions on social scoring, and there is now pushback from Colorado businesses that fear its wide mandate will trigger lawsuits.)

Other major states, including California, Connecticut, New York, and Texas, are starting the process. In February 2024, the California State Legislature introduced Senate Bill 1047, which would require safety testing of AI products before release and would require AI developers to prevent deployers from creating derivative models that could cause harm. In 2023, Connecticut passed Senate Bill 1103, which regulates state procurement of AI tools.

This emerging patchwork of state laws could be tough for companies –even the tech titans– to manage. That is why major players like Microsoft, Google, and OpenAI have all called for regulation at the national level, fearing that the growing number of state laws will crimp the adoption of AI through the compliance burden it creates. According to Adrienne Fischer, a lawyer with Basecamp Legal, a Denver law firm monitoring state AI bills: “This fragmented regulatory environment underscores the call for national laws that will provide a coherent framework for AI usage.”

Conclusion: Techno-Democracies Unite

In the regulatory discourse of both China and the EU, there is always an unspoken actor: the US. Whereas China’s process is designed for national economic success vis-à-vis a truculent American hegemon, the EU’s process is focused on protecting its distinctive culture and rights-centered governance from the overriding dominance of big tech.

Indeed, for the EU, contests with the US over data privacy, digital taxation, and antitrust have been going on for nearly three decades. In many ways, the Europeans have been playing catch-up to the “move fast and break things” libertarianism of US tech since the mid-1990s, when the opening chapter of the internet began.

For decades, the US has urged other nations to adopt a non-regulatory, market-oriented approach to tech. The very first effort at an international consensus on digitization embodied this laissez-faire attitude. In 1997, the Clinton administration’s Framework for Global Electronic Commerce declared that “markets maximize individual choice and individual freedom,” and the 2000 EU-US Joint Statement affirmed that both parties agreed “that the expansion of electronic commerce will be essentially market-led and driven by private initiative.”

However, as the scope and power of the tech platforms have become central to daily life in the developed world, a governance problem has arisen.

Software has now “eaten” many societal processes whole. Digital providers often replace local governments –at least in de facto, operative ways– as rule setters via their terms of service and community norms. As a result, these global tech companies often provide consumers with digital resources more effectively than some smaller nations do, a trend that will only intensify with AI.

Geopolitically, the differences between how authoritarian and democratic nations promote –or weaponize– their AI industries will only grow. Because China operates as a state-capitalist society, its regulatory model reflects its focus on top-down control and national power. Its historical sense of “Middle Kingdom” centrality has kept it at odds with the US-led, Bretton Woods-derived international order.

As it seeks to become the world leader in most strategic technologies within the decade, China is pouring money into AI development. An Australian think tank focused on tech competitiveness recently reported that the mainland now leads the world in research on almost 90% of critical technologies, having essentially switched places with the US over two decades thanks to heavy state funding. China is also making a concerted push to bring the developing world to its tech table. Via Huawei, it has moved quickly to outmaneuver the West and finance the digital buildouts of many developing-world governments, with ambitious projects in Asia and Africa; Indonesia’s “100 smart cities movement” is a prime example.

Domestically, the US often operates through post facto litigation and piecemeal action by individual states. At the national level, its money-driven political process is buffeted by powerful lobbying, which often means the deepest pockets hold sway. Without proactive regulation, reckless AI initiatives risk eroding privacy, enabling covert social scoring, and quietly disenfranchising citizens. That could breed disaffection with the Western model.

This has seriously troubling implications for the United States and its allies. As Anu Bradford has suggested: “the delays that come with democratic rulemaking . . . allow China to quickly operationalize in the absence of democracy.”

EU regulation may save the US from itself in many ways. Europe’s AI Act could help forge a broader consensus among Western powers and their allies. Technological cooperation among allies will be essential for geopolitical reasons, but also for better visibility and coherence at the business level. Corporations would prefer to avoid the compliance costs, obviously, but the AI Act could also foster the same kind of “adequacy decision” coherence that the GDPR created for value-sharing companies seeking market access.

Creating a broader consensus among the Western democracies and their allies is exactly what is needed as the systemic rivalry with China deepens. A rights-driven model will be more compelling to a larger swath of the world, including Japan, South Korea, Brazil, and India. Just as the European Convention on Human Rights was both a statement of values and a rebuke of what was happening behind the Iron Curtain, the AI Act makes crystal clear the contrast between a 21st-century, rights-driven “techno-democracy” and a 21st-century, state-centric “techno-autocracy.”

Featured/Headline Image Caption and Citation: Computer code, Image source from Flickr | CC License, no changes made
