Code and Control: How AI is Reshaping US Border Control


In recent years, artificial intelligence (AI) has become a key component of the US Department of Homeland Security’s (DHS) technological strategy, particularly in border enforcement and immigration control. From facial recognition at airports to algorithmic cargo screening and predictive threat assessments, the DHS has embraced AI in the name of national security and operational innovation. Given strong public concern about surveillance overreach and algorithmic bias, this expansion demands careful scrutiny and meaningful safeguards. The intersection of AI, border control, and human rights raises urgent questions: Who benefits from these technologies? Who is harmed? And how are systemic inequalities reinforced through digital governance?

The AI Push at DHS

According to the DHS AI Use Case Inventory, Customs and Border Protection (CBP) has the highest number of distinct AI applications among all DHS branches, with 75 identified use cases across various operational contexts. These systems are designed to scan cargo, validate identities, detect anomalies, and assess potential threats at ports of entry. Other DHS branches, such as US Citizenship and Immigration Services (USCIS) and Immigration and Customs Enforcement (ICE), also rely heavily on AI technologies to process applications, track migration trends, and support enforcement actions.

Each of these systems is classified as either deployed or in a pre-deployment phase. Within CBP, for example, 31 use cases are currently deployed, 13 of which are marked as potentially impacting public safety and rights. Notably, many of these uses involve the collection of biometric data, including facial recognition and facial capture technologies, which are known to have disproportionately high error rates for people of color and raise heightened privacy concerns.

AI initiatives within DHS are formally governed by frameworks such as the Office of Management and Budget’s (OMB) Memorandum M-24-10 and President Biden’s Executive Order 14110 on trustworthy AI. These policies require agencies to designate Chief AI Officers, establish governance boards, and implement risk management practices. Under these principles, DHS’s AI must be “lawful, mission-appropriate, and mission-enhancing” as well as “safe, secure, responsible, trustworthy, and human-centered.” Oversight of these AI deployments is managed in part by the DHS Privacy Office and the Office for Civil Rights and Civil Liberties (CRCL).

Yet, despite these procedural safeguards, critics argue that the system lacks genuine human rights accountability, particularly when AI intersects with immigration enforcement, a domain already saturated with structural racism and xenophobia.

Racialized Implications of AI at the Border

Civil society organizations, including the Promise Institute for Human Rights and the Black Alliance for Just Immigration (BAJI), have voiced grave concerns about the racial justice implications of AI at the border. In a 2023 thematic hearing before the Inter-American Commission on Human Rights (IACHR), these groups highlighted how AI-driven border technologies exacerbate existing racial discrimination, particularly against Black migrants. The Promise Institute’s submission warned that AI-enabled policies of deterrence and border externalization perpetuate the same patterns of exclusion and abuse that have historically plagued US immigration systems.

The testimony argued that without a racial justice lens, the deployment of border technologies only intensifies the racial discrimination already at the heart of the US immigration system. Technologies like facial recognition, which studies have shown to be less accurate for people with darker skin tones, are particularly prone to reinforcing bias when used in enforcement scenarios. 
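The scale of modern border screening makes even small accuracy gaps consequential. The sketch below uses purely hypothetical false-match rates and traveler volumes (no DHS figures are public at this granularity) to show how a per-group error disparity translates into very different numbers of wrongly flagged travelers.

```python
# Hypothetical illustration: how a disparity in false-match rates
# scales into wrongful flags at screening volume. The rates and
# traveler counts are invented for illustration; they are not
# measurements of any deployed DHS system.

TRAVELERS_PER_GROUP = 100_000

# NIST's demographic testing has found elevated false-match rates for
# darker-skinned faces in many algorithms; these specific values are made up.
false_match_rate = {"group_A": 0.001, "group_B": 0.010}

for group, rate in false_match_rate.items():
    expected_false_flags = TRAVELERS_PER_GROUP * rate
    print(f"{group}: ~{expected_false_flags:,.0f} travelers wrongly flagged")

# group_A: ~100 travelers wrongly flagged
# group_B: ~1,000 travelers wrongly flagged
```

A rate gap that looks small in absolute terms becomes a tenfold gap in wrongful flags once applied at port-of-entry volumes.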

These concerns are not hypothetical. The use of AI in immigration contexts has already resulted in wrongful detentions, misidentifications, and an erosion of due process. Predictive algorithms used to assess risk or prioritize cases can absorb and reproduce the bias in their training data, creating feedback loops that disproportionately target racialized communities.
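The feedback-loop dynamic can be made concrete with a toy simulation. In the hypothetical sketch below, two groups carry identical underlying risk, but one starts out over-represented in historical records; because enforcement attention is allocated according to recorded history (here, super-proportionally, mimicking scoring systems that concentrate resources on the highest-scoring group), the initial disparity compounds rather than washing out. All parameters are invented for illustration and do not model any actual DHS system.

```python
import random

# Toy feedback-loop simulation. Both groups have the same true rate of
# flaggable conduct, but group B starts over-represented in historical
# records. Each round, inspections are allocated by recorded history,
# and only inspected people can generate new records, so the initial
# bias compounds. All numbers are illustrative.

TRUE_RATE = 0.05              # identical underlying rate for both groups
INSPECTIONS_PER_ROUND = 1_000
CONCENTRATION = 1.5           # >1: attention concentrates on high scores

records = {"A": 100, "B": 200}  # biased starting data
random.seed(0)

for round_num in range(1, 11):
    weights = {g: n ** CONCENTRATION for g, n in records.items()}
    total_weight = sum(weights.values())
    for group in records:
        inspections = int(INSPECTIONS_PER_ROUND * weights[group] / total_weight)
        hits = sum(random.random() < TRUE_RATE for _ in range(inspections))
        records[group] += hits  # new records arise only where inspections went
    print(f"round {round_num}: B/A record ratio = {records['B'] / records['A']:.2f}")

# The ratio drifts upward round after round even though the two groups
# are identical in fact: the biased history, not behavior, drives the gap.
```

The point is not the specific numbers but the structure: when past enforcement output becomes future enforcement input, disparities in the data self-reinforce without anyone intending it.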

Governance Gaps and Power Imbalances

While the DHS has proposed its AI Governance Board as a mechanism for oversight, the board is composed primarily of senior DHS officials, including representatives from CBP, ICE, and other enforcement agencies. Independent human rights experts, public defenders, and civil society actors are absent. This power imbalance limits the scope of internal accountability and fails to represent the perspectives of those most affected by AI policies.

The lack of external oversight is compounded by leadership volatility. The inaugural Chief AI Officer of DHS, Eric Hysen, served during the Biden administration until January 2025. His successor, David Larrimore, departed only months later, in April 2025, leaving the role vacant at a critical time. This instability raises questions about the continuity of ethical governance and long-term strategic oversight of DHS’s AI strategy.

Meanwhile, several use cases in the DHS inventory are either too new to assess for rights implications or have been prematurely deemed non-impacting despite clear potential concerns. Facial recognition technologies, for example, a known flashpoint for civil liberties violations, are in active use but have not consistently been flagged as rights-impacting in DHS’s internal assessments.

Legal Compliance vs. Ethical Responsibility

The DHS insists that its AI practices are lawful and compliant with privacy and civil liberties policies. But legality does not equal justice. Policies like M-24-10, while well-intentioned, focus heavily on internal procedures and documentation rather than substantive human rights protections. They do not require external review by civil society or provide meaningful mechanisms for affected individuals to seek redress. The US government’s approach to AI in border control often assumes that technological development leads to better governance. This assumption ignores the broader socio-political context in which these technologies operate, one marked by decades of exclusionary immigration laws, racial profiling, and militarization of the southern border.

International Implications and Global Trends

The United States is not alone in deploying AI for border enforcement. Countries such as the United Kingdom, Australia, and members of the European Union have similarly adopted AI technologies to monitor and manage migration. This global trend raises concerns about an emerging international norm in which surveillance and control are prioritized over human rights. As wealthier nations expand AI surveillance at their borders, the brunt of the harm falls disproportionately on migrants and asylum seekers from the Global South, regions historically shaped by economic exploitation, climate disruption, and political instability. These harms compound the precarity such migrants already face. Recognizing this global dimension is crucial to building AI governance frameworks that are both ethically grounded and internationally accountable.

The Need for Rights-Based AI

A genuinely responsible AI strategy for border enforcement must go beyond internal compliance checklists. It must incorporate binding human rights standards, transparency mechanisms, and participatory design processes that center the experiences of marginalized communities. The United States has an opportunity to lead by example in aligning its national security strategies with democratic values and human dignity. But that can only happen if the deployment of AI at the border is held accountable not just to bureaucratic standards, but to the people it affects most deeply.

Artificial intelligence, while often beneficial, is not neutral. When embedded in immigration enforcement, it risks reinforcing the same inequalities that civil rights movements have long fought to dismantle. As the DHS continues to expand its AI capabilities, the need for critical scrutiny and structural reform becomes more urgent. Technology should not be a tool for deepening exclusion, but a means to uphold justice. That goal demands transparency, accountability, and a fundamental rethinking of what it means to secure a border in a way that respects human rights.

Featured/Headline Image Caption and Citation: “AI Code,” image sourced from European Alternatives, CC License, no changes made