
Amnesty: AI surveillance risks “supercharging” US deportations


Automated surveillance tools powered by artificial intelligence (AI) are being used to track migrants, refugees and asylum seekers in the US, raising serious human rights concerns, according to a report by Amnesty International.

Amnesty’s analysis of documents obtained from the Department of Homeland Security (DHS) highlights how two systems – Babel Street’s Babel X and Palantir’s Immigration OS – provide automated, mass surveillance capabilities that are being used to underpin the government’s aggressive immigration enforcement operations.

The organization claims the tools are part of the State Department’s AI-driven “Catch and Revoke” initiative, which combines social media monitoring, visa status tracking, and automated threat assessments of foreign nationals on visas. The practice has already been criticized for violating the First Amendment rights of people living in the United States.

Amnesty warns that the speed and scale at which these technologies can identify people and infer their behavior could lead to mass visa revocations and deportations.

“There is something profoundly disturbing about the US government deploying invasive AI-powered technologies in the context of a mass deportation program,” said Erika Guevara-Rosas, senior director of research, advocacy, policy and campaigns at Amnesty International.

“The enforcement of the Catch and Revoke initiative, facilitated by AI technologies, risks arbitrary and unlawful visa revocations, detention, deportation, and human rights abuses.”

Tools

Babel X, developed by Babel Street, has been used by U.S. Customs and Border Protection (CBP) since at least 2019. It collects a huge amount of personal information, including names, emails, phone numbers, IP addresses, employment records, and mobile advertising IDs that reveal device locations. The tool can also monitor social media posts.

Amnesty says this information is fed into AI systems that scan social media for “terrorism”-related content, which can then be used to decide whether an individual’s visa should be revoked. Once a visa is revoked, Immigration and Customs Enforcement (ICE) agents can be sent to deport the person in question.

Palantir’s Immigration Lifecycle Operating System (Immigration OS) was launched following a $30 million contract with ICE in April 2025. The system integrates data sets across agencies, allowing ICE to create electronic case files, link investigations, and track personal information about immigrants. Its updated features include improved targeting of arrests based on ICE priorities, real-time monitoring of “self-deportations,” and identification of priority deportation cases, particularly visa overstayers.

According to Amnesty, the use of such tools has been central to US authorities’ increased deportations. However, the organization warns that drawing on multiple public and private data sources without proper oversight also increases the risk of unlawful activity.

The NGO contacted both companies: Babel Street did not provide any comment, while Palantir said its product was not used to power the administration’s “Catch and Revoke” efforts.

The report also notes that probabilistic systems like these often rely on behavioral inferences that can be discriminatory. For example, pro-Palestinian content can be falsely classified as anti-Semitic, reinforcing existing prejudices.

“Algorithms are socially constructed, and our world is built on systemic racism and historical discrimination,” said Petra Molnar, director of the Refugee Law Lab at York University, who specializes in migration and human rights. “These tools will replicate the biases already inherent in the immigration system, not to mention create new ones based on highly problematic assumptions about human behavior.”

Molnar stressed that underlying it all is a “layer of systemic discrimination”, based on the assumption that “people on the move are somehow a threat”.

“Ultimately, this is about dehumanization. That is the central narrative being pushed by the Trump administration,” she said. “There has been exponential growth in technologies and surveillance mechanisms that are increasingly weaponized against people on the move and mobile communities.”

Amnesty also criticizes Palantir and Babel Street for failing to conduct adequate human rights due diligence, arguing that the companies are responsible for ensuring their technologies are not deployed in ways that violate human rights.

Molnar pointed to the Ruggie Principles – the UN framework that sets out corporate responsibilities in this area: “This is an independent standard for private companies. They must respect international legal principles when it comes to developing and deploying technology.”

For Molnar, an ideal solution would involve a “robust human rights-respecting framework”, including human rights and data impact assessments carried out across a project’s entire life cycle. But she stressed the need for “public awareness of what these companies are doing” and “divestment from certain companies”.

“There needs to be an open dialogue between the people who actually develop the technology and the affected communities, because right now there is this wall between the people who develop the technology and the people the technology harms,” she said.

“These are trends I have seen all over the world. It is not just in the United States, but I think the United States is the latest manifestation.”

Computer Weekly contacted Palantir and Babel Street about the concerns raised by Amnesty’s report and asked a number of follow-up questions, including how the firms work to reduce algorithmic bias, what measures they take to avoid negative human rights impacts from their deployments, and whether either company has consulted affected migrant communities.

Neither had responded by the time of publication.

UK parallels

Similar patterns are emerging in the UK. Human rights campaigners at the Migrants’ Rights Network (MRN) have investigated the use of AI at the border and highlighted its growing role in surveillance technologies such as facial recognition.

“AI technologies are being used under the guise of efficiency. They allow border immigration systems to become automated, reducing the need for human intervention and making borders less dependent on patrols or physical walls,” said an MRN spokesperson.

The organization argues that the government’s reliance on private contractors risks creating and exacerbating an already “digitally hostile environment.” But, they add, it is often difficult to obtain information about how these technologies are being used.

For example, in his investigation into the Home Office’s deployment of Anduril Maritime Sentry Towers on the southeast coast of England, researcher Samuel Storey had to submit 27 separate Freedom of Information (FOI) requests. While the Home Office claimed that the towers were designed to support “environmental protection,” Storey claims that they are used to monitor migrant crossings between Britain and France.

“The FOI system is an extension of state secrecy. It’s not really a tool for freedom of information, but rather an extension of the state’s ability to withhold or disclose,” he said.

MRN has also raised issues of data access and privacy.

“It’s been incredibly difficult to figure out, but this data will be stored somewhere, and we suspect it will be stored in one of these Amazon Web Services centers because they have huge contracts with the government. Personal data is one of the most valuable things that companies can have,” the spokesperson said.

They added that if private companies were to store the data, those companies would have access to that data, and additional entities, such as foreign governments, could also technically access it.

A previous Computer Weekly report warned that the new eVisa system could be used to track migrants and support immigration enforcement.
