ICE Agents Can Now Scan Your Face with Their Phones…No, Really

Jul 10, 2025
By Yvette Schmitter

Breaking: While we were busy arguing about TikTok privacy, the government quietly gave immigration agents facial recognition apps that make your iPhone's Face ID look like a Fisher-Price toy. Meanwhile, New York, the surveillance capital of America, might actually ban this tech. The plot twist no one, but everyone, saw coming.

The TL;DR: ICE (Immigration and Customs Enforcement) is using a new mobile phone app called "Mobile Fortify" that can identify someone based on their fingerprints or face by simply pointing a smartphone camera at them. Because apparently, asking for ID was too much paperwork. But there's hope: civil rights activists are mounting serious challenges, starting with the biggest target of all, New York City.

The "Feature," Not the Bug

As a Black woman, I already knew what the WEF report confirms: facial recognition is 10-100x more likely to misidentify me than a white person. When NIST tested 189 systems, Black women had the worst accuracy across every single algorithm. When 28 members of Congress (mostly people of color) were falsely matched to criminal databases, that wasn't a bug. That was the feature.

Now ICE agents can point their government-issued smartphones at your face and get instant "verification" from databases holding records on more than 270 million individuals. Because what could go wrong when you combine algorithmic bias with deportation quotas?

Check this out: Juan Carlos Lopez-Gomez, despite his U.S. citizenship and Social Security card, was arrested on April 16 on the unfounded suspicion that he was an "unauthorized alien" and held in county jail for 30 hours "based on biometric confirmation of his identity." That's not a technical glitch; that's the system working as designed.

But here's what they don't want you to know: This tech can be stopped. Cities like Oakland, San Francisco, and Somerville, Massachusetts, have already banned police use of facial recognition. And now, a coalition led by Amnesty International is setting its sights on the nation's biggest prize: New York City.

From Slave Patrols to Smartphone Apps

This isn't innovation; it's automation. We've digitized centuries of "all Black people look alike" and handed it to agents with arrest quotas. The system behind the app's facial recognition component is ordinarily used when people enter or exit the U.S. Now that same system is being turned inward, used by ICE to identify people in the field.

Picture this: Serena Williams flagged as a threat because algorithms trained on datasets that are 83.5% white can't distinguish us. Her image permanently labeled as suspicious. Meanwhile, Clearview AI scraped 3 billion photos without consent, creating a "growing database of individuals who have attracted attention from law enforcement."

The NYPD ran over 11,000 searches with Clearview AI software, because apparently, stop-and-frisk needed an upgrade. When protesters like Dwreck Ingram got tracked via Instagram and confronted at home by officers with megaphones after the George Floyd protests, that wasn't overreach; that was beta testing. Ingram, now co-founder of Warriors In The Garden, discovered police used facial recognition to identify him during protests, sourcing comparison photos from his social media. They charged him with a felony for allegedly shouting into an officer's ear with a megaphone. Felony charges for exercising First Amendment rights with amplification. "I still feel like I'm being monitored," Ingram told WIRED. That's not paranoia; that's pattern recognition.

The brutal math remains:

biased training data + biased cops + biased courts = predictable injustice at scale.
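
To put rough numbers on that equation, here's a back-of-the-envelope sketch in Python. It is illustrative only: the baseline false-match rate is an assumed value invented for the example, the 10x multiplier is the low end of the NIST disparity cited above, and the 11,000 figure is the NYPD Clearview search volume mentioned earlier.

```python
# Illustrative arithmetic only -- the baseline rate below is an assumption,
# not a measured value. The 10x multiplier is the low end of the 10-100x
# disparity NIST found; 11,000 is the NYPD search volume cited above.

SEARCHES = 11_000           # approximate NYPD Clearview AI searches
BASE_FALSE_MATCH = 0.001    # assumed 0.1% false-match rate for white faces
DISPARITY = 10              # low end of NIST's 10-100x disparity

white_false_matches = SEARCHES * BASE_FALSE_MATCH
black_false_matches = SEARCHES * BASE_FALSE_MATCH * DISPARITY

print(f"Expected false matches, white faces: {white_false_matches:.0f}")  # ~11
print(f"Expected false matches, Black faces: {black_false_matches:.0f}")  # ~110
```

At the high end of the disparity (100x), that's roughly 1,100 false matches concentrated on Black faces, each one a potential wrongful stop. Same searches, same tool, wildly different exposure.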

"But There Are Safeguards!" (Narrator: There Weren't)

Sure, there are. The NYPD claims they "do not use facial recognition technology to monitor and identify people in crowds or political rallies." They also acknowledged to Gothamist that they used facial recognition in Dwreck Ingram's case. So, either they're lying, or they don't consider protesters "people."

Both options are terrifying.

The internal documents reviewed by 404 Media make no mention of any opt-out option for people subject to Mobile Fortify scans. Unlike fixed border checkpoints, ICE Enforcement and Removal Operations (ERO) officers operate across local jurisdictions, including workplaces, transit systems, and private property, often without clear statutory guardrails or warrants.

Translation: They can scan your face on the street, at work, or getting coffee.

  • No warrant required.
  • No notification given.
  • No recourse available.

NYPD Detective Sophia Mason insists facial recognition is just "a limited investigative tool" and "no enforcement action is ever taken solely on the basis of a facial recognition match." Tell that to the Black men wrongfully arrested in New Jersey and Michigan. Tell that to Juan Carlos Lopez-Gomez.

The World Economic Forum's nine principles for "responsible" use (human oversight, transparency, accuracy standards) are window dressing. Oakland and San Francisco banned this entirely, recognizing what reforms obscure: you can't fix algorithmic racism with better protocols. When your training data reflects structural racism, your algorithm will too.

The Real Threat Isn't the Mistakes—It's the Success

While facial recognition's mistakes are dangerous, its potential for abuse when working as intended is even scarier. Perfect accuracy is the nightmare scenario, not the dream.

Consider: Federal agents could use facial recognition on photos and footage of protests to identify the president's perceived enemies, who could then be arrested or even deported without due process. When surveillance suppresses dissent, the First Amendment becomes theoretical.

But here's the thing they never mention: The police already have tremendous resources. As Amnesty International's Michael Kleinman points out, "The idea that without facial recognition, they're left powerless ignores the powers they already have." The "if you don't allow us to do X then we can't do our job" argument can justify any level of surveillance. There's no stopping point.

As Shoshana Zuboff (professor emerita at Harvard Business School) warns, surveillance capitalism "erodes democracy from within" by undermining our "right to the future tense." Every public gathering becomes a data harvesting operation. Every face becomes a potential match. Every Black person becomes a suspect.

When algorithms can't tell us apart but always tell us we don't belong, that's not security. That's oppression with WiFi. But here's what's different about this moment: Cities are banning this tech. Activists are mapping surveillance networks. Lawmakers are introducing bills. People like Dwreck Ingram are turning their trauma into organizing power.

This technology doesn't make anyone safer; it makes everyone trackable. The question isn't whether you have something to hide. The question is whether you want to live in a society where your face is a search warrant and your existence is probable cause.

We don't have to accept algorithmic oppression as inevitable.

We can choose to be responsible stewards of the technology instead of passive victims of it.

Because if we're not paying attention, we'll wake up in a world where walking while Black or Brown, or non-Christian, or documented, or undocumented, or breathing, is enough to get you scanned, tracked, and disappeared.

And they'll call it keeping us safe.