ICE let an AI decide who gets a badge, and the AI said “sure, why not”
Congratulations, an LLM liked your resume! Here is your gun and your badge!
ICE asked a large language model to review applicants’ resumes. Sadly, the LLM didn’t understand the difference between mall cops and actual law enforcement. It seems the AI fast-tracked anyone who used the word “officer” in any context.
Rather than assign a human to sort new recruits into ICE's two training programs, an expedited course for former law enforcement officers (LEOs) and full training for everyone else, ICE's human resources department outsourced the task to an untested LLM. The model was supposed to scan recruits' resumes and place each applicant in the right program; instead, it flagged the "majority of new applicants" for the fast-track course regardless of their prior experience, officials familiar with the system told NBC.
“They were using AI to scan resumes and found out a bunch of the people who were LEOs weren’t LEOs,” one official said.
Basically, the AI model had approved any resume containing the word "officer" for the LEO program, which opened the expedited training course to anyone from a "mall security officer" to applicants who simply wrote about their aspirations to become ICE officers.
The fast-track system was meant for people who knew what they were doing. Instead, we’ve got untrained enforcers blasting away at civilians with a gold star from an AI. You might think someone reviewed the computer output, but no one did.
When terror and chaos are what you want, details like understanding the law aren’t really important.