Artificial intelligence is becoming a routine part of American life, from search engines and customer service to health tools and workplace software. Yet public comfort with the technology is not keeping pace with its spread. A comparison making the rounds online — that people hate AI even more than they hate ICE — captures a broader reality in U.S. opinion research: Americans remain deeply skeptical of AI, and in some surveys that skepticism runs deeper than disapproval of major government institutions, including U.S. Immigration and Customs Enforcement.
That does not mean the public views AI and ICE in the same way, or for the same reasons. ICE is judged through politics, immigration enforcement, and civil liberties. AI is judged through trust, safety, jobs, misinformation, and accountability. But recent polling shows both face serious image problems — and AI’s trust deficit is proving especially broad, cutting across industries and daily use cases.
The phrase “People Hate AI Even More Than They Hate ICE, Poll Finds” reflects a comparison between separate public opinion surveys rather than a single head-to-head poll. In one recent national survey from Marquette Law School, 60% of U.S. adults said they disapproved of the work of ICE, while 40% approved. The poll was conducted from January 21 to January 28, 2026, among 1,003 adults nationwide, with a margin of error of plus or minus 3.4 percentage points.
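For readers unfamiliar with how a margin of error relates to sample size, the standard simple-random-sampling formula can be sketched in a few lines. This is an illustrative calculation, not the pollster's own method: the reported ±3.4 points is slightly larger than the textbook figure for n = 1,003 because published polls typically account for weighting and design effects.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Simple-random-sampling margin of error at roughly 95% confidence.

    Uses the worst-case proportion p = 0.5. Real polls usually report a
    somewhat larger figure once weighting and design effects are applied.
    """
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(1003)
print(f"+/- {moe * 100:.1f} percentage points")  # -> +/- 3.1 percentage points
```

The gap between this textbook 3.1 points and the reported 3.4 points reflects the survey's design effect, a standard adjustment in published polling.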
By contrast, several recent surveys show AI drawing even wider distrust. A YouGov survey published in late 2025 found that while most Americans use AI in some form, no sector tested achieved a net positive trust score. The survey said skepticism was especially high in sensitive areas such as finance and healthcare, and slightly more Americans reported that their trust in AI had declined rather than increased.
Pew Research Center also found a sharp gap between public sentiment and expert opinion. In a report published on April 3, 2025, Pew said only 11% of U.S. adults were more excited than concerned about the increased use of AI in daily life, compared with 47% of AI experts. Pew’s findings were based on a survey of 5,410 U.S. adults conducted August 12 to August 18, 2024, and a separate survey of 1,013 AI experts.
Taken together, the data suggests that Americans may not simply be cautious about AI. Many are actively uneasy about how it is being deployed, who controls it, and whether safeguards are keeping up.
Public distrust of AI is rooted in several overlapping concerns. Unlike many earlier technologies, AI is arriving not as a single product but as a system embedded across work, education, media, healthcare, finance, and government. That breadth makes the risks feel harder to contain.
Polling points to a few recurring fears:

- Misinformation and errors, especially in high-stakes areas such as healthcare and finance
- Bias or discrimination in automated decisions about hiring, pay, insurance, or credit
- Job displacement as AI moves deeper into white-collar and service work
- Privacy and surveillance
- Weak regulation and unclear accountability when systems fail
Healthcare is a particularly revealing example. A KFF poll released in August 2024 found that 63% of the public were not confident that AI chatbots provide accurate health information. Even among AI users, 56% said they lacked confidence in chatbot accuracy for health questions.
YouGov found a similar pattern across sectors. Americans may use AI tools for convenience, but that does not translate into trust. This gap matters because adoption without confidence can produce a fragile market: people rely on systems they do not fully believe in, while regulators and companies struggle to reassure them.
The comparison is provocative, but it needs context. ICE is a politically charged federal agency whose approval and disapproval ratings are closely tied to views on immigration policy, border enforcement, and presidential politics. AI, by contrast, is not one institution. It is a fast-growing technological ecosystem spanning major companies, startups, public agencies, and consumer platforms.
That distinction matters because distrust of AI is more diffuse. Americans are not reacting to one office, one leader, or one policy. They are reacting to a sense that the technology is moving faster than the rules, and faster than public understanding. In that sense, the claim that people hate AI even more than they hate ICE is less a literal ranking of enemies than a measure of the scale of public anxiety around automation.
There is also a paradox at work. AI use is rising even as trust remains weak. According to YouGov, most Americans now use AI, but trust has not improved overall. That suggests familiarity alone is not solving the legitimacy problem.
One of the clearest findings in recent research is the divide between those building AI and those living with it. Pew found experts are far more optimistic than the general public about AI’s role in daily life. That mismatch may be fueling the backlash. When industry leaders emphasize productivity and innovation, many consumers hear risk, opacity, and loss of control.
The same report found limited confidence in government regulation. A majority of both the public and AI experts said they had "not too much" or no confidence that the government would regulate AI effectively. That is a striking point of agreement in an otherwise divided debate.
For technology companies, the message is clear: adoption metrics alone are not enough. If users feel pressured into AI systems they do not trust, backlash can grow quickly. That can affect product rollout, brand reputation, and regulatory scrutiny.
For policymakers, the polling raises a harder question. Americans appear to want stronger guardrails, but they are not convinced institutions can deliver them. Rasmussen reported in 2025 that 72% of voters were concerned about AI, while 77% supported laws requiring AI systems to protect constitutional rights such as free speech and religious freedom. While that survey comes from a different polling organization with its own methodology, it reinforces the broader pattern of concern and demand for oversight.
For workers, distrust often centers on practical consequences rather than abstract ethics. People worry about whether AI will replace tasks, monitor performance, or make flawed decisions about hiring, pay, insurance, or credit. Those concerns are likely to intensify as AI tools move deeper into white-collar and service-sector jobs.
The broader significance of the “People Hate AI Even More Than They Hate ICE, Poll Finds” framing is that it captures a rare bipartisan discomfort. Americans disagree sharply on immigration, but AI skepticism often cuts across ideological lines, even if the reasons differ. Some fear censorship and centralized control. Others fear discrimination, labor disruption, or corporate concentration.
That makes AI politically volatile. It is now central to economic policy, national security, education, and media, yet it lacks the public legitimacy that usually supports large-scale technological change. If trust does not improve, future fights over AI governance could become more intense, not less.
According to Pew Research Center, the public is far less enthusiastic than experts about AI’s growing role in everyday life, a gap that may shape future regulation and adoption. According to YouGov, there is still no major sector in which more Americans trust AI than distrust it, underscoring how broad the skepticism has become.
The idea that people hate AI even more than they hate ICE is a sharp headline, but the underlying message is serious. In the United States, AI is expanding faster than public trust. Recent polling shows Americans use the technology, encounter it often, and still remain uneasy about its accuracy, fairness, and oversight.
That distrust does not mean AI adoption will stop. It does mean the next phase of the AI economy will depend less on novelty and more on accountability. Companies, regulators, and public institutions now face the same challenge: proving that AI can be useful without becoming unaccountable. Until that happens, skepticism is likely to remain one of the defining facts of the U.S. AI debate.
What does the "people hate AI even more than they hate ICE" headline mean?
It refers to a comparison between separate polls showing strong public disapproval of ICE and even broader distrust of AI in many contexts. It is shorthand for the depth of public skepticism, not a literal single-question ranking.

Was there a single poll comparing AI and ICE directly?
The available reporting points to separate surveys rather than one direct head-to-head poll. Marquette measured ICE approval, while YouGov, Pew, and others measured trust and concern around AI.

Why do Americans distrust AI?
Common reasons include fears about misinformation, bias, job loss, weak regulation, privacy, and errors in high-stakes areas such as healthcare and finance.

Do Americans use AI even though they distrust it?
Yes. Recent polling shows most Americans use AI in some form, even though trust remains low and in some cases has declined.

Is the U.S. more skeptical of AI than other countries?
Some international polling suggests the U.S. is more skeptical of AI than many other countries surveyed globally, including China.

Could regulation restore trust?
It could, but polling suggests many Americans are not yet confident that government will regulate AI effectively. Better transparency, clearer rules, and accountability for harms are likely to be central to any trust recovery.