Apple co-founder Steve Wozniak has again questioned whether artificial intelligence can match human judgment, saying in public remarks reported in March 2025 that he does not use AI much and does not believe machines can replace people. The comments, delivered during events in Barcelona and echoed in later appearances, matter because Wozniak has long been both an early computing pioneer and a persistent critic of overclaiming in AI. His position adds a human-centered counterpoint to a market increasingly built around automation, copilots, and generative models.
💡 Wozniak’s core argument is consistent across multiple appearances: AI can assist people with ideas and productivity, but it does not equal human intelligence, responsibility, or emotional judgment, according to remarks reported in 2023, in 2025, and at Lehigh University in February 2026.
March 2025 Remarks Put Human Judgment at the Center
Wozniak’s latest widely reported skepticism came during Mobile World Congress week in Barcelona in early March 2025. Dataconomy reported on March 4, 2025, that Wozniak warned AI has a “dark side,” objected to its overly broad use, and argued that people should express their own ideas rather than outsource their thinking to software. Times of India separately reported from the same event that he said, “I trust the I, but not the A,” underscoring his distinction between intelligence as a human trait and “artificial” outputs generated by machines.
That framing is important because it does not reject AI outright. Instead, Wozniak presents AI as a tool that can support a person who is already thinking critically. LehighValleyNews reported in February 2026 that he told an audience AI can give an intelligent human many ideas, but that no one has yet given machines true human intelligence. In that sense, his skepticism is less about software capability in narrow tasks and more about the gap between pattern generation and accountable reasoning.
Key Public Positions From Steve Wozniak on AI
| Date | Venue/Report | Main Point |
|---|---|---|
| May 2023 | BBC comments cited by Times of India | Humans must take responsibility for AI-generated content |
| March 4, 2025 | Barcelona/Talent Arena coverage | AI has risks and should not replace personal expression |
| February 2026 | Lehigh University event | AI can provide ideas, but machines still lack human intelligence |
Source: Times of India, Dataconomy, LehighValleyNews | Accessed March 24, 2026
Why “I Don’t Use AI Much” Carries Weight in 2026
Wozniak’s skepticism stands out because he is not an outsider to computing. He co-founded Apple in 1976 with Steve Jobs and helped build the Apple I, one of the foundational products of the personal computer era, as noted by LehighValleyNews. When a figure with that background says he does not rely heavily on AI, the statement reads less like technophobia and more like a warning against dependency.
His argument has stayed remarkably stable over time. In 2023, Times of India summarized his BBC-linked comments as saying that AI lacks emotions and therefore cannot replace humans, and that it makes scams and misinformation harder to detect. In 2025 and 2026, the emphasis shifted toward critical thinking and authorship: people should use AI to generate options, but not surrender judgment. That continuity suggests Wozniak is not reacting to one product cycle. He is responding to a broader cultural shift in which AI systems are increasingly marketed as substitutes rather than assistants.
Timeline of Wozniak’s AI Skepticism
March 2023: Wozniak signs a public letter calling for a pause in training the most powerful AI systems, citing broader risks around advanced models.
May 2023: He says humans must remain responsible for AI output and warns about scams and misinformation, according to reports citing a BBC interview.
March 4, 2025: At Barcelona events around MWC, he warns AI has a dark side and says widespread use can weaken independent expression.
February 2026: At Lehigh University, he says AI can help generate ideas but still falls short of human intelligence.
How Wozniak Separates Assistance From Replacement
The most useful way to read Wozniak’s comments is as a distinction between augmentation and replacement. He appears comfortable with software helping people brainstorm, code, or organize information. LehighValleyNews reported that he described AI as a source of ideas for an intelligent human who then decides where to go next. That is a narrow but practical endorsement.
What he rejects is the stronger claim that AI can stand in for human accountability. Times of India reported that he said a human has to take responsibility for what AI generates. That matters in education, journalism, law, medicine, and software development, where output quality is only part of the issue. The other part is ownership: who verifies facts, who catches bias, who accepts liability, and who understands the social consequences of an error.
His concern about critical thinking also fits that framework. Ara’s English-language report from March 2025 said Wozniak warned that people who use AI without questioning it can lose the ability to think critically. That is not a technical complaint about model architecture. It is a behavioral warning about what happens when convenience displaces reasoning.
ℹ️ Wozniak’s skepticism is not anti-technology. Across interviews and public appearances, he describes AI as useful for generating ideas, while arguing that emotion, responsibility, and critical judgment remain human functions.
2023 to 2026: A Consistent Warning on Scams, Hype, and Overreach
Wozniak’s public record shows he has moved between caution and optimism on AI over the past decade, but his recent line is more disciplined than alarmist. Older interviews from 2015 and 2017 captured both fears about computers overtaking humanity and later remarks that he was less worried about robots taking over. By contrast, the 2023-2026 period centers on concrete risks: misinformation, criminal misuse, loss of critical thinking, and exaggerated claims that AI equals human intelligence.
That makes his latest comments more relevant to the present AI cycle. The debate in 2026 is no longer about whether AI exists as a useful tool. It clearly does. The harder question is whether organizations and consumers are using it in ways that preserve human review. Wozniak’s answer appears to be that they should, because the technology still lacks the emotional depth, self-awareness, and responsibility that define human decision-making.
Human Skills Wozniak Says AI Does Not Replace
| Capability | Why It Matters | Source Basis |
|---|---|---|
| Responsibility | Someone must verify and own AI output | 2023 comments reported by Times of India |
| Critical thinking | Users can become passive if they stop questioning outputs | March 2025 report by Ara |
| Human intelligence and judgment | AI can suggest ideas but does not equal human reasoning | February 2026 Lehigh event |
Source: Times of India, Ara, LehighValleyNews | Accessed March 24, 2026
What Wozniak’s View Means for the AI Debate in the US
For US readers, Wozniak’s position lands in the middle of a polarized conversation. One side markets AI as an efficiency engine that can automate white-collar work at scale. The other warns about existential risk. Wozniak’s public comments point to a narrower, more immediate issue: people may hand over too much cognitive work to systems that still require supervision. That is a practical concern for schools, employers, developers, and media companies.
His message is also unusually durable. From signing the 2023 pause letter to warning in 2025 about widespread AI use and repeating in 2026 that machines still do not possess human intelligence, he has kept returning to the same principle: AI may be useful, but humans still matter because they remain the source of judgment, ethics, and accountability.
Frequently Asked Questions
What did Steve Wozniak say about AI replacing humans?
Across reports in 2023, 2025, and 2026, Wozniak said AI can help generate ideas but does not replace human intelligence, emotion, or responsibility. He has argued that a human must remain accountable for AI-generated output.
Did Wozniak say he does not use AI much?
Multiple 2025 reports tied to his Barcelona appearances describe him as skeptical of heavy AI use and emphasize that he does not rely on it extensively. The broader thrust of those remarks is that people should think for themselves rather than outsource expression to AI tools.
Is Steve Wozniak against AI entirely?
No. His public comments indicate a more limited position. He has said AI can be useful for giving people ideas and supporting creative or technical work, but he rejects the claim that it can fully substitute for human judgment.
Why does Wozniak think humans still matter in the AI era?
He points to responsibility, critical thinking, and emotional understanding. In his view, these are human functions that AI does not genuinely possess, even when it produces fluent or convincing outputs.
Has Wozniak warned about AI risks before?
Yes. He signed the 2023 open letter calling for a pause on training the most powerful AI systems and has repeatedly warned that AI can aid scams, misinformation, and overreliance on machine-generated content.
Conclusion
Steve Wozniak’s skepticism about AI is not a rejection of innovation. It is a reminder that useful software and human replacement are not the same thing. His comments from March 2025 through February 2026 show a consistent view: AI can assist, suggest, and accelerate, but it still depends on people to judge truth, accept responsibility, and think critically. In a market full of larger claims, that may be the most durable point of all.
Disclaimer: This article is for informational purposes only. Information may have changed since publication. Always verify information independently and consult qualified professionals for specific advice.