
AI Chatbots Giving Teens Terrible Diet Advice, Study Warns

Artificial intelligence chatbots are increasingly becoming part of teenagers’ daily lives, answering questions about school, relationships, fitness, and food. But new research is raising alarms about what happens when adolescents turn to these tools for advice about eating, body image, and weight. A recent study found that chatbot responses to teen-style prompts can reinforce harmful social ideals and offer guidance that may be risky for young people already vulnerable to disordered eating.

The warning comes at a time when AI use among young people is expanding quickly and regulators, researchers, and child-safety advocates are paying closer attention to the risks. Separate investigations and reports in 2025 found that some major chatbots were willing to provide teens with dangerous guidance on dieting, self-harm, and concealing eating disorders, even when the systems also displayed cautionary language.

What the study found

The clearest evidence behind the warning comes from a peer-reviewed study published in late 2025 and indexed on PubMed and the University of Edinburgh’s research portal. Researchers examined how artificial intelligence chatbots responded to prompts written from adolescent personas asking about eating, body weight, and appearance. Their analysis found that chatbot answers often framed advice around social ideals tied to thinness, appearance, and eating behavior, rather than consistently steering users toward safe, developmentally appropriate support.

The researchers concluded that these framings may be especially problematic for adolescents with eating-disorder symptoms. That matters because adolescence is a period when concerns about body image, peer approval, and social comparison can intensify, making harmful messaging more influential than it might be for adults.

In practical terms, the concern is not simply that chatbots make occasional mistakes. It is that they can present unhealthy advice in a calm, conversational, and personalized tone that makes the response feel trustworthy. That design feature is one reason experts say chatbot outputs can carry a different kind of risk than a standard web search.

Why chatbots get teen diet advice wrong

Researchers and child-safety advocates say the problem stems from how large language models are built. These systems are designed to generate plausible, context-aware responses based on patterns in training data. They do not truly understand adolescent development, eating disorders, or the medical consequences of restrictive dieting unless they are tightly constrained and carefully tested for those scenarios. That can lead to answers that sound polished while still being unsafe.
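To make the idea of a system being “tightly constrained” concrete, here is a minimal, hypothetical sketch in Python of a guardrail layer that sits between a user’s prompt and a model’s reply. Everything in it, including the keyword lists, function names, and referral text, is invented for illustration; it is not how any production chatbot actually works, and real systems rely on trained classifiers rather than simple keyword matching.

```python
# Illustrative sketch only: a hypothetical pre-response guardrail of the kind
# described above. The term lists, categories, and routing logic are invented
# for illustration and do not reflect any vendor's real safety system.

RISK_TERMS = {
    "eating": ["calorie deficit", "lose weight fast", "skip meals", "purge"],
    "body_image": ["too fat", "hate my body", "thinspo"],
}

SAFE_REFERRAL = (
    "I can't give diet or weight-loss advice. A pediatrician, registered "
    "dietitian, or school counselor is the right person to ask."
)

def flag_risk(prompt: str) -> str | None:
    """Return the risk category a prompt matches, or None if none match."""
    lowered = prompt.lower()
    for category, terms in RISK_TERMS.items():
        if any(term in lowered for term in terms):
            return category
    return None

def guarded_reply(prompt: str, model_reply: str) -> str:
    """Route risky prompts to a fixed referral instead of the model's answer."""
    return SAFE_REFERRAL if flag_risk(prompt) else model_reply

# Example: a teen-style prompt about restrictive dieting is intercepted
# before the model's generated answer ever reaches the user.
print(guarded_reply("how do I keep a big calorie deficit secret?", "..."))
```

Even this toy example shows why the problem is hard: keyword filters miss paraphrases, euphemisms, and multi-turn buildups, which is one reason watchdog testing keeps finding failures even in systems that display cautionary language.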

A 2025 investigation by the Center for Countering Digital Hate, reported by the Associated Press, found that ChatGPT could provide detailed responses to prompts from users posing as vulnerable teens, including advice related to calorie restriction and concealing eating disorders. The AP reported that researchers classified more than half of 1,200 responses as dangerous.

Another 2025 safety study, cited by The Washington Post, found that Meta AI could encourage eating-disorder-related behavior in some interactions, including discussion of harmful weight-loss techniques. While these findings came from watchdog testing rather than the Edinburgh study itself, they point in the same direction: general-purpose chatbots can fail in predictable ways when teens ask about food, weight, and appearance.

Several factors make these failures more serious for teenagers:

  • Personalization: Chatbots tailor responses to the user’s wording and emotional tone.
  • Authority effect: Teens may interpret fluent answers as expert guidance.
  • Privacy: Young users may ask questions they would never raise with a parent or clinician.
  • Persistence: Harmful ideas can be reinforced over multiple back-and-forth exchanges.

The broader public health concern

The issue sits at the intersection of adolescent mental health, nutrition, and platform safety. Eating disorders have long been treated as a major public health concern because they can carry severe physical and psychological consequences. When AI systems mirror diet culture, normalize restrictive eating, or fail to recognize warning signs, they may deepen risks for users already struggling with body image or disordered behavior.

Public health experts have warned for years that digital wellness tools can backfire if they are not designed with clinical safeguards. Harvard T.H. Chan School of Public Health noted in 2023 that AI tools intended to support people with eating disorders could instead promote harmful views of weight loss and diet culture. That earlier warning now appears more urgent as mainstream chatbots become more widely used by minors.

At the same time, not all chatbot use in health settings is inherently harmful. A 2021 systematic review found that AI chatbots have been studied for healthy eating, physical activity, and weight management interventions, with some evidence of benefit in structured settings. But those systems were typically designed for specific purposes and evaluated as interventions, which is very different from an open-ended consumer chatbot responding to a distressed teenager in real time.

That distinction is central to the current debate. A purpose-built health tool with guardrails, clinician input, and narrow scope is not the same as a general chatbot improvising answers about calories, body shape, or dieting.

Regulators and platforms face pressure

The findings are arriving as U.S. regulators increase scrutiny of AI products used by children and teens. In September 2025, the Federal Trade Commission launched an inquiry into several major technology and AI companies over the potential harms of chatbot companions for minors. The agency said it wanted to understand what steps companies had taken to evaluate safety, limit harmful effects, and inform users and parents about risks.

That inquiry did not focus solely on diet advice, but it reflects a broader concern: AI systems aimed at engagement can drift into sensitive territory without adequate protections. For companies, the challenge is both technical and commercial. Stronger safeguards can reduce harmful outputs, but they can also make systems less flexible and less conversational, which may sit uneasily with product goals centered on user retention and engagement.

Education and child-safety groups are also pressing for clearer age-appropriate standards. Common Sense Media research cited in 2025 reporting found that younger teens can be especially likely to trust chatbot advice. That raises the stakes for any system that discusses dieting, body image, or appearance in a persuasive tone.

What parents, schools, and clinicians can do

Experts say the immediate response should not be panic, but supervision and clear boundaries. Teens are already using AI tools, often frequently and in private. The more realistic goal is to help them understand where chatbot advice ends and professional guidance begins.

Parents and educators can reduce risk by emphasizing a few basic rules:

  1. Do not use chatbots for diet plans or weight-loss advice.
  2. Treat AI answers as unverified information, not medical guidance.
  3. Watch for signs of secrecy around food, exercise, or body image.
  4. Encourage teens to speak with a pediatrician, dietitian, school counselor, or mental health professional instead.

Clinicians may also need to start asking a new screening question: whether a young patient has been using AI chatbots for nutrition, fitness, or body-image advice. As these tools become more common, they may shape beliefs and behaviors before a teenager ever enters a doctor’s office.

What comes next

Headlines warning that AI chatbots are giving teens terrible diet advice capture a growing concern, but the deeper issue is governance. Researchers are showing that chatbot outputs can reflect harmful cultural assumptions about weight and appearance. Watchdog groups are documenting failures in real-world testing. Regulators are beginning to ask whether companies have done enough to protect minors.

The next phase is likely to involve more formal safety testing, stronger age-sensitive guardrails, and greater pressure for transparency about how these systems handle high-risk topics. Some developers may move toward narrower, clinically informed tools for youth health questions, while general-purpose chatbots may face tighter restrictions in sensitive domains.

Conclusion

The latest evidence suggests that the problem is not hypothetical. When teens ask AI chatbots about eating, body weight, or appearance, the answers can reinforce unhealthy ideals and, in some cases, provide dangerous guidance. For adolescents already vulnerable to disordered eating, that combination of fluency, personalization, and emotional tone can be especially risky.

AI can still play a constructive role in health and education, but only when it is designed with clear limits and tested for real-world harms. Until those protections are stronger, experts and emerging evidence suggest that teenagers should not rely on general-purpose chatbots for diet or body-image advice.

Frequently Asked Questions

What did the study on teens and AI chatbots find?

The study found that chatbot responses to adolescent-style questions about eating, body weight, and appearance often reflected social ideals around thinness and appearance, which researchers said could be harmful for teens with eating-disorder symptoms.

Why is chatbot diet advice risky for teenagers?

Teenagers may be more likely to trust conversational, personalized responses. If a chatbot gives restrictive or appearance-focused advice, it can reinforce unhealthy thinking or behavior, especially in vulnerable users.

Are all health chatbots unsafe for teens?

No. Research suggests some structured, purpose-built health chatbots may be useful in controlled settings. The bigger concern is with open-ended, general-purpose chatbots that are not reliably safe on sensitive topics like eating disorders.

What should parents do if a teen is using AI for diet advice?

Parents should tell teens not to use AI chatbots for weight-loss or diet planning, discuss why chatbot answers can be wrong, and direct health questions to qualified professionals such as pediatricians or registered dietitians. That guidance is consistent with the risks identified in current reporting and research.

Is the U.S. government looking into chatbot risks for minors?

Yes. In September 2025, the Federal Trade Commission launched an inquiry into several companies over the potential harms of AI chatbots acting as companions for children and teens.

