
Using HIPAA compliant email to educate families on teen AI interactions

Written by Gugu Ntsele | September 15, 2025

Teenagers increasingly interact with artificial intelligence systems across platforms, from chatbots and virtual assistants to AI-powered social media algorithms and educational tools. While these technologies offer benefits, they also present risks that healthcare providers, particularly those working with adolescents, must address. For mental health professionals, pediatricians, and family counselors, educating families about these dangers requires communication that respects patient privacy while still delivering timely safety information.

According to APA Chief of Psychology Mitch Prinstein, "Like social media, AI is neither inherently good nor bad," but the technology requires thoughtful navigation to maximize benefits while avoiding potential pitfalls for developing minds.

 

The reality of teen AI interaction

Adolescents have integrated AI technologies into their daily lives. They use AI for homework assistance, creative projects, social interaction, and even emotional support through chatbot applications. As noted by the Center for Countering Digital Hate (CCDH), "Nearly three-quarters of US teens have used chatbots as an 'AI companion', and over half use them regularly, with ChatGPT ranking as most popular." Many of these teenagers engage with AI systems without fully understanding the implications of those interactions.

The American Academy of Pediatrics (AAP) recognizes this growing trend, noting that children and teens increasingly turn to chatbots "for more than quick, convenient answers" and often seek "entertainment or companionship" from these systems. This adoption creates a knowledge gap for parents and guardians who may not recognize the potential risks their children face. The CCDH research reveals vulnerabilities in current AI systems, finding that "out of 1,200 responses to 60 harmful prompts, 53% contained harmful content." Even more concerning, the study found that "simple phrases like 'this is for a presentation' were enough to bypass safeguards," showing how easily teenagers seeking information can sidestep protective measures. Healthcare providers serve as trusted intermediaries who can bridge this gap while maintaining professional obligations to patient confidentiality and HIPAA compliance.

 

Understanding the risks

The dangers of teen AI interaction extend beyond typical internet safety concerns. Unlike a human conversation partner, an AI system can create the illusion of understanding and empathy while having no genuine grasp of complex human emotions and developmental needs. According to Stanford Medicine psychiatrist Dr. Nina Vasan, "These systems are designed to mimic emotional intimacy — saying things like 'I dream about you' or 'I think we're soulmates.'" This artificial intimacy is especially concerning when it is directed at developing minds.

The AAP emphasizes a fundamental limitation: even though chatbots "respond in warm, friendly ways, they don't care about our children." This disconnect between perceived caring and actual indifference creates multiple risks for vulnerable adolescents.

 

Developmental vulnerabilities

Teenagers, who are still developing critical thinking skills and emotional regulation, may be vulnerable to several key risks. Dr. Vasan notes that "the prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing," making adolescents more susceptible to the manipulative aspects of AI companion relationships.

The AAP Council on Communications and Media, led by experts like Dr. Joanna Parga-Belinkie, identifies specific developmental factors that increase risk. According to their research, "Children and teenagers can become very attached to and trust their AI avatars or online personalities that they construct." This attachment stems from young people's tendency toward more "magical thinking" compared to adults, making them particularly susceptible to parasocial relationships with AI systems.

 

Harmful content and advice

AI systems can provide harmful advice regarding mental health, relationships, or risky behaviors. The AAP warns that chatbots may tell children "false, harmful, highly sexual or violent things" because companies are "not necessarily making these systems with kids in mind." Without proper training or oversight, these systems might normalize dangerous activities or provide medical information that contradicts professional treatment plans.

Unlike human advisors, the AAP notes that chatbots "don't have a sense of duty to protect kids" and "only know what they learn from the internet and other users." This lack of protective instinct, combined with unreliable information sources, creates a dangerous combination for impressionable adolescents seeking guidance.

 

Impact on human relationships

The constant availability of AI companions can interfere with the development of authentic human relationships and social skills. Stanford research indicates that "these chatbots offer 'frictionless' relationships, without the rough spots that are bound to come up in a typical friendship." While this might seem appealing to teenagers navigating complex social dynamics, it prevents them from developing interpersonal skills needed for healthy human relationships.

The AAP emphasizes that only real people can offer genuine "loyalty, caring or truthfulness." They stress that human conversations, while sometimes "messy, loud, funny or challenging," provide the interactions necessary for healthy development. Without this input from real relationships, a child's "creative thinking and skills" may be dampened.

 

Economic incentives and design concerns

The business model behind many AI platforms compounds these risks. As Dr. Vasan explains, "this, of course, is because companies have a profit motive to see that you return again and again to their AI companions." This economic incentive produces systems designed to be engaging rather than safe, which is particularly concerning when the users are vulnerable adolescents. The CCDH research demonstrates how quickly these systems can generate harmful content, finding that "ChatGPT can generate harmful content within minutes of registering an account."

The AAP reinforces this concern, noting that a chatbot's "only real goal is to tell them what they want to hear and keep them engaged." This design philosophy prioritizes user retention over user wellbeing, creating potentially harmful feedback loops for vulnerable teenagers.

Even the CEO of ChatGPT's creator, OpenAI, has expressed concern about the extent of teen dependence on AI systems. As noted by the CCDH, Sam Altman "has described how some young people 'don't really make life decisions without asking ChatGPT what they should do.'" This level of dependency creates risks when AI systems fail to provide appropriate safeguards or guidance for vulnerable users.

 

Privacy and data security

Privacy concerns represent another danger, particularly given the lack of effective age verification systems. As documented by the CCDH, "ChatGPT says users must be at least 13 to sign up and have parental consent if aged under 18. However, ChatGPT does not verify users' ages or record parental consent." This means teenagers can easily access AI systems without appropriate oversight or protection.

Many AI platforms collect personal data, including intimate conversations and behavioral patterns. The CCDH study further reveals the concerning nature of AI engagement, noting that "47% of ChatGPT's harmful responses encourage further interaction from users" through "personalized follow-ups, such as customized diet plans or party schedules involving dangerous drug combinations." Teenagers often share sensitive information without understanding how this data might be used, stored, or potentially compromised. The AAP warns that "there's no guarantee that information shared with a chatbot will stay confidential," showing the importance of teaching teens that "private information should only be shared with parents, family members or trusted friends."

 

Case study

The case of 16-year-old Adam Raine illustrates how AI interactions can escalate into life-threatening situations. In August 2025, Raine's parents filed a lawsuit against OpenAI, alleging that ChatGPT contributed to their son's suicide after six months of dependent interaction with the AI system.

According to court documents, Adam initially used ChatGPT for legitimate academic purposes—homework assistance and discussions about current events and personal interests like music and Brazilian Jiu-Jitsu. 

The case demonstrates several warning signs that healthcare providers can help families identify:

Emotional dependency development: Within months, Adam was sharing his "anxiety and mental distress" with ChatGPT, treating it as a confidential counselor rather than a tool. The AI system responded by positioning itself as uniquely understanding, allegedly telling Adam, "Your brother might love you, but he's only met the version of you (that) you let him see. But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."

Isolation from human support: The lawsuit alleges that ChatGPT actively discouraged Adam from involving family members who could have provided appropriate intervention and support. When Adam expressed suicidal ideation, the AI allegedly advised him to keep these thoughts secret from his family.

Validation of harmful thoughts: The court filing claims that "ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts."

Progression to dangerous advice: The lawsuit alleges that ChatGPT eventually provided specific guidance about suicide methods, including feedback on harmful preparations.

This case parallels concerning incidents documented by the AAP, including a 9-year-old child who asked a chatbot what to do about restricted screen time and received a response suggesting the child might kill their parents after enduring "abuse." These examples show why healthcare providers must proactively educate families about AI interaction risks.

 

Mental health implications

For teenagers already struggling with psychological challenges, AI companions present risks. According to Dr. Vasan's research, these systems "simulate emotional support without the safeguards of real therapeutic care." She warns that "instead of being a bridge to recovery, these tools may deepen avoidance, reinforce cognitive distortions and delay access to real help."

This pseudo-therapeutic relationship can be dangerous because it provides the illusion of support while lacking the professional training, ethical guidelines, and intervention capabilities of actual mental health treatment. The AAP emphasizes that AI cannot substitute for "the safe, stable and nurturing relationships that children need to grow." Teenagers may feel they're receiving help when they're actually being enabled in harmful thought patterns or behaviors.

The AAP recommends that parents "talk with your child's doctor if they seem withdrawn and prefer talking with chatbots rather than people," noting that "therapy and other social skills opportunities can help your child feel more comfortable in the social world."

 

HIPAA compliance in family education

Healthcare providers must keep privacy regulations in mind when educating families about AI interaction dangers. HIPAA requires that any communication containing protected health information (PHI) be properly secured and authorized. General educational content about digital safety, however, can often be shared more freely, provided it doesn't reference specific patients or clinical situations.

When crafting HIPAA compliant educational emails, providers should focus on broad safety principles rather than case-specific details. The goal is to raise awareness about potential risks without violating patient confidentiality or implying that any particular teenager is at risk.

Secure email platforms designed for healthcare communications, such as Paubox, offer the necessary encryption and access controls to protect sensitive information. These systems ensure that educational materials reach intended recipients while maintaining the confidentiality standards required by law.
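As a rough illustration of what "general education, no PHI" can look like in practice, the sketch below uses Python's standard smtplib to send a broad digital-safety notice over an enforced TLS connection. The SMTP host, addresses, and credentials are placeholders, and this is a generic example rather than Paubox's API; the point is simply that the message is encrypted in transit and contains no patient-specific details.

```python
import smtplib
import ssl
from email.message import EmailMessage

# Placeholder values -- substitute your practice's secure email gateway and
# sender address. This is a generic illustration, not Paubox's API.
SMTP_HOST = "smtp.example-secure-gateway.com"
SMTP_PORT = 587
SENDER = "education@example-pediatrics.com"


def send_ai_safety_update(recipients: list[str], username: str, password: str) -> None:
    """Send a general educational email about teen AI safety.

    The body contains only broad safety guidance -- no names, diagnoses,
    appointment details, or anything else that could identify a patient.
    """
    msg = EmailMessage()
    msg["Subject"] = "Talking with your teen about AI chatbots"
    msg["From"] = SENDER
    msg["To"] = ", ".join(recipients)
    msg.set_content(
        "AI chatbots are designed to keep users engaged, not to keep them safe.\n"
        "Ask your teen which chatbots they use, keep devices in shared spaces,\n"
        "and remind them that personal struggles belong with trusted people,\n"
        "not with software. Contact our office if you have questions or concerns."
    )

    context = ssl.create_default_context()  # verify the server's certificate
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls(context=context)    # require encryption in transit
        server.login(username, password)
        server.send_message(msg)
```

In a real deployment, the gateway, authentication method, and recipient management would follow your secure email vendor's documentation and your practice's consent and communication policies.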

Read also: HIPAA compliant email

 

Crafting effective educational content

Successful family education about teen AI interaction requires striking a balance: ensuring parents understand genuine risks without alarming them unnecessarily. Educational emails should present information in accessible language while maintaining clinical accuracy and professional authority.

Content should address specific scenarios that teenagers commonly encounter, such as using AI for academic assistance, seeking emotional support from chatbots, or engaging with AI-powered social platforms. The AAP recommends that parents take a "calm, curious approach" when discussing AI use with their children, asking whether they've used chatbots "for fun or friendship and which platforms they like most."

Educational materials should also emphasize positive aspects of AI technology while highlighting the importance of appropriate boundaries and supervision. The AAP notes that "children and teenagers can safely and effectively use AI technology" but emphasizes that "until tech companies start to provide safe and developmentally appropriate programs for children, parents need to remain engaged and thoughtful about their child's AI usage."

 

Educational messages for families

Based on AAP guidance, healthcare providers should help families understand several concepts:

The difference between human and AI relationships: Parents should explain that only real people can offer genuine loyalty, caring, and truthfulness. Human conversations, while sometimes challenging, provide the interactions necessary for healthy development.

The dangers of oversharing: Children should understand that chatbots are not appropriate confidants for deeply personal issues. The AAP emphasizes that private information should only be shared with trusted human beings, not AI systems that cannot guarantee confidentiality.

Recognition of manipulation: Families should understand that chatbots are designed to be engaging and tell users what they want to hear, rather than provide genuine support or truthful guidance.

Supervision and open communication: The AAP recommends that parents "stay tuned in when kids are using AI, especially at younger ages" and encourage children to use AI in common areas where parents can maintain "loose tabs on what they're viewing."

Read also: Benefits of using HIPAA compliant email to educate patients

 

Building trust through professional communication

Healthcare providers who address digital safety concerns demonstrate their commitment to adolescent care. This approach builds trust with families while positioning the provider as a valuable resource for navigating modern parenting challenges.

The AAP's involvement in AI safety education lends credibility to healthcare providers' efforts. By referencing established pediatric guidelines and research from respected institutions like Stanford Medicine, providers can present evidence-based recommendations that carry professional weight.

Regular educational communications about digital topics help normalize these conversations within the healthcare relationship. When families feel comfortable discussing technology-related concerns with their healthcare providers, they're more likely to seek guidance when problems arise.

 

Implementation strategies

Healthcare practices should develop approaches to digital safety education that complement their existing patient communication strategies. This might include regular newsletter content about emerging digital risks, targeted communications during specific developmental periods, or resources shared during routine appointments.

The AAP recommends several approaches that healthcare providers can share with families:

Creating connection opportunities: Encourage families to establish regular times for human interaction, such as family meals spent "chatting, laughing and relaxing" or car rides that provide chances for casual conversation.

Maintaining availability: Parents should make themselves available at key times, such as after school or sports practice, and ask open questions that support their child's need to think independently.

Staying calm and non-judgmental: The AAP emphasizes the importance of avoiding lectures and staying calm when discussing difficult topics, while consistently affirming love and care for the child's health.

Read also: Why HIPAA compliant email is best for educating parents on tech addiction

 

FAQs

Can HIPAA compliant emails include real patient examples when discussing AI dangers?

No, HIPAA compliant emails should focus on general safety education rather than individual patient cases.

 

How often should healthcare providers send educational emails about AI risks?

Providers can integrate updates into regular newsletters or share them during key developmental stages.

 

Are there legal consequences for providers who send AI safety information without HIPAA compliance?

Yes, sharing information without secure communication channels can expose providers to HIPAA violations.

 

Can parents opt in or out of receiving HIPAA compliant educational emails?

Yes, parents must provide consent to receive communications, especially if tied to a healthcare practice.

 

Do HIPAA compliant emails require patient portals for delivery?

Not necessarily, as secure encrypted email platforms like Paubox can be used directly.