In November 2025, the Health Sector Coordinating Council (HSCC) published a preview of its upcoming AI-cybersecurity guidance for the health sector. This guidance represents a coordinated, sector-wide attempt to help healthcare organizations manage the growing risks associated with AI adoption across clinical, operational, and administrative functions.
What’s being introduced is not a single document but a suite of resources, organized across five major domains, or “workstreams,” that together form a broad yet structured roadmap for safe, responsible, and resilient AI use in healthcare.
Go deeper: HSCC previews upcoming AI cybersecurity guidance for the health sector
Healthcare organizations are using AI-driven tools: from predictive analytics and scheduling to diagnostic support, clinical decision support, imaging, and administrative workflows. As the study Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges notes, “Artificial Intelligence (AI) holds promise for transforming the delivery system to become safer, more effective, less wasteful, and more patient-centered.” The study also notes that AI has “demonstrated success in preventing sepsis, improving diagnostic accuracy in radiology and pathology, and reducing clinicians’ documentation burden.” However, AI in healthcare goes beyond convenience and efficiency: it involves handling highly sensitive data, patient privacy, and outcomes that have a direct impact on human lives.
Traditional cybersecurity safeguards, such as firewalls, network segmentation, and user access controls, remain important, but they were built with conventional IT architectures in mind, not with complex AI systems that may involve machine learning models, embedded device AI, third-party algorithms, and chained vendor-supplied services. This has expanded the attack surface, with vulnerabilities arising from data storage or transmission, model training data, supply chains, third-party dependencies, and operational misuse.
Given the potential impact on patient safety, data confidentiality, and continuity of care, a failure to address AI risks could become a patient-safety crisis. The HSCC’s guidance aims to establish a unified, sector-wide framework that helps healthcare organizations adopt AI responsibly by defining clear governance structures, strengthening cyber-defense capabilities, improving supply-chain transparency, and promoting secure-by-design practices across all AI-enabled systems. By providing practical tools, common terminology, and step-by-step recommendations, the guidance seeks to ensure that AI enhances clinical care rather than introducing new vulnerabilities.
According to HSCC’s announcement and preview materials, here is how HSCC has broken down the complex challenge of AI cybersecurity into five workstreams.
This workstream focuses on building a common language and understanding across the health sector.
The objective of this workstream is to ensure that clinical, administrative, and cybersecurity teams share consistent definitions of terms like “machine learning,” “model drift,” “adversarial attack,” “LLM,” and more. This shared vocabulary helps avoid misunderstandings, ensures coherent risk assessments, and fosters effective cross-functional dialogue.
More broadly, this workstream includes training and awareness programs so stakeholders at all levels, from executives to frontline clinicians, understand what AI can do, what risks it poses, and how to engage safely.
Recognizing that AI introduces new cyber-threat vectors, this workstream aims to build practical, operational playbooks for detecting, responding to, and recovering from AI-related incidents. The guidance will cover not only “headline” AI systems like large language models (LLMs) but also predictive ML systems and the embedded AI driving medical devices.
Key deliverables:
AI deployment needs technical safeguards, organizational structure, accountability, and lifecycle oversight. The Governance workstream aims to deliver exactly that.
The guidance is expected to include:
Governance is critical for compliance and ensuring consistent, transparent, responsible AI deployment across an organization.
Another major focus is on medical devices that embed AI, including diagnostic tools, imaging systems, and monitoring devices. Rather than retrofitting security after deployment, HSCC advocates that security be built in from the design stage onward.
Key elements of this workstream:
For medical-device makers and health systems alike, this could significantly raise the bar for safety, transparency, and trust in AI-enabled equipment.
Read also: Security-by-design principles in breach-ready systems
Many healthcare organizations rely on AI tools, models, or services developed by external vendors such as cloud providers, analytics platforms, device manufacturers, and software vendors. The “third-party” supply chain often hides significant risk. As a study by the Ponemon Institute in collaboration with the Health Sector Coordinating Council found, many organizations remain unaware of the potential risks posed by their suppliers. The study notes that “Only 19 percent of respondents (IT and IT security practitioners) say their organizations have a complete inventory of their suppliers of physical goods, business-critical services and/or third-party information technology.” This workstream aims to bring that risk into view.
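The Ponemon finding above suggests that simply building and maintaining such an inventory is a concrete first step. As a minimal sketch (not drawn from the HSCC guidance; every name, field, and rule here is an illustrative assumption), an AI asset inventory might track each system's vendor, whether it touches protected health information, and whether a vendor security attestation is on file, then flag entries that warrant supply-chain review:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One AI-enabled system in the organization's inventory.

    Field names are illustrative assumptions, not HSCC terminology.
    """
    name: str
    vendor: str                      # "internal" for home-grown models
    handles_phi: bool                # touches protected health information?
    third_party_components: list[str] = field(default_factory=list)
    vendor_attestation_on_file: bool = False

def flag_review_candidates(inventory: list[AIAsset]) -> list[str]:
    """Return names of assets that warrant supply-chain review:
    anything that handles PHI and relies on a third party (an external
    vendor or embedded third-party components) without a security
    attestation on file."""
    return [
        a.name for a in inventory
        if a.handles_phi
        and (a.vendor != "internal" or a.third_party_components)
        and not a.vendor_attestation_on_file
    ]

inventory = [
    AIAsset("sepsis-predictor", "internal", True,
            third_party_components=["open-source ML runtime"]),
    AIAsset("radiology-triage", "Acme Imaging", True,
            vendor_attestation_on_file=True),
    AIAsset("bed-scheduling", "SchedCo", False),
]

print(flag_review_candidates(inventory))  # → ['sepsis-predictor']
```

Even a flat list like this answers the question most organizations in the Ponemon study could not: which AI-enabled systems depend on third parties, and which of those dependencies have been vetted.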
Planned guidance includes:
Even before the full guidance becomes available, health organizations (hospitals, clinics, device makers, health-IT vendors) can, and should, begin preparing.
Here are some practical measures organizations can take in anticipation:
With HSCC’s preview release, the healthcare sector now has a clear signal: AI is a transformative force that comes with serious cybersecurity, privacy, and operational risks.
The full guidance suite is expected to roll out during the first quarter of 2026. Once published, these documents could become the gold standard for AI deployment in healthcare.
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
The documents are not regulatory mandates. However, they are developed in collaboration with public- and private-sector experts and are expected to become widely recognized best practices. Regulators, auditors, and insurance providers may increasingly use these frameworks as benchmarks of due diligence.
HSCC’s work is unique in combining cybersecurity, medical-device safety, clinical risk, and operational resilience, all rooted in the realities of healthcare environments.