Strengthening healthcare security using HSCC’s AI guidance
Tshedimoso Makhene
December 08, 2025
In November 2025, the Health Sector Coordinating Council (HSCC) published a preview of its upcoming AI cybersecurity guidance for the health sector. This guidance represents a coordinated, sector-wide attempt to help healthcare organizations manage the growing risks associated with AI adoption across clinical, operational, and administrative functions.
What’s being introduced is not a single document but a suite of resources, organized across five major domains, or “workstreams,” that together form a broad yet structured roadmap for safe, responsible, and resilient AI use in healthcare.
Go deeper: HSCC previews upcoming AI cybersecurity guidance for the health sector
Why HSCC’s AI cybersecurity guidance matters
Healthcare organizations are using AI-driven tools for everything from predictive analytics and scheduling to diagnostic support, clinical decision support, imaging, and administrative workflows. As the study Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges notes, “Artificial Intelligence (AI) holds promise for transforming the delivery system to become safer, more effective, less wasteful, and more patient-centered.” The study also notes that AI has “demonstrated success in preventing sepsis, improving diagnostic accuracy in radiology and pathology, and reducing clinicians’ documentation burden.” However, AI in healthcare goes beyond convenience and efficiency: it involves handling highly sensitive data, protecting patient privacy, and producing outcomes that have a direct impact on human lives.
Traditional cybersecurity safeguards, such as firewalls, network segmentation, and user access controls, remain important, but they were built with conventional IT architectures in mind, not with complex AI systems that may involve machine learning models, embedded device AI, third-party algorithms, and chained vendor-supplied services. The result is an expanded attack surface, with vulnerabilities arising from data storage and transmission, model training data, supply chains, third-party dependencies, and operational misuse.
Given the potential impact on patient safety, data confidentiality, and continuity of care, a failure to address AI risks could become a patient-safety crisis. The HSCC’s guidance aims to establish a unified, sector-wide framework that helps healthcare organizations adopt AI responsibly by defining clear governance structures, strengthening cyber-defense capabilities, improving supply-chain transparency, and promoting secure-by-design practices across all AI-enabled systems. By providing practical tools, common terminology, and step-by-step recommendations, the guidance seeks to ensure that AI enhances clinical care rather than introducing new vulnerabilities.
What the preview reveals
According to HSCC’s announcement and preview materials, the council has broken the complex challenge of AI cybersecurity into five workstreams.
Education and enablement
This workstream focuses on building a common language and understanding across the health sector.
The objective of this workstream is to ensure that clinical, administrative, and cybersecurity teams share consistent definitions of terms like “machine learning,” “model drift,” “adversarial attack,” “LLM,” and more. This shared vocabulary helps avoid misunderstandings, ensures coherent risk assessments, and fosters effective cross-functional dialogue.
More broadly, this workstream includes training and awareness programs so stakeholders at all levels, from executives to frontline clinicians, understand what AI can do, what risks it poses, and how to engage safely.
Cyber-operations and defense
Recognizing that AI introduces new cyber-threat vectors, this workstream aims to build practical, operational playbooks for detecting, responding to, and recovering from AI-related incidents. The guidance will cover not only “headline” AI systems like large language models (LLMs) but also predictive ML systems and the embedded AI that drives medical devices.
Key deliverables:
- An “AI Cyber Resilience and Incident Recovery Playbook” for containment, recovery, and fallback workflows.
- A “Clinical Workflow Threat Intelligence Playbook” to integrate AI-driven threat intelligence into everyday operations, supporting both security and the continuity of clinical services.
- Tailored guidance for establishing risk factors and guardrails for different kinds of AI (device-embedded, predictive analytics, third-party AI tools), and embedding those into existing cybersecurity frameworks.
Governance
AI deployment needs more than technical safeguards; it also requires organizational structure, accountability, and lifecycle oversight. The Governance workstream aims to deliver exactly that.
The guidance is expected to include:
- A governance framework applicable to organizations of any size, helping them embed AI-specific governance controls into their existing compliance and security programs.
- An “AI Governance Maturity Model,” so organizations can benchmark their AI readiness, identify gaps, and gradually improve controls as their use of AI scales.
- Alignment with legal and regulatory requirements, including data privacy and medical-device oversight, helping organizations ensure that their AI use doesn’t run afoul of regulations.
Governance is critical not only for compliance but for ensuring consistent, transparent, and responsible AI deployment across an organization.
Secure-by-design for AI-enabled medical devices
Another major focus is medical devices that embed AI, including diagnostic tools, imaging systems, and monitoring devices. Rather than retrofitting security after deployment, HSCC advocates that security be built in from the design stage onward.
Key elements of this workstream:
- Encouraging the use of an AI Bill of Materials (AIBOM) or Trusted AI BOM (TAIBOM). Analogous to a “nutrition label” for software, this would give buyers and users visibility into what algorithms are inside a device, how they were developed, and what dependencies they have, thus enhancing transparency and traceability (a minimal sketch of such a record follows this list).
- Ensuring cross-functional collaboration: from engineers to cybersecurity staff to regulatory/compliance teams to clinical users. Security isn’t just “IT’s job.”
- Embedding risk taxonomy, supply-chain scrutiny, and lifecycle security measures (from development to post-market maintenance). This helps mitigate threats like data poisoning, model manipulation, or supply-chain compromise.
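HSCC has not yet published a schema for AIBOM/TAIBOM records, but the “nutrition label” idea can be pictured concretely. The minimal Python sketch below is illustrative only; the field names (such as training_data_source and dependencies) and the example device are assumptions for this sketch, not part of any published standard.

```python
from dataclasses import dataclass, field

@dataclass
class AIComponent:
    """One algorithm or model embedded in a device."""
    name: str                  # e.g., "nodule-detection-cnn" (hypothetical)
    version: str               # model version shipped with the device
    training_data_source: str  # provenance of the training data
    dependencies: list         # upstream libraries, base models, or services

@dataclass
class AIBOM:
    """A 'nutrition label' for an AI-enabled device."""
    device: str
    manufacturer: str
    components: list = field(default_factory=list)

# A buyer could inspect the disclosed components before procurement:
bom = AIBOM(
    device="Example CT image analyzer",  # hypothetical device
    manufacturer="Acme Medical",         # hypothetical vendor
    components=[
        AIComponent(
            name="nodule-detection-cnn",
            version="2.4.1",
            training_data_source="licensed, de-identified radiology archive",
            dependencies=["pytorch", "vendor-pretrained-backbone"],
        )
    ],
)
print(f"{bom.device}: {len(bom.components)} AI component(s) disclosed")
```

In practice, the value of such a record lies less in the exact schema than in making every embedded algorithm and its dependencies visible before a device enters a clinical environment.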
For medical-device makers and health systems alike, this could significantly raise the bar for safety, transparency, and trust in AI-enabled equipment.
Read also: Security-by-design principles in breach-ready systems
Third-party AI risk and supply chain transparency
Many healthcare organizations rely on AI tools, models, or services developed by external vendors such as cloud providers, analytics platforms, device manufacturers, and software vendors. The “third-party” supply chain often hides significant risk. As a study by the Ponemon Institute in collaboration with the Health Sector Coordinating Council found, many organizations remain unaware of the potential risks posed by their suppliers. The study notes that “Only 19 percent of respondents (IT and IT security practitioners) say their organizations have a complete inventory of their suppliers of physical goods, business-critical services and/or third-party information technology.” This workstream aims to bring that risk into view.
Planned guidance includes:
- Standards and best practices for vendor vetting, procurement, and lifecycle management of third-party AI tools.
- Contractual language templates (or expectations) for data handling, privacy, security, transparency about model provenance, bias testing, and reporting obligations.
- Oversight mechanisms to ensure that third-party AI tools comply with the same governance, security, and compliance requirements as internally developed AI systems.
What the guidance means for healthcare organizations
Even before the full guidance becomes available, health organizations (hospitals, clinics, device makers, health-IT vendors) can, and should, begin preparing.
Here are some practical measures organizations can take in anticipation:
- Conduct an “AI inventory”: Map out all AI/ML tools currently in use, whether embedded in devices, analytics platforms, operational/administrative tools, clinical decision support, or vendor-supplied software. Document where they are used, what data they consume or process, what dependencies they have (local, cloud, third-party), and what level of autonomy they have (see the sketch after this list).
- Form a cross-functional AI governance or oversight committee: Bring together clinical leadership, IT/cybersecurity, procurement, compliance/regulatory, and quality assurance to govern AI adoption, deployment, and lifecycle management.
- Update vendor contracts and procurement policies: When procuring third-party AI tools or devices, include requirements for security, transparency, supply-chain disclosure, vendor accountability, model documentation (AIBOM/TAIBOM), and privacy, along with contractual obligations for maintenance, updates, and incident reporting.
- Develop or strengthen incident-response and continuity plans: Extend existing cybersecurity incident response plans to cover AI-specific risks (model poisoning, data corruption, adversarial manipulation). Include fallback workflows if AI tools become unavailable or compromised.
- Train staff across disciplines: Use foundational documents such as “AI in Healthcare: 10 Terms You Need to Know” as a starting point. Educate clinicians, administrative staff, and IT about AI fundamentals, common AI risks, and the organization’s AI-security policies.
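To make the inventory step concrete, here is a minimal Python sketch of how such a register might be structured and exported for the governance committee. The record fields mirror the items listed above; the tool name, owner, and file name are hypothetical, and a real inventory would likely live in an asset-management or GRC platform rather than a CSV file.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class AIAssetRecord:
    """One row in the AI inventory, mirroring the fields suggested above."""
    tool: str           # name of the AI/ML tool or model
    location: str       # where it runs: device, analytics platform, admin system
    data_consumed: str  # data it ingests or processes (e.g., PHI, imaging)
    dependencies: str   # local, cloud, or third-party services it relies on
    autonomy: str       # "advisory", "human-in-the-loop", or "autonomous"
    owner: str          # accountable team or individual

inventory = [
    AIAssetRecord(
        tool="sepsis-risk-predictor",  # hypothetical example
        location="EHR clinical decision support",
        data_consumed="vitals and labs (PHI)",
        dependencies="third-party cloud API",
        autonomy="advisory",
        owner="Clinical Informatics",
    ),
]

# Export the register for review by the cross-functional committee.
with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(inventory[0]).keys()))
    writer.writeheader()
    writer.writerows(asdict(record) for record in inventory)
```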
What’s next
With HSCC’s preview release, the healthcare sector now has a clear signal: AI is a transformative force that comes with serious cybersecurity, privacy, and operational risks.
The full guidance suite is expected to roll out during the first quarter of 2026. Once published, these documents could become the gold standard for AI deployment in healthcare.
See also: HIPAA Compliant Email: The Definitive Guide (2025 Update)
FAQs
Will organizations be required to follow this guidance?
The documents are not regulatory mandates. However, they are developed in collaboration with public- and private-sector experts and are expected to become widely recognized best practices. Regulators, auditors, and insurance providers may increasingly use these frameworks as benchmarks of due diligence.
What makes this guidance different from existing AI-ethics frameworks?
HSCC’s work is unique because it combines cybersecurity, medical device safety, clinical risk, and operational resilience, all rooted in the realities of healthcare environments.
