5 minute read | January.09.2026
FDA Commissioner Marty Makary announced policy shifts that ease the path to market for certain digital health technologies, including AI- and generative AI-enabled clinical decision support software (CDS) and consumer wearables.
Unveiled at the Consumer Electronics Show, the policy preview outlines 2026 updates to FDA guidances that expand enforcement discretion for specific CDS functions and broaden the general wellness policy for non-invasive wearables reporting physiologic metrics – while maintaining risk-based oversight where software substitutes for clinical judgment or influences time-critical care.
FDA issued updated versions of two cornerstone digital health guidances: Clinical Decision Support Software and General Wellness: Policy for Low Risk Devices.
The 2026 CDS guidance expands enforcement discretion where software provides a single, clinically appropriate recommendation and otherwise satisfies Non-Device CDS criteria, including enabling a health care provider to independently review the basis for the recommendation.
This applies to AI, including certain generative AI features, so long as clinicians can understand and verify the underlying logic and data inputs.
The 2026 wellness guidance clarifies that a broader set of non-invasive consumer wearables reporting physiologic metrics, including blood pressure, oxygen saturation or glucose-related signals, may fall under enforcement discretion if intended solely for general wellness and paired with non-diagnostic notifications (for example, advising that professional evaluation may be helpful when values fall outside wellness ranges).
This is a material expansion of the 2019 general wellness policy and may accommodate more AI-derived metrics and generative AI-generated insights when confined to wellness use.
The revised policy reduces a core friction point: the need to engineer around “single recommendation” outputs solely to avoid device classification. If a single recommendation is clinically appropriate and the tool otherwise fits Non-Device CDS criteria – including transparent, clinician-reviewable logic – FDA intends to exercise enforcement discretion. This can lower transaction costs, accelerate time-to-market and unlock investment in AI and generative AI features.
The key gating factor remains explainability and clinician reviewability. Where models are opaque (for example, black-box large language models), or where outputs are time-critical or directive in nature, device oversight should still be expected.
The broadened wellness posture creates more room for non-invasive devices that report physiologic measures – often computed or summarized using AI or generative AI – to remain outside device regulation when framed strictly as wellness. Product teams can integrate additional sensors and AI-derived or generative AI-generated insights if marketing and labeling stay within general wellness.
Any implied or explicit diagnostic, treatment, or disease claims will trigger device status and related quality and premarket obligations. FDA also signaled priority enforcement against higher-risk use cases, particularly where AI outputs guide clinical care without adequate human oversight or validation.
FDA’s friction with WHOOP last year over an uncleared blood pressure feature highlights the shift.
Under the 2026 guidance, non-invasive features that present blood pressure or other physiologic readings can remain within the wellness policy if they are positioned solely for general wellness, avoid diagnostic or treatment claims and pair readings with non-diagnostic notifications.
With disciplined claims and user messaging, AI-derived or generative AI-generated physiologic insights that previously risked device classification may now be structured to fit within enforcement discretion. Any clinical or diagnostic positioning would still require a device pathway.
Reassess portfolios against the 2026 criteria to determine whether any single-recommendation AI or generative AI features can credibly qualify as Non-Device CDS with sufficient transparency and human oversight; otherwise, plan for device pathways and early FDA interaction.
Recalibrate labeling to leverage the expanded wellness policy for AI-generated physiologic insights while rigorously avoiding diagnostic or treatment implications.
FDA’s approach reflects a strategic rebalancing: greater tolerance for innovation at the low-risk, wellness, and clinician-aid end of the spectrum, particularly for AI and generative AI tools, coupled with continued scrutiny where software substitutes for clinical judgment or influences time-sensitive care.
Expect increased reliance on enforcement discretion for Non-Device CDS, expanded wellness safe harbors for non-invasive wearables reporting physiologic metrics, and continued risk-based oversight for higher-impact AI uses – especially black-box or generative AI models in clinical workflows – with an emphasis on human oversight, transparency and post-market performance.
For product, regulatory and legal teams, the immediate task is to benchmark current and planned AI and generative AI features against the revised CDS and wellness criteria and tighten claims and messaging to remain squarely within the updated boundaries.
If you have any questions, please contact Georgia Ravitz, Thora Johnson, Shari Esfahani, Jeremy Sherer, Amy Joseph or another Orrick team member.