By: New Technologies Committee

The integration of artificial intelligence (AI) into optometric practice presents significant challenges for compliance with the Health Insurance Portability and Accountability Act (HIPAA). AI systems are designed to improve performance by ingesting and analyzing large volumes of user-provided data, yet vendors often provide limited transparency regarding data storage, retention, and use for model training. When protected health information (PHI) is involved, this lack of clarity creates substantial compliance risk.

Any use of AI involving PHI requires a valid Business Associate Agreement (BAA) from the vendor. The system must operate in a data-isolated environment with contractual assurances that PHI is not retained or used for training purposes. Most consumer and small-business AI platforms do not meet these requirements. For example, OpenAI offers BAAs only through certain enterprise-level healthcare products, not through standard business plans.

“ChatGPT for Healthcare” is a new product marketed as HIPAA-compliant; however, there is limited publicly available information regarding eligibility criteria or minimum licensing requirements. It is distinct from the patient-facing “ChatGPT for Health,” which is intended for patient use, not clinical use. OpenAI has indicated that ChatGPT for Health will incorporate enhanced safety measures, including not using conversations to train models, and may allow patients to link medical records to improve responses. However, this model raises significant privacy concerns. Patients should be cautioned against using such tools, as security risks remain high and future changes to terms and conditions could permit secondary use or monetization of health data.

Several third-party AI vendors currently market tools designed for HIPAA-regulated settings. These include clinical documentation tools (Doctora, EVAA Scribe, Twofold Health, Freed AI) and practice management platforms (EliseAI, Prosper AI, Weave, iTrust). Even with these vendors, compliance depends on contract terms and ongoing oversight.

The regulatory landscape remains unsettled. No federal statute specifically governs AI in healthcare. Oversight relies on existing authorities such as HIPAA enforcement and FDA regulation of software classified as a medical device. State laws, including the Texas Responsible Artificial Intelligence Governance Act and the California Frontier Artificial Intelligence Act, address certain AI uses but do not alter HIPAA obligations (though the Texas law requires disclosure of AI involvement in care).

We are currently in an era of deregulation. The Health Data, Technology, and Interoperability (HTI) rules were updated in late 2025 to reduce the regulatory burden on data security in healthcare. However, practitioners should be prepared for stricter requirements for safeguarding PHI in the age of AI once the pendulum swings in future administrations.

Best practices include updating BAAs when vendors add AI features and revising Notices of Privacy Practices to disclose AI use. Proactive transparency reduces risk and builds patient trust as regulations continue to evolve.
