While artificial intelligence and large language models have the potential to transform the world, and health care in particular, they are new technologies that pose security risks.
- Confidentiality. These systems do not guarantee confidentiality. Never provide ChatGPT with information that could be considered Protected Health Information (PHI) or Personally Identifiable Information (PII).
- Harm. These models are not currently designed to provide medical advice, and they may generate incorrect information that could harm patients.
- Misinformation or Bias. Because ChatGPT was trained on publicly available information, its knowledge may be incomplete: it may lack recent developments or proprietary information that was never part of its training data. ChatGPT's answers may therefore be incorrect, biased, or incomplete, which could harm patients.