
Artificial Intelligence: Risks

Opportunities and risks of using large language models like ChatGPT, plus resources for Advocate Health - Midwest teammates

Risks

  • Breach of confidentiality. ChatGPT records the information you enter into it. If you share confidential information with it, you may be violating HIPAA or organizational policies concerning Protected Health Information.
  • Bias. ChatGPT was trained on freely available information from the internet. That information may itself be biased, and ChatGPT may perpetuate and reproduce those biases.
  • Misinformation. ChatGPT is designed to simulate human language by predicting strings of words that it thinks will match the question you have asked. As a result, it can generate information with no basis in fact, a failure known as an "AI hallucination." These hallucinations may stem from the model's biases, its inherent lack of human understanding, or other factors. Librarians are particularly concerned about the references to non-existent papers that ChatGPT gives its users. A recent study found that in a sample of 115 references generated by ChatGPT, only 7% were authentic and accurate, and abundant anecdotal evidence likewise suggests that ChatGPT often fabricates citations rather than referencing real ones.
  • Plagiarism. ChatGPT and other large language models may regurgitate content they were trained on without citing it. If you use it, you may inadvertently incorporate someone else's work into your own without proper attribution.

Patient Safety and Security

While artificial intelligence and large language models have the potential to transform the world, and health care in particular, they are new technologies that pose real security risks.

  • Confidentiality. These systems do not guarantee confidentiality. Never provide ChatGPT with anything that could be considered Protected Health Information (PHI) or Personally Identifiable Information (PII); a simple pre-submission screen is sketched after this list.
  • Harm. These models are not designed to provide medical advice, and they may generate incorrect information that could harm patients.
  • Misinformation or bias. Because ChatGPT was trained on publicly available information, it may be missing recent or proprietary information that was never available to it. Its answers may therefore be incorrect, biased, or incomplete, which could harm patients.
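
For technically inclined teammates who script against LLM APIs, the sketch below illustrates the kind of pre-submission screen the confidentiality point above implies. It is a minimal, hypothetical example: the pattern list, the screen_for_pii function, and the sample prompt are all invented for illustration, and a short regex list is in no way a substitute for approved de-identification tooling or policy review.

```python
import re

# Hypothetical patterns for illustration only. Real PHI/PII detection
# requires approved de-identification tooling, not a short regex list.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "date (e.g., DOB)": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN-style ID": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}


def screen_for_pii(text: str) -> list[str]:
    """Return the names of any patterns found, so the caller can block
    or redact the text before sending it to an external service."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]


# Hypothetical prompt a teammate might be tempted to send to a chatbot.
prompt = "Summarize the visit for the patient with MRN: 4815162, DOB 3/4/1962."
hits = screen_for_pii(prompt)
if hits:
    print("Blocked: prompt appears to contain " + ", ".join(hits))
else:
    print("No obvious identifiers found (a non-match is NOT proof of safety).")
```

The point of the example is the workflow (screen, then block or redact, before anything leaves the organization), not the specific patterns, which will always under-match real identifiers.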

Cybersecurity Policy on ChatGPT