Can AI in Healthcare Be Trusted? | WSJ Tech News Briefing
Tech News Briefing for Monday, February 6th
The Wall Street Journal's Zoe Thomas introduces the Tech News Briefing for Monday, February 6th.
Can We Trust AI in Healthcare?
WSJ reporter Eric Niiler and researcher and inventor Rama Chellappa discuss their book "Can We Trust AI?" and how it applies to healthcare. They explore the use of AI in healthcare, its potential benefits, and its challenges.
Cutting Edge Uses of AI in Healthcare
- AI feeds on the large amounts of data healthcare generates, such as body scans, tumor biopsies, photographs of skin lesions, patient records, and vital signs.
- Combining AI with medicine lets a computer algorithm sift through large amounts of data to find patterns that would otherwise be impossible, or take too long, for humans to spot.
- The COVID pandemic led doctors and computer scientists to work together to predict hospitalization rates using vital signs and symptoms. However, different hospitals had different patient populations, making it difficult to pool all this data.
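The kind of model described above, predicting hospitalization risk from vital signs, can be sketched in miniature. This is a hypothetical toy example with invented data and a hand-rolled logistic regression, not the actual models the hospitals built:

```python
import math

# Toy, invented data: ([temperature_F, heart_rate, spo2], hospitalized 0/1).
# Real efforts used far richer records pooled across hospitals.
patients = [
    ([98.6, 72, 98], 0),
    ([101.2, 110, 91], 1),
    ([99.1, 80, 97], 0),
    ([102.5, 120, 88], 1),
    ([98.9, 75, 99], 0),
    ([100.8, 105, 92], 1),
]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def train(data, lr=0.01, epochs=2000):
    # Mean-center each feature so gradient descent behaves.
    n = len(data[0][0])
    means = [sum(x[i] for x, _ in data) / len(data) for i in range(n)]
    scaled = [([x[i] - means[i] for i in range(n)], y) for x, y in data]
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in scaled:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b, means

def predict(model, x):
    # Returns an estimated probability of hospitalization.
    w, b, means = model
    centered = [xi - mi for xi, mi in zip(x, means)]
    return sigmoid(sum(wi * xi for wi, xi in zip(w, centered)) + b)

model = train(patients)
# High fever, fast heart rate, low SpO2 should score as high risk.
risk = predict(model, [102.0, 118, 89])
```

The pooling problem the episode mentions shows up here too: if each hospital's patients have different feature distributions, a model trained on one site's means and weights can mislead at another site.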
Challenges with Using GPT in Healthcare
- There were accuracy issues when commercial chatbots were used for telehealth during the pandemic.
- A trustworthy ChatGPT is needed before such products come out, since there are no guardrails yet.
Limits and Possibilities of AI in Healthcare
In this section, the speaker discusses some limits that exist for AI in healthcare, including access to data and concerns around autonomy. They also discuss the possibilities of using synthetic data to train AI systems.
Limits of AI in Healthcare
- Minority populations that lack access to healthcare or have a history of discrimination may not be included in medical studies, so AI trained on that data may not represent them well.
- Autonomy is a concern for many people when it comes to AI. There are fears around general AI, but for now an AI system exists only within the data set and algorithm it was built on.
- Cybersecurity and data security concerns exist around using internet-connected devices like heart monitors and sensors that can be hacked.
Possibilities of Using Synthetic Data
- AI can only handle what it has seen before, but synthetic data can be used to prepare and train AI systems for situations that have not happened yet.
- Introducing more synthetic environments for training AI systems could be an exciting development.
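The synthetic-data idea above can be illustrated with a deliberately simple sketch. Real synthetic-data generators (for example, GAN-based or privacy-preserving tools) are far more sophisticated; this hypothetical example just fits per-feature Gaussians to a handful of invented records and samples new ones:

```python
import random

# Invented toy records of vital signs; stand-ins for a small real data set.
real_records = [
    {"heart_rate": 72, "spo2": 98},
    {"heart_rate": 80, "spo2": 97},
    {"heart_rate": 75, "spo2": 99},
]

def fit_gaussians(records):
    # Estimate a (mean, std dev) pair for each vital sign.
    stats = {}
    for key in records[0]:
        values = [r[key] for r in records]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        stats[key] = (mean, var ** 0.5)
    return stats

def synthesize(stats, n, seed=0):
    # Sample n synthetic records from the fitted distributions.
    rng = random.Random(seed)
    return [
        {key: rng.gauss(mean, std) for key, (mean, std) in stats.items()}
        for _ in range(n)
    ]

synthetic = synthesize(fit_gaussians(real_records), n=100)
```

Augmenting a scarce real data set this way can expose a model to plausible variation it has never seen, which is the appeal of synthetic training environments; the risk is that samples inherit, or smooth over, whatever biases the original records carried.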
Examples of Effective Implementation
- Small companies identified by the National Institute on Aging are promoting research on using AI effectively and ethically in healthcare.
- As we see more FDA approvals, we will see more integration of AI into hospitals. It may not always be visible, since AI can work in the background, for example in better ways of triaging patients who come into emergency rooms.
Cybersecurity Concerns
- Large amounts of accessible data pose cybersecurity threats. The same security precautions taken with digital health should apply to internet-connected devices used with AI tools.
Overall, while there are limits and concerns surrounding the use of AI in healthcare, there are also exciting possibilities for using synthetic data to train AI systems and improve patient care. It is important to address cybersecurity concerns and ensure that AI is used effectively and ethically in healthcare.