Artificial Intelligence (AI) holds much promise for healthcare, but it comes with challenges, including the amplification of biases or misinformation and cybersecurity concerns, says new World Health Organization guidance on responsibly managing AI in health.

AI tools could transform the health sector, said the UN health agency, given the increasing availability of healthcare data and analytical techniques – whether machine learning, logic-based or statistical. However, “AI technologies, including large language models, are being rapidly deployed, sometimes without a full understanding of how they may perform, which could either benefit or harm end-users, including health-care professionals and patients,” it added.

“When using health data, AI systems could have access to sensitive personal information, necessitating robust legal and regulatory frameworks for safeguarding privacy, security, and integrity,” said the WHO.

The Indian healthcare landscape also has start-ups using AI tools to improve healthcare outcomes, whether by reducing timelines or by addressing problems in low-resource settings. While these start-ups welcome better governance, they caution that it should not come at the expense of innovation.

Pointing to challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation, WHO Director General Dr Tedros Adhanom Ghebreyesus said the new guidance would “support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks”.

Improving outcomes

On AI’s potential to improve health outcomes, the WHO pointed to the strengthening of clinical trials, improved medical diagnosis, and the supplementing of healthcare professionals’ knowledge and skills. In fact, in places that lack medical specialists, AI can help interpret retinal scans and radiology images, among many other applications, it said.

Kalyan Sivasailam, Founder and CEO of 5C Network, a Tata1MG-backed healthtech start-up that runs a radiology interpretation platform, told businessline that concerns involving data privacy and patient identity, for instance, are addressed by existing laws that require anonymised data. There is a need to be more stringent at the point the data is shared [by institutions], for example, he said. The healthtech segment is still nascent, he added, calling for measures that spur innovation while protecting patient safety.

The WHO outlined measures to manage AI healthtech responsibly. It stressed transparency, to foster trust, through documenting the entire product lifecycle and tracking development processes. For risk management, issues such as intended use, continuous learning, human interventions, training models, and cybersecurity threats must be comprehensively addressed, with models kept as simple as possible, it said.

Externally validating data and being clear about the intended use of AI help assure safety and facilitate regulation. A commitment to data quality, through rigorous pre-release evaluation of systems, was vital to ensure systems did not amplify biases and errors, it said.

The guidance also addresses the challenges posed by important, complex regulations, such as the General Data Protection Regulation (GDPR) in Europe and the Health Insurance Portability and Accountability Act (HIPAA) in the US, with an emphasis on understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection. Fostering collaboration between regulatory bodies, patients, healthcare professionals, industry representatives, and government partners could also help ensure products and services stay compliant with regulation throughout their lifecycles, it added.
