AI is Changing Healthcare, But Can India Protect Patient Privacy? | AIM


India’s healthcare industry has seen rapid advances in digital infrastructure, reimagining good health alongside equitable and efficient care. As per the World Economic Forum (WEF), AI has transformed pharmaceutical research and is expected to drive 30% of new drug discoveries by 2025. 

According to the Global Outlook and Forecast 2025-2030, the AI in drug discovery market was valued at $1.72 billion in 2024 and is projected to reach $8.53 billion by 2030, a compound annual growth rate (CAGR) of 30.59%. Companies such as IBM Watson, NVIDIA, and Google DeepMind are collaborating with pharmaceutical organisations to support AI-driven drug discovery.  

In another area of health tech, AI is digitising patient records, and decentralised AI models are helping improve diagnostic accuracy while safeguarding patients’ right to privacy. 

During an interaction with AIM, Rajan Kashyap, assistant professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), pointed out that government initiatives, such as increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing indigenous low-cost MRI machines, are contributing significantly to hardware development in the AI innovation cycle. 

Growth of Healthtech 

Kashyap believes that the country is making notable strides in healthcare technology through several initiatives, including the Genome India project, the Consortium on Vulnerability to Externalising Disorders and Addictions (cVEDA), and the Ayushman Bharat Digital Mission, all of which aim to improve the understanding of clinical health in India. 

He pointed out that work being done in areas like genomics, big data analytics, AI, and machine learning (ML) is actively redefining clinical outcomes and operational efficiency.

Kashyap highlighted Bengaluru-based startup BrainSightAI, which is innovating diagnostics for neurological disorders. Earlier this year, it raised $5 million in a pre-Series A round, which it plans to use to expand into tier 1 and tier 2 cities in India and to obtain FDA certification for access to the US and allied markets.

Niramai Health Analytics offers AI-powered breast cancer screening tools. Its Thermalytix device is an affordable, portable, radiation-free method of detecting breast abnormalities that works for women of all ages and breast densities.

Meanwhile, Biocon, one of India’s largest biopharmaceutical companies, applies AI to biosimilar development, using predictive modelling to understand the complexities of biologic behaviour, reduce formulation failures, and speed up regulatory compliance. The company also introduced Semglee, the world’s first interchangeable biosimilar insulin for diabetes, and has expanded patient access through partnerships with Eris Lifesciences. 

The increasing costs of research and development in drug discovery have forced pharmaceutical companies to welcome innovative solutions, and AI has been a powerful enabler. 

Is Sensitive Information Handled with Care? 

While these innovations are advancing healthcare technology, there are growing concerns about data security within healthcare organisations. Netskope Threat Labs has reported that doctors consistently upload sensitive patient information to unauthorised websites and cloud services such as ChatGPT and Gemini.

Kashyap believes patient confidentiality is often overlooked in the healthcare industry. “During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed,” he said. 

The risk of unintentionally exposing protected health information through AI platforms is high. AI systems are vulnerable to data breaches, hacking, and the potential re-identification of even anonymised data. According to the National Institutes of Health in the US, this risk grows with the expanding use of cloud-based AI models, as healthcare organisations move patient data beyond traditional protective measures and into these cloud-based solutions.

Kashyap also warns that while anonymisation reduces risks, it does not fully protect against hacking or data breaches. He highlighted research showing that brain scans like MRIs can disclose personal details about a patient and, with further analysis, even sensitive information such as financial data.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he added.

Government Initiatives and the Healthcare Industry 

According to Netskope’s report, organisations should deploy approved generative AI applications, centralising their use in a monitored and secure manner and reducing reliance on personal accounts and “shadow AI”. Although healthcare workers still use personal GenAI accounts, the share doing so has decreased from 87% to 71% over the past year as organisations adopt approved GenAI solutions.

The report also calls for data loss prevention policies that define what types of data may be shared on these platforms, adding another layer of security for healthcare employees. 

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context,” Kashyap said. 

He suggested that the government must prioritise developing interdisciplinary med-tech programs, particularly those integrating AI with medical education. 

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.


