Earlier this week, Dr. Marty Makary, the commissioner of the U.S. Food and Drug Administration, and Dr. Vinay Prasad, the director of the Center for Biologics Evaluation and Research, published an article in the Journal of the American Medical Association about the FDA’s plans to use artificial intelligence to accelerate decision-making across a range of health-related fields.

Exactly what AI use at the FDA will look like is still to be determined, but the article stated that AI could be used to:

  • accelerate the approval of drugs and devices
  • reduce animal testing
  • address “concerning” ingredients in food.

Many people within the health and science world (and beyond) are concerned by this move. Among them is Elisabeth Marnik, a scientist and science communicator based in Maine, who is particularly worried about the shift to AI after fake citations, likely the product of careless AI use, made their way into a Make America Healthy Again report just a few weeks ago.

There are other concerns, too.

“The FDA’s move to explore AI for accelerating drug and device approvals and for food ingredient oversight marks a critical inflection point in regulatory innovation, but it also introduces a series of legal, ethical and structural tensions that can’t be glossed over,” Stacey B. Lee, a professor of law and ethics at Johns Hopkins Carey Business School and Bloomberg School of Public Health, told HuffPost via email.

The Department of Health and Human Services didn’t immediately respond to HuffPost’s request for comment.

Below, experts share more thoughts on the FDA’s AI implementation.

Some say it’s hypocritical to use AI to speed along processes.

“It’s a bit disorienting because it feels like they are doing the thing that they are also criticizing in that they’re on this big agenda of almost re-litigating all processes established by our public health agencies to ensure ‘safety and efficacy’ and yet they’re also wanting to expedite things,” said Jessica Malaty Rivera, an infectious disease epidemiologist.

Speed and guaranteed “safety and efficacy” generally don’t go hand in hand. “That feels like you can’t have both things at the same time,” she added.

Health and Human Services Secretary Robert F. Kennedy Jr. has said he wants to put many vaccines through longer trials, and while the FDA’s AI use hasn’t been pegged for vaccine trials, “they are still saying that they want to shorten review times and speed up the delivery of treatments to people who need them,” Malaty Rivera said.

Kennedy also repeatedly promises “radical transparency” in health care, yet AI threatens that transparency, experts say.

“I haven’t been able to find great, transparent information about exactly where and how they’re using AI. They talk about using it for review, but in what way?” said Marnik.

There are also legal concerns.

“The law doesn’t prohibit innovation, but it does demand accountability,” said Lee.

“Any AI implementation must be subject to clear statutory authority, rigorous oversight and published methodologies to preserve public trust,” she added.

This goes back to transparency.

“The core concern is opacity. AI tools, especially proprietary or black-box models, can obscure how decisions are made. If a drug is greenlit or a food ingredient is flagged based on an algorithm that the public can’t scrutinize, it erodes due process and patient safety,” Lee said. “This move also raises critical structural questions: If an AI system contributes to a faulty approval or missed red flag, who’s accountable? The software developer? The FDA staffer who relied on the tool?”

There is currently a large liability gap, she noted — “and the regulatory framework hasn’t caught up.”

As MAHA vilifies various foods and ingredients, it’s worrisome to have AI weigh in on “concerning ingredients.”

Malaty Rivera is especially wary of using AI to evaluate the “concerning” ingredients in food in the U.S.

“MAHA and Marty Makary, in particular, continue to spread misinformation about the safety of food,” she said. “They continue to malign things that are not harmful to people, like seed oils; they continue to misrepresent even the ingredients of baby formula.”

Many of the words used to vilify certain foods and certain ingredients are wellness marketing gimmicks, “things like non-GMO and organic and pushing things like beef tallow and raw milk,” she added.

She also voiced concerns about the data and language that could be used to inform the AI systems when it comes to food regulations.

“I don’t trust the people that are in charge of these decisions to make evidence-based decisions on food ingredients. I really don’t,” she said.

There are fears of biases in AI, too.

“We also need to consider bias in training data. If the AI is trained on historically biased data — say, clinical trials that underrepresent women or communities of color — we risk automating disparities in approvals or warnings,” Lee said.

Research shows that AI itself is biased and has even been known to prop up racist stereotypes.

What’s more, how would AI handle the so-called “DEI-related” words that are currently banned from science research by the Trump administration?

“I don’t even know if, because of all these banned words, if applications that even have words that have been considered banned would even pass through these AI models designed by people creating the word bans,” said Malaty Rivera, who added that she wants to know how equity and unbiased review will be ensured in the AI process.

“I would love to see the methodology. I would love to see the ways in which it’s not going to cause further harm,” she said.

But there is some use for AI in modern science.

Many people are leery of AI, and for good reason. It’s taking jobs, has plagiarism issues, is linked to privacy concerns and, as mentioned above, is known to be biased.

But when used properly, AI has benefits both at the FDA and in everyday life.

“In food regulation, especially, this could be a breakthrough. AI can scan molecular structures and evaluate safety profiles at a scale no human team could match,” said Lee. “But the system has to be designed to prioritize health, not convenience.”

“I do think eventually AI will be a useful tool in helping streamline things and potentially even helping analyze big data sets,” Marnik said, “but I think that there’s a lot of steps we have to go through to make sure that’s actually happening correctly before it’s used on such a federal level.”

Malaty Rivera noted that AI isn’t going anywhere and while it could be useful to review thousands of pages of information, it would be a mistake to completely remove humans from the process.

The FDA’s AI program needs to have guardrails in place to ensure it’s working properly.

No matter what AI is used for at the FDA, processes must be in place to ensure fairness and accuracy, experts say.

“If this is going to be what you want to eventually use, there should be essentially a scientific process to establish that the system is actually working as well as you think it’s working and is actually working as well as a human review process,” Marnik said, adding that without systems (and people) to make sure AI is functioning properly, there can be major issues.

Lee noted that AI-informed decisions need to be reviewable, explainable and equitable for patients, researchers and policymakers.

“AI in health care is not just a tech issue; it’s a trust issue,” Lee said.

Otherwise, more people could be led to distrust the medical system. And levels of distrust are already high, with roughly two-thirds of Americans expressing a lack of faith in the medical establishment.

“The FDA’s AI pilot comes at a moment when public trust in health institutions is already fragile. Getting this right means building the right guardrails now, not after the first high-profile failure,” Lee said. “That includes independent audits, transparent reporting, human oversight and clear legal responsibility.”

Malaty Rivera said if she believed in the scientific rigor and integrity of the people at the helm of the FDA, the use of AI in health and science would be one thing.

“But I don’t. And so I don’t trust these decision makers to be designing and/or navigating these tools,” she said.

She also said the people appointed under the MAHA regime have an agenda that isn’t rooted in evidence-based science.

“The agenda is this alternative, contrarian version of health and wellness that is often spreading a lot of harmful misinformation,” she said.

Marnik added that she doesn’t think AI is currently able to do rigorous scientific reviews or find limitations associated with scientific data, which is crucial for the FDA when it comes to medical approvals.

“AI is only as good as the prompts and the directions that you give it, and what it’s been trained on,” Marnik said. “So, ultimately, I think this is too soon, and I would like to know more about exactly how they plan to use it.”


