The Food and Drug Administration (FDA) is testing a new artificial intelligence tool called CDRH-GPT, intended to streamline the approval of medical devices like pacemakers and insulin pumps. However, according to two individuals familiar with the tool, it is still in beta and plagued by technical issues.
These include connectivity problems with internal FDA systems, an inability to access the internet or recent studies, and trouble handling document uploads and user queries. The tool's capabilities remain limited, raising doubts about its readiness.

Aggressive AI Push Amid Layoffs Raises Readiness, Oversight, and Ethics Concerns at FDA

The Center for Devices and Radiological Health (CDRH), the division using CDRH-GPT, was affected by mass layoffs earlier this year. Though some reviewers were retained, much of the supporting infrastructure was lost, putting strain on regulatory workflows.
Commissioner Dr. Marty Makary, who assumed leadership in April, has been aggressively pushing AI integration and has set a June 30 deadline for implementation. While Makary claims progress is ahead of schedule, internal sources suggest the tool is far from ready, raising concerns about rushing deployment under pressure.

Experts warn that the FDA's enthusiasm for AI may outpace the technology's actual capabilities. Arthur Caplan, a medical ethics expert, stressed that AI still requires significant human oversight to ensure accurate and safe decision-making in medical device approvals. Law professor Richard Painter also raised concerns about potential conflicts of interest should FDA officials involved in the AI push have financial ties to AI companies. Without strong ethical safeguards, he argued, the integrity of regulatory decisions could be compromised.

FDA’s Elsa Tool Faces Accuracy Issues Amid Job Security Concerns and Hasty Rollout

Alongside CDRH-GPT, the FDA has launched another AI tool named Elsa for general tasks like summarizing adverse event reports. While Commissioner Makary hailed its time-saving potential, internal feedback suggests Elsa also has performance issues. Staff testing revealed inaccurate or incomplete summaries, and the tool currently cannot manage complex regulatory responsibilities. Despite hard work from FDA staff, both tools require more development before they can fully support the agency’s mission.
Internally, some FDA employees view the push for AI with skepticism and concern for their job security. Already strained by layoffs and a hiring freeze, staff fear that the drive to automate signals future job cuts rather than relief from overwork. While AI holds promise for improving efficiency, many believe its rollout has been too aggressive and premature. Continued development, transparency, and ethical oversight will be critical if AI is to truly support the FDA's complex and vital work.