
Thousands of applications, algorithms, and pieces of software have been developed in the healthcare field over the past few years, all with the aim of improving patient health and assisting doctors as they make clinical decisions. Apps and computer programs are designed to do everything from tracking menstrual periods to helping anesthesiologists during surgery. In many cases, they’ve also been proposed to help guide decisions about when tumors should be biopsied or when medication should be delivered.

As the number of products has grown, the Food and Drug Administration has been working out the best way to oversee them. A crucial step has been figuring out which software programs count as medical devices—and are therefore subject to regulation by the agency—and which do not.

So many new products are reaching the market that it’s hard for the agency to keep up, says Daniel Rubin, professor of biomedical data science at Stanford University. “They should be doing more. I understand the financial constraints and process constraints,” he says. “But the proliferation of algorithms is huge. They don’t have the resources to deal with them the same way they do with drugs.”

In new draft guidance released last week, which revises a plan issued two years ago, the FDA clarifies which products it considers devices. The agency also says it plans to stratify health software by risk: products that still meet the definition of a medical device but pose little risk won’t face the same regulatory scrutiny as those that could harm patients if things go wrong.

“We believe our proposed approach…strikes the right balance between ensuring patient safety and promoting innovation by clarifying which products would be the focus of FDA’s oversight and which would not,” said FDA principal deputy commissioner Amy Abernethy in a statement.

The updated guidance firms up the definition of a “medical device,” which was amended by the 21st Century Cures Act in 2016. It clarifies that the policy excludes software like electronic medical records and programs used to handle administrative tasks (like billing or organizing lab test data). Apps that help patients see and manage their own health records are not devices, nor are programs people use to track things like blood sugar or blood pressure and then share with their doctors.

Apps that encourage people to maintain a “healthy lifestyle” also aren’t medical devices, the FDA says: Those include activity trackers, for example, and programs that help with stress. “Such technologies tend to pose a low risk to patients, but can provide great value to consumers and the healthcare system,” Abernethy wrote. Those programs, though, can’t claim to diagnose, prevent, or treat any type of disease.

Any software or application that is considered a medical device under the new definition will technically be open to regulation by the FDA. However, the agency says it will consider the potential risk posed by a product before it enforces that regulation. To gauge a product’s level of risk, the agency turned to a framework created by the International Medical Device Regulators Forum, which classifies medical software into categories based on its intended function (to treat or diagnose patients, to direct patient management, or to inform patient management) and how critical that function is. A high-impact program, for example, might analyze an MRI image of a stroke and determine the type of stroke, thereby directing the type of treatment used. On the other hand, a low-impact program—which the FDA might not take regulatory action on—might measure a patient’s breathing to predict when they might have an asthma attack.

Rubin says he’d like to see more oversight of all types of software, particularly because it’s not yet clear how useful most of it will be. “Some things are more serious, though. For example, anything that guides a change of therapy is risky,” he says.

The FDA will also regulate any software that makes a medical recommendation but does not make clear to a doctor how it reached that recommendation. That targets artificial intelligence programs that aren’t transparent and do their work without companies revealing their algorithms—so doctors have to trust that they’re making the right call. Abernethy gave the example of a program that predicts which diabetic patients would be at risk of a heart attack after having surgery.

“In this case, if the [clinical decision support] provides information that is not accurate (e.g., inappropriately identifies a patient as low risk when he is high risk), then any misidentification could lead to inappropriate treatment and patient harm,” Abernethy wrote.

The FDA continues to stress that it hopes its policies facilitate the growth of novel health technologies. But the field’s rapid evolution means the agency is still scrambling to keep up. “They’re still trying to figure out what they’re going to do,” Rubin says. “They don’t have this solved.”