This demanded a cross-industry summit, so now medical and security pros attend CyberMed.
A medical (cyber)simulation from the 2018 CyberMed Summit. Credit: University of Arizona / CyberMed Summit
Many articles about cybersecurity risks in healthcare begin with descriptions of live simulations (so, when in Rome). Imagine a doctor, completely unaware of what they’re walking into, triaging two patients: one in need of a hospital cardiac catheterization lab after an irregular electrocardiogram (EKG) reading, the other suffering from a stroke and needing a CT scan. All systems are down due to ransomware, so the physician working through the scenario can’t access electronic health records or use any of the assessment methods modern medicine is so reliant on. So, what to do?
There are all kinds of scary scenarios like this that become possible when a hospital or other healthcare provider gets pwned. And the health industry has consistently been getting pwned as of late. In 2019, health organizations continued to get hit with data breaches and ransomware attacks, costing the sector an estimated $4 billion. Five US healthcare organizations reported ransomware attacks in a single week last June. A Michigan medical practice closed last spring after refusing to pay ransom to attackers. And in 2018, when comparing a range of work sectors that included education, healthcare, general professions, and finance, healthcare entities accounted for 41 percent of all breaches and security incidents—the highest share of any sector. The attacks are becoming more severe and more sophisticated, too.
It’s not hard to imagine other modern nightmares like the ransomware triage above. For example, malfunctioning pacemakers could lead to patients experiencing shocks they don’t need, or blood type databases could get switched and cause chaos due to an integrity attack. All four of these scenarios were in fact conducted during the two latest CyberMed Summits, a conference founded in the aftermath of 2017’s WannaCry attacks. “The world’s only clinically-oriented health-care cybersecurity conference” now annually brings together physicians, security researchers, medical device manufacturers, healthcare administrators, and policymakers in order to highlight and hopefully address vulnerabilities in medical technology.
These days, CyberMed may be the quickest way to get a sense of what’s at stake in a wildly vulnerable healthcare ecosystem where hospitals frequently run out-of-date or unsupported software and where there’s currently no financial incentive to patch patients’ medical devices. After talking with individuals from both medical and security backgrounds at the most recent summit, it’s clear a myriad of issues have come together in a somewhat (im)perfect storm. And this community is hoping today’s sad state of healthcare cyber hygiene can be fixed before anyone gets hurt or killed.
The “Last Mile” awareness problem
Borrowing a term from the telecommunications industry, the theme of the 2019 summit in November was “solving the last mile problem.” How do experts in the intersection of cybersecurity and medicine get what they know propagated to the people who need it?
“It’s great if we are at the CyberMed Summit, we’re talking to the FDA, we’re talking to the device manufacturers, and we’re talking to the people in hospitals at the C-suite level that make many decisions. We come up with all these great ideas and we come up with all this awareness about these problems, but if it doesn’t filter down to the individual clinician with the individual patient at the bedside, then all of it is really for naught,” said Dr. Jeff Tully, a co-founder of CyberMed and a pediatrician and anesthesiology fellow at the University of California, Davis. “If the concept of this big systemic movement is not translated to individual people, then it’s not as effective.”
“I have a lot of patients that I need to take care of, and I have only a finite amount of time to take care of them,” said Dr. Christian Dameff, Tully’s co-founder and the medical director of cybersecurity at the University of California, San Diego. “Even with my cybersecurity expertise and my understanding of these problems, I still really wrestle with the thought of, ‘If I’m only going to see this patient for 15 minutes and might not ever see them again, do I talk to them about patching their pacemaker, or do I talk to them about their horribly uncontrolled diabetes and high blood pressure?’ Ideally, those things would not be mutually exclusive, but that’s just not the reality of modern medicine and modern healthcare.”
It’s a problem that Dr. Suzanne Schwartz, associate director for science and strategic partnerships in the Food and Drug Administration (FDA)’s Center for Devices and Radiological Health, says is the organization’s biggest challenge. How can medical professionals bring in patients and providers who need to be aware of and participate in cybersecurity-related discussions across the industry? It’s why the FDA convened a public meeting of its patient engagement advisory committee last fall to specifically discuss medical device cybersecurity. (An entire webcast of the seven-hour event is still available online.)
“Patients can be really important drivers here, patients that have implantable devices that have cybersecurity-related concerns associated with them, or patients that have connected devices at home or elsewhere,” Schwartz said. “It is important that they be best informed and that they be positioned to have conversations with their physicians in order to understand the importance of receiving updates and patches and that when vulnerabilities are identified that those vulnerabilities are appropriately assessed and mitigated so that their devices continue to function safely and effectively.”
The Food and Drug Administration headquarters in White Oak, Maryland. Credit: Getty | Congressional Quarterly
A late start and lack of oversight
Beyond simply raising awareness, no organization has done more to improve the medical device security space than the FDA under the leadership of Schwartz, who advises manufacturers to “bake security into the design.” Tully said he “greatly admires” Schwartz’s work so far and her basic thought process on how to approach all the interested parties on this issue (device manufacturers, health providers, security researchers, etc.). And the CyberMed founder specifically cited the FDA’s We Heart Hackers initiative—an FDA call for device makers to work with researchers that manifests itself in projects like DEF CON’s medical device research challenge—as an example of how Schwartz has been able to bring these communities together toward a common goal.
That said, it’ll take time for many of the advances Schwartz’s FDA has made to filter down to most US hospitals, especially since some of the original challenges have accrued over a period of 10 to 15 years. “Even the push on manufacturers to take some of these issues into account in their early design phase will take a three- to five-year ramping up period,” said Tully. “There’s this unaddressed space in the middle where we have insecure devices that are already out there and used as part of workflows, and they aren’t going to be cycled out for another five to 10 years.” On top of all that, of course, the FDA’s authority is limited—it’s not in charge of implementing or configuring devices on the network, for instance.
“We’re not able to address the cybersecurity issues within healthcare alone,” Schwartz said of the FDA. “It’s not only that it’s not within our entire scope of authority, but there are so many other levers that are critical here that have to be utilized.” To encourage more of those figurative levers to be pulled, the FDA is working on catalyzing its efforts through public/private partnerships, including the Healthcare Sector Coordinating Council, which brings together the government and the private sector (including industry and healthcare delivery organizations).
Device manufacturers initially pushed back against the FDA’s regulatory authority but have (somewhat) relented in certain areas. Dameff pointed out that a big regulatory push addressing cybersecurity hasn’t happened for healthcare delivery organizations yet. Some large healthcare organizations recognize the risk and are investing resources in prevention, he said, but, “for the vast majority of hospitals out there, this is the 15th and 16th priority on a list including making enough money to exist next year.” As a consequence, he says, many healthcare providers aren’t even aware of the issue despite the recent rash of bad press.
A need for vulnerability disclosure programs
Among the FDA’s current suggestions to the private sector, the organization encourages medical device manufacturers to adopt coordinated vulnerability disclosure programs (CVDPs), which allow researchers to report their findings in a secure way. In cybersecurity at large, this has become common practice, with tech giants from Apple to Microsoft even boasting bounty programs to encourage the secure reporting of bugs. Although the number of medical device manufacturers with known CVDPs has risen, it is still incredibly low at the start of 2020—as you can see from the chart we’ve put together (1-10 above and 11-25 below).
Of the top 25 device manufacturers (based on a report by MD&DI looking at 2018 revenue), only nine have publicly known coordinated vulnerability disclosure programs, as documented by I Am The Cavalry. (Two additional top-25 companies appear to have some reporting mechanism but did not respond to emails requesting clarification.) Of the top 115 manufacturers, the number is 12—roughly just 10 percent. A total of six companies had some form of contact mechanism for security vulnerabilities. One company, Elekta, stated it was in the process of publishing a coordinated vulnerability disclosure program. Some companies Ars reached out to stated their devices did not have any software or were not connected to the Internet. Others did not know what a coordinated vulnerability disclosure program was or declined to comment on their security setup. The majority did not respond at all.
In the grand scheme of healthcare infrastructure, the effort and resources necessary to institute something like this are relatively low. Not having a public program signals that these companies haven’t put much effort into thinking about their process and response. And simply having a coordinated vulnerability disclosure program listed on a webpage to allow researchers to report what they’ve found in a secure way is just the first step of a process.
“Once you have the details of these vulnerabilities, it’s not like you just enter that into some form and you’re on your way. There’s analysis that has to happen,” said Billy Rios, security researcher and founder of WhiteScope LLC. “In the healthcare world, there’s usually a risk analysis that has to happen to ask, ‘Hey, can someone actually leverage this to hurt or kill someone?’ Maybe the researcher doesn’t know. So the manufacturer has this responsibility to go through this process of making sure that they understand the vulnerability in its entirety, that they understand the risks associated.”
Rios, along with colleague Jonathan Butts, discovered vulnerabilities in Medtronic’s pacemakers and insulin pumps and were critical of Medtronic for delays in addressing the vulnerabilities and for the lack of comprehensiveness in the company’s updates. Vulnerabilities like this that could hurt someone need to be reported to the FDA, and the process could eventually include issuing safety advisories or recalls to hospitals. “The [vulnerability reporting] webpage is really just the first step in this whole process, and it’s usually not the webpage that gets messed up,” said Rios. For the more severe vulnerabilities, it’s usually the process of what to do with that information that gets messed up, he said.
That’s because, to date, manufacturers have a tendency to push back against researcher findings. “Manufacturers have an incentive to minimize the impact of the vulnerability,” Rios said. The researcher pointed out that this happened for all three recalls in the past five years (including St. Jude’s pacemaker recall for vulnerabilities discovered by MedSec): manufacturers initially pushed back on what researchers said an attack chain would look like. This default response has often forced researchers to put together a proof of concept of the attack chain.
“They’re basically forced to write code that could hurt or kill someone to prove that what they’re saying is technically valid,” Rios explained. “I wish researchers didn’t have to do that.”
While submitting vulnerabilities such as privacy risks can be done through a coordinated vulnerability disclosure program, for now Rios recommends that security researchers who find a bug that can hurt or kill someone go through ICS-CERT or DHS instead. In his experience, that route makes sure the coordination process to notify those affected isn’t delayed.
| Rank | Company | CVDP? | URL/Notes |
|---|---|---|---|
| 11 | EssilorLuxottica SA | NO | |
| 12 | Baxter International Inc. | NO | Company lists an email address for contact: “Product Security Questions: Customers with a specific question about any Baxter product can reach out to productsecurity@baxter.com or contact their Baxter service representative” |
| 13 | Boston Scientific Corporation | YES | https://www.bostonscientific.com/en-US/customer-service/product-security.html |
| 14 | Zimmer Biomet Holdings, Inc. | NO | |
| 15 | Novartis AG | NO | “Please note that we do not disclose information about our internal processes,” the company said. |
| 16 | Olympus Corp. | NO | The company has a reporting mechanism that isn’t quite a CVDP: https://medical.olympusamerica.com/customer-resources/product-security |
| 17 | 3M Company | NO | |
| 18 | Terumo Corporation | NO | Ars found a reporting mechanism, but it was not a CVDP. |
| 19 | Smith & Nephew plc | NO | |
| 20 | Canon Inc. | NO | Company at least has an incident response team and contact info for customers wanting to inquire about posted vulnerabilities: https://us.medical.canon/download/canon-security-incident-response |
| 21 | DENTSPLY SIRONA, Inc. | NO | |
| 22 | Edwards Lifesciences Corporation | NO | Company has a dedicated product security page, but not an explicit CVDP: https://www.edwards.com/about-us/product-security |
| 23 | Intuitive Surgical, Inc. | NO | |
| 24 | HOYA CORPORATION | NO | |
| 25 | Hologic, Inc. | NO | |
Lack of access to equipment
In addition to issues with the vulnerability disclosure process, researchers often can’t get their hands on equipment they want to look at. “There’s all this life-saving medical technology, but researchers generally only get access to the stuff they can buy off eBay, and only then at great personal cost,” said Beau Woods, a Cyber Safety Advocate for the I Am the Cavalry initiative, an Entrepreneur-in-Residence at the FDA, and a Fellow with the Atlantic Council. “Sometimes they need a loading dock just to get the thousand-dollar piece of equipment in the door. Access to equipment is a real problem.”
One notable exception in recent years has been DEF CON’s Biohacking Village device lab and the #WeHeartHackers initiative, where organizations can state their intent to work collaboratively with security researchers. Still, building out a cyber range allowing hackers access to devices could help solve this issue, as could mutual agreements with hospitals to share analysis of testing without violating NDAs.
Who can handle a Software Bill of Materials?
The FDA is asking for a software bill of materials (SBOM), or list of software components in a system, in its next round of premarket approval guidance. This could empower hospitals to purchase medical equipment from companies with good security hygiene by comparing the bill of materials to known vulnerabilities, which in turn encourages manufacturers to improve their development practices without any of the heavy-handed regulations they argue could stifle innovation. An SBOM would also help hospitals know which devices use potentially vulnerable protocols and what type of clinical use situations they should be concerned about.
In a world where every manufacturer provided an SBOM for each product, it would enable a systemic risk analysis across the industry for common flaws, Woods explained. And if a software component used by many medical devices across the industry is identified as vulnerable, it would allow agencies (ISAC, ISAO, SCC, FDA) and even the manufacturers themselves to quickly see who is affected and how.
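The cross-referencing Woods describes can be sketched in a few lines. The SBOM layout, device names, and vulnerability feed below are invented for illustration (real SBOMs use formats like SPDX or CycloneDX); the point is only to show how an agency or manufacturer could quickly see which devices embed a flawed component.

```python
# Hypothetical sketch of SBOM-to-vulnerability matching.
# Device names, components, and versions here are made up for illustration.

# Each device's SBOM: a list of (component, version) pairs.
device_sboms = {
    "infusion-pump-A": [("openssl", "1.0.1f"), ("busybox", "1.21.0")],
    "patient-monitor-B": [("openssl", "1.1.1d"), ("lwip", "2.0.3")],
    "ct-scanner-C": [("openssl", "1.0.1f"), ("vxworks-stack", "6.6")],
}

# A feed of known-vulnerable (component, version) pairs.
vulnerable = {("openssl", "1.0.1f"), ("vxworks-stack", "6.6")}

def affected_devices(sboms, vuln_feed):
    """Return {device: [vulnerable components]} for quick triage."""
    hits = {}
    for device, components in sboms.items():
        matches = [c for c in components if c in vuln_feed]
        if matches:
            hits[device] = matches
    return hits

print(affected_devices(device_sboms, vulnerable))
```

Trivial as the lookup is, it only works if every manufacturer supplies an SBOM in the first place, which is exactly the gap the FDA's guidance aims to close.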
“SBOM would also help procurement find existing vulnerabilities that companies are now passing on but not telling the buyers,” Woods said. “Somewhere like the Mayo Clinic would be better able to distinguish among market alternatives and better able to factor true cost and risk the same way a company in the financial sector does.”
Hospitals might also choose to avoid devices built on packages that are consistently unreliable, something Woods says is already happening in the financial sector. “Bank of America does that today. They don’t allow Apache Struts or Bouncy Castle in any of their software because they’re so historically bad and cause so many problems.”
That said, a software bill of materials would require coordination on the hospital level, too. And adding more administration requirements to a hospital’s already long to-do list could be quite a barrier.
“If a vendor gives a hospital an SBOM for a device, who’s responsible for checking that every time a new vulnerability comes up?” Dameff asked. “And who’s responsible for letting the patient know that their device is vulnerable to a new exploit, or an exploit that’s been around for two months?”
Currently, even some of the bigger hospital systems might not know what to do with SBOMs at first or be able to effectively manage that amount of information. Even if it’s required of manufacturers, “it could be one of those things where they are drowning in data and don’t have the person power or expertise to use that information,” Dameff added. It’s important for device manufacturers to make more secure devices and to build mechanisms for hospitals to find out when devices are vulnerable, but “if you don’t also give hospitals the resources or expertise to figure that out, then it’s not going to be effective,” he said.
Credit: the_burtons / Getty Images
Hospitals are bad at patching
Hospitals are notoriously bad at running up-to-date software and patching medical devices for their patients. Patching medical devices takes time and resources. Not only are there no regulatory requirements for healthcare organizations to do so, there are no incentives, either. (There isn’t even a billing code for it in 2020.) Without a regulatory mechanism, clear demonstrable patient risk, or some kind of incentive, it’s not surprising that competing priorities take precedence.
CVSS (Common Vulnerability Scoring System) risk scores for medical devices have come under criticism for not accounting for exploitability and patient safety components. According to medical device security company MedCrypt’s analysis of ICS-CERT (the Industrial Control Systems Cyber Emergency Response Team) cybersecurity disclosures, there’s no correlation between a vulnerability’s CVSS score and the likelihood that the manufacturer will make a patch available.
Yet another issue is that there’s no standardized protocol for patching, and it’s not clear whether it should be the domain of IT, biomedical engineering, or some kind of combination of the two departments. “This is a classical argument in hospital management structures,” said Dameff. Teaching biomedical engineers how to be responsible for medical devices is like “creating a bunch of unicorns,” he said. “You have to spend a huge amount of resources to basically double major each and every one of your biomed people, which is really hard to do.”
A more scalable model, he believes, would be to have security talent under IT, but working very closely with the biomed team, with some integration and some of the same reporting structures. Patching would be a collaborative effort. “Every time you patch a medical device, you have a clinical engineer, working on validating the pre- and post- patch for a medical device to make sure it’s working, and you have IT making sure that the patch actually closed the vulnerability,” he explained.
Lack of research on impact
Trying to pinpoint which specific actions would move the needle or help solve the last-mile problem isn’t easy, in part due to limited research. “There’s a huge absence of data at almost every aspect of discussion,” said Tully. “We don’t have most of the foundational data to make good arguments or decisions on these types of things.” In fact, there’s so little research that when Tully is discussing concerns related to ransomware attacks, he points to a 2017 study published in the New England Journal of Medicine looking at delays in emergency care and mortality rates for those who were taken to the hospital after heart attacks during major US marathons. Funding and support for research would allow for actual ransomware epidemiology research, for example, and help decision-makers determine what the biggest problems are and test various hypotheses about how to fix them.
Some research has been published, but it has had its own issues. For example, research publicized on PBS and in Krebs on Security focused on how hospitals that had been breached did worse with door-to-EKG times and had worse 30-day mortality rates than other hospitals. It concluded that it was security measures put in to remediate the breach that led to worse outcomes. Dameff and two colleagues wrote a response to the paper, pointing out that the authors failed to measure which security controls were put in place after a breach and to compare them to the measures in the non-breached hospitals (which may also have rolled out security controls such as multi-factor authentication). The paper also reported a two-minute increase in door-to-EKG time between the two groups.
“It just doesn’t make sense because if you get breached, you’re not going to put a password on your EKG machine. Your EKG machine was not the source of your breach,” Dameff explained. “Furthermore, most emergency departments do paper EKGs. So it doesn’t even make sense how to log back into electronic health records. Something like increasing password complexity, for example, would explain that two-minute increase. It just doesn’t make sense.”
The control room at the 2019 CyberMed Summit. Credit: University of Arizona
Understanding “risk”
Part of the problem, Dameff said, is that doctors view risk through the lens of their medical training. That understanding of “risk” doesn’t exactly equate to how the cybersecurity community understands risk.
Tully, Dameff, and their colleagues wrote commentary in response to research discussing pacemaker vulnerabilities. The research looked at adverse outcomes from patching, but in their view it missed the mark when it came to risk assessment.
“That’s how doctors think and that’s how we’re trained to think in medical school. They want us to do biostatistics, and they teach us about sensitivity and specificity, number needed to treat, and number needed to harm. This is how we’re taught to practice evidence-based medicine,” said Dameff. “This is a really good thing except when it fails to understand that cybersecurity does not follow the traditional risk paradigms.”
Simply put, measuring the side-effect profile of a medicine on a cohort of the population or looking at the percentage of people who might get the flu and how to mitigate that is very different from assessing vulnerabilities in medical devices.
“Cyber risk is very different. It has to do with exploitability, not traditional understandings of risk that doctors understand,” Dameff continued. “We have intelligent adversaries, we have evolving threats and all you need is connectivity to have widespread impact.”
Although Tully and Dameff’s commentary made an analogy between patching and vaccinations, the original authors disagreed. Teaching physicians that the risk calculus they learned in medical school is the wrong paradigm for viewing cybersecurity remains an uphill battle.
Lack of staffing
Even if all of the above problems were magically solved before 2021, there would still be a fundamental issue affecting the state of healthcare security: hospitals, like many organizations these days, have limited personnel and resources. And often the first area to get cut or bypassed is IT.
Tully is quick to point out that small hospitals, for example, would often benefit more from extra assistance than from punitive regulatory measures. Implementing fines for outdated software, for instance, could overburden an already thin staff and collection of resources. But a simple “Cash for Clunkers”-style program could help smaller critical-access rural hospitals replace outdated technologies without worrying about their bottom line.
“I do think it’s also important to note that every dollar spent on remediation or recovery from an event is a dollar that can’t be put toward patient care,” he told Ars. “In an industry with one of the thinnest margins out there, that translates into a real and significant cost.”
Even if hospitals could ultimately incorporate some security benchmarks, it doesn’t mean they’d be deployed securely, Dameff warns. He points to a 2017 Health and Human Services Task Force report that states that many small organizations can’t afford to retain a single full-time security person on staff. “Because of that, we can make huge advances and seem like we’re fixing a big part of the problem in one space, but then realize that all our work was only making minimal change because the hospitals that took that securely designed technology and then deployed it don’t know what they’re doing or have their own alternatives,” he said.
Realistically, this may be the one hurdle beyond the scope of the communities gathered at CyberMed. But as one particular ’80s cartoon once said, knowing is half the battle. No dates for the next CyberMed have been set yet, but Tully, Dameff, and many of their guests intend to continue this fight. For 2020 and beyond, these medical professionals-turned-conference planners aim to keep educating all the different parties involved in healthcare cybersecurity, to develop curricula and work on patient awareness, and to figure out how to move beyond high-level conversations and communicate directly with the patients who are most affected. After all, those customers may be the party that can instigate the most change in the end.
Yael Grauer (@yaelwrites) is an investigative tech journalist based in Phoenix. She has written for WIRED, Slate, The Intercept, and others. Her PGP key and other secure channels are available here: https://yaelwrites.com/contact/. She previously wrote about VPNs, Dark (UI) Patterns, and microchipping for Ars.