H-ISAC Webinar: Why Data Is Your Strongest Asset In Healthcare Security

Arthur Braunstein

26 min read | 27/04/2023

Device-level telemetry can provide invaluable information about the integrity, quality, and reliability of medical devices. Such insights, in turn, can be leveraged to reduce the need for patching, improve post-market surveillance, satisfy regulatory requirements, and support parties using HSCC's Model Contract Language.

Most importantly, perhaps, they help establish the foundation for data-driven collaboration between device manufacturers and healthcare providers, helping ensure patient safety.

Join Arthur Braunstein, Sternum VP of Sales and Business Development, and Dylan Hall, H-ISAC's Cyber Threat Intelligence Analyst, as they talk about:

  • Ways to collect and use data from connected medical devices
  • The impact data can have on MDM/HDO collaboration
  • Real-world examples of data being used to improve device security and reliability
  • The far-reaching advantages that shared understanding can provide


Dylan Hall: So welcome everyone and thank you for attending today’s webinar. My name is Dylan Hall. I’m a Cyber Threat Intelligence Analyst with Health-ISAC and I will be introducing today’s speaker from Sternum and the overall objective of today’s webinar.

In this webinar, Why Data Is Your Strongest Asset in Healthcare Security, we will look at ways to collect and use data from connected medical devices to support MDM and HDO organizations. We will also demonstrate how data can facilitate collaboration between MDMs and HDOs, and finally explore the far-reaching benefits that this data can provide.

Before we introduce today's speaker, I would like to inform everyone that the webinar will be recorded and made available should you wish to revisit today's content. Feel free to post questions in the Q&A panel during or after the presentation so that we can address them. We should have about 10 minutes or so to answer questions at the end.

So with that, I will pass it over to our speaker, Arthur Braunstein. Over to you, Arthur.

Arthur Braunstein: Thank you Dylan. I appreciate the introduction and we will go ahead and start. The topic today is data and shared responsibility, or to be a little more specific, two intersecting problems. One of them is shared responsibility and what it means; the other is post-market surveillance, and the intersection of the two is talked about more and more these days.

The emphasis that we see on post-market surveillance recently is underscoring certain problematic practices in security as well as gaps in security technology, and these are having an effect as well. They tend to entrench practices that are unproductive and ultimately make the HDO-MDM relationship adversarial at times.

So we’re going to talk about what the reasons behind that are as well as discuss a technology paradigm that will produce shareable data, analytics in particular, of common interest to MDMs and HDOs that will encourage collaboration and reinforce the concept of shared responsibility.

Just to introduce our company Sternum, I will say two things. First, we have a long heritage as a company in cybersecurity, in particular cybersecurity for embedded systems, from both the defensive and the offensive side.

So we have a solid understanding of what threat actors do. We also have a strong heritage in medical devices, medical device compliance, and medical device protection.

One of the topics that's always sort of interesting is to figure out what questions about yourself, your products, and the situation of your fielded products you can answer today. As we seek a common language between MDMs and HDOs, knowing where the gaps are is helpful. So there are four questions that I often ask when I'm talking to a CISO of a hospital or a head of product security at a medical device manufacturer.

They’re relatively straightforward. How many devices do you have? Where are they? What condition are they in? Are you able to really meet both the letter and the spirit of post-market surveillance? One of the really important ones is how do I find out if there’s an issue? Do I know about it before somebody tells me about it or somebody calls technical support and says there’s an issue?

Then there is patching in particular, because that is a practice everybody is talking about and it is entrenched today. What am I actually spending on it? Not what I think I'm spending on it, but what I'm really spending on it.

I have to say I've never spoken with anybody who can answer yes to all four. Generally, the best you get is a maybe, and this shows that there's lots of room for improvement in this whole area of collaboration and shared responsibility, particularly in a world where post-market surveillance is so important.

Let's start by talking a little bit about the concept of shared responsibility, and particularly the FDA's perspective on it. The idea was to encourage collaboration, and that's really the central word, right? We know that we have a shared interest in cybersecurity. HDOs and MDMs do. Patients certainly do. Regulators are doing their best to enable that.

So a lot of work has gone into sort of clarifying what shared responsibility means and assigning responsibilities. All of it done, though, against the background of a simple but recurring and consequential technological fact: once HDOs receive a medical device, they don't have the ability to install additional security protection on it. So the FDA, recognizing that silos are the enemy of collaboration, has promoted the idea of shared responsibility.

The idea of shared responsibility and the reality and practicality of it are a little bit different. In many cases, the idea of shared responsibility has ossified into silos of activity and, maybe what's even worse, into ritualization of security and security practices. Patching and SBOMs are examples. I will speak more about them in a moment.

But the bottom line is that HDOs and MDMs have very different expectations of one another and that’s reasonable and understandable. They have different interests and different ideas about who’s responsible for what.

So you end up with a situation where everybody does what they're supposed to. We did a patch. We wrote a patch. Now it's your problem to deploy the patch. But that isn't necessarily helpful from a security perspective, and in fact it often has a negative effect both on security and on the general atmosphere of collaboration. One example: it tends to slow the adoption of connectivity, which would be an obvious benefit for everybody.

So there are various rules of the game when it comes to shared responsibility. The different standards, procedural frameworks, and regulatory frameworks are not designed for the purpose of shared responsibility, but they're the context in which shared responsibility unfolds and the framework in which the different actors in medical device cybersecurity are held accountable for what they are supposed to do.

So if we take a look through this, we see the Omnibus Act, for example, where the force of law can be applied to ensuring that medical device manufacturers meet certain requirements and provisions around making sure that their devices are secure and that there are updates. SBOMs factor into that.

The Model Contract Language from HSCC is an attempt to address some of the gaps and differences in interest contractually, by giving MDMs and HDOs language around many of these same issues to negotiate when they're working on a sale.

The one thing that I would say about the HSCC Model Contract Language, and we will see this is a recurring theme, is that it tends to favor the HDOs. It doesn't look a whole lot like collaboration when every provision begins "The supplier shall…", and as a matter of fact that is the case in the Model Contract Language.

The FDA-MITRE Playbook is another framework that people operate in. It’s more oriented towards HDOs and towards resilience and towards the HDO’s obligation to manage devices carefully and then the European MDR is another framework that many people are operating in.

That tends to be more administrative. Local authorities in Europe define the specific standards, and it's more life-cycle oriented, which is actually a good thing. That's a good way to approach it. But one of the commonalities we see in all of these is that they're all talking about the same things: monitoring, threat detection, threat modeling, and vulnerability management.

And they're all updated, and updated frequently. We see updates, for example, from 2016 and 2018. HSCC's Model Contract Language has undergone an overhaul since 2020. The Omnibus Act incorporates principles from the PATCH Act, and so on and so forth.

One of the things that is curious and really indicative here is the fact that these requirements and frameworks are routinely updated, suggesting that they're out of date almost at the moment they are produced. That would suggest that the creation of new technology, and the way technology is being used, is pulling away from the ability of these frameworks to safeguard it.

In fact, that's only part of the problem. Another is what the regulations actually say. Are they really clear, and are they helpful? From the MDM perspective and the HDO perspective, one question is obviously: OK, if they've been changing so frequently in the past, what's going to keep them from changing again? What's going to keep me from being out of compliance with them in a year or two?

This tends to have a centrifugal effect on the idea of collaboration. I have two examples here. One is a comment from Ernst and Young on the EU Data Act, which is pretty stark, right? It's difficult to determine whether a specific product or service falls within the scope of the regulation. That seems like one of the first things that you should know.

The Omnibus Act is interesting as well because it has what seems on the surface like a clear definition of a cyber device. What device should fall under the scope of the Omnibus Act? But in fact, it’s not at all straightforward as I will show now on the next slide.

This is an example of a theory being translated into practice, right? There’s an old saying, in theory, there is no difference between theory and practice. In practice there is.

This is a LinkedIn conversation among some very serious and very knowledgeable and very prominent practitioners in the area of medical device cybersecurity, responding to a post about the definition of a cyber device, the Omnibus Act’s definition of a cyber device and questions that came up.

To cut a long story short, the conclusion was that nobody really knows which devices do or don't fall into the scope of the definition of a cyber device. Does the ability to connect to the internet mean it's directly connected to the internet, or connected through a hub? Does it mean it has the ability to connect but isn't connected, or that if it merely has the ability to connect, it falls within the scope?

What about all the other attack vectors besides the internet? So this is an example of a case where attempting to introduce clarity actually reduced clarity because if you would ask most MDMs what a cyber device was before this, they probably would have had an answer.

The reasons for this are not insanity. Everybody is working conscientiously to get this right. Taking a step back a bit, some of the commentaries around the efforts and requirements for regulating medical device security show you what's going on.

There's a comment from Underwriters Laboratories here which speaks about technical, logistical, and economic challenges. I think that's a very important point, because there are things that can be proposed, things you can do, which simply make no operational or economic sense. We have to be careful to steer clear of those. We need to give MDMs and HDOs something practical and economical to work with. The translation of the bottom line here is that this is pretty complicated stuff that we're all working on together, and the strategies for it need work.

There's a commentary on the FDA's quality system which references the generality of the regulation. Generality is code for: it's vague, it's not clear, people interpret these things differently. But I think more importantly, part of what it's saying is that this is not really addressing the root cause; we're addressing symptoms of an issue rather than the root cause. And then the commentary on the Model Contract Language makes a diplomatic statement about miscommunication and inconsistent contract terminology, and in fact that may be correct.

But in fact, I think it’s worthwhile acknowledging that both HDOs and MDMs have very different jobs and very different interests, and very different perspectives on cybersecurity. Giving them a common denominator and a common language is not the easiest thing in the world to do and if we just think about it, security at an HDO is often handled by somebody whose background is in IT security. Product security from an MDM is often handled by somebody who’s an engineer.

Those are more different worlds than would appear and gluing them together, and making them unified requires effort, requires work, requires communication and as I will show you, requires data that both can use for analysis and decision-making.

The result of all of this is that there’s this constant cycle of change and uncertainty. Regulators come up with new rules for MDMs. HDOs are busy doing their job protecting patient safety which is exactly what they should be doing, as well as privacy.

When MDMs and HDOs get together and talk about their products, they often use the Model Contract Language for defining responsibilities, and the Model Contract Language shifts a lot of the liability for product security, and product safety for that matter, to the MDM. This drives up the cost, the effort, and the level of ritual for the MDMs. But it's unclear that it really helps a whole lot in the area of security. In fact, there are questions about whether it helps at all.

Then, because there are doubts about security and whether it's helping, regulators come in with new regulations and new rules, which starts the cycle again and again. We've seen that through the frequent introduction of new regulations with new provisions.

So root causes matter and I think it’s helpful to take a step back and think about why we’re seeing these issues, particularly because everybody is – certainly everybody that I talk to in the industry is quite sincere about product security.

I think a big part of the answer is that devices are engineered to be secure from a hardware and a logical perspective. They're hardened, if you will. But because of a connected world and because there are so many of these devices, the post-market period is becoming a greater and greater risk. I mean, the longest part of a device's life is going to be in the market when it's being used, not when it's being designed or sitting in a warehouse waiting to be shipped.

That's the period of time, when it's in a clinic or in some patient's home, that it can be compromised. We really don't have very reliable, very good ways of protecting devices in the field. We don't even have very good ways of knowing what's going on with them in the field, and we've introduced patching as a remedy, along with a number of other things.

But the net is that we're making devices that outlive their security, and that's part of what's causing this constant cycle of new rules and regulations and the uncertainty, as well as making it difficult for MDMs and HDOs to talk to one another in a non-siloed way.

So as we look at this, it's sometimes helpful to see the situation exactly the way it is, understand why what we're doing is problematic, and then, on the basis of that root cause, ask what we can do about it and what we can do to get MDMs and HDOs collaborating.

So the situation is like this as Palo Alto Networks Unit 42 said, “Seventy-five percent of infusion pumps have unpatched vulnerabilities.” So that’s what we’re dealing with today and that’s what we will be dealing with tomorrow and a year from now and five years from now.

So unpatched vulnerabilities are a fact of life in these infusion pumps and the moment these are patched, there are going to be new infusion pumps or new devices with new vulnerabilities.

From our end, we have an interesting data point: it takes six months to patch a high-severity vulnerability, and that's if it's patched at all. There are examples here where it takes three years to patch a high-severity vulnerability. And a low-severity vulnerability is a judgment call, right? It can be low severity according to one analyst and high severity according to another.

So the net here is that we have a really, really long period of time between when a vulnerability is disclosed and when a patch is available for it and that entire period of time, that vulnerability is exploitable or attackable, not to mention all the time that it existed before disclosure because vulnerabilities preexist disclosure.

To compound matters, on the next slide what we see is that 80 percent of attacks use a vulnerability that was reported three years ago or more. So these vulnerabilities are around for a long time, they represent a window of exposure, and patching is unable to keep up with them. In the context of shared responsibility, then, what we're dealing with is a form of collaboration that requires HDOs and MDMs to work together on patches in an environment where they're always going to be behind, and only going to get further behind.

So maybe just to say a couple of words about patching. The concept of patching has taken on a life of its own. It has its merits and it has its demerits. The demerits are pretty big, and they're worth knowing about as we think about the next evolution of security for medical devices. Probably the biggest is the one we just spoke about, which is that patches are late by definition.

They leave a window of exposure for an attacker that can last years or even decades. The second is the other point I was making, which is that patching is a process. It can take on a life of its own and become a form of religion. We do patching because we do patching. Patching becomes the goal rather than risk reduction.

There are attempts to prioritize patching around the severity of a vulnerability, CVSS scores, and so on and so forth. But as a practical matter, it is a difficult, challenging, distracting, expensive, and often ineffective thing to do. Put differently: one head of product security told me that for him, patching is like cauterizing a wound without anesthesia. It's a last resort, something you do if there's no other alternative. They're not crazy about SBOMs either, which are an administrative process; an SBOM doesn't really represent control, and it doesn't represent a foundation for HDOs and MDMs doing anything together.

So that's patching. I'm spending some time on it because vulnerabilities, and protecting them from exploitation, are so important. The question is, what do we do? What's the alternative?

I mentioned that patching is a last resort, and that's the case. So if we're going to find something better, we have to be able to define it. And what gets interesting is that it's not difficult to idealize a paradigm that would represent an alternative to patching as the sole control against exploitation of vulnerabilities.

So if we sit back and think about that problem, the post-market period and the problem of exploitation, particularly of unknown vulnerabilities, we need three things, right? We need to instrument our devices, and we need to do that at the firmware level, because when it comes to exploitation, exploits happen in the firmware. It would be nice if we could also get some asset management and software development life cycle capability from that. But you have to have some presence on the device, some form of instrumentation.

The next thing we would like to have is continuous monitoring: basically watching the device, or whole fleets of devices, developing models for alerts and analytics, and, maybe more strategically, being aware of the state and condition of every device at any time. So you have an analytical capability with some anomaly detection built in. Anomaly detection is important because it helps you find unknowns, and it's the unknown unknowns that can hurt you.

Then the third area is active controls, which are about protection from exploits and zero-days. The key to active controls is that they have to operate without dependencies; they should be able to operate autonomously and deterministically.
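
To make the first two enablers concrete, here is a minimal sketch in C of what firmware-level instrumentation might look like. It is illustrative only: the event structure, the event IDs, and the trace_emit and comms_queue functions are assumptions made for this example, not any particular product's API.

```c
#include <stdint.h>

/* Illustrative trace event, kept small for resource-constrained devices. */
typedef struct {
    uint32_t timestamp;   /* device uptime in milliseconds */
    uint16_t event_id;    /* what happened (see enum below) */
    uint16_t severity;    /* 0 = info ... 3 = critical */
    uint32_t arg;         /* event-specific payload (address, count, etc.) */
} trace_event_t;

enum { EVT_SESSION_OPEN = 1, EVT_SESSION_CLOSE, EVT_AUTH_FAIL, EVT_EXPLOIT_BLOCKED };

/* Hypothetical transport: queue the event on the device's existing
 * communications channel (BLE, cellular, serial, etc.). */
extern int comms_queue(const void *buf, uint32_t len);
extern uint32_t uptime_ms(void);

void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg)
{
    trace_event_t ev = { uptime_ms(), event_id, severity, arg };
    comms_queue(&ev, sizeof ev);  /* fire-and-forget; must never block care delivery */
}
```

The later sketches reuse this hypothetical trace_emit hook; the backend that collects these events is where the fleet-wide analytics and anomaly detection would live.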

Now I’ve walked through this model, this paradigm before with other people and what they’ve said is, “Hey, this looks a lot like EDR,” and my reply is, “Yeah, it does. EDR was a response to a very similar set of problems on different devices. But the paradigm is completely valid for embedded systems.”

It has to be implemented differently because you’ve got resource-constrained devices. So the technology has to have a different foundation so it’s properly adapted for embedded systems. There’s nothing around EDR as a concept, as a paradigm that’s not applicable to embedded systems.

What this does is change the kinds of dialogues that MDMs and HDOs can have. Instead of having a conversation about a particular patch for a particular vulnerability, they can start talking about resistance to very specific classes and types of threats, and MITRE is a great source for this; they identify the Top 25. So you can imagine a conversation between an MDM and an HDO where, instead of talking about an SLA for patching, they're talking about the capability to prevent, or be protected from, a particular threat identified in the MITRE Top 25. That would be a pretty good foundation for evaluating the cyber resistance, the cyber capability, of a particular technology.

Furthering that dialogue, there are technical means of demonstrating, both in that conversation between the MDM and the HDO and also to a regulator, that this resistance is built into the device. And this is a post-market proposition: the concept I'm talking about here is something you can discuss and realize during the runtime of the device.

So when we think about patching, we think about patching a particular vulnerability. But as we discussed before, getting to know that that vulnerability exists and getting a patch can leave a window of exposure for decades.

But in a different scenario, imagine that a vulnerability is disclosed. The thing about vulnerabilities, about CVEs, is that they are instances of CWEs (Common Weakness Enumeration), which are families of weaknesses. Technology that correlates CVEs with CWEs, and then demonstrates that protection is implemented or enabled at the level of the CWE, can allow an MDM to assert with authority that the vulnerability is unexploitable, because the technology protects against exploitation of that entire family of vulnerabilities.

In other words, protection from a particular CWE generalizes to innumerable other vulnerabilities. That is a much different approach to dealing with vulnerabilities than reflexive or ritual patching, and it takes away the issue of having to judge whether something is high, medium, or low severity, or exploitable or not exploitable. You're just protected from it.
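
A sketch of the decision logic this enables. The CVE IDs below are placeholders, and the list of runtime-protected CWE classes is an assumption; the CWE numbers themselves are real classes (CWE-121 and CWE-122 are stack- and heap-based buffer overflows, CWE-416 is use-after-free, CWE-787 is out-of-bounds write):

```c
#include <string.h>

/* Hypothetical mapping of disclosed CVEs to their root-cause CWE class.
 * The CVE IDs below are placeholders, not real advisories. */
typedef struct { const char *cve; int cwe; } cve_map_t;

static const cve_map_t known_cves[] = {
    { "CVE-2024-00001", 122 },  /* placeholder: heap-based buffer overflow */
    { "CVE-2024-00002", 787 },  /* placeholder: out-of-bounds write */
};

/* CWE classes the device's runtime protection is assumed to cover. */
static const int protected_cwes[] = { 121, 122, 416, 787 };

static int cwe_is_protected(int cwe)
{
    for (size_t i = 0; i < sizeof protected_cwes / sizeof *protected_cwes; i++)
        if (protected_cwes[i] == cwe) return 1;
    return 0;
}

/* Given a newly disclosed CVE, decide whether its patch can be deferred
 * because the whole weakness family is covered by active controls. */
int patch_is_deferrable(const char *cve)
{
    for (size_t i = 0; i < sizeof known_cves / sizeof *known_cves; i++)
        if (strcmp(known_cves[i].cve, cve) == 0)
            return cwe_is_protected(known_cves[i].cwe);
    return 0;  /* unknown CVE: assume the patch is still needed */
}
```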

And so here are a couple of examples. I'm using some case studies from some of our customers to illustrate how a technology like this works in the real world, as well as how it catalyzes this kind of conversation.

This is an example of a stack corruption attack being prevented, something you generally want stack canaries to catch. Some products and developers implement them, and some don't. Sometimes there are third-party developers who are not as fastidious about this as they should be.

But that's an important point to make, by the way: this paradigm should cover not just the MDM's own code but also any other code they incorporate into the product. Usually that's going to be Bluetooth or some other communications software. But whatever it is, it has to be protected too.

But anyway, this is an example of a stack corruption attack being alerted on. Now, this is the sort of thing that MDMs and HDOs really can't talk about today, because they don't have the instrumentation for it.

And I have one more example, which is a heap corruption attack. It's quite reasonable and it happens all the time that an HDO will ask an MDM, "Do you have stack canaries enabled?" and the answer will be yes. But corruption of the stack is not the only means of exploiting a device. Heap corruption is also a way of doing it, and that's a lot trickier. There are no heap canaries, or none that I know of that have been commercialized.
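
For intuition, here is a minimal sketch of the guard-word idea applied to the heap, reusing the hypothetical trace_emit hook from earlier. Real protections are considerably more hardened (randomized canaries, protected metadata, deterministic enforcement), so treat this as an illustration of the concept rather than a design:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define CANARY 0xDEADC0DEu   /* illustrative fixed guard value */
extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_HEAP_CORRUPT 0x10

/* Allocate with guard words before and after the user payload. */
void *guarded_alloc(size_t n)
{
    uint32_t guard = CANARY;
    uint8_t *p = malloc(n + 2 * sizeof guard);
    if (!p) return NULL;
    memcpy(p, &guard, sizeof guard);                    /* head guard */
    memcpy(p + sizeof guard + n, &guard, sizeof guard); /* tail guard */
    return p + sizeof guard;
}

/* Check guards on free (or periodically) to catch an overflow
 * before it can be leveraged into an exploit. Returns -1 on corruption. */
int guarded_check(void *user, size_t n)
{
    uint8_t *base = (uint8_t *)user - sizeof(uint32_t);
    uint32_t head, tail;
    memcpy(&head, base, sizeof head);
    memcpy(&tail, (uint8_t *)user + n, sizeof tail);
    if (head != CANARY || tail != CANARY) {
        trace_emit(EVT_HEAP_CORRUPT, 3, (uint32_t)n);  /* alert MDM and HDO */
        return -1;
    }
    return 0;
}
```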

And so, this reflects the idea that I was speaking about before where you go through the MITRE top 25, you talk about the families of different kinds of attacks and together working backward say, “The result of your ability to protect me from a heap corruption attack should be the ability to produce a report like this.” And then you have something to talk about. Then an MDM can say to an HDO, “We detected this heap corruption attack. It’s something that you should know about.”

The MDM can also take a look at this fleet-wise to see if all of their devices for example are under attack or maybe it’s not their devices, maybe just somebody is attacking a particular version of Bluetooth that happens to be on their device which could be a widespread attack. Either way, it’s something that both they and their HDO should know about, well above and well beyond the simple existence of a vulnerability.

And the outcome of this can be a decision very different from patching. The decision can be that a particular vulnerability is unexploitable, so I can defer or avoid the patch. The point here is that the conversation can be about the practicality and reliability of the technological means of detecting and preventing the exploitation of a particular vulnerability, rather than a default to a patch. And there's a foundation of data behind that which everybody can look at, understand, and use for making a decision.

I guess one of the number one rules for judging the effectiveness of a paradigm or strategy is to see whether it would have worked in the past. I was talking to a product security practitioner the other day who happened to ask me about SweynTooth, and I said, "Well, why are you talking about that? That thing is years old." And he said, "Well, the disclosure was years old, but we still have products out there with software that still has the vulnerabilities, and they haven't been patched yet."

And we see here even in the disclosure, the language saying patches are provided, it will take a while for patches to make their way to the actual user-owned devices. I mean that’s as strong a statement of the problem of patching as any that I’ve ever seen. And you sort of think, “Well, imagine three or four years ago, had there been active controls on some medical device using one of these components with this vulnerability in the Bluetooth stack, instead of patching eventually, the answer could have been yeah, it’s protected from exploitation.” Very different sort of conversation, a very different level of assurance.

Also here, as we take a look at all of this hardware using the software with all of these vulnerabilities, we get a sense of how complex this problem really is and that trying to solve this administratively without some sort of active controls is going to make it really difficult and also really frustrating and have people pointing fingers at one another more than collaborating.

And then there is the whole question of future-proofing. Take the ThingWorx vulnerabilities: there are ThingWorx libraries used in medical devices that communicate with the PTC ThingWorx platform, and they had vulnerabilities in them. This highlights the issue of third-party risk, by the way. There are two different conversations here, one of which is: OK, we are going to figure out how to patch all of this.

And so PTC presumably will release a patch for this, and people will figure out whether they're really exposed and when they want to update. It's a whole different sort of thing if you can just say that the software, vulnerability and all, is protected from exploitation by some form of active controls.

I want to shift now from the issue of active controls and vulnerabilities, even though huge amounts of time and energy go into dealing with them, to the broader issue of monitoring and threat detection. Like the issue of vulnerabilities, many people have been thinking about this and trying to figure out the best way to handle it. The FDA started proposing ideas for this back in 2016, around incorporating threat detection capabilities, detection and response, vulnerability management, and compensating controls.

But the recommendations back then were nonbinding. However, the fact that they were not binding doesn't mean that they weren't useful, or that they were not right. In fact, they were right; those capabilities are part and parcel of the paradigm that we discussed earlier.

In fact, we see in today's version of the Model Contract Language that MDMs are expected to have some runtime detection or response capability. And if they don't have it, they are expected to get it soon; two years is the period of time used here.

Moreover, what's interesting is that the Model Contract Language basically makes the nonbinding recommendations of the FDA binding. The expectation is that MDMs will do this. And it's certainly a good idea.

So the question then is, all right, what does this look like? What are the enablers of this? And when I say this, I mean threat detection and threat analytics. From the perspective of an embedded system, there are really three enablers.

One is traces, and that's an important point I'll highlight, because it means that you have to collect data from the device itself.

The second is some form of continuous monitoring which has to be very lightweight.

And then the third is an analytical capability, a data visualization and data interpretation capability, but with an underlying, or at least parallel, anomaly detection engine for finding unknowns. The reason is that when devices misbehave, they can misbehave in unpredictable or unknowable ways, and it often takes some form of artificial intelligence or machine learning to find those things.
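
Even before machine learning enters the picture, the core of anomaly detection can be as simple as learning a statistical baseline per metric and flagging deviations. Here is a minimal sketch using Welford's online mean/variance algorithm; the 30-sample warm-up and the k-sigma threshold are illustrative assumptions:

```c
#include <math.h>

/* Streaming baseline: running mean and variance via Welford's algorithm. */
typedef struct { double mean, m2; unsigned long n; } baseline_t;

/* Feed one sample; returns 1 if it deviates more than k standard
 * deviations from the learned baseline, 0 otherwise. */
int anomaly_update(baseline_t *b, double x, double k)
{
    int anomalous = 0;
    if (b->n > 30) {  /* only judge once a minimal baseline exists */
        double sd = sqrt(b->m2 / (b->n - 1));
        if (sd > 0.0 && fabs(x - b->mean) > k * sd)
            anomalous = 1;
    }
    b->n++;
    double d = x - b->mean;
    b->mean += d / b->n;
    b->m2 += d * (x - b->mean);  /* baseline updates even for outliers; a
                                    real system might quarantine them */
    return anomalous;
}
```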

And once those capabilities are in place, things get very, very interesting because you now have access to all of this data that really represents a common denominator and a simple and straightforward way of communicating. So some alerts and some of the data you collect can be user-defined, but some can be anomalies.

So let's take debugging, for example. Debugging is sometimes a question mark. There are times when it's supposed to be done and times when it's not. However, assuming you can detect it, and with the right instrumentation you can, it certainly would be useful for an MDM to know about it, because debugging can indicate an attempt to steal IP. It would also be useful for them to be able to notify whoever is using that particular device that this is happening, so you can collaborate on stopping it.
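
On many embedded parts, detecting an attached debugger is a simple register read. As an assumed example, on an ARM Cortex-M class device the Debug Halting Control and Status Register (DHCSR) exposes a C_DEBUGEN bit that a debug probe sets:

```c
#include <stdint.h>

/* ARM Cortex-M (ARMv7-M): Debug Halting Control and Status Register.
 * C_DEBUGEN (bit 0) is set when a debug probe enables halting debug. */
#define DHCSR           (*(volatile uint32_t *)0xE000EDF0u)
#define DHCSR_C_DEBUGEN (1u << 0)

extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_DEBUGGER_ATTACHED 0x20

/* Poll periodically: fielded devices normally never see a debugger,
 * so this event is a strong signal worth sharing with the HDO. */
void check_debugger(void)
{
    uint32_t dhcsr = DHCSR;
    if (dhcsr & DHCSR_C_DEBUGEN)
        trace_emit(EVT_DEBUGGER_ATTACHED, 3, dhcsr);
}
```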

Something similar applies to session behavior. Session behavior is one of those things that's fine until it's not. But certain session behavior, for example a large number of sessions in a short period of time, sessions not closing after they're opened, sessions staying open for unusually long periods, or sessions not opening when you expect them to, can be a problem.

So the idea is that with this instrumentation, you have an understanding of what behavior to expect. Then, when devices perform in unexpected ways, you get an alert, and an MDM and an HDO can talk about what that means and what they're going to do about it.
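
A session-flood check can be as small as a counter over a rolling window. A sketch, with the window length and threshold as illustrative stand-ins for a learned per-device baseline:

```c
#include <stdint.h>

extern uint32_t uptime_ms(void);
extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_SESSION_FLOOD 0x30

#define WINDOW_MS    60000u  /* assumed: judge session rate per minute */
#define MAX_SESSIONS 20u     /* assumed: normal peak for this device */

static uint32_t window_start_ms;
static uint32_t session_count;

/* Call from the session-open path of the device's protocol stack. */
void on_session_open(void)
{
    uint32_t now = uptime_ms();
    if (now - window_start_ms > WINDOW_MS) {  /* roll the window */
        window_start_ms = now;
        session_count = 0;
    }
    if (++session_count > MAX_SESSIONS)
        trace_emit(EVT_SESSION_FLOOD, 2, session_count);
}
```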

There are a number of other examples. A brute force attack is also something that is difficult to get evidence of; an instrumented device makes it quite straightforward. And there are different kinds of conversations that MDMs and HDOs would have. Is the MDM the one under attack? Is the HDO the one under attack? Is it both? Are other HDOs under attack?
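
On the device side, the evidence and a first line of defense can come from the same few lines. A sketch with assumed limits, pairing an alert with an exponentially growing lockout:

```c
#include <stdint.h>

extern uint32_t uptime_ms(void);
extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_BRUTE_FORCE 0x40

#define FAIL_LIMIT 5u   /* assumed: failures tolerated before reacting */

static uint32_t fail_count;
static uint32_t locked_until_ms;
static uint32_t lockout_ms = 1000u;

/* Gate authentication attempts on the lockout window. */
int auth_allowed(void)
{
    return (int32_t)(uptime_ms() - locked_until_ms) >= 0;
}

/* Call with the outcome of every authentication attempt. */
void on_auth_result(int ok)
{
    if (ok) { fail_count = 0; lockout_ms = 1000u; return; }
    if (++fail_count >= FAIL_LIMIT) {
        trace_emit(EVT_BRUTE_FORCE, 3, fail_count);  /* shared evidence */
        locked_until_ms = uptime_ms() + lockout_ms;
        lockout_ms *= 2;                             /* back off harder each time */
        fail_count = 0;
    }
}
```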

So the availability of this information, particularly its availability quickly, allows for really topical and useful decision-making. Even something as simple as a client connection, the client connecting to an IP address, could be a violation of policy, or it could be something perfectly normal.

But effectively, alongside the tracing and monitoring we talked about, you also want configured alerts, so you can see whether devices are doing something you've said they're not supposed to do, or that a clinic, a user, or an HDO doesn't want them to do.

Physical and environmental variables are important too. It's a truism that product safety and product security are almost the same thing; if they're not quite the same thing, they're close enough to be interchangeable. Temperature is one of those variables that can matter, and having analytical capability around it, as well as a mechanism for alerting both the MDM and the HDO to variations or anomalies, can be valuable from a product performance perspective but also from a patient safety perspective.
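
The device-side check is deliberately boring; the value is in the shared alert. A sketch with an assumed sensor driver and an assumed operating band (real limits come from the device's labeling and risk analysis):

```c
#include <stdint.h>

extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_TEMP_OUT_OF_RANGE 0x50

/* Assumed safe operating band, in tenths of a degree Celsius. */
#define TEMP_MIN_DC 100   /* 10.0 C */
#define TEMP_MAX_DC 400   /* 40.0 C */

extern int32_t read_temp_deci_c(void);  /* assumed sensor driver */

/* Call on a timer; alerts both MDM and HDO dashboards via the trace stream. */
void check_temperature(void)
{
    int32_t t = read_temp_deci_c();
    if (t < TEMP_MIN_DC || t > TEMP_MAX_DC)
        trace_emit(EVT_TEMP_OUT_OF_RANGE, 2, (uint32_t)t);
}
```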

Going back to this idea of anomaly detection and expected behavior: if a device is supposed to communicate, and that communication is fundamental to its ability to do what it's supposed to do, and it doesn't communicate, that can indicate some kind of problem or failure. From the MDM's perspective, that failure could be local. It might not be a failure at all; it might be that someone just unplugged it.

But it could also be a failure in one location, a failure for a particular generation of devices, or a failure geographically. Either way, the MDM is going to want to know about this. They are going to want the information before they start getting calls, and they are going to want to understand how they should talk to their customers, their HDOs, about it and what they should say.

Communication is an issue that comes up again and again. You have idiopathic disconnects. You have subpar performance in noisy signal environments.

Instrumentation and continuous monitoring provide you with diagnostics to understand this and to optimize for performance.

This is a case study that engineers love, around bug detection. One of the by-products of active controls is that they can be used for detecting certain bugs, like memory leaks, that are difficult to catch with static analysis or pen testing. If you are an engineer facing either the likelihood that you're going to release a product with bugs in it or, worse, the requirement to fix and diagnose those bugs later, that's a headache. So the availability of this capability is a real plus for engineers and developers.
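
One way such a by-product can work: if allocations are already instrumented for exploitation protection, the same hooks can watch for the classic leak signature of monotonic growth in live allocations during steady-state operation. A sketch with an assumed per-device budget:

```c
#include <stdint.h>
#include <stdlib.h>

extern void trace_emit(uint16_t event_id, uint16_t severity, uint32_t arg);
#define EVT_POSSIBLE_LEAK 0x60

#define ALLOC_BUDGET 10000u  /* assumed: live allocations in normal operation */

static uint32_t live_allocs;
static uint32_t high_water;

void *traced_malloc(size_t n)
{
    void *p = malloc(n);
    if (p && ++live_allocs > high_water) {
        high_water = live_allocs;
        if (high_water > ALLOC_BUDGET)  /* steady growth past the budget */
            trace_emit(EVT_POSSIBLE_LEAK, 1, high_water);  /* likely a leak */
    }
    return p;
}

void traced_free(void *p)
{
    if (p) { free(p); live_allocs--; }
}
```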

I've taken the liberty of showing an anonymized note from a developer about how important it was for him to have this capability. Which brings us to the wrap-up.

The status quo of frameworks, regulations, contractual language, practices, and traditions around cybersecurity tends to be siloed. Things like hardening and compliance are very siloed. MDMs do their kind of compliance, HDOs do theirs, and there hasn't been a lot where they collaborate.

Patching and SBOMs certainly have a communications dimension to them, but there's not a lot of collaboration; it still exists in silos. OK, we will get the SBOM, and if we find out about some vulnerability, we will find out who the vendor is, we will get a patch and send it to you, and you will deploy it or you won't. Still problematic and still siloed.

And that preserves the same issue we've talked about all along, which is that in a post-market world there's not a lot going on. There's not a lot of data coming in, in organized, structured, routine, or real-time ways, to allow MDMs and HDOs to collaborate.

And so one of the most interesting things about the paradigm we talked about before, this combination of instrumentation, monitoring, and active controls, is that it starts to change the KPIs available to MDMs and HDOs. Instead of thinking in terms of, for example, the vulnerabilities that have been disclosed, they can think of the vulnerabilities that are irrelevant. We don't have to worry about use-after-free exploits or out-of-bounds writes or those sorts of things. And that's something they can jointly work on.

They can think in terms of exploits nullified rather than joint management of SBOM.

For the post-market area, an interesting KPI is time to insight. How long does it take me to find out something important or something interesting?

Mean time to remediation is a nice KPI. How much time can I take out of the diagnostic and troubleshooting process? How many patches have I avoided? Never mind how many I've done or how long it takes me to patch; how many have I avoided altogether?

And ultimately, the goal of this is to have safe and secure devices, devices meeting the letter and the spirit of the frameworks we spoke about, overlaid on top of shared responsibility, with MDMs and HDOs sincerely collaborating in the interest of patient safety and cybersecurity, sharing interests through sharing data. And ultimately, being able to answer the four questions we spoke about at the beginning of this presentation with a yes, and to be able to say yes not just now but throughout the life of whatever device is being discussed.

This is enjoyable to speak about. I appreciate the opportunity to present.

Dylan Hall: Awesome stuff, Arthur. I would like to thank our speaker for his insight on why data is a strong asset in healthcare security, and our listeners for taking the time to attend the webinar. This concludes this portion of the webinar. At this time, we would like to open it up for questions.

All right. Great. So we will get into those questions. I'm seeing one in here, so I'll just read it off. Where do MDMs have the most risk under the Omnibus Act, pre-market or post-market?

Arthur Braunstein: Yeah. That's a good question. It's funny; there's an old saying, the more things change, the more they stay the same. There are certainly some unknowns in pre-market right now. But MDMs tend to have pretty good compliance teams and experience working with the FDA. They will figure this out.

The new rules are a lot like the old rules; they just have more teeth. The real issue is what it has always been: it's in the post-market. And there are some factors that make the post-market period even more concerning for them than it used to be. HDOs are under fire, and they are holding MDMs responsible for all sorts of risk factors: exploitation leading to a breach, privacy or safety issues, and so on.

And because few post-market controls are contemplated, and neither the law nor the regulations really address this, the burden for dealing with it is on the MDMs. So there is obviously some business risk in these changes around getting a device approved. But once it's approved, the lion's share of the risk is really in the post-market environment.

Dylan Hall: OK. Awesome. I'm taking a look here and I'm seeing another one. So how will Model Contract Language on cyber affect MDMs long term?

Arthur Braunstein: Yeah. Well, the short answer is probably not in a good way. It ties in with the point we made before, which is that a lot of the controls are staying exactly the same; it's just that there are teeth in enforcing them. So if the controls are the same, the only question is who is responsible if something goes wrong.

And we are seeing that in the language. It's the MDMs; the HDOs are trying to shift that responsibility to the MDMs.

The other thing MDMs are going to have to think about is that awareness of dynamic controls, the sort of EDR-like controls we spoke about before, is growing. That's the kind of thing HDOs are very familiar with; their CISOs almost all have it and deal with it every day. So they are likely to either start asking for it or start favoring vendors who make it available to them in some form.

Dylan Hall: Very interesting. Kind of adding on to that though, is something like EDR realistic for medical devices?

Arthur Braunstein: I guess the way I would put it is that if you think of EDR as a technology or a product, if you look at CrowdStrike or SentinelOne or someone like that in the Gartner sense of the word, then it absolutely isn't. It never has been and it never will be. The agents are too heavy for the embedded systems used in medical devices, and the way EDR scans is just maladaptive for an embedded system.

But as a concept, it makes a lot of sense. The basic concept is security instrumentation that detects and protects against exploits and monitors security-related events. So as a concept, it certainly makes sense, and technologically, it's realistic. The key is that it has to be elegant. You can do brute-force things on a PC that you can't do on an infusion pump or a cardiac programmer. But if you can solve that problem, and there are technologies out there that do, it's not only realistic but highly desirable.

Dylan Hall: OK. Interesting. So how will cybercriminals react to the Omnibus Act?

Arthur Braunstein: Yeah. Well, I don't know any cybercriminals, so I haven't asked them. But I suppose I can put myself in the shoes of one, and the way I might look at it is that the laws are telling people to do more of what I already know I can bypass anyway. So it isn't clear to me that it fundamentally changes my life as a cybercriminal. I suppose it might introduce some inconveniences here and there, but I don't think it's a game-changing thing from the perspective of a cybercriminal.

Dylan Hall: OK. I'll ask one more here, and hopefully we can give some time back to our membership. So should more be done to secure medical devices before they are submitted for approval?

Arthur Braunstein: That’s a loaded question in a lot of ways because, in some respects, it sort of presupposes that people are not doing enough. And while I suppose there are some people who may not be doing enough, my sense is that everybody is pretty sincere and working pretty hard at cybersecurity.

I tend to think more in terms of what controls would be helpful to an HDO and harmful to a cybercriminal. I think in those two terms.

Threat modeling, which a lot of people are doing, has a role there. It's part of the equation, not the full equation. But certainly, I think it's important to think about what harm you are doing to a cybercriminal and how you're helping and working with your HDOs.

Dylan Hall: OK. Awesome. Well, I guess we will wrap it up. I’d like to thank our speaker, Arthur, again and his insights on why data is a strong asset to healthcare security and our membership again for attending the webinar. If there are no more questions or comments from you, Arthur, I’ll close out the webinar and we will see you guys at the next one.

Arthur Braunstein: No comments. Very enjoyable. Have a great day, everybody.

Dylan Hall: All right. Thank you. See you guys.
