Webinar: How Medtronic Secures, Monitors & Debugs Critical Devices

23 min read | 03/04/2023

Lian Granot
Chas Meyer

Do you have to deal with operational issues, Bluetooth connection problems, or battery depletion?
Do these take up too much of your time, possibly even hurting the business?
Would you (and your customers) benefit from real-time visibility into device metrics and logs?

Join Chas Meyer, Sr. Principal Product Security Engineer at Medtronic, and Lian Granot, Sternum’s CTO, as they sit down to discuss how Medtronic used Sternum to address these and other challenges with:

  • Zero-overhead runtime security
  • Live remote visibility
  • Device-level telemetry that streamlined debugging in the field

 


Lian Granot: Welcome to our webinar. Today we are hosting Chas Meyer from Medtronic, and we are going to talk about how Medtronic secures, monitors, and debugs critical devices. I am Lian Granot, co-founder and CTO of Sternum. I have an extensive background in cybersecurity, embedded development, and Android development.

Chas Meyer: And I’m Chas Meyer. As Lian said, I’m a Senior Principal Product Security Engineer at Medtronic. I’ve worked in embedded systems for 40 years, recently switched to the cybersecurity team, and I look forward to this presentation.

Lian Granot: Thank you, Chas. Let’s get started. First, we will do a brief overview of the Sternum platform. The platform has three main layers. The first layer is runtime self-protection, embedded in the edge devices themselves. It’s an agentless solution that works by automatically embedding verification points throughout the system.

Then, when the device executes normally, it continuously self-validates. The security is deterministic and autonomous; it does not require any internet connection.

The next layer in the Sternum platform is observability and continuous monitoring. In addition to the data collected by our runtime protection, you can use our SDK to collect any custom data that you would like from the system.
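For instance, a device could report a custom metric alongside the built-in runtime-protection data. The sketch below is only illustrative: sternum_trace and read_battery_level_pct are hypothetical stand-ins, since the SDK’s actual call names aren’t shown in this webinar.

```c
#include <stdint.h>

/* Hypothetical stand-ins; the real SDK call and the platform-specific
 * battery read are not named in the webinar. */
extern void sternum_trace(const char *name, int32_t value);
extern int32_t read_battery_level_pct(void);

/* Report one custom metric alongside the built-in telemetry. */
void report_battery_level(void)
{
    sternum_trace("battery_level_pct", read_battery_level_pct());
}
```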

Using this data provides you two things. The first is fleet-wide as well as per-device drill-down analytics. You can see how your entire fleet behaves, and if you have a major error throughout your fleet, or a specific device is malfunctioning, you can investigate it.

The second thing is that Sternum is an anomaly detection engine. You can get alerted on issues before you get complaints from the customer. You can get alerted on a specific device that suddenly starts behaving differently, and fix the issue before it affects other devices in your fleet.

In addition to that, the observability platform provides an additional security layer: the anomaly detection covers security anomalies, not only performance and diagnostics.

The third layer in the Sternum platform is operational and business insights. When you collect data from the devices across your entire fleet, it’s huge; you collect a lot of data, so you have to turn it into actionable insights. This is why Sternum Business Insights allows you to do so without any effort.

We know what business insights are relevant to manufacturers and our platform will automatically allow you to gain those insights.

For example, you can automatically track battery levels throughout the fleet and detect battery malfunctions. You can see the quality and stability of the devices in the fleet and understand whether you need to invest more R&D resources in improving your devices, and you can get many more insights using the Sternum platform. Today we are going to cover some of those insights.

So now we are going to talk about some of the use cases that our observability and runtime security enable. Chas, do you want to give a quick overview?

Chas Meyer: Sure. Sure. After a year of collecting data in the field on one of our products, we took a look at the errors that are logged by the firmware, and when we log an error, it’s not always an error. Sometimes, if you use a Bluetooth stack API and the Bluetooth connection was lost, you might get an error indication back from the stack, and we logged that as an error. So these aren’t necessarily all firmware errors; some are just non-optimal behaviors in the system.

When we collected this data for one of our products, we found a lot more error logging than we had expected, and it turned out that two error sources were responsible for the vast majority of the logging. When we looked at the code, we found that it was just a design mismatch in the intent of the error logging: when you schedule an event for delayed execution, you can come back and cancel that event if you decide you don’t want it to execute, and oftentimes that cancellation is performed on a function that’s no longer scheduled. We logged that as an error, even though it’s a normal operation in most cases, because we wanted to catch the cases where we went to cancel it and it wasn’t there.

But it turns out that we were over-reporting that particular use case in the error logging. So once you boil away overhead like that, we drilled down into the meaning of the error reports first, and then we used that to take a look at the source code and see whether these were actual bugs and how to resolve them.
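As a minimal sketch of the mismatch Chas describes (all names hypothetical): cancelling an event that has already run is normal, so logging it at error severity drowns out the rare scheduler failures that actually matter.

```c
typedef enum { EVT_OK, EVT_NOT_SCHEDULED, EVT_FAULT } evt_status_t;

extern evt_status_t event_cancel(int event_id);   /* hypothetical scheduler API */
extern void log_error(const char *msg);
extern void log_debug(const char *msg);

void cancel_delayed_event(int event_id)
{
    switch (event_cancel(event_id)) {
    case EVT_OK:
        break;
    case EVT_NOT_SCHEDULED:
        /* Normal in most cases: the event already executed. */
        log_debug("cancel: event was not scheduled");
        break;
    default:
        /* The rare scheduler failure worth reporting as an error. */
        log_error("cancel: scheduler fault");
        break;
    }
}
```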

Lian Granot: So how did you do this kind of monitoring before Sternum?

Chas Meyer: Well, that’s a really good question because it’s incredibly difficult in an embedded system to get observability even during development. Because if you’re logging these errors, who looks at the logs? Where are those logs? How accessible are they? And unless you have a really good logging system throughout, you’re going to have things happen that you just don’t know about and until we integrated Sternum into our system, we would go through the development process and the formal testing without this kind of visibility.

So it’s possible for some of these bugs to have escaped, and by integrating Sternum into our product, we can see in near real-time what kinds of things are happening, so that prior to release, we can take corrective action.

Lian Granot: So I have another question for you, following that, though maybe it’s more of a statement, because we have met many, many embedded developers from many industries. Some of them had some kind of monitoring, let’s say, but from what we saw, it takes a lot of effort just to collect the logs. Then to actually run analytics on them, or to investigate them, requires a very complicated analytics system that no one can actually put the effort into, right? That’s like a whole new product.

Chas Meyer: Yeah, and oftentimes in the embedded world, you have a set of skills targeted at embedded systems: the C language and embedded firmware programming. Doing full-stack development, across the cloud to a backend where this data can be collected and analyzed, is usually a capability that is just not available within the company.

If it is available, it takes a fair amount of time and effort and staff to build it out and maintain it. So having a drop-in product that’s ready to go right from the beginning, that gets us to the ability to observe this data without having to develop the capability ourselves is really a godsend.

Lian Granot: Yeah, definitely. Do you think devices that have only, I would say, a rare connection to the internet still need some kind of monitoring?

Chas Meyer: Absolutely. There are all sorts of things about the environment you’re operating in, and the nature and quality of that connection, that you need to know to understand how the system is performing in the field, and that’s about the only way to get it.

Lian Granot: Yeah, yeah, especially I think when you try to provide your customer or patients the best service they can get. Right.

Chas Meyer: Yeah. All this information helps us with improving product reliability and quality so that our customers have a better experience with it.

Lian Granot: I imagine it’s like tech support for a desktop computer, where someone can guide you over the phone and see what has happened in the system live. Maybe in the future we can have that capability for critical devices like medical devices: seeing the telemetry from the implants and from other critical equipment, and solving issues completely remotely, in real time. So let’s talk about Bluetooth.

Chas Meyer: Yeah, that’s my favorite topic. I’ve been working on Bluetooth for over 10 years now and it’s very interesting to me how you can go through development and not have exposure to the kinds of environments that you run into in the field and you really can’t know what those environments are going to be like until you do field it and then getting data back on how it’s actually performing is invaluable for refining what you do during development to stress test your system.

So we collected a lot of Bluetooth data, which we will probably talk about later, on how we were able to analyze use conditions in the field.

Chas Meyer: Now, I don’t know how familiar the audience is with Bluetooth, but we’re using the Bluetooth Classic Serial Port Profile to communicate between our instruments and our iPad. Bluetooth has 79 channels that it hops between, spread across the same spectrum that’s used for Wi-Fi, and those channels can be adaptively mapped out if there’s too much interference in that particular slice of the frequency band.

So one measure of the quality of your Bluetooth connection is how many channels are usable. Bluetooth has an algorithm for mapping channels in and out to adapt to the noise in the environment, and you can read that channel map from the Bluetooth hardware and see which channels are actually in use.
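As a rough sketch of how such a metric can be derived, assuming a Bluetooth Classic controller reachable over HCI (the wrapper around the standard Read_AFH_Channel_Map command is a hypothetical name): the map packs 79 channels into 10 bytes, one bit per channel, and counting the set bits gives the number of usable channels.

```c
#include <stdint.h>

/* Hypothetical wrapper around the HCI Read_AFH_Channel_Map command;
 * fills a 10-byte map with one bit per Bluetooth Classic channel. */
extern int hci_read_afh_map(uint8_t map[10]);

int count_enabled_channels(void)
{
    uint8_t map[10];
    if (hci_read_afh_map(map) != 0)
        return -1;                      /* read failed */

    int enabled = 0;
    for (int ch = 0; ch < 79; ch++)     /* 79 significant bits */
        if (map[ch / 8] & (1u << (ch % 8)))
            enabled++;
    return enabled;                     /* snapshot this periodically */
}
```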

So what we’ve done is we’ve taken snapshots of the channel map every 30 seconds so that we have a historic map of how it’s performing in that environment and you see here this was taken from lab data where we worked in an anechoic chamber injecting noise on Wi-Fi channels 1 and 11 with channel 6 being untouched.

What you can see there, on the left of the Bluetooth channel map, in the range that corresponds to Wi-Fi channel 1, is that most of those channels are disabled most of the time. You can see some spikes there in the plot; each is a sum of the number of occasions when that Bluetooth channel was enabled, and those spikes are the Bluetooth chip periodically probing the channel to see if it’s usable yet. It doesn’t mean it’s actually usable; it just means the chip is checking whether it’s recoverable yet or not.

So you can see that Wi-Fi channel 1 is interfering with a broad range of Bluetooth channels. Same thing on the right-hand side with Wi-Fi channel 11.

In the middle, we see the spectrum allocated to Wi-Fi channel 6, where we didn’t introduce any noise. But one of the odd findings here is: why is there a notch in the middle of that spectrum where we’re unable to use the channels in that small notch?

You can see on the right-hand side, where I’ve circled that area on the plot, that there are six Bluetooth channels in there that are just mapped out, and it’s in a usable area of the spectrum. So this is an unexpected finding.

By the time you have this much interference in the system, losing six channels is a big deal. So we would like to know what’s going on with those six channels, and we will likely be reaching out to the manufacturer of this Bluetooth chip to see if they can help us understand what might be going on here.

Lian Granot: That’s great. So how would you traditionally detect and inspect such issues, let’s say, without remote observability?

Chas Meyer: Well, traditionally you can get this kind of information if you have like a protocol analyzer. But that doesn’t help you with fielded products. We’ve had reports of Bluetooth interference issues at some sites where it has a significant noticeable impact on the performance of the system.

So we will send a team out there to go and take a look at what’s happening. We will bring protocol analyzers and instrumented equipment, we will take a look at what’s going on, and we see nothing. The problem is that in these environments, the noise is changing over time.

Like for example, I would expect that Wi-Fi utilization in the hospital would probably peak around lunchtime because people are taking lunch breaks and surfing the web and if you’re using a Bluetooth system during that time, you would get a different pattern of interference in the Bluetooth channels than say at closing time when everybody goes home and nobody is using Wi-Fi.

So when you show up to look at this stuff matters, and by having this kind of data collected from the field, we can take a look at what the real operating conditions were at the time a problem showed up, without worrying about sending a team out there and hoping to be able to capture it.

It’s like remote debugging. I kind of like to think of it as spying on the Wi-Fi environment, on the noise environment around Bluetooth.

Lian Granot: With much less equipment, I guess.

Chas Meyer: Yeah, it’s built into the equipment we already have.

Lian Granot: Yeah, that’s amazing, actually leveraging a very simple metric, the number of enabled channels, to detect such detailed behavior that is affected by so many parameters.

Chas Meyer: Yeah. Well, I have one finding where I built up a channel map like this from a real site. Like I said before, the earlier data was lab data, but I did take a look at one of the sites that were having issues with Bluetooth performance, and it was very strange, because everything was fine for the first 40 minutes and the channels were actually pretty good. Then, I have no idea what happened in the environment, but for the last 20 minutes, almost all the channels were really struggling.

Bluetooth has a limit on the number of channels you can map out: you’re required to keep at least 20 channels active regardless of the environment. In this particular case, it was wiped out down to 20 channels, and they were scattered across the spectrum, which indicates that no one channel was really any better than another.

So that gives us some basis now for going back to that environment and trying to find what changed, because apparently something was turned on; it was a very drastic change in the environment. Maybe some piece of equipment was being used that generated noise in this spectrum. We don’t know yet, but at least we could see that it was a really sharp drop-off, indicating that some event triggered it.

Lian Granot: So in the system, you mentioned you have a couple of components that are being monitored. I wonder if you have seen different behavior when you look at the data from the two different devices.

Chas Meyer: Yeah, actually we do, and I was quite surprised by that too. We have one piece of equipment that we call a base and one piece of equipment that we call a patient connector, and they both use Bluetooth to talk to an iPad.

But one thing that’s different about the patient connector is that it often communicates over low-energy Bluetooth to an implant, so the patient connector has two simultaneous Bluetooth connections: one to a low-energy Bluetooth implant and one to the iPad. I would expect the performance between those two products to be equivalent when they’re operating in the same environment, but it’s not. Part of the reason for that, apparently – and this is an area that deserves a little more investigation on our part –

But apparently, when you’re operating this Bluetooth chip in dual mode, the performance of that chip in terms of these channel maps changes when you’re using both BLE and Bluetooth Classic at the same time. It doesn’t perform as well, because it’s struggling to maintain channels across two different interfaces, whereas the other piece of equipment, the base, has just that one single connection, and it’s the same Bluetooth chip and the same firmware behind it. But it just gets better results in the same environment. So that was a surprise.

Lian Granot: So you can actually get insight into how the hardware itself functions that maybe no one else has, even the company that built the chip itself.

Chas Meyer: Yeah.

Lian Granot: That’s impressive.

Chas Meyer: Well, and that’s a really good point there. That ability to compare is incredibly useful, because we can compare between two different pieces of equipment. But it also allows us to compare performance before and after a firmware change. We just recently did a release where we contracted to have the Bluetooth stack modified to improve performance, and we have fielded that now, and I’m looking forward to seeing the data from that compared to how things performed before the change. So this is a way of getting concrete feedback from the field as to what kind of impact your changes are making.

Lian Granot: Definitely. Do you think this method of work is applicable to other short-range communication platforms?

Chas Meyer: I would think so. I think some of this is going to depend on the technology and what kind of data is available. In an embedded system, that’s going to be part of the trick of observability. It’s trying to figure out what parts of the system can provide data that you can leverage to learn about this.

For example, if a Bluetooth developer doesn’t know about the ability to read the channel map from the hardware, you won’t get this data. So depending on what kind of interfaces you have and what kind of data you can get from them, you will have to do some digging to figure out which parameters are the most useful for characterizing performance.

That was an interesting exercise in all of this too. A lot of it was originally just exploratory: let’s see what happens if we monitor this.

Lian Granot: Another very interesting topic for many, many device manufacturers is the battery, right? It’s so important for the device to remain functioning that it has to have a battery that is sufficient for its use. So I wonder, what kind of benefits do you see from battery monitoring?

Chas Meyer: Right. This may not sound like a particularly major issue to people, but as an example, we have some home monitors where one of the components in the home monitor has a rechargeable battery, and if you know anything about rechargeable batteries, they have a life span: the more often you recharge them, the less full charge capacity you get, and eventually their capacity is so compromised that the system really can’t hold a charge for a reasonable amount of time.

We have one product in the field, literally over a million units, that has a lithium-ion battery in it, with an expected lifetime of about five to ten years.

Once they got close to that ten-year limit, some of the batteries started having failures: either their capacities were run down or the lithium cells themselves had issues. It started happening at a much higher rate than expected, so after this product had been out in the field for a number of years, we had an unexpected problem, with the battery failure rate ramping up to a level that was very difficult to manage. In that product, we had no way of knowing that was coming.

So on the product I worked on here, which has the same kind of battery in it, I added observability tracing so that we can keep tabs on the health of the battery. One of the things we want to know, for example, is how much the full charge capacity has depleted over time.

We have targeted the battery to provide three hours of runtime, and as the full charge capacity drops, that runtime also drops, and our users would prefer to have the battery last all day. We all know what it’s like to have a phone that you have to recharge every three hours versus every three days. So we put monitoring in there, and we’re now keeping a close eye on the fleet of these battery-powered instruments: how many of them have had their batteries deplete to below, say, 60 percent of the original capacity?
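A minimal sketch of that kind of health check, assuming a fuel gauge that reports full charge capacity; the design capacity value, register access, and trace call are all illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define DESIGN_CAPACITY_MAH 2000u  /* assumed design capacity, not Medtronic's */

extern uint32_t fuel_gauge_read_fcc_mah(void);              /* hypothetical gauge read */
extern void sternum_trace(const char *name, int32_t value); /* hypothetical trace call */

/* Report battery health and flag units below 60% of original capacity. */
bool battery_needs_attention(void)
{
    uint32_t pct = (fuel_gauge_read_fcc_mah() * 100u) / DESIGN_CAPACITY_MAH;
    sternum_trace("battery_fcc_pct", (int32_t)pct);
    return pct < 60u;   /* early-warning candidate for proactive replacement */
}
```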

That’s going to give us an early warning of when we might expect to start seeing issues from the field on the performance of these particular units, and it gives us the opportunity to do advance planning, to triage the issue and maybe proactively replace those units before they become a problem, and to manage the situation proactively rather than reactively.

Lian Granot: You can also cross-check those metrics against others, for example from a temperature sensor if you have one, and compare them to the battery depletion, so you can see whether there is any environmental effect, perhaps an unexpected one, on the performance.

Chas Meyer: Yes. As a firmware engineer, every now and then you learn something new about how your hardware performs, and one thing that startled me about this full charge capacity measurement being reported up is that I expected it to decrease linearly over time.

But we saw some variability in it, where sometimes the full charge capacity unexpectedly increased, and I wondered at first if that was indicating some kind of problem with the hardware or some kind of fault in the firmware algorithm. It turns out that when you dig a little deeper into the datasheet, they explain that temperature variations and current draw over time can affect that capacity measurement in the short term. So it helped us understand how to condition the data and not worry about upward fluctuations in the capacity.

Lian Granot: Do you think that monitoring or carefully observing the battery status can also help you create software that is more battery-saving? Is there a difference?

Chas Meyer: I think so. Well, one of the things this type of observation has led to – and I would like to point out that this is a very iterative process – is that in an embedded system, you take your first stab at what you think is going to be the most useful data, and the idea is to have enough data there that you get the core information you know you can act on, plus enough additional information that you have some chance of answering questions you haven’t thought of yet.

One of the questions we hadn’t thought to cover in our traces was: what’s the discharge rate over time? We have multiple different ways of using this instrument, using different wireless connections to other devices, and each one of those has a different profile for how rapidly it burns down the battery. But we didn’t have a good idea of what that profile is.

So one of the things we’re doing in our next iteration of the firmware is to actually measure, during different use cases, what the battery capacity is at the beginning and at the end of that use case, so we can measure what the depletion rate was over time. Then we have a basis for knowing that if the device is used in this way, this is how fast it’s going to use up the battery, and in turn, we can use that in product design to size the battery capacity appropriately.
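A sketch of what per-use-case depletion measurement could look like; the function names and units are assumptions for illustration, not Medtronic’s implementation.

```c
#include <stdint.h>

extern uint32_t fuel_gauge_read_remaining_mah(void);        /* hypothetical */
extern uint32_t uptime_seconds(void);                       /* hypothetical */
extern void sternum_trace(const char *name, int32_t value); /* hypothetical */

static uint32_t start_mah, start_s;

void use_case_begin(void)
{
    start_mah = fuel_gauge_read_remaining_mah();
    start_s   = uptime_seconds();
}

void use_case_end(int32_t use_case_id)
{
    uint32_t used_mah = start_mah - fuel_gauge_read_remaining_mah();
    uint32_t dur_s    = uptime_seconds() - start_s;
    if (dur_s == 0)
        return;

    /* mAh per hour for this use case: a basis for sizing future batteries. */
    sternum_trace("use_case_id", use_case_id);
    sternum_trace("discharge_mah_per_hour", (int32_t)(used_mah * 3600u / dur_s));
}
```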

Lian Granot: So you can actually make decisions about future hardware based on the telemetry that you get from the existing equipment.

Chas Meyer: Yeah, and we’re in the process of building out a new, lower-power platform that is intended to be a replacement for the one we’re talking about here. This data will help drive the design of that future product and evaluate the power savings in the new design, so you have a basis for knowing how much better it is.

Lian Granot: So it’s like doing a retrospective on how everything functions in your existing product and basically creating the next best product that you can based on this information.

Chas Meyer: Yeah, yeah. It not only helps you iterate on refining the firmware itself. It also helps you iterate on refining your hardware.

Lian Granot: Exactly, which I think is one of the hardest parts, because you can barely know what’s happening there, right? You rely on the datasheets, and you might encounter scenarios that were never tested with this algorithm.

Chas Meyer: Yeah, and again, there’s a lot of this stuff you can do by analysis, and you want to do it by analysis. But I think it’s always a little touchy to assume that your analysis is an accurate reflection of how it’s going to perform in the field, and I always feel better having proof in front of me of what it really does, as opposed to what we analyzed it to do.

Lian Granot: Yeah, yeah. I understand.

So we are now on our final slide for today. Before we proceed, I want to summarize a little of what we have talked about so far.

So we went over a few observability use cases, right? We talked about battery monitoring and Bluetooth monitoring, and I think now we can add some additional use cases that are relevant to many companies that manufacture devices.

For example, from the memory usage of the device you can actually understand the quality of the software that is running on it. You can see if you have some untested flow that caused excessive memory usage that was not detected in testing. You can also quickly see how people interact with your device, if you have some kind of button or a screen or a combination of buttons.

You can actually figure that out from the observability and see whether those controls are being used as expected or whether there are issues. If you see, for example, a button being pressed repeatedly, that can indicate that the user does not know what to do or how to control the device. Did you encounter any insights like that when you used the Sternum platform?

Chas Meyer: Yeah. There have been some really interesting observations that we’ve gotten out of this data. I will start with the one that is most disturbing to me as a Bluetooth developer. It’s mentioned here that we can track Bluetooth communication quality and uncover issues in the Bluetooth library or chip module.

We have one metric that we’re tracking where, on the low-energy Bluetooth side, we have periodic transmissions between our instrument and an implant that happen every 32 milliseconds.

If you measure the jitter between those, you have another measure of the quality of your communication link. Ideally, that jitter would be zero, and of course that’s not the way it works. So we’ve developed a trace that allows us to create a histogram of the different bins of jitter deltas, anywhere between about five milliseconds and 70 milliseconds, when the expected jitter is zero.
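A sketch of that kind of jitter histogram, assuming the 32 ms nominal period from above; the 5 ms bin width and the bin count are illustrative, not the actual trace format.

```c
#include <stdint.h>
#include <stdlib.h>

#define PERIOD_MS 32
#define BIN_MS    5
#define NUM_BINS  8          /* |jitter| of 0-5, 5-10, ..., >35 ms */

static uint32_t jitter_hist[NUM_BINS];
static uint32_t last_rx_ms;

/* Call on every periodic reception; the histogram would be flushed to
 * telemetry on whatever cadence the trace system uses. */
void on_periodic_rx(uint32_t now_ms)
{
    if (last_rx_ms != 0) {
        int32_t jitter = (int32_t)(now_ms - last_rx_ms) - PERIOD_MS;
        uint32_t bin = (uint32_t)abs(jitter) / BIN_MS;
        if (bin >= NUM_BINS)
            bin = NUM_BINS - 1;
        jitter_hist[bin]++;
    }
    last_rx_ms = now_ms;
}
```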

So you can monitor that over time as well; it’s a different view of how the system is performing than the channel maps. I saw something very unexpected in the lab when we were doing our anechoic testing. There is an expected pattern to the jitter, where it broadens as noise is introduced into the environment, because a periodic transmission has to be retransmitted when there’s noise, and that increases the jitter.

That also means that the interval between that transmission and the next one will be shorter, because we’ve spent some of that period on the retransmission.

So you see the jitter curve start to flatten out as noise increases. What we saw is that it would do that and look pretty reasonable, but then all of a sudden it inverted, with no change in the environment and no change in the data stream: suddenly we had extreme jitter alternating with really short intervals, because the long delays would queue up the data and then there would be zero delay between transmissions.

That kind of behavior shouldn’t happen. So there’s something in the algorithm there, in how it’s servicing the low-energy Bluetooth, that goes off into some kind of state and only recovers once the connection is dropped and recreated.

So that’s an area of investigation that would help us improve the quality of the performance of our product in the field. So that’s one of my favorite examples. That matters a lot to me because these are really hard problems to get insight into.

Some of the other more interesting things, though: we hear complaints from the field sometimes about the battery discharging while the device is being used and shutting itself down. We wondered, “Well, how often does that really happen?” So we instrumented the code so we can monitor it. We can see how often the device loses power while in use, and it turns out it’s something like 0.01 percent of the time. That’s very useful information, because if it were 10 percent of the time, that would be a lot different, and then we would need to take a closer look at how we’re managing the battery and the battery capacity.

So that seems to be a reasonable rate of occurrence. Another thing, with the memory usage, that can be really useful to embedded developers: in our RTOS environment, we have stacks for every task, and those stacks are a fixed size.

During development, you have to ensure that each stack is big enough to handle the worst-case utilization of anything that task is responsible for. What we try to do is aim for a reserve of 20 percent, meaning we want the maximum utilization of a stack to be no more than 80 percent.

So we put traces in the system to flag whenever the utilization goes over 85 percent, and we’ve already gotten feedback from that: there were some cases where we needed to go back and increase a task’s stack size. This is firmware that we’ve been using for over five years, and we never had the observability to see that before.
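For illustration, here is what such a trace could look like on a FreeRTOS-style kernel; the transcript doesn’t name the RTOS, so the API choice and the trace call are assumptions. uxTaskGetStackHighWaterMark() returns the minimum free stack ever observed for a task, in words.

```c
#include "FreeRTOS.h"
#include "task.h"

extern void sternum_trace(const char *name, int32_t value); /* hypothetical */

/* Flag any task whose worst-case stack usage exceeds 85% of its allocation. */
void check_stack_usage(TaskHandle_t task, const char *task_name,
                       uint32_t stack_size_words)
{
    uint32_t min_free = (uint32_t)uxTaskGetStackHighWaterMark(task);
    uint32_t used_pct = 100u - (100u * min_free / stack_size_words);

    if (used_pct > 85u)                 /* past the 80% target plus margin */
        sternum_trace(task_name, (int32_t)used_pct);
}
```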

Going on to another thing you had mentioned, Lian: the button presses are of interest to our human factors team because, like some commercial products I’ve seen, we try to keep the user interface simple by having a single multifunction button.

I have an espresso machine that drives me nuts because there are about a dozen different button press patterns that you need to know to handle error conditions and change different parameters on how the coffeemaker works, and you have to have a cheat sheet.

So we want to understand, when users use this equipment in the field, how difficult it is for them to keep track of how to use that button. We have six different functions assigned to that button, depending on the context in which it’s being used. For example, if you want to turn the unit off, you have to press and hold the button for five seconds.

But we don’t want to allow the user to do that while the product is in use during a patient session. So what we do is monitor how often they try to turn the patient connector off versus how often it actually is able to turn itself off based on the use conditions, and there’s a big disconnect there.

We’ve seen button press times of 10 and 20 seconds, where I imagine somebody is sitting there smashing on that button, wanting the thing to turn off, and it won’t turn off. That type of feedback can help us understand what kind of education and instruction to provide to the field on how to use this product. In short, it can give us insight into user confusion about our product.
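A sketch of the kind of instrumentation that yields this insight; the event handlers, trace names, and session check are hypothetical.

```c
#include <stdbool.h>
#include <stdint.h>

extern uint32_t uptime_ms(void);                            /* hypothetical */
extern bool power_off_allowed(void);    /* e.g. false during a patient session */
extern void sternum_trace(const char *name, int32_t value); /* hypothetical */

static uint32_t press_start_ms;

void on_button_down(void)
{
    press_start_ms = uptime_ms();
}

void on_button_up(void)
{
    uint32_t held_ms = uptime_ms() - press_start_ms;
    sternum_trace("power_button_hold_ms", (int32_t)held_ms);

    /* Compare attempted shutdowns against ones the device allowed. */
    if (held_ms >= 5000u && !power_off_allowed())
        sternum_trace("power_off_blocked", 1);
}
```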

Lian Granot: That’s amazing. When we designed the system, we talked about a lot of use cases and built a very generic system that allows you to monitor anything. But the thing you mentioned, the human factors insight, is beyond anything we imagined: that Sternum can provide so much value based on just monitoring how long a button is being pressed, and when. That’s amazing.

Chas Meyer: Yeah, just little simple things like that. You wouldn’t think that monitoring a button press could be interesting but here we are.

Lian Granot: Definitely, especially in critical devices, right?

Chas Meyer: Yeah, yeah.

Lian Granot: OK. So our last subject for today is the security aspect of Sternum. As we mentioned in the beginning, our offering includes runtime protection and observability, but also anomaly detection and insights from our platform.

Here you can see what kind of benefits you get from this multilayer security solution. On one hand, you get your intellectual property protected: if someone tries to hack the device in order to steal the algorithms or secrets the device uses in order to function, the runtime protection and the monitoring will let you know, for example if someone tries to debug it with JTAG, so you can get alerted on that and track it.

You can also monitor severe behavioral anomalies in the device. In the second example here, we see an anomaly detected on a device that tried to interact with an implant over 100 times in a very short amount of time.

So this is a clear anomaly. It could be caused by someone researching the device and trying to JTAG it, for example, and it could also be a performance anomaly; on the Sternum platform, you can get insights on both.

Chas Meyer: Yeah, that has helped us during development too. We had a bug once, a situation like that where something wasn’t being handled properly: we weren’t able to establish a connection and were repeatedly trying to get past it, but there was a bug in the system that prevented it from happening. All of a sudden, this alert pops up on the dashboard. Yeah, that was helpful, and it drew attention.

Lian Granot: So we are done for today. I want to thank you all for joining and watching, and especially thank you, Chas, for sharing your time and your insights with us. That is really valuable, and I think many people here learned something new.

Chas Meyer: I was just going to say, well, thank you for inviting me Lian. It was a pleasure.

Lian Granot: Of course. If you have any questions, this is the time. Thank you.

So we got the first question here. Chas, I think this one is for you. Did you see any value from the active runtime protection?

Chas Meyer: Yes. There are two sources of value from it, in my opinion. One is that it’s really nice to know that our devices are not being hacked in the field, so not seeing alerts is good. On the flip side, we do every now and then see some alerts during development, because the runtime protection catches some common programming errors, like buffer overflows.

Every now and then you might write some code that writes past the end of a buffer, and we’ve had Sternum catch those and bring them to our attention. They ended up not having any impact on the runtime for the cases we found, so we never would have known about them otherwise, and with that kind of thing, as you move code and data around, eventually you’re going to have an overflow that does cause problems. So that was very helpful.

Lian Granot: Thank you. OK. The next question is also for you. You mentioned you have been collecting data for about a year now. I was wondering how long it took to get some insights from the data.

Chas Meyer: It ended up being a pretty straightforward, simple process. It took very little to integrate Sternum into the system, so the actual investment of development time for us was really more about figuring out what data we wanted to send out for observation. Once it was deployed, it was my first time looking at this kind of data coming in.

So it probably took a couple of months to get fully oriented on how the dashboard works, how the queries work, looking at the data, making sense of it, and looking back at the code to see where the data came from, to double-check whatever we needed. But that, in my opinion, is pretty low overhead for the value we get out of it. So I’m really thrilled at what a short path it was.

Lian Granot: OK. Another question is, what is the false positive rate for the security platform? That one is for me. The way the security mechanism works, as mentioned before, is by inserting integrity verification code throughout the firmware, and the algorithm makes sure the device’s integrity remains as expected. It is deterministic, which means there are no false positives: an alert means something bad has happened.

For example, we have multiple security layers, and one layer detects memory corruption. If we flag that your memory was corrupted, there is no false positive there: the memory is for sure corrupted, and the device is now in an unknown state.
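To make the deterministic point concrete, here is a toy verification point, only an illustration of the general idea and not Sternum’s actual mechanism: a guard word next to a buffer either still holds its known constant, or the memory is provably corrupted, so there is nothing probabilistic to tune.

```c
#include <stdint.h>
#include <string.h>

#define GUARD 0xDEADBEEFu

extern void security_alert(const char *what); /* hypothetical alert hook */

void copy_input(const uint8_t *src, size_t len)
{
    struct {
        uint8_t           buf[64];
        volatile uint32_t guard;
    } s = { .guard = GUARD };

    memcpy(s.buf, src, len);    /* len > 64 overruns into the guard */

    if (s.guard != GUARD)       /* deterministic: any change proves corruption */
        security_alert("buffer overflow detected");
}
```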

Another question: given the uncertainty we discussed around Bluetooth in the hospital, what are the obstacles for Class III devices using Bluetooth for mission-critical tasks?

Chas Meyer: So I will take that one. The major challenge and uncertainty are around the wide range of variability you can get across different operating environments. It appears that hospitals have a lot more Wi-Fi interference, so they’re a little more challenging to work in, and what that leads us to do is look at what parameters we can tweak to improve the quality of the product and of that communication, for example by adjusting transmit power levels; in some cases, that makes a difference. There is also the new Bluetooth 5 specification.

It has different transmission modes that allow lower data rates but higher error correction capability, so things like that will help improve performance. It’s very difficult, if you depend entirely on Bluetooth, to have certainty that all the environments are going to be manageable. So that also leads to a design criterion of having a backup for when Bluetooth is too stressed.

Lian Granot: OK, let’s wait a couple more seconds for any additional questions.

I think that since there are no more questions, we can end the webinar for today. I want to thank you all for joining us. Thank you, Chas, and everyone, have a great rest of the day.

Chas Meyer: Thank you Lian. It was a pleasure, and thank you everyone for attending.

 
