Victoria wrote an excellent piece on her site about what’s become known as “pumphackingate.” In it, she gives a brief recap of the facts and some of the reactions that have appeared on other blogs. Here’s an even briefer recap, in case you don’t know anything about it: Some hacker/builder dude built a device that can remotely control some insulin pumps and gather data from them. Based on a comment I left over on Victoria’s site, here’s my take on the issue.
First off, I’m not surprised. Like any device that transmits and receives wirelessly, the signals from pumps and CGMs are interceptable. Furthermore, like any other device that communicates with limited access control—you just need to know (or sniff out or be able to guess) the six- or seven-digit code that’s used to connect with another device—they’re essentially open. From there it’s all just figuring out the protocols and the format of the data as it’s passed around. As someone who spent about ten years working with and occasionally reverse-engineering formats, I can tell you, it’s all just a matter of trial and error and careful observation. (If I were a hacker, my handle would be “gluX0se.”)
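To make concrete just how weak a six-digit code is: the keyspace is only a million values, which a machine can exhaust in well under a second. Here’s a minimal sketch of that idea—the code, the pairing check, and the secret are all made up for illustration, and nothing here models any real pump’s protocol:

```python
# Hypothetical illustration of how small a 6-digit numeric keyspace is.
# The "secret" and the pairing check are invented; no real device here.

SECRET = "483920"  # the hypothetical pairing code an attacker wants

def try_code(code):
    """Stand-in for 'attempt to pair using this code'."""
    return code == SECRET

def brute_force(digits=6):
    # 10**6 = 1,000,000 candidates -- trivial to enumerate.
    for n in range(10 ** digits):
        candidate = str(n).zfill(digits)  # e.g. 42 -> "000042"
        if try_code(candidate):
            return candidate
    return None

print(brute_force())  # recovers "483920" after at most a million tries
```

A real device would at least rate-limit pairing attempts, but a passive sniffer that captures the code in transit doesn’t even need to guess.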
So, in a world where relatively few people have these medical devices—unlike, say, mobile phones or bluetooth devices—the device manufacturers essentially did the easy thing, which was to assume we use our medical devices in a trusting world where nobody messes with them. (BTW, who knew there was a free Vulnerability Management for Dummies e-book?)
There’s been a lot of unease in the community about the way that the information was presented to the press and the way that some outlets sensationalized it (e.g., “Black Hat: Lethal Hack and wireless attack on insulin pumps to kill people”). It’s hard not to agree with a lot of the criticism there. But I can’t criticize looking for security holes in medical devices. Nor can I fault the impulse to hack into one’s own medical device—even one that keeps people alive—or to help other people hack their devices. Not all hacking is scary villainy, but this incident certainly exposes some problems.
Using the AP to share this information leaves a bad taste in my mouth, but presenting the findings at the Black Hat Conference seems like the most appropriate way to publicly disclose this research. (And it is, in my mind, legitimate personal security research that should be shared openly.) I would have preferred that Radcliffe work more closely with the device manufacturers leading up to the announcement. (I’m assuming that he did not.)
On the other hand, just presenting the findings to the device manufacturers—as some would have liked—violates the hacker ethos, both the black hat and white hat versions. Part of hacking—the part that I can get down with—is when motivated hobbyists exploit technology to solve a problem (real or imagined). I have thought many times how great it would be to sniff the unprotected data that’s transmitted by my pump/CGM and skip the middleman of uploading data to a web site. I’ve even gone so far as to seek out the information that Radcliffe presented, but it wasn’t available at the time.
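The “skip the middleman” idea above amounts to capturing a radio frame and decoding it yourself. As a purely hypothetical sketch: suppose a sniffed frame carried a device ID, a timestamp, and a glucose value in a fixed binary layout. The layout below is invented—working out the real one is exactly the trial-and-observation reverse engineering described earlier:

```python
# Hypothetical sketch of parsing a captured radio frame into a glucose
# reading. The frame layout is invented for illustration; real pump/CGM
# formats are undocumented and had to be reverse-engineered.

import struct

def parse_frame(frame):
    """Assumed layout: 2-byte device id, 4-byte timestamp,
    2-byte glucose value in mg/dL, all big-endian."""
    device_id, timestamp, glucose = struct.unpack(">HIH", frame)
    return {"device": device_id, "time": timestamp, "bg": glucose}

# A made-up captured frame: device 0x0102, timestamp 1000, BG of 112 mg/dL
frame = struct.pack(">HIH", 0x0102, 1000, 112)
print(parse_frame(frame)["bg"])  # 112
```

With something like this, the data could feed a desktop widget or a logging app directly instead of waiting for a manufacturer’s upload site.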
Device manufacturers limit our access to our own medical data and tightly control the way that we can interact with our devices. It’s understandable given the limitations put on them by the FDA, their own desire to help (not harm) customers/patients, and their lawyers’ desire to limit risk exposure. It does mean, though, that the enormous potential for third-party, patient-focused tools goes untapped. Those tools could benefit so much from being able to present data the way that their users want to see them: a dashboard light in a car, a desktop computer widget that displays CGM values, a mobile app that records all of the data for later use, a device that calls parents of children with diabetes when something happens, an awesome mood ring displaying BG, etc.
I suspect (and once again I’m assuming here) that Radcliffe was intrigued by the rather obvious possibilities of unprotected communication, and that’s getting lost in the whole “malicious people ruining diabetics’ lives” reporting. I fear the notoriety this incident is garnering is going to scare manufacturers into closing exploitable security holes without providing a secure replacement method for getting at all of that data. And that’s a shame.