USENIX Enigma 2016 – Hacking Health: Security in Healthcare IT Systems

[ Applause ] -There's a crisis in China, and I'm not talking about the financial crisis. That's not my area. I'm talking about the government training and sponsoring over 30,000 hackers to come after our systems. And these poor hackers are being completely overwhelmed. You can see in this article that they're unable to keep up with us. And so they're outsourcing some of this hacking to the Chinese. It's described in this article. So, let me talk about my first security evaluation that I was hired to do. In 2003, the Federal Trade Commission was investigating one of the major rental-car companies because of complaints they received about the way that they handled personally identifiable information of their customers. I can't tell you exactly which car company it was, but their logo is one of the ones that I included here. And they were doing things that were pretty bad, like printing the credit-card numbers of their customers on the little ticket that you would get when you rent a car. They also made all kinds of mistakes in how they configured their systems and their networks. So, their computer-security practices were really, really bad. And they had, in some cases, no authentication whatsoever if you wanted to log in and get on their back-end system, where all their customer data was. So, it was really, really bad. And the FTC forced them to hire an independent, outside security expert to come up with a list of all the things they were doing wrong and then supervise them fixing it. This took place over the course of a year, where I would visit all the different facilities that they had, talk to their I.T. people, and then afterwards, when they tried to fix everything, I went back, did another round, and wrote a report about it. And so I learned a lot about what it was like to do a security evaluation and just how bad some actual production I.T. systems really are.
So, as a result of that experience, I decided that that was fun, and I founded a company to do security evaluations and started hiring former students from the lab and from Hopkins and from other places. And what we would do is go into companies, and, just like what I did for that rental-car company, we would look at their I.T., we would interview them, and then we would make recommendations, and sometimes we would even be hired to help them fix the problems. And we worked in different sectors, and so we kind of got to know what these sectors were like. In the financial sector, we found that the companies were usually pretty good. That's where the money was, and they put a lot of effort into securing their systems. And so when we came in and looked at their systems, we had some suggestions, but it wasn't like we were starting from scratch. In the retail sector, we had a few customers, and we found that things usually weren't so good. In fact, one of our big customers, I won't tell you which, ended up being one of the ones that had a famous breach. It wasn't our fault, I assure you. And we also worked in the software-security industry. A lot of our clients were companies that you would see in the booths at the RSA Conference, and we would basically go in, try to find bugs in their systems, and make recommendations for them to improve their products. And finally, we worked in the health-care sector. And when we had jobs in the health-care sector, we found that it was the absolute worst sector in terms of security. If you think about it, that's pretty bad because we all interact with the health-care system, right? Sad but true, we all have to go to the doctor. We all sometimes go to the hospital. People that are close to us go there. And yet their data-security practices were so far below every other industry that we saw.
So, I decided a few years later, in 2009, when I'd been working on e-voting for a long time, that I wanted to do research in this area. There was a lot of low-hanging fruit, but there were also some very, very interesting research problems. And to do that, I did what I was calling the I.T. tour of the hospitals. So, I had access to Johns Hopkins' departments, but I also had some relationships in Philadelphia, at The Children's Hospital. And I went to six different hospitals and got the I.T. tour, where I would go around with either a system administrator or, in one case, the chief information security officer for the hospital, and spend a day looking at all of their systems. In one of the hospitals, I saw a robot that dispensed medications. It's a big room with lots of little shelves, and each one has a different drug in it. And the robot arm would reach out, pull it, count out a dosage somehow, get it into a little device, and it was like a big Rube Goldberg machine, where things were sliding down into little bins. And in the end, a nurse would come and take a cart with all of these things that had the names and the room numbers of the patients and all the medications that they needed. And I was thinking, "Wow, you know, that thing is controlled by software. And what if something went wrong with that software? What if there was an attack that changed what it was doing?" Fortunately, they have protocols where they manually check that the pills are correct. But imagine what would happen if somebody were to attack the system and cause all the drugs to be wrong. That would potentially be noticed, but people have to take their medications by a certain time. And if suddenly you had 1,000 people, you know, not getting their medication because of this mix-up, that would be really bad. So, a denial-of-service attack would be a problem. I also found so many different things about security that were being done wrong that I was completely shocked, and I'll tell you a couple of them.
One is that in one of the hospitals, there were 8,000 employees at every different level. Every single one of those employees had exactly the same access to all of the medical records in the system as every other employee. So, there was a doctor who did not go to that hospital for treatment, even though it was the best hospital around, because he didn't want his colleagues and his coworkers to have access to his personal information about his medical care. I also discovered that, in the radiology department in one place, there was a nurse whose job it was to walk around to every workstation every 45 minutes, because there was a 50-minute time-out on the password once a doctor typed a password into the system. And she was supposed to type in the password for that doctor. That was one of her tasks. And so in this big room full of terminals, all of them had the doctors logged into them because this nurse was going around logging them in. So, they're bypassing the security. I mean, it's completely ridiculous to do this, and yet, I'm not making this up, that really happened. Another thing that I discovered was that there was a doctor who said that he liked to work from home when he had to do a lot of work with the medical records and wasn't seeing patients. And I said, "So, how do you get access to the system?" He says, "Well, I have a VPN. I VPN into the hospital, and I get access to the records." And I said, "So, you have a dedicated computer that you only use for that?" He's like, "Well, yeah, unless my kids are playing games on it." So, his kids were using this machine to play whatever games they were playing, browse the web, and then he was getting on there, VPN'ing in, and getting access to all the medical records in the hospital. And it actually took me a while to explain to him why this was a bad idea. He totally didn't get it. So, the most egregious thing I saw, and where I would see the most potential if I were an attacker, had to do with the X-ray room.
So, I go into this hospital and there's this huge room and it's completely empty except there's a big computer in the middle of it. And there are all these empty shelves. And I say, "What is this?" He said, "Well, we used to keep all the films from all the X-rays in this room, but we don't use those anymore. We don't have physical films. It's all done with software." And you come in and you get an X-ray and you walk out with a disk. And since the format that's used for X-rays is only used in 80% of the hospitals, you have to have some way for the other 20% of the places you might go with this X-ray to view the X-ray. And so the disk is loaded with a viewer program that auto-runs when you stick it into a computer and then lets you look at the X-ray. And so I thought, "Well, here's a problem, right? This hospital is manufacturing all these disks that are being taken to other medical institutions and being stuck into doctors' Windows machines, which are auto-running whatever software's on there, completely trusting it, and giving it access to their internal network." And I thought, you know, one place to attack would be that machine that writes those disks. You write some malware on there, and you can get a virus into all of these different systems. And I also asked doctors whether, when they take disks from patients, they put them in a special machine that's isolated from the rest of their system. And they said, "No, I put it into my desktop, where I do all my work." And so the hygiene in these systems is absolutely horrible. Another thing that I found out is that one of the avenues through which attacks on medical systems occur is medical devices, like a blood gas analyzer. So, there was a story about one of these. What is a blood gas analyzer? I have no idea. But it's running Windows, and all of the systems scanning for malware and viruses on this hospital's network were not looking at the blood gas analyzer.
So, what would happen is, this device would get infected with something, infect the rest of the network, then they would clean it up, and the next day, everything would be infected, and they wouldn’t know why. And it’s because they weren’t scanning the medical devices. They were only scanning the desktops. There are dire predictions about medical records in health care. And one of the things that I was asked by some of my colleagues when I announced that I was going to be studying security and health care is, “Well, how is that any different from security in any other field, right? I mean, you do security for voting. You do security for banking. Isn’t it just the same problems that you’re doing in health care?” And so I made a list of differences, things that are unique to health care, which I think is actually very, very different from doing security work in other fields. First off, you start with the doctors. I have some friends who are doctors. And there’s a certain — I guess the word would be “arrogance” that goes along with doctors. And they don’t like to be told how to do things, and they don’t like to be told, “You know, this thing that you used to do that took 5 minutes is now gonna take 10 minutes.” In fact, I got into a bit of a discussion with one of my personal friends who’s a doctor. And he raised his finger, and he says, “The day that you security guys kill one of my patients is the day we will be able to start ignoring you guys.” Okay? So, it’s not like they want us to help them, right? They have a huge problem, but as far as they’re concerned, they haven’t been attacked, there’s no big problem, and they’re in charge. That’s tough to deal with. Then you’ve got patients. Patients don’t always follow directions. We’re all patients. 
We're probably the patients that follow the directions, and the other 99% of patients probably aren't as good, especially as things get much more sophisticated with online patient portals and smart pillboxes and all these other kinds of things. You've got a whole bunch of staff. You've got the nurses, the clinical staff, and they're all trying to do their job in the face of these doctors who are in charge and these patients who don't listen. And then you've got the regulators, okay? Very few other industries, maybe finance among them, are as regulated as health care. And the regulators are mostly well-meaning, but they often don't understand the implications of the things that they do. And you can look at the recent legislation as an example of that. The administration says, "Okay, we need to regulate security. So, I know how we can fix all the security problems in the world. We'll just have everybody tell everybody else when they're hacked, okay? That's basically what it amounts to. We're gonna share information." Then you've got insurance companies. Their goal seems to be to spend as little money as possible. And you've got the medical-device manufacturers, who are entrepreneurs trying to build things and get them out there, but they have to deal with the FDA. And imagine you get a device, and it goes for FDA approval. It gets FDA approval after three months. And then there's a major hack, and a problem is discovered. Now are they able to fix it and still be FDA-approved? So, these are some of the kinds of issues that we have to deal with in health care. Health-care applications are getting more and more complex. Think about this: a medical database has to be available all the time because now patients are getting access to their database, to their data, right? So, I'm a patient. I log in. I want to see my medical records from my last doctor's visit.
The medical-record provider is being told, "You have to encrypt all of your records." And then they're being told, "Not only do you have to encrypt all of your records, but the patients have to be able to get to them anytime they want." Okay, when the patient goes to get the record, that record has to be decrypted, or the patient won't be able to read it, which means you have to have the key sitting over there. And you've got key-distribution problems. And on top of that, people want access from every mobile device that they have. And the data's being stored in the cloud, which means that the owner of the data is actually not managing the data themselves. Cloud-service providers are subject to side-channel attacks and other kinds of issues. And that industry hasn't even been regulated very much. They're dealing with HIPAA and having to figure out, "Okay, well, on these servers, I'm gonna be storing medical data. Do I need separate protocols for that?" And one of the key points is that all of this health information is being controlled by software. What's being controlled by software? Well, I made a short list of things that are controlled by software. Technical people know that software is buggy. There's no way to avoid it. We're never gonna have perfect software. And software controls things like radiation dosage and the dosage of medication in an infusion pump. In fact, infusion pumps, which are hooked up to people and administer medicines, are now being connected to the Internet, and have been for some time. Software also controls the amount of supplies that are stocked in the intensive-care unit. I learned this in a neonatal unit at a hospital: the lives of babies depend on them getting sponges, clamps, all of these things at the right time. That's all controlled by a distributed-computing system. And the doctor working there told me that if that system goes down, babies will die.
And the reason is, they used to have a paper backup system. The automated system worked well enough that, slowly, they stopped using the paper one, and they no longer use it at all. It's all automated, and it's a distributed system. That's pretty scary. Even the shifts of the doctors and the nurses are software-controlled, and you may not think of that as being a life-threatening avenue of attack, but if all the wrong doctors or nurses come at the wrong time, the hospital may not be staffed well enough. Obviously, the electronic health records. I mentioned the drug-dispensing robot. It's controlled by software. And all the devices are now communicating. Each one of these things that is controlled by software is an avenue of threat. It's an area where an attacker could basically cause mayhem, and anything controlled by software is potentially exploitable. Now, here's a very disturbing picture. This is a person's wrist that got injured. It got burned. And the way it happened was, this person was wearing a fitness tracker, like a Fitbit. And that device had a bug in the software that caused the sensors to go a little haywire; it was over-sampling, and that caused this burn. Now, imagine: that's something that happened by accident. What could you do intentionally? And as I was contemplating that, I came across this article that says, "Hackers can wirelessly upload malware to a Fitbit in 10 seconds." So, this really could happen. One other story is, I bought a blood-pressure monitor to take my blood pressure at home. It seemed that whenever I went to the hospital or the doctor's office to have my blood pressure taken, it was off the charts. I have a condition called white-coat syndrome. And so they said, "Well, try taking it at home." The device is hooked up to a phone, and that's the interface for it. And as it started to tighten on my arm, I felt like my arm was gonna fall off. And needless to say, by the time it was done, the blood-pressure reading said I had two minutes to live.
But it turned out it was just me being scared of the thing. But it's controlled by software on the phone. If my iPhone gets hacked, that could basically result in me losing an arm. A lot of other devices are being hooked into people, and we're going to see more and more convergence of things controlled by software becoming part of people and affecting your daily health. So, all I've done up till now is say bad things: "Health care is really dangerous. We're all in trouble." And I thought, "I have to give some constructive advice, like, 'What can we do to fix it?'" And when I was a kid, I loved watching David Letterman. And for the young people here, that's like Stephen Colbert, but not as witty and not as funny. And I said, "What are the top 10 things we can do to get the biggest bang for the buck?", meaning the least amount of effort to get the most benefit. And these aren't in order. They're just the top 10. And the first thing is, a medical device should run the application that it was intended for, right? A PACS radiology system should run radiology software and nothing else. So, let's white-list applications and only run the things that should run on a device. Number 2: let's do the things we know how to do to protect the back-end systems, okay? We know how to do virus scanning. We know all kinds of hygiene, but we're not doing it in the health-care systems. Another thing is, if you get a query on a medical database for records number 5 through 20,000, that's not a legitimate query. We need to start profiling the queries and say, "Let's not allow queries that are illegitimate. Let's raise alarms, et cetera." It's obvious to a lot of security people, but multifactor authentication exists, and in terms of bang for the buck, that's something we need to start doing in health-care I.T. Remember I talked about the doctor who went home and used his computer and then his kids were playing games on there?
What we really should do is say, "Why not use a virtual machine? Install something like VMware on your home machine, open up a virtual machine to access the records, then pause that virtual machine. Your kids can open up another virtual machine." If you don't have the money to buy two computers, you should at least be able to isolate these things. It's not perfect security, but in terms of bang for the buck, I think it's huge. Encrypt the data! Okay? [ Laughter ] [ Chuckles ] In terms of the cloud providers, there are these things called health information exchanges, HIEs. They allow you to go from one hospital to another hospital because they're sharing all their data. But one of the problems is they get bogged down with lawyers and with all these legal problems whenever somebody new tries to enter, a new hospital or a new HIE. If somebody were to standardize a legal agreement between cloud-service providers who are storing medical data and health information exchanges, things would flow very seamlessly. We also need access control for chart accesses, and we need to log and get accountability whenever a chart is accessed. When the data itself is identifying, like genomic data, it's very difficult to anonymize it. This one is more like a big bang and a big buck, because I don't know how to do this, and it's an area where I recommend that research happen. Now, David Letterman would usually get a drum roll before the last one, so can we hear a little... It's not, like, bigger than the others. It's just the last one. We need to authenticate clinical personnel. And this could be something like smart badges with RFID proximity sensors, which would give you one factor when you get close, because the way they're doing it today is with a nurse who goes around and authenticates people. So, my final thoughts are that the health-care sector is unique.
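The query-profiling idea a little earlier, that a request for records number 5 through 20,000 is not a legitimate clinical query, can be sketched as a simple guard placed in front of the records database. This is only an illustration of the idea: the function names, the threshold, and the alert list are all hypothetical, not taken from any real hospital system.

```python
# Sketch of query profiling for a medical-records database: flag
# requests whose record range is far larger than a normal clinical
# workload. All names and thresholds here are hypothetical.

MAX_RECORDS_PER_QUERY = 50  # assumed per-role limit, tuned from audit history


def is_legitimate(first_id: int, last_id: int,
                  limit: int = MAX_RECORDS_PER_QUERY) -> bool:
    """Return True if the requested range looks like normal clinical use."""
    if last_id < first_id:
        return False  # malformed range
    return (last_id - first_id + 1) <= limit


def check_query(user: str, first_id: int, last_id: int, alerts: list) -> bool:
    """Allow the query, or record an alert for review; returns whether allowed."""
    if is_legitimate(first_id, last_id):
        return True
    alerts.append(f"ALERT: {user} requested records {first_id}-{last_id}")
    return False
```

With this guard, a routine lookup of a single chart passes, while the talk's example of "records number 5 through 20,000" would be refused and raise an alarm for a human to review.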
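Likewise, the point above about logging and accountability for chart accesses can be sketched as a role check combined with an append-only audit trail, so that every read, allowed or denied, leaves a record. The role table, user names, and log structure below are hypothetical illustrations, not a real system's design.

```python
# Sketch of logged, role-checked chart access (all names hypothetical).
# Every read attempt is recorded, permitted or not, so there is
# accountability for each chart access.

from datetime import datetime, timezone

# Assumed role table: which staff may read which patients' charts.
ACCESS_RULES = {
    "dr_jones": {"patient_17", "patient_42"},
    "nurse_lee": {"patient_42"},
}

# Append-only in this sketch; a real deployment would use
# tamper-evident or write-once storage for the audit trail.
AUDIT_LOG: list[dict] = []


def read_chart(user: str, patient: str) -> bool:
    """Check the rule table, record the attempt, and report success."""
    allowed = patient in ACCESS_RULES.get(user, set())
    AUDIT_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "patient": patient,
        "allowed": allowed,
    })
    return allowed
```

The design choice worth noting is that the log entry is written whether or not access is granted; denied attempts are exactly what an investigator wants to see after an incident.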
It’s a unique regulatory environment. We have different stakeholders. We depend a lot on software. And it does affect all of us personally because we, at some point, are going to have to deal with the medical system. So, we need to consider the security implications of new technologies. Thank you very much. [ Applause ]

1 Reply to “USENIX Enigma 2016 – Hacking Health: Security in Healthcare IT Systems”

  1. Speaking merely from my own experiences doing consulting work, the level of complacency that I've seen in the healthcare organizations I've worked with/for exceeds any I've seen in any other industry. The sharp mismatch between the potential harm that compromises could wreak (in some scenarios, literally life vs. death) and the low level of awareness of, or caring about, security issues among personnel has left me, time and again, almost astonished. (Almost.)

    Doctors, nurses, and other "credentialed" (i.e. having letters after their names) personnel are invariably opposed to any measure that imposes even the slightest bit of new friction in their workflows. (Trying to convince doctors that requiring two-factor authentication to access Electronic Health Records is perfectly doable and realistic is such an exercise in frustration that it really does become comical.) When it comes to computer security, compliance managers and assessors only care about, well, compliance with regulations, not making actual systems more secure. Although that's actually less annoying than in some fields (say, PCI compliance with retailers) because the healthcare regulations that bear on security (i.e. the HIPAA cybersecurity provisions) are essentially silent & irrelevant when it comes to, you know, actually putting in place security measures. Staff members who work every day with systems containing large amounts of highly confidential (under law) information get somewhere between "almost zero" and "zero" security awareness & education training.

    Just…god-awful. (Again, at least in the organizations where I've personally done IT and security work. YMMV.)
