"I will prescribe regimens for the good of my patients according to my ability and my judgment and never do harm to anyone."
This is one of the lines from the original Hippocratic Oath that physicians swore before commencing practice. What if we, as IT professionals, had to take a similar oath to do no harm? What if every line of code written, every instruction burned into a silicon chip, every system designed or deployed had to be built with this oath in mind?
When providing the basic IT infrastructure for health care, you get a taste of what that might feel like. At some level, there is always a sense that your work will touch someone’s life. After years of sweat and tears deploying one of the nation’s largest electronic health record (EHR) systems, I can say that knowledge changes how you make decisions.
When failure is not an option
Imagine that a patient is on the operating table, under anesthesia. The operation is about to begin, but then a critical system goes down. In a situation like this, high system availability is not good enough. We need continuous availability and the capacity to cope with extreme failure.
To achieve this level of availability, we build in redundancy at every level, from the data center down to individual fiber links. We have backups of backups of backups. We deploy automated switchover technologies that can intelligently sequence the startup of interdependent rings of supporting applications. With Murphy’s Law in mind, we build systems that tolerate failure and work around it.
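The intelligent sequencing described above amounts to respecting a dependency graph: no application comes up before the services it relies on. A minimal sketch of that idea, using Python's standard-library `graphlib` and an illustrative, hypothetical set of service names (not from any real EHR deployment):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical dependency map: each service lists the services that
# must already be running before it can start. Names are illustrative.
DEPENDENCIES = {
    "ehr_frontend": {"app_server"},
    "app_server": {"database", "auth_service"},
    "auth_service": {"database"},
    "database": set(),
}

def bring_up_order(deps):
    """Return a startup order in which every service comes after its dependencies."""
    return list(TopologicalSorter(deps).static_order())
```

During an automated switchover, the orchestrator would walk this order, starting each service and health-checking it before moving on; `TopologicalSorter` also raises an error if the dependency graph contains a cycle, catching a class of configuration mistakes before they matter in production.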
An oath to “do no harm” can also change how you think about data. Health care data is both extremely personal and durable. Medical records are intensely private, and they do not expire. As a result, at every infrastructure decision point we must ask ourselves, “How do we protect our members’ data?” And yet, as my colleague Ravi Krishnan discussed in his post, mobile behavior will be a game-changer in healthcare. That implies a world where data flows where it is needed, as it is needed. Infrastructure that prevents members and care providers from easily and quickly accessing data in appropriate ways and circumstances could actually “do harm.”
We need to design and build infrastructure that balances strict security with seamless, user-friendly access. And just as when we think about system availability, we must plan for extreme failure and design systems that can cope with it (e.g., mobile devices that can be wiped remotely and that confine health data to restricted sandboxes).
Imagining a higher standard
Today, there are few systems where the costs of failure are catastrophic and measured in human lives. But as our world becomes increasingly automated, there will be more. Even in ordinary business settings, this Hippocratic thought exercise can be useful. IT professionals tend to be experimenters by nature, and we often have more patience with everyday technology failures than our users do.
So just for a moment, imagine what it would be like if an IT system that you use every day could never be allowed to go down or be breached, no matter what. What would you do differently?