After the Instagram Ts&Cs palaver, an interesting piece on cloud service terms, conditions and acceptable use.
Of course, almost all Health IT-related cloud services are based on infrastructure provided by one of a handful of underlying vendors – Amazon, Rackspace, Azure, et al.
It is possible for the underlying vendor to shut down the service being provided “…simply because a customer, a politician, or even a competitor claims there are issues with your — or your customers’ — activities” – in the article, based on allegations of DMCA transgression.
But Health IT – particularly clinical services – may have a comparable Achilles heel: certifications such as FDA 510(k) and the EU CE Mark. It isn’t impossible for a given software product to be classified differently by different companies. For example, I know of two vendors selling effectively the same product – one of which has it classified as CE Class I, the other, Class IIa. The difference in effort between the two classes is not insignificant, and the choice rests entirely on legal opinion. It is possible (I don’t think it has happened) that the latter vendor has used the difference in legal opinion to pressure potential customers to avoid the possible risk should litigation come knocking; but then, it is up to individual hospitals to weigh that as a possible hazard.
But for competing services in the cloud, a small word in the right ear – that the cloud provider (at the level of Amazon, Microsoft or Rackspace) is hosting services that are incorrectly classified and may be open to legal action – would certainly generate a phone call to General Counsel. And action may well follow.
Simon Phipps, in the article linked above, suggests three mitigations of varying degrees of ‘implementability’. The third is part of what I would call Full Stack Redundancy – avoid single points of failure all through the hardware/software stack (at least where possible) – including the cloud vendor. It’s a tough ask, but hey:
Shoot for the moon. Even if you miss, you’ll land among the stars
So I came across a new book on MySQL replication, which looks pretty useful, although a little expensive. The author biographies are particularly impressive – these are the guys who really should know about replication. One of them is the architect of MySQL’s row-based replication, which set me thinking.
A while back I was implementing replication on a couple of DCM4CHEE servers and noticed that the master and slave became increasingly out of sync. At the time, the only option in MySQL was statement-based replication and I attributed (with no real evidence) the issue to known limitations of that form of replication. In any case – the alternative (DRBD) turned out to be quite adequate and that is the strategy I’ve used since.
However, more recently (November 2008), MySQL 5.1 included row-based replication and I never went back to review the strategy. Seeing this new book prompted me to spend a little time checking what the state of MySQL row-based replication is.
To cut a long story short, I ended up discovering Maatkit – “power tools for open source databases”, which, while written primarily with MySQL in mind, supports other open source databases (like PostgreSQL for example) and is certainly going to join the rest of my arsenal. The first use will be mk-table-checksum – designed to allow comparison of master and slave to ensure replication consistency.
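The underlying idea is simple enough to sketch: compute a checksum of each table on master and slave, and compare. Here is a minimal Python illustration of that idea, using in-memory SQLite databases as stand-ins for the master and slave (the table and column names are invented for the demo – the real mk-table-checksum does its checksumming server-side, in chunks, with far more care):

```python
import hashlib
import sqlite3

def table_checksum(conn, table):
    """Hash every row of a table in a deterministic order.

    A toy stand-in for the per-table checksums mk-table-checksum
    computes; real MySQL tooling uses server-side checksum queries.
    """
    h = hashlib.md5()
    for row in conn.execute(f"SELECT * FROM {table} ORDER BY rowid"):
        h.update(repr(row).encode())
    return h.hexdigest()

def tables_in_sync(master, slave, table):
    return table_checksum(master, table) == table_checksum(slave, table)

# Two in-memory databases standing in for master and slave.
master = sqlite3.connect(":memory:")
slave = sqlite3.connect(":memory:")
for db in (master, slave):
    db.execute("CREATE TABLE study (id INTEGER PRIMARY KEY, accession TEXT)")
    db.execute("INSERT INTO study VALUES (1, 'A100'), (2, 'A101')")

print(tables_in_sync(master, slave, "study"))   # True: replicas agree

# Simulate replication drift on the slave.
slave.execute("UPDATE study SET accession = 'A999' WHERE id = 2")
print(tables_in_sync(master, slave, "study"))   # False: drift detected
```

The point is that a row-by-row diff of large tables is prohibitively expensive, whereas comparing checksums is cheap – which is exactly the trade mk-table-checksum makes.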
I’ll try out row-based replication and, with a new toolset to hand, can be comfortable that it works the way it should (or indeed, doesn’t).
BTW the O’Reilly site has a sample chapter which nicely describes the process of setting up simple replication – worth a look if you haven’t tried it before.
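For flavour, basic master/slave setup boils down to a handful of statements. The sketch below uses placeholder hostnames, credentials and binlog positions (not taken from the book) and assumes `server-id` and `log-bin` are already configured in each server’s my.cnf:

```sql
-- On the master: create a user the slave will replicate as.
CREATE USER 'repl'@'slave.example.com' IDENTIFIED BY 'secret';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com';

-- On the slave: point at the master's current binlog position,
-- as reported by SHOW MASTER STATUS on the master.
CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=106;
START SLAVE;

-- And from 5.1, row-based replication can be selected with:
SET GLOBAL binlog_format = 'ROW';
```

The chapter covers the details (and the gotchas) far better than this fragment can.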
Getting the Hardware
Previous entry in this series: Introduction and Architecture.
To do this, we need four boxes: two servers and two NAS storage arrays. Actually, we’ll need at least five, since we’ll need some kind of network infrastructure to connect them, but I’m going to consider that outside the scope of these articles. There will be a need for at least four network ports (see the discussion of servers, below). There are some wrinkles that anyone not familiar with a datacentre environment might not think of, so I’ll try to spell them out.
The title of this series suggests that High Availability can be achieved on a shoestring. I believe that to be true, but there is one thing that should never be skimped on – the hardware. All of the software involved in this solution is free; the hardware is not. Skimping here introduces a potential point of failure into your workflow, so pony up and get the best hardware you can afford.
Introduction and Architecture
In most clinical organisations, PACS is a critical part of the operational workflow. RIS is important but PACS must stand firm even when all around it have failed. In environments which include Emergency, Intensive Care and Theatre (to name a few), the non-availability of PACS can be a danger to patient safety.
Most vendors, however, cost HA not just as an extra, but as a luxurious extra, adding a significant percentage to the total cost of a system. It needn’t be that way. HA is a well understood practice and the tools and materials needed can be easily worked into a standalone package that can provide HA for almost any PACS (see section on limitations, below).
I’m going to describe a solution for HA based on free, open source software (note: NOT ‘freeware’. Know the difference). All of the software in this solution is mature, proven software used by major organisations the world over. Where there are choices I have selected software components that have full support options available, just in case you need a little more assurance.