Browsing all articles in Infrastructure

Pseudonymisation, mobile and ‘the cloud’

Posted by Martin Peacock in Cloud, Infrastructure, Mobile | Nov 21 | No comments

I note a press release from the Information Commissioner’s Office publishing a code of practice on data anonymisation, as well as the launch of a new body (UKAN) to promote said best practice.

While this is largely focussed on aggregate (e.g. statistical / epidemiological) practices, anonymisation (and more specifically, pseudonymisation) is an important topic in the conversation around the two big buzzwords of the day – ‘mobile’ and ‘cloud’.

Mobile applications – particularly when combined with BYOD – have security issues. That much we all know, and there are solutions to those issues that are, by now, well understood, including encryption, virtualisation, compartmentalisation and zero-footprint clients.  But these come with compromises and issues of their own.  It would of course be simpler in terms of deployment and performance if native apps could be used without concerns over information security.  This is where pseudonymisation has a benefit – on the way to the device, all identifiable data is replaced with ‘alternative identities’, with the relationship between the original and the alternative known only to a trusted system (aka ‘the server’).  On the way back, the original identities are restored and no privacy issues accrue.
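To make that concrete, here is a toy sketch of the pattern in Python. The field names and the in-memory lookup are illustrative assumptions only – a real trusted system would persist the mapping securely and cover far more identifiers than a single patient ID.

import secrets

class PseudonymisationService:
    """Toy 'trusted system' that swaps real identifiers for pseudonyms.

    Illustrative only: a real service would persist the mapping in a
    secured store and handle names, dates of birth, addresses and so on.
    """

    def __init__(self):
        self._to_pseudo = {}   # real ID -> pseudonym
        self._to_real = {}     # pseudonym -> real ID

    def pseudonymise(self, record):
        real_id = record["patient_id"]
        pseudo = self._to_pseudo.get(real_id)
        if pseudo is None:
            pseudo = secrets.token_hex(8)      # random 'alternative identity'
            self._to_pseudo[real_id] = pseudo
            self._to_real[pseudo] = real_id
        out = dict(record)
        out["patient_id"] = pseudo             # what the device (or cloud) sees
        return out

    def reidentify(self, record):
        out = dict(record)
        out["patient_id"] = self._to_real[record["patient_id"]]
        return out


svc = PseudonymisationService()
outbound = svc.pseudonymise({"patient_id": "NHS-1234567890", "result": "normal"})
inbound = svc.reidentify(outbound)             # original identity restored on the way back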

But this may be more important ‘in the cloud’.  Access to patient data through the NHS IGSoC process demands careful planning and implementation of best-practice security measures (rightly so), and it is virtually (if not downright) impossible to ship Personally Identifiable Data (PID) to locations outside the UK.  For the cloud, that is a problem.  Very often, restricting the location of data to within the UK means many of the benefits of cloud are not realisable. In some cases it means the choice of infrastructure supplier is limited, and in some situations potentially valuable services are ruled out entirely.

A cloudy service launched in the US, for example, will often choose an infrastructure provider based in the US. That isn’t much good for organisations in the UK who cannot store PID outside the UK.  Pseudonymisation offers a way forward for those situations.

A note on SSD storage

Posted by Martin Peacock in Infrastructure, PACS General | Mar 21 | No comments

Some of the more progressive PACS sites have already started implementing SSD technology as part of their performance management initiatives.  In general this is (and certainly should remain for the foreseeable future) one part of a multi-tier architecture that has SSD as the very-short-term, ultra-quick cache, with less expensive technologies providing the cheaper, deeper tiers – possibly replicated, and even (a subject for another post) possibly deduplicated.
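As a rough illustration of the read path in such a tiered setup, here is a sketch in Python of a read-through cache: recently requested studies are served from a small, fast (SSD-backed) tier, and everything else falls through to the cheaper, deeper store. The tier names and sizes are invented for the example, not taken from any particular PACS.

from collections import OrderedDict

class TieredImageStore:
    """Sketch of a two-tier read path: small fast cache over a deep archive."""

    def __init__(self, deep_store, cache_slots=1000):
        self.deep_store = deep_store          # e.g. NAS / object storage (cheap, deep)
        self.cache_slots = cache_slots        # how many studies fit on the SSD tier
        self.ssd_cache = OrderedDict()        # study_id -> pixel data, in LRU order

    def fetch(self, study_id):
        if study_id in self.ssd_cache:
            self.ssd_cache.move_to_end(study_id)      # keep hot studies hot
            return self.ssd_cache[study_id]
        data = self.deep_store[study_id]              # slower, cheaper tier
        self.ssd_cache[study_id] = data
        if len(self.ssd_cache) > self.cache_slots:
            self.ssd_cache.popitem(last=False)        # evict least recently used study
        return data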

However, image data is very often (not always, though*) identifiable and should be treated with appropriate confidentiality.  When it comes to decommissioning a bundle of hardware, that confidentiality is often preserved by forensically wiping each disk clean (taking a screwdriver to the interface pins comes a poor second :-) ).

As noted on the Sophos Naked Security blog, wiping SSDs is considerably more complicated than the same process for their cousins, spinning disks, and to maintain security, encryption should be engaged from day 1. Making the encryption keys unrecoverable is much simpler than doing the same for the whole disk.
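The principle behind that advice – sometimes called crypto-erase – can be shown in a few lines of Python using the third-party cryptography package. This is purely to illustrate the idea; it is not a substitute for proper full-disk encryption or self-encrypting drives.

from cryptography.fernet import Fernet

# Day 1: generate a key and only ever write encrypted data to the SSD.
key = Fernet.generate_key()
cipher = Fernet(key)
stored_on_ssd = cipher.encrypt(b"identifiable image metadata / pixel data")

# Normal operation: the key is available, so the data is readable.
assert cipher.decrypt(stored_on_ssd) == b"identifiable image metadata / pixel data"

# Decommissioning: destroy the key. Whatever remains in the SSD's flash
# cells (including blocks the wear-levelling firmware has squirrelled away)
# is now just ciphertext with no key to recover it.
del key, cipher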

Martin

* Of course, some PACS implementations do not store identifiable metadata with the pixel data itself – only within the accompanying database – for some modalities. That’s obviously part of another discussion on what precisely ‘vendor neutral’ is and how data migration can be facilitated in an equitable world.

Windows Sys Admins – Your world is about to change

Posted by Martin Peacock in Infrastructure | Jan 16 | No comments

So towards the end of last week an official Microsoft blog entry caused a bit of a stir.  It’s not strictly news, but it reiterated Microsoft’s position on how Windows Server systems will be administered in the future – which will undoubtedly have more of an effect in healthcare than in many other industries.

The big change is a decisive movement away from the GUI console as the primary means of local administration.  That means no point-and-click with right-click context menus, no visual cues, and no ‘wizards’ – at least in the form that most folk are accustomed to.

The changes (for Windows Server 8) aren’t an immediate turnaround.  WS8 will have an optional minimal-GUI for the time being.  But even the official line from Microsoft is:

In Windows Server 8, the recommended application model is to run on Server Core using PowerShell for local management tasks and then deliver a rich GUI administration tool capable of running remotely on a Windows client.

… which of course is the model that MS themselves have been using in tools like AD admin for many years.  But as RedmondMag opines:

Anyone who thinks “minimal GUI” mode is anything more than a holding measure is crazy. To me, this clearly says Microsoft is trying to get us off the console for good.

In most cases, in my opinion, the inconvenience will be quickly forgotten:

  • The vast majority of app-level administration for any software that truly belongs in an ‘enterprise’ infrastructure follows the recommended model already, and at worst, needs only a modest change in practice.
  • I expect Microsoft to expand their own range of remote-GUI tools to extend deeper into the OS/Hardware stack so that the range of admin functions that can only be performed directly on the server using PowerShell will be quite limited.  System Recovery and AV cleanup may be challenges but not by any means insurmountable.

However, healthcare offers a different category of challenge – not uniquely, but probably to a greater extent than many other industries: the desktop app hosted on a server.

It happens less nowadays than in the past but is still (in my experience) a practice that derives from the way Health IT has evolved over the last 15 years or so in an environment where individual departments/specialties have more autonomy than equivalents in other industries.  I’m not debating the rights and the wrongs – just the reality.

I have seen applications coming through the door which have (to cut a long story short) questionable ‘enterprise’ credentials.  Yes, there are MSDE databases (theoretically easily migratable to SQL Server, but – in theory…), and yes, Access databases, and HL7 interface engines written in VB or Delphi that run only on a desktop.  The requirements of clinical departments are many and varied, and sometimes it is a necessary evil to accept such software.  Sometimes the best that can be made of the situation is that they are hosted in a server environment so that some of the benefits can accrue – like redundant power supplies, a controlled environment, or an OS that doesn’t need to be rebooted once a week.

For those applications, either a new strategy is required or they need to be replaced.  There isn’t – necessarily – a panic.  Extended support for Server 2003 continues to 2015 and for Server 2008 until 2018, so there is time to consider carefully.  And another consequence of the HIT evolutionary process is that everyone accepts we’ll be living with multiple versions of Windows Server – as well as the Linux, VMS, AIX, HP-UX and even (gasp) OS X Server installations routinely in use.

 

Storage: SSD and Tape News

Posted by Martin P in Infrastructure | Apr 15 | No comments

We have news that 1TB SSDs are available (at a premium price, mind).  SSDs have particular advantages for medical archives:

  • Faster
  • More reliable (no moving parts)

… but however reliable they are, backups are still necessary.  Disk space is so cheap nowadays that disk-to-disk backup is viable, but many still prefer tape backup.  The future of LTO tape had been somewhat uncertain, but the LTO consortium has now announced plans (PDF) for LTO up to generation 8 (LTO is currently at Gen 5).

Generation 8 is set to allow up to 32TB per tape (compressed). Although there isn’t a formal schedule, past generation lifecycles suggest Gen 8 may be with us in 2017.
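Bear in mind that the roadmap’s headline figures are ‘compressed’ capacities, which assume the consortium’s nominal compression ratio (2.5:1 for the later generations) – a ratio that already-compressed image data is unlikely to get near. A quick sanity check, with the ratio taken as an assumption:

# Hedged sanity check: the roadmap quotes compressed capacities assuming
# a nominal compression ratio (2.5:1 for the later generations).
compressed_tb = 32          # roadmap figure for generation 8
nominal_ratio = 2.5         # consortium's assumed compression ratio
native_tb = compressed_tb / nominal_ratio
print(f"Gen 8 native capacity ~ {native_tb:.1f} TB")   # ~12.8 TB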

Disk storage continues to grow but gets faster too…

Posted by Martin P in Infrastructure | Nov 25 | No comments

El Reg reports on a 5TB solid state disk capable (internally) of 6GB/s bandwidth.  Connecting to a motherboard via PCIe (version 2) limits useful transfer to 500MB/s, but that’s still respectable.  PCIe version 3, expected in 2010, raises the game to 1GB/s.
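For anyone wondering where the 500MB/s and 1GB/s figures come from: assuming they refer to a single PCIe lane, they drop out of the line rate minus the encoding overhead, as this little calculation shows.

def lane_bandwidth_mb_s(transfers_per_s, payload_bits, total_bits):
    """Usable bytes/s for one PCIe lane after encoding overhead."""
    usable_bits = transfers_per_s * payload_bits / total_bits
    return usable_bits / 8 / 1e6

# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding
print(lane_bandwidth_mb_s(5e9, 8, 10))      # 500.0 MB/s
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding
print(lane_bandwidth_mb_s(8e9, 128, 130))   # ~984.6 MB/s, i.e. roughly 1GB/s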

Add High Availability to your PACS on a shoestring: Part II

Posted by Martin P in DCM4CHEE, High Availability, Infrastructure, PACS General | Oct 19 | 3 comments

Getting the Hardware

Previous entry in this series: Introduction and Architecture.

To do this, we need four boxes: two servers and two NAS storage arrays.  Actually, we’ll need at least five, since we’ll need some kind of network infrastructure to connect them, but I’m going to consider that outside the scope of these articles.  There will be a need for at least four network ports (see the discussion on servers, below).  There are some wrinkles that anyone not familiar with a datacentre environment might not think of, so I’ll try to spell them out.

The title of this series suggests that High Availability can be achieved on a shoestring.  I believe that to be true, but there is one thing that should never be skimped on – the hardware.  All of the software involved in this solution is free, but hardware is not.  The hardware you put in place sits squarely in your workflow’s critical path, so you should pony up and get the best you can afford.
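As a rough picture of what those four boxes are for, here is a sketch in Python – the hostnames, port and roles are entirely hypothetical placeholders for the kit described above, and the probe is a crude stand-in for the proper cluster heartbeating covered later in the series.

import socket

# Hypothetical inventory for the four boxes described above.
TOPOLOGY = {
    "servers": ["pacs-node-a", "pacs-node-b"],     # active / standby pair
    "storage": ["nas-array-1", "nas-array-2"],     # mirrored NAS arrays
}

def reachable(host, port=22, timeout=2.0):
    """Crude reachability probe - not a substitute for real heartbeating."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for role, hosts in TOPOLOGY.items():
        for host in hosts:
            print(role, host, "up" if reachable(host) else "DOWN")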


A supercomputer under your desk?

Posted by Martin P in Infrastructure | Oct 19 | No comments

Right. Time to try and catch up.

Apparently, even though Moore’s Law is still alive and kicking, the processing power needed for high-resolution visualisation of the considerably increased CT, MR and PET data volumes has outstripped simple desktop systems, and 3D visualisation has for the last few years been driven by the server.

That may change with a growing trend for desktop supercomputers at an approachable cost.

The system can be expanded to an 80-core system with a capacity of up to 960GB of memory.

That’ll do.  I’ll have one for Christmas.

A short note on RAID levels

Posted by Martin P in Infrastructure | Sep 21 | 2 comments

So you have your shiny new NAS, or even JBOD (Just a Bunch of Disks), and you’re wondering how to make it safe and secure from disk failure.  For a long time, the standard answer would have been RAID 5 (with a hot spare).  As a compromise between security and performance, RAID 5 always worked quite well.   Lose a disk to a head crash or some other malady, and the dead disk is swapped out, the hot spare is swapped in and rebuilt, and everyone’s happy again.

But then disk capacities started to increase.  It is common now to have multi-TB disks in an array. The problem now becomes the amount of time it takes to rebuild the array.  It is now common for a disk rebuild to take many hours, which increases the chance that a second disk will fail during the process.  RAID 5 can’t handle that. Bang! You’re dead.

So in came RAID 6, which allows for two parity disks, meaning two disks can be lost from the array and a rebuild can still complete.  That rebalances the odds.  Or it did, until disk capacities just kept on increasing.
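The trade-off is easy to see with a little arithmetic – a sketch using example figures rather than any particular array:

def usable_tb(disks, disk_tb, parity_disks, hot_spares=1):
    """Usable capacity once parity and hot spares are set aside."""
    return (disks - parity_disks - hot_spares) * disk_tb

# Example: twelve 2TB disks with one hot spare.
print(usable_tb(12, 2, 1))   # RAID 5: 20 TB usable, survives 1 disk failure
print(usable_tb(12, 2, 2))   # RAID 6: 18 TB usable, survives 2 failures
print(usable_tb(12, 2, 3))   # triple parity (e.g. ZFS raidz3): 16 TB, survives 3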

So now Sun – increasingly repositioning itself as a storage company – adds support for three parity disks into its storage line, underpinned by the rather cool (in a geeky kind of way) ZFS. While not the first product of its kind, the Sun storage line is strong, and I’d love for the European competition regulators to get on and make a decision on the Oracle acquisition of Sun so customers know where they stand.  ZFS has been making good on its potential for a while now, and with the news over the summer that deduplication will be supported, anyone remotely interested in storage really must take a look.

22/09/09 EDIT: Larry Ellison seems to agree.

16/07/10 EDIT: I’ve been asked just how realistic it is that two disks would fail in quick succession.  If you took two disks at random, the answer would be that, OK, perhaps it’s a bit of a stretch.  Disk failures do happen, but not that often.  However, the disks in a RAID array are not disks at random.  There is a very high probability that all the disks in an array were manufactured at the same time, and of course they were all spun up for the first time together.  This puts them at the same point on the bathtub curve.  That means there is a higher probability of such drives failing in quick succession than in the randomised scenario.  By how much?  That depends on a lot of factors – the quality of the hardware, the nature of the rebuild process, the precise position on the bathtub curve – and may not be a very large increase.  For the cost of an additional disk, would you take the risk?
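To put some illustrative, assumption-laden numbers on it: treat each surviving disk as having some annualised failure rate, work out how long the rebuild window is, and compare the independent case with a modest ‘same batch, same bathtub position’ multiplier. Every figure below is an assumption chosen for the example.

import math

def p_second_failure(surviving_disks, afr, rebuild_hours, correlation=1.0):
    """P(at least one more disk fails during the rebuild window).

    Assumes exponential failure times; 'correlation' is a crude multiplier
    for disks from the same batch at the same point on the bathtub curve.
    """
    rate_per_hour = correlation * afr / (365 * 24)
    return 1 - math.exp(-surviving_disks * rate_per_hour * rebuild_hours)

# Example: 11 surviving 2TB disks, 3% annualised failure rate,
# rebuild running at ~50MB/s, so a window of roughly 11 hours.
rebuild_hours = 2e12 / (50e6 * 3600)
print(p_second_failure(11, 0.03, rebuild_hours))                    # ~0.04% if failures were independent
print(p_second_failure(11, 0.03, rebuild_hours, correlation=5.0))   # ~0.2% with a batch/bathtub multiplier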

Add High Availability to your PACS on a shoestring: Part I

Posted by Martin P in DCM4CHEE, High Availability, Infrastructure, PACS General | Sep 21 | 3 comments

Introduction and Architecture

In most clinical organisations, PACS is a critical part of the operational workflow.  RIS is important but PACS must stand firm even when all around it have failed.  In environments which include Emergency, Intensive Care and Theatre (to name a few), the non-availability of PACS can be a danger to patient safety.

Most vendors, however, cost HA not just as an extra, but as a luxurious extra, adding a significant percentage to the total cost of a system.  It needn’t be that way.  HA is a well understood practice and the tools and materials needed can be easily worked into a standalone package that can provide HA for almost any PACS (see section on limitations, below).

I’m going to describe a solution for HA based on free, open source software (note: NOT ‘freeware’.  Know the difference).  All of the software in this solution is mature, proven software used by major organisations the world over.  Where there are choices I have selected software components that have full support options available, just in case you need a little more assurance.


The end of OpenSolaris? A boon to ZFS?

Posted by Martin P in Infrastructure | Jul 14 | No comments

Via /., Steven J. Vaughan-Nichols considers the likelihood that Oracle, should the acquisition of Sun go ahead, will despatch OpenSolaris to the trash can.  Other open-source projects currently sponsored by Sun, but unwanted by Oracle, have a good chance of survival because they have enough community behind them to keep either the original project or a fork going.

OpenSolaris, however, very likely does not have enough of a community to do so.  That would be a real tragedy.  Many years ago I cut my unix-y teeth on Solaris (back in the days when vi and VMS EDT were the only word processors worth knowing) so I’ll certainly look back with fond memories. Even now, OpenSolaris is seen by many as a rock solid OS.

However, there may be some silver lining to this cloud.