
Interoperability in Health IT – Is it really so bad?

Posted by Martin Peacock in IHE, Jul 18 | No comments

Short version:
Health IT is often compared unfavourably to IT in Financial Services – with, perhaps, a grass-is-always-greener perspective – but maybe HIT isn’t as badly placed as many think? The challenges are greater and the investment lower, and still the quality of solutions bears fair comparison. In fact, HIT is probably better placed than FS-IT to step up to the next level. Perhaps we shouldn’t beat ourselves up *quite* so much.

Longer version:

I came across a post in MedCityNews this morning (well, maybe more of an advertorial than a post, but interesting nonetheless). The story – in a nutshell – is about one particular vendor potentially using ‘Big Data’ to improve health outcomes. The use of ‘Big Data’ relating to actual people is a topic of much debate – the furore around security-agency access notwithstanding – but I’ll leave that to one side for another day.


The line that really woke me up (to be fair, I was on the red-eye train into Dublin) was this:

Some equate it to finally catching up to where the banking and airline industries have been for years:

And that’s a position I’ve sympathised with (and even moaned about) in the past. How, moaned I, can I get cash out of a hole in the wall in a foreign country thousands of miles from home in a matter of seconds, while two hospitals a couple of miles apart have such trouble sharing X-rays?

But I’ve changed my position over the last 12 months or so. While the 2012 IT failure at RBS didn’t end there for the bank, for those of us thankfully unaffected it does offer an interesting – if brief and transitory – insight into the back-office workings of those august institutions. Combined with some recent direct experience of the back-office workings of financial services (FS), it allows me to offer a different perspective.

While the pre-automation forms of the two sectors are similar in terms of process depth and complexity, there are a number of differences between them, and between perception and reality:

  • Firstly, and perhaps most importantly, FS systems are seen from the outside as superbly architected artifacts of elegance. In fact, they are more like the angry-looking swan above, gracefully gliding over the surface while paddling like the clappers down below. Behind the super-efficient delivery of flawless process and documentation lies (still!) a small army of humans greasing the wheels. That isn’t to say this is bad, but it is contrary to many people’s expectations.
  • It may well be true that cash transactions between relatively unconnected organisations can be fairly efficient in real time, but many transactions do not occur in real time, especially at a consumer level. Those who check bank statements might see debit-card transactions from retail outlets batched over two or three days – especially at the weekend (and of course, those statements can be botched as well as batched). Other batches might run only once per quarter or even once per year. In the meantime, data/hardware/software can be cleaned, cranked and patched into position.
  • For many FS systems, daily operation is on a 12/5 basis. When the small army of humans goes home, the system is done for the day. Life would be so much easier if all hospital IT systems could be switched into ‘maintenance’ mode at 6PM, ready for backups, updates and general TLC. Yes, there are the very-low-latency, intensive systems for trading and ForEx and the like – to the point where folk are talking about firing neutrinos through the middle of the earth to gain millisecond advantages in network latency – but they are in the minority.
  • Even those statements may well be batched – to a point in the day when an intense period of dispatch can be prepared for in advance. This is a luxury not afforded with anything like the same frequency in health IT (clinical health IT, at least).
  • The transactions that do cross organisational boundaries – whether in real time or not – are driven by legislative mandate and international agreement. Despite HIT having much more complete and overarching standards than FS, it is the lethargy of implementation that is holding it back.
  • The money invested in FS IT far outstrips that in health. Of course, all of the world’s money goes through the banks – not once but many times, as it is transformed and re-invested, spent, or just plain lost. The world’s economy in 2011 was around $70 trillion – so there is plenty of incentive to invest heavily in systems and processes.
  • It doesn’t help that, culturally, health IT investments tend towards monolithic, one-size-fits-all solutions, while banks and FS are much more prepared to engage in bespoke solutions – including self-help in many circumstances. For me, that part is a mistake by those involved in HIT – I believe in having the capability to adopt whichever approach offers the most benefit. But I know there are other opinions, and there are no answers that are right *all* of the time.
  • There is a constant ‘debate’ in health IT about the quality of end-user applications – their user experience (UX), sub-second response times, consistency of UI, etc. – that slows down system implementation and sometimes downright stops it. FS-IT doesn’t seem to have that problem, even though some of the UX in its end-user apps is pretty poor indeed – and the consumer-visible software is only a small part of it.

It isn’t really fair to make comparisons – HIT labours under so many disadvantages. But still, IHE offers a great way forward. It has been slow, and there have been (still are, and will continue to be) disagreements, but it is quite possible to conceive of a time when a significant proportion of the clinical record is fully interoperable.

There are still plenty of hills to climb, and the gradient is somewhat steeper for H-IT than for FS-IT. But what HIT already has – as we speak – in the form of the standards being implemented internally within organisations, as well as those that will allow for wider collaboration, offers a platform for expansion that many other industries can only envy. It has yet to come to fruition, and work has yet to be done, but it’s not a bad place to be. It’s good to learn from the mistakes (or otherwise) of other sectors, but remember apples ain’t oranges – GM notwithstanding.

Is Computing and Data Visualisation on the verge of a revolution?

Posted by Martin P in Development, IHE, PACS General, Mar 25 | No comments

I’m a fan of the notion of punctuated equilibrium in evolutionary theory.  From Wikipedia:

When evolution occurs, it is localized in rare, rapid events of branching speciation…

… most of the time, things just truck along doing what needs to be done, and then over a short (relatively speaking, in the case of evolution) period of time, some ‘big stuff’ happens. That feels like where the computing and technology world is sitting right now.

In the course of innovation within the computing and data visualisation fields – particularly as applied to medical imaging – there has for many years been steady but linear progress, pretty much corresponding to Moore’s Law. Let’s take a look at a few technology sectors:

  • Smartphones. Clearly, these were languishing in the doldrums until Apple gave the industry a smart kick up the rear. It took a little while for competitors to figure out how to follow, but follow they have, and the next year or so will see some intense competition and major advances as a result.
  • Browsers. There is at last widespread adoption (at least in product roadmaps) of HTML5, even by Microsoft. This is important. Two elements in particular: WebSockets will enable the development of server ‘push’ for CCOW implementations (an issue till now), and the Canvas element allows for in-place editing of images and image properties ‘in the browser’ with Javascript – watch this space for a little experimental implementation of window/levelling using the Canvas element (a sketch of the idea follows this list). As well as HTML5, many browsers are working on WebGL – a means to allow browsers to take advantage of hardware acceleration when rendering 3D. Not too long ago, 3D was a luxury item in radiology; nowadays it’s a necessity.
  • Speaking of 3D, developments are arriving thick and fast: Avatar in a cinematic context, augmented reality, 3D on every conceivable device. And while this 3D print of a vein is intended for ‘replacement body parts’, 3D printing of objects is mainstream and may well turn out to be a valuable surgical-preparation tool.
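
As a taste of what the Canvas element allows, here is a minimal window/level sketch – assuming a greyscale image has already been drawn onto a canvas with id ‘viewer’; the element id, function name and mapping are illustrative only, not a finished implementation.

    // Apply a window (width) / level (centre) mapping to a greyscale
    // image already drawn on <canvas id="viewer">. Illustrative only.
    function windowLevel(centre, width) {
      var canvas = document.getElementById('viewer');
      var ctx = canvas.getContext('2d');
      var img = ctx.getImageData(0, 0, canvas.width, canvas.height);
      var lo = centre - width / 2;
      for (var i = 0; i < img.data.length; i += 4) {
        // Linear ramp: below the window clamps to black, above to white.
        var v = Math.round(((img.data[i] - lo) / width) * 255);
        v = Math.max(0, Math.min(255, v));
        img.data[i] = img.data[i + 1] = img.data[i + 2] = v; // R, G, B
      }
      ctx.putImageData(img, 0, 0);
    }

A real implementation would keep the original pixel data to one side and re-map from that on every adjustment, rather than re-mapping the already-mapped canvas as this toy version does.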

There is a common thread to many of these – the involvement of Google. While the big G is by no means responsible for all advances, I think its energy has a beneficial effect on innovation generally. Google’s blunderbuss approach to innovation sometimes leaves me lukewarm. As an example, consider Google’s many patents (one and another) for scanning the pages of bound books. Innovative, perhaps, but it leaves me thinking – why not just guillotine the binding? For 99% of books, that’s OK, surely? But in general, Google is setting the pace – and many of the other big players in technology have woken up and are contributing.

The rest of us can only benefit.

Two primary uses of CCOW

Posted by Martin P in IHE, Jul 31 | No comments

There are two primary uses of CCOW:

  1. As a Single Sign-On (SSO) mechanism.  This makes users’ lives easier.
  2. As a means to synchronise patient (and other) context.  This can make users’ lives a little easier but, more importantly, guards against cross-identification errors.

The first is the most common motivation for many (if not most) CCOW implementations. Which is, in my opinion, a little unfortunate, because I believe the second to be more important. Having said that, once a CCOW implementation is in at a site simply as SSO, there is always the possibility that context management will be added later.

I see from a number of news sources that at least one implementation plans to use both aspects from the get-go.

It can be done.  More should be trying. After all, EPR applications are approaching their 50th birthday.

Google Trends says IHE XDS is FUN

Posted by Martin P in IHE, Jul 10 | No comments

In my day job, as well as in a research thread I have going at the moment, I have increasingly come across IHE XDS (Cross-Enterprise Document Sharing) – the part of IHE which provides for the transfer of clinical documents across the enterprise. This is particularly important for systems with otherwise undefined integration profiles, which can then (for example) deliver such items as discharge summaries into the clinical record.

Wondering idly if the rest of the world shared my increasing awareness, I turned to Google Trends and noticed something strange.


Polling from a web browser (or not)

Posted by Martin P in Development, IHE, May 24 | 2 comments

I recently wrote a few notes on the major challenge to CCOW adoption in browser-based applications – avoiding the necessity of polling for context updates. There is a technology ‘umbrella’ going by the name of Comet, which wraps the quirks and tricks needed to achieve server-side ‘push’ communication in a browser, but there may be a new kid on the block.

Cisco – who do have some pedigree in delivering useful technology – are launching an open-source messaging protocol they’ve dubbed ‘Etch’, which is specifically designed for high-volume traffic (the example given being 50,000 transactions/second). Volume is not a limiting factor when dealing with RIS/PACS or CCOW, although round-trip latency is clearly an issue to consider.

But according to the CIO article (which appears to be the only publicly available knowledge at present)…

Another Etch feature that differentiates it from SOAP is the ability for the server to initiate message traffic to the client once a connection is established.

… now that sounds interesting. And, of course, as CIO notes…

Cisco also is examining the possibility of establishing Etch as a standard. Marascio pointed out that Cisco is well represented in the IETF, the main standards body for Internet protocols.

… which might make a difference.

CCOW Implementation in a web browser

Posted by Martin P in Development, IHE, May 2 | 2 comments

I’ve written before that I’m a fan of CCOW (given that it’s probably a little bloated for what it does). It still dismays me that so few vendors support it – obviously not enough buyers ask for it, and I don’t know why. But there is an issue with the implementation of CCOW in applications written to run in web browsers – which an increasing number are, and it won’t be long before the majority of clinical applications do so.

There are two defined interfaces to CCOW – ActiveX and Web (meaning HTTP, but I’ll ignore the distinction just this once). I’ll immediately discount ActiveX from this discussion, since it is limited to IE and I believe browser-based software should be browser-independent (in as much as is possible/reasonable). Others may have a different position (and do!) but there you go. That’s me.

The CCOW web interface involves the following (massively simplified, of course; step 2 is sketched just after the list):
1. The application registers with a context broker to join a given context.
2. The application listens on a port for HTTP messages indicating a change of state within that context.
3. An HTTP message arrives; the application processes it accordingly.
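
Outside a browser, step 2 is trivial – any process can bind a socket. A minimal Node.js sketch of the listening half (the port, endpoint and payload here are illustrative, not the literal CCOW method names):

    var http = require('http');

    // Step 2: listen for context-change notifications pushed by the
    // context broker. Names and payloads are illustrative only.
    http.createServer(function (req, res) {
      var body = '';
      req.on('data', function (chunk) { body += chunk; });
      req.on('end', function () {
        // Step 3: process the notification – e.g. re-query the broker
        // for the new patient context and update the display.
        console.log('Context change notification:', body);
        res.end('OK');
      });
    }).listen(8088);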

So where does the issue lie? There are no standards-based mechanisms for browsers to listen on a socket. There are a small handful of mechanisms for effecting some kind of ‘push’ communication from the server, but only by using browser features in ways they weren’t really designed for – and then browser independence is largely compromised again. There are good reasons – certainly with respect to security – why browsers are thus shackled. But in this case, there is also a good reason to unshackle.
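
To make the workaround concrete, here is the kind of long-polling trick that Comet-style libraries wrap up: the browser holds a request open and the server replies only when the context actually changes. The ‘/context-changes’ endpoint and the handler are hypothetical.

    // Long-polling: the nearest a stock browser gets to 'listening'.
    // The server holds the request open until a context change (or a
    // timeout) occurs. The endpoint name is hypothetical.
    function pollForContextChange() {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/context-changes', true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
          if (xhr.status === 200) {
            handleContextChange(JSON.parse(xhr.responseText));
          }
          pollForContextChange(); // re-issue the request immediately
        }
      };
      xhr.send();
    }

    function handleContextChange(ctx) {
      // Re-synchronise the application to the new context here.
      console.log('New context:', ctx);
    }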


Interprocess Communication

Posted by Martin P in Development, IHE, Oct 28 | No comments

Any system as complex as RIS/PACS is inevitably made up of a number of elements.
RIS elements will include:

  • HL7/IHE interfacing for registration, scheduling, and order notification and transmission.
  • Independent scheduling (i.e. independent of HIS/PAS)
  • Independent registration
  • Worklist generation
  • Dictation system integration
  • Transcription management

… amongst others

In the case of PACS these may include:

  • A DICOM services module
  • A Storage management module
  • HL7/IHE interfacing
  • Worklist generation
  • Image viewing (simple)
  • Image viewing (advanced visualisation)
  • Security management

… and many others

We’ll evolve this list & flesh it out in other posts on this blog.

One way to implement these elements is to generate a monster monolithic lump of software that does everything.  While some vendors choose to go in that direction (largely, to be fair, due to decisions made in the dim and distant past), from a software engineering perspective that is self-destructive.  Even in proprietary software development, it has long been appreciated that development teams should be kept modest in size.  In community-driven Open Source software there simply is no alternative but to provide features in self-contained modular units that small community teams can maintain.

Many of those modules will want to communicate with each other, or indeed with a central marshalling point, for a number of information transfers.  For example, to avoid having to implement periodic polling (a heinous practice, but unfortunately inevitable in many cases), the DICOM services module may need to pre-emptively notify the image-viewing modules that a new image/series/study is available.  In this example, the communication is almost certainly across different physical machines.  A sketch of such a notification follows.
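
Something like the following, say – a DICOM services module POSTing a small JSON event to a viewing module on another machine. The host, port, path and field names are all illustrative; nothing here is a settled interface.

    var http = require('http');

    // Pre-emptive notification: tell a viewing module that a new study
    // has arrived, rather than having it poll. All names illustrative.
    var event = JSON.stringify({
      type: 'study-available',
      studyInstanceUID: '1.2.3.4.5',
      modality: 'CR'
    });

    var req = http.request({
      host: 'viewer.example.local',
      port: 8088,
      path: '/events',
      method: 'POST',
      headers: { 'Content-Type': 'application/json',
                 'Content-Length': Buffer.byteLength(event) }
    });
    req.end(event);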

Equally, the modules themselves may be implemented in different technologies.  Technologies anticipated to be involved in message transfer include Java, C#/CLR, PHP, Javascript, and likely others.

It is clearly beneficial to minimise the number of different mechanisms involved, and we are aspiring in the first instance to a single mechanism.  We have enough experience in the crucible of real-world development to accept that ambitious aspirations sometimes fall by the wayside, and we are ready to adapt if necessary.

We have noted above that the requirements for messaging include cross-technology operation, both locally and remotely.  There is one other requirement we consider core: cross-application operation.  We believe that one reason (if not THE reason) why healthcare applications have poor penetration globally is that they do not talk to each other very well.  While HL7 messaging has become mainstream (and crucially so), ‘mainstream’ has only really happened since the turn of the century.  But end users do not get to see HL7 messaging – it happens in the background.  End users see RIS/PACS/LAB/PAS/HIS/ED/ICU et al sat on their desktop, and each one must be logged into separately.  Each time the patient view changes, the same patient must be found in each application.  This raises the risk that two applications on one desktop are focussed on different patients – and that simply has to be an invitation for mistakes.  How many times have tests or procedures been ordered for the wrong patient because the orderer didn’t notice that the patient context differed between two applications?  I’ll bet it happens, and it shouldn’t.

It shouldn’t because it’s dangerous, but mostly it shouldn’t because it needn’t. The HL7 CCOW protocol is specifically designed to prevent (mostly) this scenario, but CCOW take-up is even slower than that of HL7 messaging.  I have believed for some years that CCOW is one of the keys to IT truly working for healthcare, and we consider full CCOW support to be non-negotiable.  Of course, that does not help with the other applications on the desktop, but I believe that in time CCOW will make its presence felt (despite the vendors).

CCOW comes with two mechanisms built in – ActiveX and HTTP. Disregarding ActiveX immediately as platform-specific, we are left with HTTP. The question we have wrestled with is:

Can we use HTTP generally as a mechanism for IPC and specifically, build on the CCOW protocol to provide all of our messaging needs?

In fact, it doesn’t take much scratching of the surface to decide that the answer is no.  There are a number of scenarios which are not suited to the CCOW standard at all – specifically those involving lists or arrays of information.  But using HTTP as a transport layer, there are higher-level layers specifically designed to fulfil those needs – XML and JSON being two in particular.  Of those, it is generally recognised (IMHO, at least) that JSON is the better suited to regularised data transfer.  An example of the kind of payload in question follows.
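
For illustration, a worklist response is exactly the sort of list-shaped payload that sits awkwardly in CCOW but falls out of JSON naturally (the field names are illustrative only):

    {
      "worklist": [
        { "patientID": "123456", "accession": "A1001", "modality": "CR" },
        { "patientID": "789012", "accession": "A1002", "modality": "US" }
      ]
    }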

So, for the moment, our plan is to use HTTP as a transport layer, with data-specific protocols layered over it – certainly CCOW and JSON should see a good start to our communication requirements, and we can be prepared to add to that catalogue if necessary.  That means that each of the modules must have an embedded HTTP server serving at least those interfaces necessary for York.  In one sense that creates a complication, but if we can keep the interface definitions simple (as, indeed, CCOW is once one works out how to read the documentation), then the goal of modularity is made simpler.  In outline, each module would carry something like the sketch below.
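
A minimal sketch of such an embedded per-module server in Node.js – one HTTP transport, with the data-specific protocols dispatched by path. The paths, port and payloads are hypothetical, not settled interface definitions:

    var http = require('http');

    // One embedded HTTP server per module: a single transport with
    // data-specific protocols layered on top. Paths are hypothetical.
    http.createServer(function (req, res) {
      if (req.url.indexOf('/context') === 0) {
        // CCOW-style context traffic would be handled here.
        res.end('context acknowledged');
      } else if (req.url.indexOf('/worklist') === 0) {
        // Regularised data served as JSON.
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify({ worklist: [] }));
      } else {
        res.writeHead(404);
        res.end();
      }
    }).listen(8088);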