Some of the more progressive PACS sites have already started implementing SSD technology as part of their performance management initiatives. In general this is (and certainly should remain for the foreseeable future) one part of a multi-tier architecture: SSD as the very-short-term, ultra-quick cache, with less expensive technologies providing the cheaper, deeper tiers – possibly replicated, and even (a subject for another post) possibly deduplicated.
However, image data is very often (though not always*) identifiable and should be treated with appropriate confidentiality. When it comes to decommissioning a bundle of hardware, this is often achieved by forensically wiping each disk clean (taking a screwdriver to the interface pins comes a poor second).
As noted on the Sophos Naked Security blog, wiping SSDs is considerably more complicated than the same process for their cousins, spinning disks, and to maintain security, encryption should be engaged from day 1. Making the encryption keys unrecoverable is much simpler than doing the same for the whole disk.
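As a toy illustration of why "encrypt from day 1, then destroy the key" works, here is a minimal Python sketch. The hashlib-based stream cipher is a throwaway stand-in, not production crypto – real deployments would use the drive's own hardware encryption or a vetted library – but it shows the principle: once the key is gone, whatever is left on the flash cells is unrecoverable noise.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Illustration only - not a substitute for real disk encryption."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Encrypt the "image data" before it ever touches the SSD.
key = secrets.token_bytes(32)
plaintext = b"PatientName^DOE JOHN ..."
ciphertext = keystream_xor(key, plaintext)

# Decryption works only while the key exists.
assert keystream_xor(key, ciphertext) == plaintext

# Decommissioning = destroying the key, not scrubbing every cell.
key = None
```

Making a 32-byte key unrecoverable is a far more tractable problem than guaranteeing every over-provisioned, wear-levelled flash cell has been overwritten.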
* Of course, some PACS implementations do not store identifiable metadata with the pixel data itself – only within the accompanying database – for some modalities. That’s obviously part of another discussion on what precisely ‘vendor neutral’ is and how data migration can be facilitated in an equitable world.
Back in June I wrote that Mozilla’s policy of rapid-fire releases made it unsuitable for Enterprises – including those of the Health variety. Before updating a platform, IT departments must go through rigorous testing of all applications that sit on that platform, and it simply isn’t possible to do that at the rate Mozilla has been releasing new versions of Firefox.
To keep Firefox, then, has meant hosting applications on a platform that no longer has ongoing support or security fixes.
Which meant that any hospital looking to move towards the undeniable future of zero-footprint web image viewers would end up relying on Internet Explorer.
I did say that one way out of the quandary was for Mozilla to volte-face. I offered an opinion that that was unlikely. Thankfully, I was wrong.
As covered in various places, Mozilla has changed its mind and will not only be providing a Long-Term-Support version, but has resurrected the Enterprise User Working Group for collaborative input to make Firefox even more Enterprise-friendly.
Game back on.
With ever-increasing volumes of data coming out of CT and MR modalities, the value offered by 3D reconstruction also increases. I'm not aware of many folk doing large volumes of 3D reporting or review, but where it is done, it should be treated with caution.
Simple volumetric reconstruction onto a normal monitor can be supplemented by the developing 3D visualisation technologies now becoming mainstream. Indeed, the most popular Open Source viewing application – Osirix – has a facility for visualising 3D with red/green glasses.
Kennedy recommended that institutions consider a business continuity system as an alternate mechanism to maintaining essential functions during downtime. This can be as simple as deploying a small public domain miniPACS, or as sophisticated as using a fully redundant primary PACS network, according to Kennedy.
DCM4CHEE is good for lots of things – including just that!
As a result, multiple methods of communication should be utilized, including phone calls and even posters. “You cannot overcommunicate,” he said. Also, anticipate misinformation and manage around it, he added.
I.e. Chinese whispers (at best).
“One of the CT applications people I worked with many years ago gave me a wonderful piece of advice, and that’s to always carry a stopwatch,” Kennedy said. “I do that still, because sometimes it’s very hard to convince [a person who thinks it takes 35 seconds to load a CT scan]. But the stopwatch says five [seconds], and people will believe stopwatches.”
I’ve never resorted to carrying a stopwatch, but the principle works for lots of user reports – “the system is down!” or “the system can’t do …..” as examples. Never assume end users have the same ability you do to distinguish between two elements. Always be prepared to move around to the user’s perspective and translate.
There are things that cost a million dollars not that many years ago that now can be done for 1/100th or 1/1000th of that price. Many of our PACS vendors haven’t really leveraged that, …
Jeez I’ve been saying that for years. For additional archive storage to be costing the same as 5 years ago is criminal. Folk still pay, though. Like voting, people get the government they deserve.
When it’s time for a new PACS, even more issues come up, including PACS-to-PACS migration issues and the debate of whether to consider a PACS with a vendor-neutral archive.
There’s no such thing as a vendor-neutral archive – only vendor-different (except, of course, for fully Open Source solutions). Let’s take the Carestream product as an example. If Carestream goes belly-up or decides to drop the product – would you get support and further development elsewhere? No. I thought not.
Update: If relying on user reports to determine performance problems isn’t always a good idea, basing a study on them mightn’t be either. I dare say it may well have the right conclusion but I’d worry about the quality of the raw data.
As reported by El Reg and Ajaxian, despite some past confusion about whether IE 9 would support the CANVAS tag, it seems it not only will, but will be hardware accelerated where appropriate. This is big news for MI applications.
The prospect of fully featured browser-based PACS clients without resorting to plug-ins is fast approaching.
On my ever-increasing “gotta get finished” list is a benchmark for performing window/levelling of DICOM images in a browser using the HTML5 CANVAS element. I might drop it altogether now that there is a proof-of-concept up on YouTube. One of the reasons I’ve dropped it down the priority list is that while it does do the job, it first requires a drop down to 8-bit JPEG from the original (generally 12- or even 16-bit) data. So really, it’s only half a job. The bottom line is that to perform this job properly, one of three things is required:
- Browser standards start supporting multi-byte grayscale JPEGs. Not very likely: HTML5 has struggled to gain universal acceptance (and even now, not really in its entirety), and another step back to the standards table won’t happen for a long time.
- Plugins. Over the last few years we’ve seen a significant increase in the variety of browsers out there, and that is only likely to continue (even if Gecko/WebKit are at the core of most). Maintaining plug-ins for all of them would be a nightmare and certainly a retrograde step.
- Trips back to the server. This is, in my opinion, the most likely. The key is to minimise the size of the image being processed and transmitted – perhaps using heavy compression/resizing during the window/levelling operation.
For the short- to medium-term, I am leaning towards the concept of trips back to the server as the best option:
- Once a user has indicated (mousedown) that W/L is required, a very low resolution image is generated (server-side) from the full image. For a 2kx2k CR, for example, a 256×256 image is generated and displayed in the browser – either as a thumbnail/preview or overlaid on the original, full-res image. It would clearly be advantageous to have the low-res image cached in memory in some form.
- Each W/L action (mousemove) is performed on the low-res image and displayed in real-time.
- Once the user is happy with the settings (mouseup), the W/L is performed on the full-resolution image, and piped down to the browser.
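The mousedown/mousemove/mouseup flow above can be sketched in a few lines of server-side Python. Everything here is illustrative – the function names and the tiny 4×4 "frame" stand in for a real DICOM pixel buffer – but the linear VOI-style window/level mapping and the nearest-neighbour preview are the core of the idea:

```python
def window_level(pixels, center, width):
    """Map raw (e.g. 12/16-bit) pixel values to 8-bit display values
    using a linear window/level transform."""
    lo = center - width / 2.0
    out = []
    for p in pixels:
        v = (p - lo) / width * 255.0
        out.append(int(max(0, min(255, round(v)))))
    return out

def downscale(pixels, w, h, factor):
    """Crude nearest-neighbour downscale for the low-res preview
    that gets window/levelled interactively on each mousemove."""
    return [pixels[(y * factor) * w + (x * factor)]
            for y in range(h // factor)
            for x in range(w // factor)]

# mousedown: build a small preview from the full-resolution frame
full = list(range(16))               # stand-in for a 4x4 16-bit frame
preview = downscale(full, 4, 4, 2)   # 2x2 preview

# mousemove: window/level the cheap preview in real time
display = window_level(preview, center=8, width=16)

# mouseup: apply the final settings to the full-resolution frame,
# then pipe the result down to the browser
final = window_level(full, center=8, width=16)
```

The real saving is in the mousemove loop: window/levelling a 256×256 preview is roughly 64 times cheaper (and smaller on the wire) than doing the same to a 2k×2k frame on every mouse event.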
An example of this (albeit not involving server-trips) is the volume rendering engine included in Osirix (ITK). When the view of the volume is rotated, it drops down to a low resolution until the view perspective is final, when it returns to full resolution. Clearly not as good as fully hardware-assisted rendering, but quite usable all the same.
I’m a fan of the notion of punctuated equilibrium in evolutionary theory. From Wikipedia:
When evolution occurs, it is localized in rare, rapid events of branching speciation…
… most of the time, things just truck along doing what needs to be done, and then over a short (relatively, in the case of evolution) period of time, some ‘big stuff’ happens. That kind of feels where the computing and technology world is sitting right now.
In the course of innovation within the computing and data visualisation fields – particularly as applied within medical imaging – there has for many years been steady but linear progress, pretty much corresponding to Moore’s Law. Let’s take a look at a few technology sectors:
- Smartphones. Clearly, these were languishing in the doldrums until Apple gave the industry a smart kick up the rear. It took a little while for competitors to figure out how to follow, but follow they have and the next year or so will see some intense competition and major advances as a result.
- Speaking of 3D. Developments are arriving thick and fast: from Avatar in a cinematic context, to augmented reality, to 3D on every conceivable device. And while this 3D print of a vein is intended for ‘replacement body parts’, 3D printing of objects is mainstream and may well turn out to be a valuable surgical preparation tool.
There is a common thread to many of these – the involvement of Google. While the big G is by no means responsible for all advances, I think its energy has a beneficial effect on innovation generally. Google’s blunderbuss approach to innovation sometimes leaves me lukewarm. As an example, consider Google’s many patents (for one and another) for scanning the pages of bound books. Innovative, perhaps, but it leaves me thinking – why not just guillotine the binding? For 99% of books, that’s OK, surely? But in general, Google is setting the pace – and many of the other big players in technology have woken up and are contributing.
The rest of us can only benefit.
So the AMICAS and Merge deal is going ahead. I have no great experience of either (other than Merge doing open source software a disservice with the free-but-proprietary-and-eventually-not-free-and-not-even-particularly-good eFilm. Not that they did anything actually wrong – but a disservice all the same), so I’m not going to even try to make any analysis.
But I will point to someone who does have sufficient experience, and who picks the most important element that any vendor brings to the table – people:
But AMICAS is far more than a collection of software. AMICAS is nothing less than the sum of its people. I know them well, I have worked with these people for years, and they are the finest in the business.
Mike Cannavo (it seems, the one and only PACSMan) misses it in his article on 2nd-time PACS-ers, but the reality is that no matter what technology you have, people are the key. If people are pushing in the wrong direction, you’re going to have problems.
It’s taken me a long time to fully grok this, but I’m as sure now as I have been about pretty much anything that the choice of technology actually isn’t as important as many people think it is. I have certainly been on projects where the technology choice was largely driven by a massive, detailed matrix of what the solution MUST have, what the solution SHOULD have, and what it MAY or MAY NOT have. This matrix is then used to differentiate between a shortlist of 3 or 4 possible solutions. In fact, when one reduces the process to delivering specific benefits, actually ALL of those solutions could have done the job. I may have liked some more than others, but they would all have provided adequate return on investment at least from a technology point of view. The biggest differentiator is ALWAYS the people the vendor(s) bring to the project not just now – but along the entire system lifecycle.
Automating a process with information technology – or changing the technology behind an already automated process will always involve change, and people notoriously do not like change. I suspect what much of the massive detailed selection matrix is there to do (perhaps unconsciously) is try to minimise change rather than accepting it. This is why IT projects (including PACS) traditionally have such a high failure rate – the technology is expected (almost always unrealistically) to fill in the gaps where stakeholders have been unwilling to make necessary changes.
So how to ensure that the selected solution carries with it the people most likely to deliver the targeted benefits? Clearly, reputation helps. But a vendor getting a good reference from a site with a completely different culture to one’s own is quite irrelevant. It is clear to me that many organisations – in Healthcare especially – have no idea what their culture is, or even that they have one. Understanding where one is coming from is the first piece of the jigsaw.
And then both sides of the fence should court. Yup, it’s an old fashioned word but the alternatives don’t quite seem appropriate. Open Source solutions make this ritual a little easier but it can be done with proprietary vendors also. Start small (the first date) with a small, perhaps peripheral or standalone purchase. Don’t put all your eggs into one basket at first – you may not know who you’re going to end up with in the long term so keep an open mind.
If the first date works, go that little bit further (you know what I mean!). Go steady and get involved in a little interfacing. If that works out, bring your vendor into your core operations.
It may turn out to be a whirlwind romance. Or not. Either way, transition to engagement and marriage is through a process that suits all parties and maximises the chances of compatibility.
And what of those vendors that insist on talking about the big picture? Who won’t even get out of bed if you’re not about to spend in seven figures? Well that’s a bit like proposing on a first date. It happens, and sometimes it works, but not often.
Maybe you don’t have time to go through this process. Maybe you need to ‘get on with it’. Well, due diligence takes time whichever way it is done. Understanding who you’re getting into bed with should be part of that process.
A press release from Acuo announces the approval of a US patent for an “Asset Communication Format within a Computer Network”. Although it leaves room for future expansion, it is primarily focused on the Medical Imaging technology sector.
Not being based in the US, I had no qualms about taking a look and although I am very definitely not a lawyer, my reading of the full text (thanks to Damian) suggests that the scope of the patent covers 3 aspects of Medical Imaging Communication:
- Bandwidth throttling. Now, it strikes me that that isn’t exactly novel – even in 2001, when the patent was originally applied for.
- Distributed archiving. Having a number of independent ‘silos’ with one central consolidation point, so images could be in multiple archives (a Redundant Array of Inexpensive Archives?). Hmm, I don’t know if that’s necessarily innovative, but I guess if one accepts the premise of (US-style) software patents, one could see that as OK.
- Forwarding engine rules based on AE Title and DICOM tags.
Now the third one is just plain silly. If the USPTO couldn’t find prior art for that, then quite simply, they didn’t look nearly hard enough.
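For what it’s worth, a forwarding rules engine of the kind that third claim describes amounts to a few lines of code – which is part of why prior art should not have been hard to find. A minimal Python sketch, with entirely hypothetical rule and field names (nothing here is taken from any real product):

```python
def matches(rule, study):
    """Return True if an incoming study matches a forwarding rule.
    `study` carries the calling AE title and selected DICOM tag values;
    all field names here are illustrative."""
    if "calling_ae" in rule and study["calling_ae"] != rule["calling_ae"]:
        return False
    for tag, value in rule.get("tags", {}).items():
        if study.get("tags", {}).get(tag) != value:
            return False
    return True

def route(rules, study):
    """First matching rule decides the destination AE title."""
    for rule in rules:
        if matches(rule, study):
            return rule["destination"]
    return None

rules = [
    # Everything from this scanner goes to archive A...
    {"calling_ae": "CT_SCANNER_1", "destination": "ARCHIVE_A"},
    # ...and any MR study, whoever sent it, goes to archive B.
    {"tags": {"Modality": "MR"}, "destination": "ARCHIVE_B"},
]

study = {"calling_ae": "MR_SUITE", "tags": {"Modality": "MR"}}
# route(rules, study) -> "ARCHIVE_B"
```

Plenty of routers and store-and-forward gateways were doing exactly this sort of AE-title/tag matching well before the patent was granted.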
Still, we hear about ridiculous patents on a regular basis. This is just one of many. Sigh.