
Windows Sys Admins – Your world is about to change

Posted by Martin Peacock in Infrastructure
Jan 16

So towards the end of last week an official Microsoft blog entry caused a bit of a stir.  It's not strictly news, but it reiterated Microsoft's position on how Windows Server systems will be administered in the future, which will undoubtedly have more of an effect in healthcare than in many other industries.

The big change is a decisive movement away from the GUI console as the primary means of local administration.  That means no point-and-click with right-click context menus, no visual cues, and no 'wizards' – at least in the form that most folk are accustomed to.

The changes (for Windows Server 8) aren’t an immediate turnaround.  WS8 will have an optional minimal-GUI for the time being.  But even the official line from Microsoft is:

In Windows Server 8, the recommended application model is to run on Server Core using PowerShell for local management tasks and then deliver a rich GUI administration tool capable of running remotely on a Windows client.

.. which of course is the model that MS themselves have been using in tools like AD admin for many years.  But as RedmondMag opines:

Anyone who thinks “minimal GUI” mode is anything more than a holding measure is crazy. To me, this clearly says Microsoft is trying to get us off the console for good.

In most cases, in my opinion, the inconvenience will be quickly forgotten:

  • The vast majority of app-level administration for any software that truly belongs in an ‘enterprise’ infrastructure follows the recommended model already, and at worst, needs only a modest change in practice.
  • I expect Microsoft to expand their own range of remote-GUI tools to extend deeper into the OS/Hardware stack so that the range of admin functions that can only be performed directly on the server using PowerShell will be quite limited.  System Recovery and AV cleanup may be challenges but not by any means insurmountable.

However, healthcare offers a different category of challenge.  Not uniquely, but probably to a greater extent than many other industries.  That is the case of the desktop application hosted on a server.

It happens less nowadays than in the past but is still (in my experience) a practice that derives from the way Health IT has evolved over the last 15 years or so in an environment where individual departments/specialties have more autonomy than equivalents in other industries.  I’m not debating the rights and the wrongs – just the reality.

I have seen applications coming through the door which have (to cut a long story short) questionable 'enterprise' credentials.  Yes, there are MSDE databases (theoretically easily migratable to SQL Server – but only in theory…), and yes, Access databases, and HL7 interface engines written in VB or Delphi that run only on a desktop.  The requirements of clinical departments are many and varied, and sometimes it is a necessary evil to accept such software.  Sometimes the best that can be made of the situation is that they are hosted in a server environment so that some of the benefits can accrue – like redundant power supplies, a controlled environment, or an OS that doesn't need to be rebooted once a week.

For those applications, either a new strategy is required or they need to be replaced.  There isn't – necessarily – a panic.  Extended support for Server 2003 continues to 2015 and for Server 2008 until 2018, so there is time to consider carefully.  And another consequence of the HIT evolutionary process is that everyone accepts we'll be living with multiple versions of Windows Server – as well as the Linux, VMS, AIX, HP-UX and even (gasp) OS X Server installations routinely in use.

 

Firefox back in the frame for zero-footprint

Posted by Martin Peacock in PACS General, Zero Footprint
Jan 11

Back in June I wrote that Mozilla’s policy of rapid-fire releases made it unsuitable for Enterprises – including those of the Health variety.  Before updating a platform, IT departments must go through rigorous testing of all applications that sit on that platform, and it simply isn’t possible to do that at the rate Mozilla has been releasing new versions of Firefox.

To keep Firefox, then, has meant hosting applications on a platform that no longer has ongoing support or security fixes.

Which meant that any hospital looking to move towards the undeniable future of zero-footprint web image viewers would end up relying on Internet Explorer.

I did say that one way out of the quandary was for Mozilla to volte-face. I offered an opinion that that was unlikely. Thankfully, I was wrong.

As covered in various places, Mozilla has changed its mind and will not only be providing a Long-Term-Support version, but has resurrected the Enterprise User Working Group for collaborative input to make Firefox even more Enterprise-friendly.

Game back on.

 

What Exactly IS PACS in a cloud?

Posted by Martin Peacock in Blog
Dec 22

AuntMinnieEurope has an interesting article predicting some trends for 2012.  Amongst others, it suggests the buzz around ‘Cloud PACS’ will remain just that – buzz with little substance (small degree of irony there perhaps?).

I agree, but I disagree with their definition of what 'cloud' actually means.

Indeed what some vendors are referring to as cloud PACS is simply a virtualization of locally hosted systems, further hindering market adoption of “true” cloud PACS. True cloud PACS is defined as having both application and storage provisioning and hosting by a third-party server offsite from the end user (i.e., public cloud).

Well – having app and storage hosted by a third party off-site is today – and always has been – called co-location (or co-lo for geeks).  That isn't the same as 'cloud'.  Wikipedia defines cloud as computing-as-a-service, which is, I think, closer, but I would go even further.  I would tend more towards computing-as-an-amorphous-and-opaque-service.

Let's take GMail as an example.  I use GMail as my primary mail provider and it's a classic example of what most people would think of as 'cloud'.  From which of Google's many datacentres is GMail served?

[Image credit: TechCrunch]

I'll be honest.  I haven't a clue.  It could be any of them.  In fact, as a service, different elements could be served from different locations – mail from Europe, contacts from the US and search from South America.  I don't know and, importantly, I don't care.

Another example is the Amazon Elastic Compute Cloud (EC2).  Within EC2, it is possible to rent servers in units of compute cycles, memory and storage, and point roughly to where they should be located – as in 'somewhere in Europe' or 'somewhere in the US' – but the Amazon infrastructure could move those around for a number of reasons.  You may not be aware they are being moved, but moved they are.

In Amazon's case it is able to do this by virtue of virtualisation – the technology that the article in AuntMinnie specifically says is not 'cloud'.  Virtualisation also enables a concept that is quite the opposite of AM's definition – that of the 'private cloud'.

A private cloud is one which is dedicated to the use of a particular organisation.  It can be externally hosted, an internal implementation, or indeed a hybrid.  But the properties of amorphous and opaque still apply.  A typical private cloud implementation might be 2 or 3 datacentres in different locations – say two within the grounds of a hospital and one off-site.  This can be organised in a way to maximise disaster tolerance, for example.  Individual servers/services can be moved between datacentres without the user being aware.  The user just knows the service/data is coming from 'somewhere' *.

So the distinction is not between ‘cloud’ and virtualisation – or even between public and private.  It is between local and remote.  Services hosted remotely suffer from two handicaps:

  1. Trust.  Both from a security and privacy point of view, remote cloud services are still looked at with a slightly leery eye.
  2. Infrastructure.  Remote cloud services will almost inevitably end up going through public infrastructure.  Even in the ‘developed’ world, the high specifications required for bandwidth, latency and reliability are not available everywhere.

Both of those are improving – but I think for data-intensive applications like PACS, ‘the cloud’ is still to come over the horizon.

Paring the idea of private cloud to the bone, we could end up with another of AM's predictions – that PACS as a 'Managed Service' will grow in popularity over the next couple of years.  I personally think such a solution is a seriously retrograde step for PACS as an industry – although it may have benefits in a minority of (small) organisations.  Why?  Another of AM's articles that landed in the mail this morning points to ESR guidelines for the communication of urgent and unexpected findings.  It is interesting reading in itself, but the bottom line is that while technology can provide part of the solution, it can't do everything.  That is because the delivery of healthcare in any organisation of meaningful size is a complex thing that will always be at least as much to do with people as technology.

That is the reason integration between the many silos of information that have evolved in a typical healthcare organisation has progressed so slowly.  And to keep the wheels moving smoothly, I would argue that what is needed is more knowledge within the organisation of the systems, the data, and how they interact – not less.  Managed Services in any but the simplest of organisations is the wrong direction to go.

But that’s my opinion.

Martin Peacock

 

* It should be noted that virtualisation isn't the only way to achieve this – but virtualisation makes the job of 'clouding' an existing service much easier.  Services can be engineered such that they can be implemented as a cloud service – software engineering techniques such as MVC help.  And infrastructure measures such as clustering and load balancing go a long way to achieving similar objectives.  Whatever mechanism and architecture is used, this aspiration should be high on the list for any mission-critical system.

 

 

 

Important DCM4CHEE security fix

Posted by Martin Peacock in DCM4CHEE
Jul 28

Stephen Wheat of Emory University has pointed out that a JBoss vulnerability affects DCM4CHEE.  Click through to see the details if they are of interest, but effectively the upshot is that while the HTTP GET and POST verbs are security restricted, other verbs (such as HEAD) are not.  This means remote users can run arbitrary code under the jboss user (very often, root) without user credentials.

It has been patched for dcm4chee-2.17.1 but the fix is easy enough to apply to previous versions.  In the file server/default/deploy/jmx-console.war/WEB-INF/web.xml find the following block of code (probably towards the bottom of the file):

<security-constraint>
  <web-resource-collection>
    <web-resource-name>HtmlAdaptor</web-resource-name>
    <description>An example security config that only allows users with the
      role JBossAdmin to access the HTML JMX console web application
    </description>
    <url-pattern>/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>JBossAdmin</role-name>
  </auth-constraint>
</security-constraint>

.. and remove the lines:

<http-method>GET</http-method>
<http-method>POST</http-method>

This ensures that all verbs are routed through the security checks by default.
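
For reference, once those two lines are removed, the constraint block should read as follows (nothing else needs to change):

<security-constraint>
  <web-resource-collection>
    <web-resource-name>HtmlAdaptor</web-resource-name>
    <description>An example security config that only allows users with the
      role JBossAdmin to access the HTML JMX console web application
    </description>
    <url-pattern>/*</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>JBossAdmin</role-name>
  </auth-constraint>
</security-constraint>

With no <http-method> elements listed, the auth-constraint applies to every HTTP verb, so HEAD (and anything else) is now subject to the JBossAdmin role check.  A restart of the dcm4chee service is the simplest way to be certain the change has been picked up.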

Is Firefox Appropriate For Healthcare Now?

Posted by Martin Peacock in Blog
Jun 26

The last few days have seen a flurry of activity around the release of Firefox version 5.  Not all of it positive.  In fact, for enterprises – including healthcare institutions of any significant size – quite negative.

The release of Firefox 5 has, in the eyes of Mozilla, signalled the end of life (EOL) for Firefox version 4 – which means no more security updates.  It expects users who continue to use Firefox to upgrade to version 5 if they require security updates.  The new-style aggressive release cycle that Mozilla have put in place (not by any means a bad thing in itself) means security-conscious users must go through a major-release upgrade every 3 months.

But – for enterprise users – that is nigh on impossible.  Before any major release upgrade, all (or at the very least all critical) browser-based applications must be tested on the new platform.  It is inevitable that some will suffer some level of failure, which must be relayed to vendors to implement corrective action.  That process requires time, resources and, on occasion, additional funding for upgrade licenses.  That this process (particularly the vendor-contributed part) is difficult is precisely why IE 6 still lurks balefully.  And even while it runs its course, there are still no security patches for the version actually installed.

To insist that this process be followed every three months is insane.

For vendors in healthcare and other regulated industries the vista is even worse.  I am currently building a product which – while it can currently just about squeeze past certification as a medical device – will almost certainly be classified as a medical device later this year (at least in the EU).  I do not have a problem with that – I firmly believe all software in any clinical context should be regulated (although I don't think the medical device classification scheme does software QA processes much justice).  But I cannot rationally promote the product as suitable for use on a platform which either changes every 3 months or languishes in a security limbo.

Does it matter?  Do the browsers not all support HTML5 with the Canvas tag?  Yes, they do.  But Safari and Opera are still niche browsers, so expecting those to fill the gap is effectively ditching the 'zero-footprint' aspiration, and AFAIK Safari doesn't support WebGL on Windows.  Microsoft has quite roundly criticised WebGL as a browser-based 3D rendering technology.  The Microsoft arguments have been roundly – and rightly – rebutted by Mozilla, but it seems likely that IE will be, at best, a late adopter of WebGL.  Since 3D presentation is squarely in my product roadmap, I must reconsider building on WebGL and instead look at Silverlight.

So – reminiscent of the bad days 8-10 years ago, but realistically – I’m building a product for IE.  Unless something changes.

There are a number of things that could make this situation better:

  • Mozilla could volte-face on this issue and continue to provide security updates.  From the language used in various communications that seems unlikely.
  • Given the language, it is perhaps more likely that Mozilla will mirror the Ubuntu-style release pattern of a Long Term Support (LTS) release every 4 or 8 non-LTS releases.  Those speaking publicly for Mozilla have so far distanced themselves from this, however.
  • Of course, Firefox is Open Source Software, which, in principle, anyone can support.  It is quite conceivable that a commercial entity will appear to back-port security fixes to the equivalent of an LTS release.  That entity could easily be a commercial wing of Mozilla itself.
  • Microsoft could change their stance on WebGL.  That wouldn’t fix the Mozilla release cycle issue but would be a small help.

For the present, however, it would appear ‘zero-footprint’ ≡ IE

Announcing Fidelity – Support for DCM4CHEE

Posted by Martin Peacock in Blog
Jun 21

Inflection Technologies are happy to announce a new range of support packages for the inestimable open source archive software – DCM4CHEE. Unquestionably one of the most robust and feature-full archives (either proprietary or open source) in the market, it powers both open source and proprietary PACS installations in thousands of Hospitals around the world.

Increasingly, Hospitals are looking to augment their existing PACS installation with an archive sourced independently of their current vendor.  There are a number of reasons why this may be:

  • For some installations, much historic data is held offsite with a per-study charge for retrieval.  A separate, cost-effective archive on commodity hardware can allow for historical data to be made available without incurring such charges.
  • The capital cost of expanding on-site storage can with many vendors be at an unreasonable premium.
  • Migrating to a new vendor can be a fraught process unless all the image data is held on an open, standards-based archive.

One of the most popular options – if not the most popular – for achieving just this is the open source archive software DCM4CHEE.

A barrier, however, to implementation has been a lack of formal support and services options.  This is why we are offering the Fidelity range of support packages for DCM4CHEE – for your peace of mind and to facilitate your efforts to take back control over your data.

Click here for more information.

Khronos Releases Final WebGL 1.0 Specification

Posted by Martin P in Development
Mar 4

Yesterday the Khronos Group – a consortium of media-centric companies including Dell, Google, HP, IBM, AMD, Apple, Intel, Nokia, nVidia, and many more – released the 1.0 version of the WebGL standard.

WebGL is a mechanism to extend JavaScript to allow HTML5-compliant browsers to render 3D graphics with hardware acceleration.

While other parts of the HTML5 specification are important for the future of browser-based imaging applications, 3D features such as virtual endoscopies have become increasingly important with the explosion in data volumes generated by recent-generation modalities such as CT.

WebGL is therefore an important stepping stone to a fully featured browser-based radiography viewing application.

However, as is noted here, there is a small paragraph slipped into the end of the press release that offers even greater hope for the future:

WebCL creates the potential to harness GPU and multi-core CPU parallel processing from a Web browser, enabling significant acceleration of applications such as image and video processing….

Now, that is only in an exploratory phase at the moment, but if it can be driven to widespread browser adoption we can truly say bye-bye to 'fat' clients.

Launch of Inaugural Course in York Educate series

Posted by Martin P in News
Mar 3

Today we can announce the launch of the inaugural course in the York Educate series.

DICOM Troubleshooting is a one-day on-site course covering how to diagnose DICOM communications issues – particularly between vendors.  Click through to read more.

3D Visualisation – keeps on trucking

Posted by Martin P in Development
Oct 17

We've noted before that there are continued advances in 3D visualisation.  While my own prediction for 2011 (as early as it is yet) is that we will see browser-based 3D modelling based on HTML5 and WebGL, there are other threads of development also.  The video below (thanks to New Scientist) seems to be the same as this Sony device we noted in October last year, but does put more of a medical context to the technology.

Using XML/XSLT for Dynamic User Access Protocols

Posted by Martin P in DCM4CHEE, Development
Sep 24

I have a small role in the setting up of the Irish national RIS/PACS system.  I hope I provide more solutions than problems, but I'm sure that could be debated.  One of the questions that has appeared is of a category that comes up in the implementation of pretty much any clinical system: can access to (some entity) be restricted by (some data element/property)?  An example of this might be:

Can access be limited to the originating institution for 24 hours after data generation?

The stock developer's answer is, of course:

Yes.  But.  You have to define that element/property in advance and we can schedule it into a future release, based on available development time.

.. which isn’t terribly helpful.

Implementing a user-accessible scripting language is one way of providing such a feature dynamically.  But it's a tough job to retro-fit a scripting language into a system that was not originally designed with that in mind.  There may be another way – using data transformation.

XSLT is an XML-based language which provides transformation services for XML-based datasets.  So a dataset, when expressed in XML, can be transformed into some other representation – say HTML.  Importantly, the transformed dataset may contain some subset of the original data, but need not.  It could equally be a dataset that is driven by the contents of the original dataset but actually contains none of it.

The DCM4CHEE archive uses this idea, for example in defining forwarding rules based on the contents of DICOM fields in incoming images.  It applies an XSLT template such that, based on some set of criteria within the DICOM headers, the output is (in XML form), a parameterised list of destinations to which that image should be forwarded.

So how can this be used as a User Access Protocol?  Easy.  For each data item which is to be subject to User Access Control, package up its own element values, and the relevant element values from the wider context, as an XML stream.  Run that through an XSLT processor with an output that defines what access the current user has.

So, for example, the dataset for an ‘order’ object may include, from its own data fields as well as wider context, the following data:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="Order-Access.xsl"?>
<order>
  <attr tag="procedure">CT Head</attr>
  <attr tag="orderplacer">Jane Doe</attr>
  <attr tag="orderinstitution">St Elsewhere</attr>
  <attr tag="placeddatetime">2010-09-23T12:30-04:10</attr>
  <attr tag="hours-since-order">23</attr>
  <attr tag="currentuser">mpeacock</attr>
  <attr tag="userinstitution">not St Elsewhere</attr>
</order>

.. that can be processed via XSLT:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
  <xsl:output method="xml" indent="no"/>
  <xsl:template match="/order">
    <xsl:variable name="orderinstitution" select="attr[@tag='orderinstitution']"/>
    <xsl:variable name="userinstitution" select="attr[@tag='userinstitution']"/>
    <xsl:variable name="hours-since-order" select="attr[@tag='hours-since-order']"/>
    <access>
      <xsl:choose>
        <xsl:when test="$hours-since-order &gt; 24">
          <read>True</read>
          <write>True</write>
          <delete>True</delete>
        </xsl:when>
        <xsl:otherwise>
          <xsl:if test="$userinstitution = $orderinstitution">
            <read>True</read>
            <write>True</write>
            <delete>True</delete>
          </xsl:if>
          <xsl:if test="$userinstitution != $orderinstitution">
            <read>False</read>
            <write>False</write>
            <delete>False</delete>
          </xsl:if>
        </xsl:otherwise>
      </xsl:choose>
    </access>
  </xsl:template>
</xsl:stylesheet>

…to form output XML.  For the sample order above (placed 23 hours ago at St Elsewhere, viewed by a user from a different institution), the stylesheet produces:

<?xml version="1.0" encoding="UTF-8"?>
<access>
  <read>False</read>
  <write>False</write>
  <delete>False</delete>
</access>

So in this case, we can dynamically define quite complex access rules to restrict access to an order to the originating institution only, for the first 24 hours of its lifespan. Cool.

Note there is a bit of a kludge in the XML-packaged dataset.  I've put in a derived field (hours-since-order) which in itself requires some pre-casting of requirements.  The main reason is that, since I've used DCM4CHEE as an example, DCM4CHEE straight out of the box is limited (by virtue of JBoss, itself by virtue of Apache Xalan) to XSLT version 1.  The really useful date-processing functions are, alas, defined only in XSLT version 2.  So the next logical stage, I guess, is to work out how to upgrade!
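
For illustration only – and assuming an XSLT 2.0 processor such as Saxon could be wired in (which is exactly the upgrade question above) – the derived field could be dropped and the elapsed hours computed inside the stylesheet.  A rough, untested sketch, which also assumes placeddatetime is a well-formed xs:dateTime (i.e. with seconds and a standard timezone offset):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                version="2.0">
  <xsl:template match="/order">
    <!-- Parse the order timestamp and compute the hours elapsed since it was placed -->
    <xsl:variable name="placed" select="xs:dateTime(attr[@tag='placeddatetime'])"/>
    <xsl:variable name="hours-since-order"
                  select="(current-dateTime() - $placed) div xs:dayTimeDuration('PT1H')"/>
    <!-- ... the access logic from the 1.0 stylesheet above carries on unchanged ... -->
  </xsl:template>
</xsl:stylesheet>

That would leave the XML package carrying only raw facts about the order and the user, with all of the rule logic – including the 24-hour window – living in the stylesheet where it can be changed dynamically.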