Browsing all articles in Open Source

Open Source Code Quality – Security – Spat

Posted by Martin P in Open Source     Comments No comments

I’ll admit to laughing out loud reading this article on the BCS website on the supposed lack of security, accountability and management in Open Source projects.  To be honest, I laughed at the utter ridiculousness of the premise – one I would like to think the thinking world moved on from a long time ago.

The first responder then proceeded to take the article to pieces line-by-line, which was even funnier.  Or would be, if it weren’t putting the BCS editorial process in a really, really bad light.  The response was edited by mods but a full version is available here.

I’ll add a point that hasn’t yet been raised in the debate.  In the original article…

Alternatively open source applications can be updated via the community as developers release updates free-of-charge for the good of the open source users. However, there are no guarantees that the patch will be written and released at all, let alone the quality of the patch, as there is no overriding responsibility to provide a service level of any kind.

… demonstrates a fundamental lack of understanding of Open Source Software.  The core of any OSS project is a single logical entity.  That entity may be a number of people in a number of geographic locations, but it is nonetheless a single entity – through which contributions to the project are delivered, and potentially rejected.  For smaller, less mature projects, quality may not be a criterion on which contributions are judged.  But for any remotely successful or mature project, quality requirements are very high on the list of priorities.  Coding standards, testing standards and runtime standards are very often specified explicitly in the project documentation, and contributions can be – and very often are – rejected on the basis of non-compliance.

It is true that in Open Source Software, anyone can contribute.  But that does not mean the contribution automatically gets included in the package.  That is a decision that only trusted members of a core team can make.

Of course, the notion that closed-source software is inherently secure is nonsense, as the latest vulnerability in the *most* closed software adequately demonstrates.

HIT Market Growth and Clinical Process Reengineering

Posted by Martin P in Open Source     Comments No comments

I see from an article on BEyeNetwork that some are forecasting growth in the HIT market.  While there is much sense in the article:

The “big guys” such as Cerner and Epic are not interested in a price point below ten million dollars.

The “ten million dollars” figure is perhaps a little overstated (by an order of magnitude or two), but the idea is valid that many healthcare settings are simply under the radar, and that provides open source solutions with an achievable entry point.

But the author has at least one point very wrong:

Software is a part of every business process and every scalable delivery system. In that specific sense, healthcare is no different, though many details are and will remain distinct to the mission of improving human well being

For a long time I found it confusing – and, because I didn’t understand why, frustrating – that in an age where I can fly from my home in Ireland to a sunny spot in Portugal (or Morocco, or Thailand, or wherever) with a piece of plastic, and pull money out of a hole-in-the-wall in a remote corner on the edge of Europe, I cannot go to two hospitals barely two miles apart and have one aware of my record in the other.  The answer is that patient care is very different from, and considerably more complex than, a financial transaction.

The author goes on to say hospitals should:

Understand that the critical path to success includes reengineering the workflow to accommodate the efficiencies of the software…

As Doris might say, yes yes yes yes …. NO.  Clinical and patient care workflow SHOULD NOT be re-engineered to suit software.  There may well be improvements to be made by re-engineering processes, but implementation of software should not be the driving force behind such re-engineering; software should support the processes that are there now, or may be there in the future.

Vendors that understand that, and strive to understand ALL of the processes that go into patient care, do well.  Vendors that seek to impose an artificial vision of healthcare processes tend not to.

NB The article included an idiom I hadn’t heard before:

Interoperability remains the navel into the unknown…

While I could half-guess what it might mean, I consulted the Great and Good Google, and it found only one reference, which appears to be a psychotherapist’s tome:

There are some patients who cannot get the feel of themselves or move on unless they feel they have seeped through the therapist’s dream navel into the unknown, and that what the therapist says or does grows out of the unknown depths, filters through the dream navel into images, gestures, statements.

It may be that a patient needs to be dreamt by an analyst before the former can make use of dreaming.


Multi-User Collaboration

Posted by Martin P in Development, Open Source, PACS General     Comments No comments

Google has recently announced a new project: Wave. In the words of the project leaders – it is what email would be like if it were invented today.  Effectively, it is a platform for multi-user collaboration which is loosely based on some of the social networking paradigms but takes the ideas to an entirely new level, and has real potential for improving collaboration within PACS.

Collaborating across different locations and specialties is a challenge that few PACS products address terribly effectively. Problems that can only be described as human-specific mean that relatively simple issues, like critical results reporting, become more intractable. Google Wave, even with no PACS-specific consideration, takes a good stab at working through the technical and UI challenges.  That’s a good start.

Led by the originators of the Google Maps product (hey, there’s a CV), Wave was made public at the Google I/O conference on 28th May but has no published software yet (available later this year, apparently, although even then it’ll probably be in beta).  The demo video on the site (1h 20m) is worth the time, but potted highlights are:

  • Although it runs in the browser, it is a real-time IM and collaboration tool with asynchronous (and simultaneous) editing. (Postnote: Damn that’s clever !)
  • Conversation playback for late-coming participants.
  • Can incorporate publicly published content (e.g. blogs) with native collaboration functionality.
  • Mobile device integration.
  • Server-driven, therefore can better address issues like non-repudiation.
  • Seamlessly supports right-to-left alongside left-to-right languages on the same screen.
  • Designed from the ground up to be easily extensible via plug-in architecture.  Indeed the demo includes a real-time sync of zooming into Google Maps (screenshot below).
    (Screenshot: Google Wave – Maps plugin)
  • Google is committed to Wave being Open Source.

This last point elevates my interest to an even higher plane. It means that Wave can be incorporated into anybody’s system – both the software source and the protocol are fully open.  As cool as Wave is even at this really, really, really early stage, how cool would it be if a user on a PACS from, say, McKesson, could collaborate with a user on a PACS from, say, Agfa?

Actually, I chose those two vendors not quite at random.  McKesson is known to be comfortable with Open Source – especially Linux and MySQL (at least to some extent) – and what I understand of the development methodologies at McKesson shows a high degree of synergy with Open Source. Agfa, of course, has a high visibility in Open Source circles.

Food for thought.  I for one will be keeping a very open eye on how Google Wave progresses. My one concern is that Google has in the past been a little confused about its position on Open Source licenses. The digging I’ve been able to do has not revealed what kind of licensing (other than a somewhat unusual open patent license) is going to be involved.

A Future With Or Without MySQL?

Posted by Martin P in Infrastructure, Open Source     Comments No comments

Oracle’s recent acquisition of Sun has exercised many over the future of two of Sun’s important Open Source IP holdings – Java and MySQL (a third – OpenOffice – is under question also, but is not terribly relevant here). Ownership of Java in its current state would seem to complement Oracle’s positioning as an all-things-middleware outfit, so Java is unlikely to change significantly and is very likely safe in its current form.  The biggest questions hover over MySQL.

As a database, MySQL has clearly been a significant competitor to Oracle’s own flagship product, and as such, it is difficult to see where Oracle is likely to go with it.  Indeed, many of MySQL’s core developers have already defected to one fork or another of the project, including the founding developer, Monty Widenius.
Widenius’s main criticism is that MySQL (under Sun) has lost focus on quality, and he has addressed that himself by forking to an alternative project – MariaDB.

Other criticisms include one that MySQL has fallen into the trap of feature bloat (ironic, then, that it ends up in the hands of Oracle :-) ). In reality, 90%+ of database-driven applications (including, to a large extent, PACS and RIS) do not need much more than a SQL-compliant engine, which is what MySQL was when it grew to power the planet’s Interweb infrastructure. There is a MySQL fork to address this also – Drizzle.
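That point – that most applications only need a standards-compliant SQL engine – can be sketched in a few lines. An application that sticks to plain SQL via Python’s DB-API is largely indifferent to which engine sits underneath; moving from MySQL to MariaDB or Drizzle should mostly mean swapping the driver module. (SQLite is used here purely as a self-contained stand-in engine; the table and column names are made up for illustration.)

```python
# Sketch: application code that uses only standards-compliant SQL.
# SQLite stands in for MySQL/MariaDB/Drizzle so the example is
# self-contained; a real deployment would swap the driver module
# and connection call, not the SQL.
import sqlite3

def store_and_fetch(conn):
    cur = conn.cursor()
    # Nothing below is engine-specific: plain CREATE TABLE, INSERT
    # and SELECT as any SQL-compliant engine understands them.
    cur.execute("CREATE TABLE study (id INTEGER PRIMARY KEY, accession TEXT)")
    cur.execute("INSERT INTO study (id, accession) VALUES (1, 'A-1001')")
    cur.execute("SELECT accession FROM study WHERE id = 1")
    return cur.fetchone()[0]

conn = sqlite3.connect(":memory:")
print(store_and_fetch(conn))  # A-1001
```

The point of the sketch is that the engine is an implementation detail for the 90%+ case; only applications leaning on vendor-specific extensions are tied to a particular fork.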

But does it matter? The option to fork is often cited as one of the benefits of open source software. If the core developers, including Widenius, have forked the project and are returning to original values, then why not simply use MariaDB or Drizzle instead of MySQL?  There are two reasons why such a situation could be unhelpful to the free database ecosystem:

  • The community becomes fragmented.  MySQL boast(ed) a healthy community as well as the corporate sponsorship of MySQL AB, and that was always part of its strength.  Even one fork (let alone multiple) will dilute that community.
  • MySQL has for many years had one particular distinct advantage over the other free databases – the class of its support.  Yes, the likes of PostgreSQL have support offerings from ‘partners’, but nothing close to the corporate heft of MySQL AB.  When persuading ultra-conservative C-level folk that Open Source and ‘Free’ can be as ‘safe’ as anything proprietary, that credibility is a major help.

The world awaits the word of the Oracle.

Writing Free Documentation Just Got Sooo Much Easier

Posted by Martin P in Open Source     Comments No comments

In the writing of user guides and documentation, there are inevitably points where background information is not only appropriate but essential.  In the case of a user guide for DCM4CHEE, background sections on DICOM, HL7, and XML/XSLT are good examples – a user guide without some explanation of such concepts would be incomplete.

Where would one go for a briefing outside the context of a user guide?  There is only one place to start, of course, and that is the ever-increasingly useful Wikipedia.  Unfortunately, that is where it stops when building a user guide, because content in Wikipedia is licensed under the GNU Free Documentation License. The GFDL has a number of critics (including the very freedom-oriented Debian project), but the main problem is that it makes it virtually impossible to take snippets of different documents (in the case of Wikipedia, articles) and collate them into a separate document.

One of the reasons this is so hard is that under the GFDL, the full history of any source document must be included in the resulting document.  Including the history from even a handful of Wikipedia articles would result in a horrific document.

But now, Slashdot reports that the Wikimedia Foundation has resolved to relicense its content under the Creative Commons Attribution-ShareAlike 3.0 license.  This is an order of magnitude simpler than the GFDL and effectively means that content can be reused with just a link back to the original.

A major step for increasing the quality of free software documentation.

Not having a great involvement in Wikipedia, I cannot say if the MediaWiki software has the ability to alert subscribers when edits are made.  This would be an important element: to properly maintain any free resource, one must be in touch with upstream developments.

Why are so many still hung up on an outdated definition of Open Source?

Posted by Martin P in Open Source     Comments No comments

There remains in the world a misunderstanding of the nature of Open Source Software (OSS), perpetuated within the world of radiology, even amongst those who would consider themselves supporters of OSS: that it is, by definition, written by unpaid amateurs.

It may be true that the roots of OSS and Free Software lie in a slightly-left-of-centre, give-for-the-good-of-the-world philosophy, and that is still to some extent a driving force in OSS generally – although even in the ‘traditional’ perspective of OSS there are many motivations other than altruism, including skills development, skills demonstration and career development.  But it is certainly true that in the last 10 years in particular, an alternative, very Capitalist and VERY credible face of OSS has matured – one in which money, investment and return are very much an important part of the ecology.

And to call OSS developers ‘amateurs’ is about as far wrong as one can be.  In fact, many of the world’s most talented and inventive developers participate in OSS projects.

Indeed, finding successful OSS projects started or supported by one (or more) commercial entities would be like shooting fish in a barrel.  OSS is still a small part of the overall software ecosystem (even the ‘poster child’ – Linux – holds a minority share), but it is growing.  Healthcare is one area where the growth opportunity is greater than most.

But before any significant growth can be achieved, it is important to make a clear distinction between Open Source Software, and ‘amateur’ software, because the boundary can just as easily be expressed as ‘good enough or better’ vs ‘not fit for purpose’.

Google Chrome for Mac and Linux

Posted by Martin P in Development, Open Source     Comments No comments

Having written a little about Google Chrome, it came as a disappointment to many that Google hasn’t (yet) produced a version for Mac and Linux.  Being Open Source, of course, it is entirely possible for others to step in and fill that hole.  Enter CrossOver Chromium.

Now, it should be said that even the native Windows beta version turned out to be too much of a pain in the ass for me to continue with, and after a week or so I went back to Firefox.  A port of Chromium via the compatibility layer provided by Wine is likely to be even less appealing, and CodeWeavers themselves describe it as a proof-of-concept only – not to be used as a main browser.

Open Source Participation

Posted by Martin P in Open Source     Comments 1 comment

One of the key benefits of Open Source Software is its community. Beyond the warm, fuzzy feeling, there are very real, tangible benefits like support and diversity, amongst others. Smarter people than me have waxed lyrical on those benefits, so I won’t here. But I will comment on a common misconception around Open Source – that the ‘community’ is about developers.

It is true that successful OSS projects have a large proportion of developers at the core – let’s face it, it’s about producing software after all – but there is much that gets done which isn’t ‘developing’ (even though it often gets ‘done’ by developers). The REALLY successful projects boast a variety of skills in their community – from web design professionals to marketing professionals to legal professionals, and a rake of other skills that all make the project what it is.

But if you’re not one of those – what if you’d like to feed back into ‘the community’, but all you can offer is an occasional bug report, only to be told it’s been reported, fixed, and is ready for the next stable release?

Well, there may be a way you can contribute a little time to make the world a freer place. This blog is about open source PACS and RIS – although it hasn’t considered the challenges in RIS very much (so far). Well, here is one. One of the features virtually anybody would expect to see in a RIS is some level of integration with a dictation/speech recognition (SR) system. The problem is, Open Source SR is still very much at a developmental stage. That’s OK – the SR vendors will cooperate with anyone who may bring in sales, surely. Yes, but then is it really Open Source – especially when OSS SR is beginning to bubble?

Open Source SR has no lack of recognition engines – in fact there are 2 or 3 pretty good ones. What it is lacking is language acoustic models: in essence, a large collection of speech samples for the engines to match patterns against. This is where virtually anybody can offer value to a worthy OSS project.
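To give a very loose flavour of why a collection of labelled samples matters: at its crudest, recognition is a matter of comparing an incoming utterance’s features against stored, labelled samples, and the more samples the model holds, the better the odds of a close match. A toy nearest-neighbour sketch (the feature vectors and labels here are entirely made up; real engines use far richer features and statistical models, not table lookup):

```python
import math

# Toy "acoustic model": a few made-up feature vectors per word.
# More recorded samples per label = better coverage of how real
# speakers vary - which is exactly the gap projects like Voxforge fill.
acoustic_model = {
    "yes": [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],   # two recorded samples
    "no":  [[0.1, 0.9, 0.3]],                     # only one sample
}

def recognise(features):
    """Return the label whose stored sample lies closest to `features`."""
    best_label, best_dist = None, float("inf")
    for label, samples in acoustic_model.items():
        for sample in samples:
            dist = math.dist(sample, features)   # Euclidean distance
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

print(recognise([0.85, 0.15, 0.45]))  # yes
```

Crude as it is, the sketch shows why the bottleneck is data rather than code: the matching logic is trivial, but the dictionary of samples is what takes thousands of contributors to build.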

Voxforge is an Open Source project to collect vox samples to fill exactly that gap. They’ve gone out of their way to make it really, really easy to submit even a few seconds of sample. Your community needs you.

How can we be sure proprietary software is safe?

Posted by Martin P in Open Source     Comments 2 comments

When software goes awry at a bank, no-one is hurt. A warehouse system may turn HAL, but at the end of the day, maybe a little fruit goes off, or TVs fall off the back of a lorry. Healthcare is one of the few environments where software has the power to do actual harm. That should be taken seriously.

How often does that happen? Who can say? Most healthcare institutions wouldn’t be too keen on publicising such a thing voluntarily. Only when matters go to court do incidents really get exposed.

The case is tragic, but some points stand out:

Singh’s lawyers allege the company knew about a software error that caused the catheter to overheat. Edwards didn’t warn hospitals of potential hazards, they say.

… but one can’t expect everything to be perfect, you say…

However, Singh’s case is not the first time problems were reported with the monitor.

In my day job, I work with a lot of proprietary software vendors. Getting software from two different vendors to work in concert can be a challenge, and while it isn’t (for now) common, I have certainly experienced occasions of ‘Mexican standoff’, with two (or more) vendors denying responsibility for an interface blip. On one occasion, I had external consultants come in to sniff the network, which determined that the connection vendor A said was being rejected wasn’t, in fact, even being requested. We never did resolve that issue – the RIS involved is now defunct (although the vendor remains one of the biggest around), and the CR was replaced for other reasons.
How is that relevant? In each case, the vendor has control over crucial quality information. That isn’t an issue if the vendor has a sense of responsibility, but let’s get real – vendors are there to make money, nothing else. In that situation, someone, somewhere, is going to avoid responsibility.

Of course, I could say that making software Open Source fixes this problem – and to a large extent, it does – but that doesn’t answer the whole issue. I’m not sufficiently bigoted about open source to suggest that OSS holds all the answers. OSS and proprietary software must (and indeed will) co-exist in any rational world, and those who resist that will (IMHO) either change, or regret it.

So the safeguards must apply to both. The safeguards at present are clearly lacking, even for those devices to which they apply. There is a lot of quite simply rotten software around that puts patient care at risk (and, by the way, that includes OSS as well as proprietary). How can we be sure any of it is safe? That’s the question, and I don’t have an answer.