Journey to Interoperability – Part I


We are on a journey, a transition from health records being recorded on paper to a new paradigm of electronic health records (EHRs) and data interoperability.

We’ve all grown up in an era of paper records – the inability to read doctor’s handwriting, lost records, flood damage and overloaded filing cabinets have been the norm for decades. This is our common experience all over the world.

While they have been in use for much longer, it is only in the last 20 years that electronic health records have slowly been encroaching on everyday clinical practice, accelerating markedly in the last 5 or so. There are some areas where the benefits of electronic records have been a no-brainer – for example, the ability to generate repeat prescriptions in primary care was a major enabler in the mid-1990s here in Australia. Yet despite some wins, the transition to EHRs has generally been much slower than we anticipated, much harder than we imagined, and it is not hard to argue that interoperability of granular health data remains frustratingly elusive.

But why? Why is it so hard?

Let’s start to explore this question using this map – paper records represented by the ‘Land of Missing Files’ in the bottom left corner and the ‘Kingdom of Data’ on the remainder of the map.


The ‘Kingdom of Data’ can be further divided into two – the ‘State of Connectivity’ and the ‘State of Interoperability’ which is located on top of the ‘Semantic Cliffs’.

Universally, our eHealth journey commenced in the ‘Land of Missing Files’, crossing through the ‘Heights of Optimism’ before dividing into two major paths.


The first path heads north to the ‘Maze of Monolithic Systems’ – the massive clinical solutions designed explicitly to encompass clinical needs across a whole health organisation or region; think Epic or Cerner as examples. These systems may indeed provide some degree of connected electronic data across many departments, as they all use a common proprietary data model, but departments with different data or functional requirements are effectively marooned and isolated outside the monolith, and connecting them involves enormous difficulty, time and cost. The other harsh reality is that it is often extremely difficult to extract data, or to share it with the community of care that exists outside the scope of the monolith’s influence. Historically the monolith vendors have been notorious for saying ‘if you want interoperability, buy more of my system’. It is likely that this attitude is softening but, due to the sheer size of these systems, any change requires months to years to implement, plus huge $$$.


The second path heads east to the ‘Forest of Solo Silos’. Historically this has been the natural starting point for most clinical system development, resulting in a massive number of focused software applications, each created by a well-meaning vendor to solve a specific clinical purpose, and each with its own unique proprietary data model. Each data model has traditionally been regarded as superior to the others, and thus a commercial advantage to its vendor – the truth is that none of them is likely to be better than another, each built solely from the perspective of its developer and usually with limited clinician input.

Historically, our first priority was simply to turn paper health records into electronic ones – capture, storage and display – and we have been successful. However, the systems we built were rarely designed with a vision of how we might collectively use this health information for other, more innovative purposes such as data exchange, aggregation, analysis and knowledge-based activities such as decision support and research. This pattern is still well entrenched – modern systems are still being built as silos with local, proprietary data models, and yet we still wonder why we can’t accurately and safely interoperate with health data.


In order to break through the limitations and challenges imposed by the solo silo and monolith approaches, we have collectively trekked onwards into the ‘Swamp of Incremental Innovation’. It is a natural human trait to try to improve on what we have already built by implementing a series of safe, incremental steps that extend the status quo. We have become very adept at this – small innovations building on the successful ones that came before. And the results have been proportional – small improvements that have been glacially slow in development and adoption – with one key factor being our historical preference for disparate, closed commercial data models.

The natural consequence of incremental innovation on our journey to interoperability is that we are constantly looking down, watching where we will place our immediate next step, rather than raising our heads to see the looming ‘Semantic Cliffs’. I think the large majority of vendors are stuck in this swamp, taking one step at a time and without any vision or strategy for the journey ahead. Clinical system purchasers are perpetuating this approach – just look at the number of jurisdictions procuring monolithic systems. Nearly every other business abandoned this approach decades ago… except health.

We run the risk of becoming permanently stuck in this swamp, moving in never-ending circles or hitting the bottom of the Semantic Cliffs with nowhere to go and drowning in the ‘Quicksand of Despair’.

Beware, my colleagues, here be dragons!

Onward to Part II…


“Smart data, smarter healthcare”

Last week Hugh Leslie & I spent time in Shanghai and Hangzhou at the invitation of Professor Xudong Lu and Professor Huilong Duan. It was a privilege to be invited and to participate in the very first openEHR meetings to be held in China.

It follows on from a surprise discovery of Professor Lu’s work presented at the Medinfo conference in Brazil last year, and Prof Lu attending our openEHR training in Kyoto in January.

Hugh & I presented at two events:

  • a seminar of invited guests and VIPs for the first openEHR meeting to be held in China on April 18 in Shanghai – an introduction to openEHR (Heather) and an overview of openEHR activity in Australia (Hugh); followed by
  • an introduction to openEHR – at the China mHealth & Medical Software conference, Shanghai on April 19

Watch my introduction to openEHR, the ‘Smart data, smarter healthcare’ presentation, available via SlideShare:

Adverse reaction risk: the provenance

This week I documented the provenance of our Adverse Reaction Risk archetype – it has been a long & memorable journey from the first iteration in 2006 through to its publication in the international openEHR CKM in November 2015.

In the beginning was Sam Heard’s original archetype – created way back in 2006, when Ocean Informatics had a .biz email address and before any collaboration – just the initial thoughts of one individual.


This was uploaded to the International openEHR CKM in July 2008.


In 2008 this archetype had its first collaborative review. The results were collated and, as Editor, I revised the archetype significantly to include the review feedback PLUS input from a number of publications from NHS England, the FDA and TGA drug reporting requirements, and the ICH-E2B publications. This was uploaded at the end of August 2009.

In late 2010, Australia’s National eHealth Transition Authority (NEHTA) forked the archetype and brought it into the NEHTA CKM environment and ran a series of 5 archetype reviews during the period through to June 2011. The resulting archetype formed the basis for the adverse reaction data elements in the initial PCEHR CDA documents which are currently being transmitted from Australian primary care clinical systems into the PCEHR (now rebadged as ‘My Health Record‘).


In 2012, there was another review round carried out in the international CKM.

The results from that 2012 review, the outcomes from the June 2011 NEHTA archetype, and publications from HL7’s FHIR resource and RMIMs were amalgamated by Ian McNicoll in June 2014 to form a new archetype – initially called ‘Adverse Reaction (AllergyIntolerance)’ and later the ‘Adverse Reaction (FHIR/openEHR)’ archetype – with the intent of conducting a series of joint FHIR & openEHR community reviews of the combined model and, at the end of the process, generating a FHIR resource AND an openEHR archetype with matching, clinically verified content.

In August 2014 the first joint openEHR/FHIR review was carried out, with myself (@omowizard, AU, openEHR), Ian McNicoll (@ianmcnicoll, UK, openEHR), Grahame Grieve (@GrahameGrieve, AU, FHIR) & Russ Leftwich (@DocOnFHIR, USA, HL7 Patient Care/FHIR) as editors. Nasjonal IKT forked the archetype into the Norwegian CKM at the conclusion of that process.


A subsequent joint review between openEHR & FHIR followed, but rather than the few weeks I had anticipated, we had to wait until the FHIR community completed a full FHIR ballot. This blew out the review period for our work to 7 months.


This really highlights the need to separate the ballot/review process for clinical artefacts like FHIR resources and archetypes from the balloting process for a complete technical standard or specification within a typical standards organisation. If we use this same glacially slow process for the governance of clinical artefacts, it will take decades to achieve high-quality shared clinical models.

And one HL7 participant contacted me and said it would be impossible for them to respond to the archetype review in less than 6 months. <facepalm here>. Just for perspective, our typical review round is open for 2 weeks and it takes anywhere from 10 minutes to 30 minutes for most participants to record their contribution.

But we waited… and fed the FHIR ballot comments back into the next archetype iteration. There were not that many! And then we sent it out for the next review – and this time the Norwegian CKM community participated as well. The Norwegian CKM team (led by Silje Ljosland Bakke, @siljelb, & John Tore Valand, @Jtvaland) translated the archetype into Norwegian & ran a slightly shorter review period, contributing the collective feedback into the international review.

We ran this simultaneous review across the FHIR, international and Norwegian communities twice – once in July 2015 and again in November 2015. One of these reviews resulted in the renaming of the archetype concept to ‘Adverse reaction risk’.


At the end of the November 2015 review round, the editors found that consensus had been reached amongst the participants. In the international CKM we removed the FHIR-specific components and published the content of the ‘Adverse reaction risk’ archetype. The publication status of the original archetype was simultaneously changed in the CKM to rejected – this rejected archetype remains in the international CKM as part of the provenance/audit trail for the published archetype.


The Norwegian CKM has now taken that international archetype and published it within their CKM and under their own governance. The archetypes are semantically aligned.

The FHIR resource has evolved in keeping with the archetype changes. To be completely honest, I’m not sure whether the final, published openEHR archetype has been reflected back into the latest FHIR resource, but there is no doubt that the great majority of the two artefacts are aligned as a result of the joint review process.
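To give a feel for what that alignment means in practice, here is a minimal sketch (in Python, building the JSON directly) of the kind of content the FHIR AllergyIntolerance resource and the ‘Adverse reaction risk’ archetype both carry. The field names (`code`, `criticality`, `reaction.manifestation`, `reaction.severity`) come from the published FHIR AllergyIntolerance resource; the values and the exact correspondence to archetype data elements are illustrative assumptions only, not the official mapping.

```python
import json

# Hypothetical, minimal FHIR AllergyIntolerance instance, illustrating the
# shared clinical content: the causative substance, an assessment of future
# risk, and details of a specific reaction event. Values are invented.
resource = {
    "resourceType": "AllergyIntolerance",
    "code": {"text": "Penicillin"},   # substance/agent the risk relates to
    "criticality": "high",            # potential seriousness of future reactions
    "reaction": [
        {
            "manifestation": [{"text": "Anaphylaxis"}],  # clinical manifestation
            "severity": "severe",                        # severity of this event
        }
    ],
}

print(json.dumps(resource, indent=2))
```

Both artefacts distinguish the overall *risk* statement (substance plus criticality) from individual reaction *events* – a separation that came directly out of the joint review process described above.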


The archetype that has finally been published started with the brain dump of a single clinical informatician. At this point in its journey this archetype alone has been shaped by:

  • 13 review rounds
  • 221 review contributions
  • 92 unique individuals
  • 16 countries (top 3 being AU, NO & US)

This has been a very significant block of international work. Getting any kind of consensus on such a clinically significant artefact across different jurisdictions, standards organisations and diverse requirements has not been easy. But we have experienced a great generosity of spirit from all who contributed their time, expertise and enthusiasm to capture an open specification for a single piece of clinical knowledge that can be re-used by others, and potentially improved even more over time.

This is what the published Adverse reaction risk archetype looks like today:


The detail and thought behind each data element and example is significant. Yet we know it is not perfect, nor ‘finished’.

No doubt we will identify new requirements or need to modify it. This journey will then start its next phase…

If you have identified additional requirements… If you disagree with the model… then contribute to the community effort now by registering on the international CKM and making a change request or starting a discussion thread.