Journey to Interoperability – Part I


We are on a journey, a transition from health records being recorded on paper to a new paradigm of electronic health records (EHRs) and data interoperability.

We’ve all grown up in an era of paper records – illegible doctors’ handwriting, lost records, flood damage and overloaded filing cabinets have been the norm for decades. This is our common experience all over the world.

While they have been in use for much longer, it is only in the last 20 years that electronic health records have slowly made their way into everyday clinical practice, accelerating markedly in the last five or so. In some areas the benefits of electronic records have been a no-brainer – for example, the ability to generate repeat prescriptions was a major enabler for primary care here in Australia in the mid-1990s. Yet despite some wins, the transition to EHRs has generally been much slower than we anticipated, much harder than we imagined, and it is not hard to argue that interoperability of granular health data remains frustratingly elusive.

But why? Why is it so hard?

Let’s start to explore this question using this map – paper records are represented by the ‘Land of Missing Files’ in the bottom left corner, with the ‘Kingdom of Data’ occupying the remainder of the map.

[Image: 1a.jpg]

The ‘Kingdom of Data’ can be further divided into two – the ‘State of Connectivity’ and the ‘State of Interoperability’ which is located on top of the ‘Semantic Cliffs’.

Universally, our eHealth journey commenced in the Land of Missing Files, crossing the ‘Heights of Optimism’ before dividing into two major paths.

[Image: 2a.jpg]

The first path heads north to the ‘Maze of Monolithic Systems’ – the massive clinical solutions designed explicitly to encompass clinical needs across a whole health organisation or region; think Epic or Cerner as examples. These systems may indeed provide some degree of connected electronic data across many departments, as they all share a common proprietary data model, but departments with different data or functional requirements are effectively marooned and isolated outside the monolith, and connecting them involves enormous difficulty, time and cost. The other harsh reality is that it is often extremely difficult to extract data, or to share it with the community of care that exists beyond the monolith’s influence. Historically the monolith vendors have been notorious for saying ‘if you want interoperability, buy more of my system’. That attitude is probably softening but, due to the sheer size of these systems, any change takes months to years to implement, plus huge $$$.

[Image: 3a]

The second path heads east to the ‘Forest of Solo Silos’. Historically this has been the natural starting point for most clinical system development, resulting in a massive number of focused software applications, each created by a well-meaning vendor to solve a specific clinical purpose and each with its own unique proprietary data model. Each data model has traditionally been regarded as superior to the others, and therefore a commercial advantage to its vendor – the truth is that none of them is likely to be better than another, each built solely from the developer’s perspective and usually with limited clinician input.

Historically, our first priority was simply to turn paper health records into electronic ones – capture, storage and display – and we have been successful. However, the systems we built were rarely designed with a vision of how we might collectively use this health information for other, more innovative purposes such as data exchange, aggregation, analysis and knowledge-based activities such as decision support and research. This pattern is still well entrenched – modern systems are still being built as silos with local, proprietary data models, and yet we still wonder why we can’t accurately and safely interoperate with health data.

[Image: 4a.jpg]

In order to break through the limitations and challenges imposed by the solo silo and monolith approaches, we have collectively trekked onwards into the ‘Swamp of Incremental Innovation’. It is a natural human trait to try to improve on what we have already built by taking a series of safe, incremental steps that extend the status quo. We have become very adept at this – small innovations building on the successful ones that came before. And the results have been proportional – small improvements that have been glacially slow in development and adoption – in no small part because we have been held back by our historical preference for disparate, closed commercial data models.

The natural consequence of incremental innovation on our journey to interoperability is that we are constantly looking down, watching where we will place our immediate next step, rather than raising our heads to see the looming ‘Semantic Cliffs’. I think that the large majority of vendors are stuck in this swamp, taking one step at a time and without any vision or strategy for the journey ahead. Clinical system purchasers are perpetuating this approach – just look at the number of jurisdictions procuring monolithic systems. Nearly every other business abandoned this approach decades ago… except health.

We run the risk of becoming permanently stuck in this swamp, moving in never-ending circles or hitting the bottom of the Semantic Cliffs with nowhere to go and drowning in the ‘Quicksand of Despair’.

Beware, my colleagues, here be dragons!

Onward to Part II…

 

Journey to Interoperability – Part II


The journey continues…

[Image: 5a.jpg]

One of the moderately successful incremental innovations that has given us some reprieve has been the development of standardised messages or documents to exchange selected, critical data – the ‘River of Limited Exchange’. There is no doubt that these messages have made a significant contribution to sharing health data, but it is also important to note that every transformation to and from a message or document carries inherent risks to data quality and integrity.

One of the biggest issues with this approach is that the standardisation of each message or document requires negotiation and collaboration between the stakeholders – in the standards world these negotiations can take anywhere from one to five years to identify requirements and reach agreement on the scope and specification of each message or document clinical content payload. Negotiations between local organisations or vendors can be less time consuming, but the work and effort required is still substantial. If this has to be negotiated for every type of clinical communication between every pair of stakeholders, then we quickly find ourselves in the ‘Bay of Never-ending Negotiation’ and floating out into the ‘Sea of Unsustainability’.

And what happens when we want to exchange more than ‘selected, critical data’? The old mantra of ‘right data, right person, right time’ is great rhetoric, but we can’t deliver on it if we are constrained to minimum data sets or fixed messages/documents.

We need to work smarter! At the very least we need to lift our gaze up from our current plodding path and avoid hitting the Semantic Cliffs without a plan.

There have been attempts to escape the Swamp and scale the Cliffs towards Interoperability. Significant effort over the past two to three decades has been channelled into the development of clinical terminologies, and there is no doubt that terminologies have been a key enabler for some of our incremental innovation successes. On the international stage, SNOMED CT is probably the frontrunner. However, SNOMED’s evolution has largely been focused on creating another type of silo – a semantic silo that has mistakenly grown and expanded to try to represent all clinical things for all clinical requirements, enthusiastically supported by many of our standards development organisations and national programs. The core of SNOMED is immensely valuable, no argument there. But it is the drive to make this terminology also act as an information model that has arguably resulted in a resource so massive, unwieldy, often fragmented, inconsistent and complex that using it in its entirety is beyond the ability of most mortal clinicians or vendors.

We also know from our extensive experience with silo systems that proprietary information models have not been enough. Even sharing information models across vendors will not solve the issue on its own – archetypes as information models are not enough by themselves; they need terminology working alongside them.

So how to reach the State of Interoperability?

Albert Einstein is reported to have said, “We cannot solve our problems with the same thinking we used when we created them.” While there is much debate as to whether this quote is correctly attributed, the importance of the message remains. Can we expect to achieve interoperability of data within any reasonable time frame if we continue to develop health IT solutions with the same incremental innovation approach? We have been working at this for over 30 years now, and it can be confidently said that progress has been glacially slow. Isn’t this colloquially known as the definition of insanity – doing the same thing yet expecting a different outcome? How much longer are we prepared to persist?

We are edging towards the Quicksand of Despair inexorably, day after day. Eventually we will come up against the Semantic Cliffs, an impenetrable vertical barrier.

The answer is deceptively simple – we need to change our approach. Scary, yes. At the same time it is exciting. We have sensibly and logically pushed incremental innovation to very near its limits and now it is time to take a step back and reconsider, to try something new, orthogonal, and somewhat radical.

[Image: 6a]

We need a Bridge of Smart Data! A shared and semantically accurate, safe and unambiguous data foundation that we use as the basis for all eHealth activities: for persistence; exchange; aggregation and analysis; decision support AND research.

The good thing is that we are not the first. This approach has been in development for over 20 years, tested and refined in international implementations. This smart data approach has been pioneered within the openEHR Foundation, and is now in use within the international openEHR community, a grassroots UK community, and national programs in Australia, Norway, Brazil and Slovenia, as well as by NHS England.

But what about FHIR, I hear you ask? It packages information models plus terminology together as well, but is limited within the paradigm of a message standard. The problem of how to persist health data remains unsolved, and the choice to limit standardisation of each FHIR resource to the 80% of data that legacy systems currently collect means that the furthest up the Semantic Cliffs a FHIR implementation can reach is 80% – the Plateau of FHIR.
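To make the ‘information models plus terminology packaged together’ point a little more tangible, here is a rough sketch – a plain Python dict, not a validated FHIR instance – of what a FHIR Observation payload typically carries in an exchange. The specific code and values are illustrative only.

```python
# A minimal FHIR Observation-style payload: structure (the resource) plus
# terminology (a SNOMED CT coding) travelling together in one exchange artefact.
# Illustrative only - the values are examples, not validated clinical content.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",   # SNOMED CT
            "code": "271649006",                   # illustrative: systolic blood pressure
            "display": "Systolic blood pressure",
        }]
    },
    "valueQuantity": {
        "value": 120,
        "unit": "mm[Hg]",
        "system": "http://unitsofmeasure.org",     # UCUM units
        "code": "mm[Hg]",
    },
}
```

How a receiving system stores and later queries a payload like this is left entirely to the implementer – which is precisely the persistence gap described above.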

By zeroing in on smart data we automatically change the focus of discussion from EHR applications, messages and documents back to their component building blocks – the re-usable, granular health data itself – and, once established, the shared data specifications become the basis for the new generation of applications, messages and documents, as well as all other health data activity.

By the term ‘smart data’ I’m referring to shared clinical content specifications (sketched in code after the list below) which are:

  • a combination of standards-based information models (such as openEHR archetypes) PLUS terminology knowledge resources (ranging from the high quality core of SNOMED CT down to value sets used only in a specific, local context)
  • clinician driven:
    • ensuring that the data we use represents how clinicians need their data
    • unambiguous, safe and fit for clinical use
  • collaboratively developed and agreed by domain experts, including but not limited to
    • informaticians
    • clinicians
    • software engineers
    • terminologists
  • shared
    • across an organisation;
    • across a jurisdiction;
    • nationally; or
    • internationally
  • zero cost
  • free access
  • agnostic to any single technology or clinical system
  • agnostic to any single terminology
  • capable of multiple language translations
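To give a flavour of what such a specification might contain, here is a minimal, illustrative sketch in Python. The class and field names are my own, invented for this post – real openEHR archetypes are authored in ADL and governed through tools such as CKM, not written as application code – but the sketch shows the key idea: an information-model element bound to a terminology value set.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TerminologyBinding:
    """A value set drawn from a terminology, e.g. a SNOMED CT subset or a local code list."""
    terminology: str              # e.g. "SNOMED-CT" or "local"
    codes: List[str]              # allowed codes for this element

@dataclass
class DataElement:
    """One granular, clinician-agreed data point within a shared content specification."""
    name: str                     # clinician-facing name, translatable
    rm_type: str                  # underlying reference-model type, e.g. "DV_CODED_TEXT"
    binding: TerminologyBinding   # terminology constraint, agnostic to any single terminology

@dataclass
class ContentSpecification:
    """A shared, vendor-neutral specification reused for persistence, exchange and querying."""
    concept: str
    elements: List[DataElement] = field(default_factory=list)

# Example: a tiny 'body temperature' specification shared across systems.
body_temperature = ContentSpecification(
    concept="Body temperature",
    elements=[
        DataElement(
            name="Body site",
            rm_type="DV_CODED_TEXT",
            binding=TerminologyBinding(
                terminology="SNOMED-CT",
                codes=["<illustrative value set>"],   # placeholder, not a real reference set
            ),
        ),
    ],
)
```

The separation is the point: the structure is agreed and shared once, while the terminology binding can range from the high quality core of SNOMED CT down to a purely local value set – keeping the specification agnostic to any single terminology or technology.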

To scale the Semantic Cliffs we need to work collaboratively to build our Bridge of Smart Data. Some components are already evolving – the collection of archetypes in the international CKM is growing in both number and quality. Terminology is not so simple – the core of SNOMED CT would be a great candidate, but it is currently only available under licence, which will limit access for low-income countries.

Most importantly, we need to confront the status quo NOW, and to start conversations that will help us to avoid the Quicksand of Despair and reach the State of Interoperability.

This is no longer just about a technical solution – we have implementations and over 20 years of experience in learning how to make it work. Now, we need to focus on the people, and especially the vendors, choosing to work collaboratively, actively deciding to break down the proprietary silos to focus on the data. Can we do it? This is our challenge.

Or rather, ask this question… can we afford not to do it?

“Smart data, smarter healthcare”

Last week Hugh Leslie & I spent time in Shanghai and Hangzhou at the invitation of Professor Xudong Lu and Professor Huilong Duan. It was a privilege to be invited and to participate in the very first openEHR meetings to be held in China.

It follows on from the surprise discovery of Professor Lu’s work presented at the Medinfo conference in Brazil last year, and Prof Lu attending our openEHR training in Kyoto in January.

Hugh & I presented at two events:

  • a seminar of invited guests and VIPs for the first openEHR meeting to be held in China on April 18 in Shanghai – an introduction to openEHR (Heather) and an overview of openEHR activity in Australia (Hugh); followed by
  • an introduction to openEHR – at the China mHealth & Medical Software conference, Shanghai on April 19

Watch my introduction to openEHR, the ‘Smart data, smarter healthcare’ presentation, available via SlideShare: