Journey to Interoperability – Part I


We are on a journey, a transition from health records being recorded on paper to a new paradigm of electronic health records (EHRs) and data interoperability.

We’ve all grown up in an era of paper records – the inability to read doctors’ handwriting, lost records, flood damage and overloaded filing cabinets have been the norm for decades. This is our common experience all over the world.

While they have been in use for much longer, it is only in the last 20 years that electronic health records have slowly been encroaching on everyday clinical practice, accelerating markedly in the last five or so. There are some areas where the benefits of electronic records have been a no-brainer – for example, the ability to generate repeat prescriptions in primary care was a major enabler in the mid-1990s here in Australia. Yet despite some wins, the transition to EHRs has generally been much slower than we anticipated, much harder than we imagined, and it is not hard to argue that interoperability of granular health data remains frustratingly elusive.

But why? Why is it so hard?

Let’s start to explore this question using this map – paper records represented by the ‘Land of Missing Files’ in the bottom left corner and the ‘Kingdom of Data’ on the remainder of the map.

[Map image: the Land of Missing Files and the Kingdom of Data]

The ‘Kingdom of Data’ can be further divided into two – the ‘State of Connectivity’ and the ‘State of Interoperability’, which sits atop the ‘Semantic Cliffs’.

Universally, our eHealth journey commenced in the ‘Land of Missing Files’, crossing through the ‘Heights of Optimism’ before dividing into two major paths.

[Map image: the journey divides into two paths]

The first path heads north to the ‘Maze of Monolithic Systems’ – the massive clinical solutions designed explicitly to encompass clinical needs across a whole health organisation or region – think Epic or Cerner as examples. These systems may indeed provide some degree of connected electronic data across many departments, as they all use a common proprietary data model, but departments with different data or functional requirements are effectively marooned and isolated outside the monolith, and connecting them involves enormous difficulty, time and cost. The other harsh reality is that it is often extremely difficult to extract data, or to share it with the community of care that exists outside the scope of the monolith’s influence. Historically the monolith vendors have been notorious for saying ‘if you want interoperability, buy more of my system’. It is likely that this attitude is softening but, due to the sheer size of these systems, any change requires months to years to implement, at huge cost.

[Map image: the Maze of Monolithic Systems]

The second path heads east to the ‘Forest of Solo Silos’. Historically this has been the natural starting point for most clinical system development, resulting in a massive number of focused software applications created by well-meaning vendors to solve specific clinical problems, but each with its own unique proprietary data model. Each vendor has traditionally regarded its own data model as superior to the others, and thus a commercial advantage – the truth is that none is likely to be better than another, each built from the perspective of its developer alone and usually with limited clinician input.

Historically, our first priority was simply to turn paper health records into electronic ones – capture, storage and display – and we have been successful. However, the systems we built were rarely designed with a vision of how we might collectively use this health information for other, more innovative purposes such as data exchange, aggregation, analysis and knowledge-based activities like decision support and research. This mindset is still well entrenched – modern systems are still being built as silos with local, proprietary data models, and yet we still wonder why we can’t accurately and safely interoperate with health data.

[Map image: the Forest of Solo Silos]

In order to break through the limitations and challenges imposed by the solo-silo and monolith approaches, we have collectively trekked onwards into the ‘Swamp of Incremental Innovation’. It is a natural human trait to try to improve on what we have already built by taking a series of safe, incremental steps that extend the status quo. We have become very adept at this – small innovations building on the successful ones that came before. And the results have been proportional – small improvements, glacially slow in development and adoption – with one key factor being our historical preference for disparate, closed commercial data models.

The natural consequence of incremental innovation on our journey to interoperability is that we are constantly looking down, at where we will place our immediate next step, rather than raising our heads to see the looming ‘Semantic Cliffs’. I think the large majority of vendors are stuck in this swamp, taking one step at a time and without any vision or strategy for the journey ahead. Clinical system purchasers are perpetuating this approach – just look at the number of jurisdictions procuring monolithic systems. Nearly every other business abandoned this approach decades ago… except health.

We run the risk of becoming permanently stuck in this swamp, moving in never-ending circles or hitting the bottom of the Semantic Cliffs with nowhere to go and drowning in the ‘Quicksand of Despair’.

Beware, my colleagues, here be dragons!

Onward to Part II…

 

“Smart data, smarter healthcare”

Last week Hugh Leslie & I spent time in Shanghai and Hangzhou at the invitation of Professor Xudong Lu and Professor Huilong Duan. It was a privilege to be invited and to participate in the very first openEHR meetings to be held in China.

It follows on from a surprise discovery of Professor Lu’s work presented at the Medinfo conference in Brazil last year, and Prof Lu attending our openEHR training in Kyoto in January.

Hugh & I presented at two events:

  • a seminar of invited guests and VIPs for the first openEHR meeting to be held in China on April 18 in Shanghai – an introduction to openEHR (Heather) and an overview of openEHR activity in Australia (Hugh); followed by
  • an introduction to openEHR – at the China mHealth & Medical Software conference, Shanghai on April 19

Watch my introduction to openEHR, the ‘Smart data, smarter healthcare’ presentation, available via SlideShare.

Bridging the interop chasms

The dilemma for implementers is: how to standardise clinical content yet use it in different clinical scenarios? How to lock down clinical content yet express variation in clinician requirements? How to create clinical content standards when, every day, the scope, depth and detail of that content is dynamically evolving? And then, if we somehow manage to specify the content well enough to implement our own system successfully, how to make that solution interoperable with others?

Everyone is running around (often in circles) trying to find the holy grail of health IT, the ‘one ring to bind them all’ of standards and, more recently, the ‘Uber of healthcare’. The sheer number of approaches reflects that there is no clear single approach at this point, no clear winning strategy.

My proposal: if we want to achieve any degree of semantic interoperability in our clinical systems we need to standardise the clinical content, keeping it open and independent of any single implementation or messaging formalism.

I noticed that John Halamka, CIO at Harvard & Beth Israel Deaconess Medical Center, was quoted in a rather unappetising Forbes article yesterday – worth reading for the interesting content but please leave the title and analogy at the door. His quote:

“FHIR is the ‘HTML’ of healthcare. It’s based on clinical modeling by experts but does not require implementers to understand those details. Historically healthcare standards were easy for designers and hard for implementors. FHIR has focused on ease of implementation.”

Totally agree: FHIR is the newest technology on the block and indeed many are considering it the HTML of healthcare. It has created a massive buzz of excitement around the world. But remember that HTML itself is a content-free zone. Without the addition of content, HTML is essentially just a technical paradigm. Similarly, without high-quality clinical content, FHIR runs the risk of being just a marvellous new technical approach, elegantly solving our current implementer nightmares. We need more…

I have blogged previously about my experience participating in a FHIR clinical connectathon this time last year: The challenge for FHIR: meeting real world clinician requirements. My opinion about FHIR is neutral. I am not a software engineer. However, as a clinician, it was not offering enough to meet my recording needs a year ago, and little of my world has been FHIRified since. No doubt the FHIR team have a strategy for that; it will evolve, sometime.

My potentially controversial position is that it does not necessarily follow that the clever minds who devised FHIR are best placed to develop the clinical content that will ensure FHIR’s success. The great majority of FHIR models created to date relate to the US domain and have a rather limited scope, largely focused on supporting existing messaging requirements and simple document exchange.

This is not a FHIR-specific problem – it is ubiquitous across the health IT ecosystem. We need to transition away from the traditional business approach in which technicians and vendors dictate what clinicians get in their EHR, and the data structure is determined by software engineers who must investigate, interpret and then attempt to represent the world and work of the clinician in every EHR. Even with the best of intentions on both sides, there is still a massive semantic chasm between the two professions.

What most technical modellers haven’t yet grappled with is the huge scope of clinical content – the sheer breadth and depth is astounding, and as new knowledge is discovered it is also dynamic and evolving, which only compounds an almost impossible task. Most clinical recording in systems to date is relatively simple – usually focused on the ‘one size fits all’ minimal data set approach so that some information can be shared broadly. That has had some limited success, but how long will it be before we realise that we need to standardise more than the minimum data sets?

Minimum data sets are woefully inadequate when we need to represent the detail that clinicians record as part of day-to-day patient care, for patient-specific data exchange with other clinicians, to drive complex decision support, and for data querying, aggregation and analysis. Patients vary, so no one approach will suit all – the needs of a surgical patient, a neonate and a pregnant woman differ hugely. Clinicians vary, so no one approach will suit all – generalist vs specialist; medical and nursing notes may focus on the same things but from completely different points of view. Clinical contexts vary, so no one approach will suit all. Many aspects of clinical recording need recurring, fractal patterns. There are homunculi, questionnaires, normal statements, graphs, video, audio. The ‘one size fits all’ approach to health data standards is an absolute myth – in truth, one size fits none.
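As a concrete illustration of just one of these needs – the recurring, fractal pattern – here is a minimal sketch in Python. The Cluster class and its fields are my own illustrative assumption, not a real openEHR or vendor schema; the point is simply that clinical content nests to arbitrary depth using the same repeating structure:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Cluster:
    """A recursive content node: findings nest within findings, to any depth."""
    name: str
    value: Optional[str] = None
    items: List["Cluster"] = field(default_factory=list)

# A skin examination: body site -> lesion -> properties, all the same pattern.
exam = Cluster("skin examination", items=[
    Cluster("body site", "left forearm", items=[
        Cluster("lesion", items=[
            Cluster("size", "12 mm"),
            Cluster("morphology", "macule"),
        ]),
    ]),
])
```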

Take the timing of medications as an example. Just this one seemingly simple data element raises a huge number of challenges for EHRs. I challenge you to show me one system in the world that enables prescribing today at the level of detail captured in the Medication Order archetype below.

This is the level of detail that clinicians need today in their clinical work. Yet most clinical systems only cater for prescribing at the complexity of ‘take one tablet, three times a day, after meals’. Even if you try to prescribe a skin cream, many systems are unable to represent it in anything but the most rudimentary way.
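To make the point concrete, here is a hypothetical sketch of what even a modestly complex order – a tapering oral steroid course – demands of a data model. The class and field names here are mine for illustration only, not drawn from the archetype or any product:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class DoseDirection:
    """One step in a structured dose schedule, e.g. '25 mg once daily for 7 days'."""
    dose_amount: float
    dose_unit: str
    frequency_per_day: float
    duration_days: Optional[int] = None   # None = continue indefinitely
    timing_event: Optional[str] = None    # e.g. 'after meals', 'before sleep'
    as_needed: bool = False               # PRN dosing
    condition: Optional[str] = None       # e.g. 'if wheeze persists'

@dataclass
class MedicationOrder:
    medication: str
    route: str
    directions: List[DoseDirection] = field(default_factory=list)

# A tapering course - already beyond 'one tablet, three times a day, after meals':
taper = MedicationOrder(
    medication="prednisolone",
    route="oral",
    directions=[
        DoseDirection(25.0, "mg", 1, duration_days=7, timing_event="with breakfast"),
        DoseDirection(12.5, "mg", 1, duration_days=7, timing_event="with breakfast"),
        DoseDirection(5.0, "mg", 1, duration_days=7, timing_event="with breakfast"),
    ],
)
```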

Below is the mind map of the current Medication Order archetype:

[Mind map image: the Medication Order archetype]

You can see and download the corresponding archetype that is currently undergoing review here. This clinical content specification is the result of years of research, investigation of existing clinical systems, engagement with vendors and standards organisations and direct clinical informatician experience. The amount of work involved in specifying this common clinical order should not be underestimated, nor should it need to be undertaken ever again if we can get appropriate levels of agreement through international domain expert review! (Please register in CKM and adopt the archetype to participate in the international review.) It is massive. It is hugely complex.

Via a similar process, we have just achieved a consensus view on our Adverse Reaction Risk archetype: 12 review rounds completed; 91 participants from 16 countries; 182 total reviews; 0 face-to-face meetings! A core concept in any clinical system, with every system implementing it slightly differently. Now we have a line drawn in the sand, a starting point for being able to share adverse reaction data that has been designed and verified by clinicians, yet immediately computable. This work was done in collaboration with the FHIR/HL7 patient care community – note the joint copyright!

Archetypes bridge the semantic chasm between clinicians and software engineers.

Using archetypes as the means to represent clinical knowledge in an open, non-proprietary way is a major breakthrough in our quest to “help kill the ‘golden goose’ of proprietary EHR data”, as the Forbes article phrases it.

One does not have to be a rocket scientist to recognise that if we leverage the 400+ archetypes already in the international CKM, which incorporate a huge breadth of clinician knowledge and expertise, we can kickstart serious development of implementable resources for any technical paradigm willing to join in.

There is a powerful logic in actively separating the development of clinical content from the implementation formalism. That way we can develop, agree and verify the clinical content specifications once. The resulting archetypes become the free, open standard representing the clinicians’ knowledge in an implementation-agnostic way. Technical transformations can then deliver the content to the implementer in whatever formalism they choose – no longer one archetype for openEHR implementation alone, but the same content leveraged to develop FHIR, CIMI, CDA, HL7v2 and UML resources… Now there is a vision!
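As a sketch of that pipeline – with assumed shapes throughout: the dict below is not real ADL, and the outputs are heavily simplified caricatures of FHIR and CDA, though AllergyIntolerance is a genuine FHIR resource type – a single governed content definition can be projected into more than one formalism:

```python
# One governed content definition, expressed here as a plain dict for brevity.
archetype = {
    "concept": "adverse_reaction_risk",
    "elements": {"substance": "penicillin", "manifestation": "urticaria"},
}

def to_fhir_like(a: dict) -> dict:
    """Project the content into a (simplified) FHIR AllergyIntolerance shape."""
    e = a["elements"]
    return {
        "resourceType": "AllergyIntolerance",
        "code": {"text": e["substance"]},
        "reaction": [{"manifestation": [{"text": e["manifestation"]}]}],
    }

def to_cda_like(a: dict) -> str:
    """Project the same content into a (simplified) CDA-style XML fragment."""
    e = a["elements"]
    return (f"<observation><code displayName='{e['substance']}'/>"
            f"<value displayName='{e['manifestation']}'/></observation>")

# Same verified clinical content, two downstream formalisms:
print(to_fhir_like(archetype))
print(to_cda_like(archetype))
```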

Archetypes bridge the semantic chasm between implementation formalisms.

IMO this is probably the greatest benefit: the archetyped clinical content is created once, verified as fit for use by our clinicians, and remains a governed infostructure resource for years, decades and beyond. Governance processes ensure that the archetypes are maintained and can evolve within a robust versioning framework. Different implementations will share the same, standardised clinical content.
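A small sketch of what that versioning discipline buys implementers – assuming the openEHR convention that archetype IDs end in a major version (e.g. openEHR-EHR-EVALUATION.adverse_reaction_risk.v1) and that content is only safely interchangeable within the same major version:

```python
import re

# Archetype IDs end in a major version; breaking changes bump it (v1 -> v2).
ARCHETYPE_ID = re.compile(r"^(?P<concept>[\w.\-]+)\.v(?P<major>\d+)$")

def compatible(required: str, provided: str) -> bool:
    """Data is safely reusable only if concept and major version both match."""
    r, p = ARCHETYPE_ID.match(required), ARCHETYPE_ID.match(provided)
    if not (r and p):
        return False
    return (r.group("concept") == p.group("concept")
            and r.group("major") == p.group("major"))

print(compatible("openEHR-EHR-EVALUATION.adverse_reaction_risk.v1",
                 "openEHR-EHR-EVALUATION.adverse_reaction_risk.v1"))  # True
print(compatible("openEHR-EHR-EVALUATION.adverse_reaction_risk.v1",
                 "openEHR-EHR-EVALUATION.adverse_reaction_risk.v2"))  # False
```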

One solid truth we can rely on in the world of technology is that FHIR, HL7v2, HL7v3, CIMI, UML, AML, openEHR and other implementation formalisms will likely be overtaken at some time in the future – as sure as death and taxes. We really don’t want to start building content from scratch for each and every new technical invention that comes along.

Archetypes bridge the chasms created by the volatile fads and fashions in IT: our resulting ehealth INFOstructure will be a non-proprietary representation of clinical knowledge that can withstand and outlast the inevitable waves of technology as they come and go. 

Archetypes are a pragmatic and sustainable approach to interoperability: between professions; between implementations; beyond our current technologies!