Bridging the interop chasms

The dilemma for implementers is: how to standardise clinical content yet use it in different clinical scenarios? How to lock down clinical content yet express variation in clinician requirements? How to create clinical content standards when the scope, depth and detail of that content is evolving every day? And if we somehow manage to specify the content well enough to implement our own system successfully, how do we make that solution interoperable with others?

Everyone is running around (often in circles) trying to find the holy grail of health IT, the ‘one ring to bind them all’ of standards and, more recently, the ‘Uber of healthcare’. The sheer number of approaches tells us there is no clear winner at this point, no single winning strategy.

My proposal: if we want to achieve any degree of semantic interoperability in our clinical systems, we need to standardise the clinical content, keeping it open and independent of any single implementation or messaging formalism.

I noticed that John Halamka, CIO at Harvard & Beth Israel Deaconess Medical Center, was quoted in a rather unappetising Forbes article yesterday – worth reading for the interesting content, but please leave the title and analogy at the door. His quote:

“FHIR is the “HTML” of healthcare. It’s based on clinical modeling by experts but does not require implementer’s to understand those details. Historically healthcare standard were easy for designers and hard for implementor’s. FHIR has focused on ease of implementation.” 

Totally agree: FHIR is the newest technology on the block and indeed many are considering it the HTML of healthcare. It has created a massive buzz of excitement around the world. But remember that HTML itself is a content-free zone: without the addition of content, HTML is essentially just a technical paradigm. Similarly, without high-quality clinical content, FHIR runs the risk of being just a marvellous new technical approach, elegantly solving our current implementer nightmares. We need more…

I have blogged previously about my experience participating in a FHIR clinical connectathon this time last year: The challenge for FHIR: meeting real world clinician requirements. My opinion about FHIR is neutral; I am not a software engineer. However, as a clinician, it was not offering enough to meet my recording needs a year ago, and little of my world has been FHIRified since. No doubt the FHIR team have a strategy for that; it will evolve, sometime.

My potentially controversial position is that it does not necessarily follow that the clever minds who devised FHIR are best placed to develop the clinical content that will ensure FHIR’s success. The great majority of FHIR models created to date relate to the US domain and have a rather limited scope, largely focused on supporting existing messaging requirements and simple document exchange.

This is not a FHIR-specific problem – it is ubiquitous across the health IT ecosystem. We need to move away from the traditional business approach in which technicians and vendors dictate what clinicians will have in their EHR, and in which the data structure is determined by software engineers who must investigate, interpret and then attempt to represent the world and work of the clinician in every EHR. Even with the best of intentions on both sides, there is still a massive semantic chasm between the two professions.

What most technical modellers haven’t yet grappled with is the huge scope of clinical content – the sheer breadth and depth is astounding, and as new knowledge is discovered the content keeps changing and evolving, which only compounds an almost impossible task. Most clinical recording in systems to date is relatively simple, usually focused on the ‘one size fits all’ minimal data set approach to enable some information to be shared broadly. That approach has certainly had some limited success, but how long will it be before we realise that we need to standardise more than the minimum data sets?

Minimum data sets are woefully inadequate when we need to represent the detail that clinicians record as part of day-to-day patient care: for patient-specific data exchange with other clinicians, to drive complex decision support, and for data querying, aggregation and analysis. Patients vary, so no one approach will suit all – the needs of a surgical patient, a neonate and a pregnant woman differ hugely. Clinicians vary, so no one approach will suit all – generalist vs specialist; medical and nursing notes may focus on the same things but from completely different points of view. Clinical contexts vary, so no one approach will suit all. Many aspects of clinical recording need recurring, fractal patterns. There are homunculi, questionnaires, normal statements, graphs, video, audio. The ‘one size fits all’ approach to health data standards is an absolute myth – in truth, one size fits none.

Take the timing of medications as an example. Just this one seemingly simple data element raises a huge number of challenges for EHRs. I challenge you to show me one system in the world that enables prescribing today at the following level of detail:

This is the level of detail that clinicians need today in their clinical work. Yet most clinical systems only cater for prescribing at the level of complexity of ‘one tablet, three times a day, after meals’. Even if you try to prescribe a skin cream, many systems can represent it only in the most rudimentary way.
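To make the gap concrete, here is a deliberately simplified sketch (in Python) of just the timing portion of a medication order. The field names are invented for illustration – this is neither the openEHR archetype nor the FHIR Dosage model – but even this toy structure hints at how far beyond ‘one tablet, three times a day’ real prescribing goes:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical illustration only: invented field names, not the openEHR
# Medication order archetype and not the FHIR Dosage/Timing structure.

@dataclass
class DoseTiming:
    frequency: Optional[int] = None                 # e.g. 3 doses...
    period_hours: Optional[float] = None            # ...per 24 hours
    exact_times: List[str] = field(default_factory=list)   # e.g. ["08:00", "14:00", "22:00"]
    days_of_week: List[str] = field(default_factory=list)  # e.g. ["Mon", "Wed", "Fri"]
    relation_to_meals: Optional[str] = None         # "before" | "with" | "after"
    as_needed: bool = False                         # PRN dosing
    as_needed_for: Optional[str] = None             # indication, e.g. "breakthrough pain"
    minimum_interval_hours: Optional[float] = None  # safety: no sooner than every n hours
    maximum_doses_per_24h: Optional[int] = None     # safety ceiling
    duration_days: Optional[int] = None             # stop after n days
    taper_note: Optional[str] = None                # e.g. "reduce by 5 mg each week"

# "One tablet, three times a day, after meals" is the easy case...
simple = DoseTiming(frequency=3, period_hours=24, relation_to_meals="after")

# ...but a reducing course of steroids, or PRN analgesia with a ceiling,
# already needs most of the remaining fields.
taper = DoseTiming(frequency=1, period_hours=24, duration_days=28,
                   taper_note="reduce by 5 mg each week until ceased")
prn = DoseTiming(as_needed=True, as_needed_for="breakthrough pain",
                 minimum_interval_hours=4, maximum_doses_per_24h=6)
```

And this still says nothing about dose quantities, strengths, routes, sites, devices or the conditional instructions that clinicians routinely write.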

Below is the mind map of the current Medication Order archetype:

[Mind map: Medication order archetype]

You can see and download the corresponding archetype, which is currently undergoing review, here. This clinical content specification is the result of years of research, investigation of existing clinical systems, engagement with vendors and standards organisations, and direct clinical informatician experience. It is massive. It is hugely complex. The amount of work involved in specifying this common clinical order should not be underestimated, nor should it need to be undertaken ever again if we can get appropriate levels of agreement through international domain expert review! (Please register in CKM and adopt the archetype to participate in the international review.)

Via a similar process, we have just achieved a consensus view on our Adverse Reaction Risk archetype: 12 review rounds completed; 91 participants from 16 countries; 182 total reviews; 0 face-to-face meetings! This is a core concept in any clinical system, and every system implements it slightly differently. Now we have a line drawn in the sand: a starting point for sharing adverse reaction data that has been designed and verified by clinicians, yet is immediately computable. This work was done in collaboration with the FHIR/HL7 Patient Care community – note the joint copyright!

Archetypes bridge the semantic chasm between clinicians and software engineers.

Using archetypes to represent clinical knowledge in an open, non-proprietary way is a major breakthrough in our quest to “help kill the ‘golden goose’ of proprietary EHR data”, as the Forbes article phrases it.

One does not have to be a rocket scientist to recognise that if we leverage the 400+ archetypes already in the international CKM, which incorporate a huge breadth of clinician knowledge and expertise, we can kickstart serious development of implementable resources for any technical paradigm willing to join in.

There is a powerful logic in actively separating the development of clinical content from the implementation formalism. In that way we can develop, agree and verify the clinical content specifications once. The resulting archetypes become the free, open standard representing the clinician’s knowledge in an implementation-agnostic way. Technical transformations can then deliver that content to the implementer in whatever formalism they choose: no longer one archetype for openEHR implementation alone, but the same content leveraged to develop FHIR, CIMI, CDA, v2 and UML resources… Now there is a vision!
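To illustrate the idea (and only the idea – the names and structures below are invented, not any existing openEHR or FHIR tooling), a ‘model once, project many times’ pipeline might look something like this sketch:

```python
# Illustrative sketch only: an invented, minimal 'content specification' and two
# toy projections. Real archetype-to-FHIR/CDA transformation is far richer than this.

content_spec = {
    "concept": "adverse_reaction_risk",
    "elements": [
        {"name": "substance", "type": "CodedText"},
        {"name": "reaction", "type": "CodedText"},
        {"name": "severity", "type": "CodedText"},
    ],
}

def to_fhir_like(spec: dict) -> dict:
    """Project the agreed clinical content into a FHIR-flavoured structure."""
    return {"resourceType": spec["concept"].title().replace("_", ""),
            "element": [e["name"] for e in spec["elements"]]}

def to_cda_like(spec: dict) -> str:
    """Project the same agreed content into a CDA-flavoured XML fragment."""
    rows = "".join(f"<value name='{e['name']}' type='{e['type']}'/>" for e in spec["elements"])
    return f"<section code='{spec['concept']}'>{rows}</section>"

# The clinical agreement happens once, in content_spec; each formalism is a projection.
print(to_fhir_like(content_spec))
print(to_cda_like(content_spec))
```

The point is where the effort sits: the hard-won clinical agreement lives in the content specification, and each implementation formalism becomes a comparatively mechanical projection of it.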

Archetypes bridge the semantic chasm between implementation formalisms.

IMO this is probably the greatest benefit: the archetyped clinical content is created once, verified as fit for use by our clinicians, and remains a governed infostructure resource for years, decades and beyond. Governance processes ensure that the archetypes are maintained and can evolve within a robust versioning framework. Different implementations will have the same, standardised clinical content.

One solid truth that we can rely on in the world of technology is that FHIR, HL7v2, HL7v3, CIMI, UML, AML, openEHR and other implementation formalisms will likely be overtaken at some time in the future. As sure as death and taxes. We really don’t want to start the process of building content for each and every new technical invention that comes along.

Archetypes bridge the chasms created by the volatile fads and fashions in IT: our resulting ehealth INFOstructure will be a non-proprietary representation of clinical knowledge that can withstand and outlast the inevitable waves of technology as they come and go. 

Archetypes are a pragmatic and sustainable approach to interoperability: between professions; between implementations; beyond our current technologies!

Oil & water: research & standards

The world of clinical modelling is exciting, relatively new and most definitely evolving. I have been modelling archetypes for over 8 years, yet each archetype presents a new challenge and often the need to apply my previous experience and clinical knowledge in order to tease out the best way to represent the clinical data. I am still learning from each archetype. And we are still definitely in the very early phases of understanding the requirements for appropriate governance and quality assurance. If I had been able to discern and document the ‘recipe’, then I would be the author of a best-selling ‘archetype cookbook’ by now. Unfortunately it is just not that easy. This is not a mature area of knowledge.

I think clinical knowledge modellers are predominantly still researchers.

In around 2009, a new work item on Detailed Clinical Models was proposed within ISO. I was nominated as an expert, and I tried to contribute. Originally it was targeting publication as an International Standard, but this was downgraded to a Technical Specification mid-development, following ballot feedback from national member bodies. The work has had a somewhat tortuous gestation, but only last week the DCM specification was finally approved for publication – it is likely to be available in early 2014. Unfortunately I don’t think it represents a common, much less a consensus, view of the broad clinical modelling environment. I am neither pleased nor proud of the result.

From my point of view, development of a Technical Specification (much less the original International Standard) has been a very large step too far, way too fast. It will not be reviewed or revised for a number of years and so, on publication next year, the content will be locked down for a relatively long period, whilst the knowledge domain continues to grow and evolve.

Don’t misunderstand me – I’m not knocking the standards development process. Where there are well-established processes and a realistic chance of achieving consensus amongst the parties, we have a great starting point for a standard, and the potential for ongoing engagement and refinement into the future. But…

A standards organisation is NOT the place to conduct research. The two are like oil and water – they should be kept clearly separate. A standards development organisation is a place to consolidate and formalise well-established knowledge and/or processes.

Personally, I think a much more valuable first step would have been to investigate and publish a simple ISO Technical Report on the current clinical modelling environment. Who is modelling? What is their approach? What can we learn from each approach that can be shared with others?

Way back in 2011 I started to pull together a list of those we knew to be working in this area and shared it via Google Docs; I see that others have continued to contribute to this public document. I’m not proposing it as a comparable output, but I would love to see it developed further so that the clinical modelling community can enhance and facilitate collaboration and discussion, publish research findings, and propose (and test) approaches for best practice.

The time for formal specifications and standards in the clinical knowledge domain will come. But that time will be when the modelling community has established a mature domain and has enough experience to determine what ‘best practice’ means in our clinical knowledge environment.

Watch out for the publication of prEN/ISO/DTS 13972-2, Health informatics – Detailed clinical models, characteristics and processes. It will be interesting to observe how it is taken up and used by the modelling community. Perhaps I will be proven wrong.

With thanks to Thomas Beale (@wolands_cat) for the original insight into why I found the 13972 process so frustrating – that we are indeed still conducting research!

Clinical Knowledge Repository requirements

I’ve been hearing quite a lot of discussion recently about Clinical Knowledge Repositories and governance. Everyone has different ideas, ranging from sharing models via a simple Subversion folder through to a purpose-built application managing governance of combinations of versioned knowledge assets (information models, terminology reference sets, derived artefacts, supporting documentation etc.) in various states of publication.

It depends what you want to achieve, I guess. In openEHR it became clear very quickly that we need the latter in order to provide a central resource with governance of cohesive release sets of assets and packages suitable for organisations and vendors to implement.

In our experience it is relatively simple to develop a repository with asset provenance and user management. It is somewhat harder when you add processes of collaboration and validation for these knowledge assets – this requires the development of review and editorial processes and, ideally, transparency and accountability on the part of those managing the knowledge artefacts.

The most difficult scenario is meeting the requirements for practical implementation, where governance of configurable groups of various assets is required. In openEHR we have identified the need for cohesive release sets of archetypes, templates and terminology reference sets. This becomes very complicated when the artefacts are each in various states of publication and multiple versions are in use in ‘on the ground’ implementations. Add to this the need for parallel iso-semantic and/or derived models, supporting documents, and derived outputs in various stages of publication, and you can see how quickly chaos can take over.
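To give a feel for why this becomes hard, here is a deliberately minimal, hypothetical model of a release set – invented names and states, not the actual CKM data model:

```python
from dataclasses import dataclass
from typing import List

# Deliberately simplified, hypothetical model: invented names, not the CKM data model.

@dataclass(frozen=True)
class Asset:
    asset_id: str   # e.g. "openEHR-EHR-EVALUATION.adverse_reaction_risk"
    kind: str       # "archetype" | "template" | "terminology_ref_set"
    version: str    # e.g. "v1.0.2"
    state: str      # "draft" | "in_review" | "published" | "deprecated"

@dataclass
class ReleaseSet:
    name: str
    assets: List[Asset]

    def unpublished(self) -> List[Asset]:
        """Assets that would block a coherent, implementable release."""
        return [a for a in self.assets if a.state != "published"]

release = ReleaseSet("medications_2014_Q1", [
    Asset("openEHR-EHR-INSTRUCTION.medication_order", "archetype", "v1.1.0", "published"),
    Asset("openEHR-EHR-EVALUATION.adverse_reaction_risk", "archetype", "v1.0.0", "published"),
    Asset("adverse_reaction_substances", "terminology_ref_set", "v0.9.0", "in_review"),
])

# The governance question in one line: can this release ship yet?
print([a.asset_id for a in release.unpublished()])
```

Multiply this by hundreds of assets, parallel versions already live in implementations, iso-semantic variants and derived outputs, and the need for purpose-built governance tooling becomes obvious.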

So, what does the Clinical Knowledge Manager do?

  1. CKM is an online application based on a digital asset management system to ensure that the models are easily accessed and managed within a strong governance framework.
  2. Focus:
    1. Accessible resource – creation of a searchable library or repository of clinical knowledge assets – in practice, a ‘one stop shop’ for EHR clinical content
    2. Collaboration Portal – for community involvement, and to ensure clinical models that are ‘fit for clinical use’
    3. Maintenance and governance of all clinical knowledge and related resources
  3. Processes to ensure:
    1. Asset management
      1. uploading, display, and distribution/downloading of all assets
      2. collaborative review of primary* assets  to validate appropriateness for clinical use
        1. content
        2. translation
        3. terminology binding
      3. publication life cycle and versioning of primary assets
      4. primary asset provenance, differential and change log
      5. automatic generation of secondary**/derived assets or, alternatively, upload and versioning when auto generation is not possible
      6. upload of associated***/related assets
      7. development of versioned release sets of primary assets for distribution
      8. identify related assets
      9. quality assessment of primary assets
      10. primary asset comparison/differentials including compatibility with existing data
      11. threaded discussion forum
      12. flexible search functionality
      13. coordinate Editorial activity
      14. share notification of assets with others, e.g. via email, Twitter etc
    2. User management
    3. Technical management
    4. Reporting
      1. Assets
      2. Users
      3. Editorial activity support

In the current openEHR CKM, the assets, as classified above, are:

  1. *Primary assets:
    1. Archetypes
    2. Templates
    3. Terminology Reference Sets
  2. **Secondary assets:
    1. Mindmaps
    2. XML transforms
    3. plus ability to add transforms to many other formalisms, including CDA
  3. ***Associated assets:
    1. Design documents
    2. References
    3. Implementation guides
    4. Sample data
    5. Operational templates
    6. plus ability to add others as identified

While CKM is currently openEHR-focused – management of openEHR artefacts was the original reason for its development – with some work the same repository management, collaboration/validation and governance principles and processes identified above could be applied to any knowledge asset, including all flavours of detailed clinical models and other clinical knowledge assets being developed by CIMI, HL7 etc. Yes, CKM is currently a proprietary product, but only because that was the only way to progress the work at the time – business models can always potentially be changed 🙂

It will be interesting to see how thinking progresses in the CIMI group and in others going down this path – such as the HL7 templates registry and the proposed OHT Heart project.

We can keep re-inventing the wheel and take the ‘not invented here’ point of view, or we can explore models to collaborate on and enhance the work already done.