Fractal exam findings II

In testing out the physical examination CLUSTER archetypes, the ones highlighted in green in the mind map below are currently present in the Examination Findings project on the openEHR CKM.
Mind map: Physical examination findings

Most of them follow the pattern identified in the ‘Examination of XYZ’ archetype (CLUSTER.exam_xyz).

Each has been developed using the base pattern only; concept-specific detail will be added as requirements are identified from real-world implementations. Until the detailed requirements are identified, the base pattern enables simple data for each clinical concept to be represented consistently.

Clearly the mind map above does not represent all of the physical examination domain. The scope and diversity of clinical content in this domain will grow over time, and the concept-specific detail within each model will be added as backwards-compatible revisions of these archetypes. In this way they will evolve organically to suit clinical requirements, but within a tightly governed environment.

A number of archetypes have been identified that have special requirements in addition to the ‘Examination of XYZ’ archetype pattern. These include:

  • Examination of body sites that are bilateral

In this example the ‘Examination of both eyes’ archetype can nest the ‘Examination of an eye’ archetype. This allows comparison of one eye with the other, and the same approach will be applicable to examination of any bilateral body site – ears, eyes, breasts, nipples etc. The ‘Examination of an eye’ archetype then records the findings, including identification of which side is being examined.

You might also notice that the ‘No abnormality detected’ data element has brackets around the name. This is because an additional run-time name constraint has been applied that will allow clinicians to express the normal statement as PERL (or PERLA) – pupils are equal and reactive to light (and accommodation).
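To make the nesting and the run-time renaming concrete, here is a minimal sketch in Python pseudostructures. The archetype identifiers, element names and default values are assumptions for illustration only, not the published CKM definitions:

```python
# Illustrative sketch only: archetype IDs and element names are assumptions,
# not the published CKM definitions.

def exam_eye(side, nad_label="No abnormality detected"):
    """One instance of a hypothetical 'Examination of an eye' CLUSTER,
    with a run-time name constraint applied to the NAD element."""
    return {
        "archetype_id": "openEHR-EHR-CLUSTER.exam_eye.v0",   # assumed ID
        "name": f"Examination of {side} eye",                # run-time rename
        "items": {
            nad_label: True,            # e.g. renamed to 'PERL' or 'PERLA'
            "Clinical description": None,
        },
    }

# 'Examination of both eyes' nests one 'Examination of an eye' per side,
# which is what allows one eye to be compared with the other.
exam_both_eyes = {
    "archetype_id": "openEHR-EHR-CLUSTER.exam_eyes.v0",      # assumed ID
    "items": [
        exam_eye("right", nad_label="PERL"),
        exam_eye("left", nad_label="PERL"),
    ],
}

print(exam_both_eyes)
```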

  • Examination of skin

Examination of the skin needs some adaptation. Firstly, we need to be able to identify the exact area of skin being examined – this could be precise, in terms of a well-known anatomical site, for example the cubital fossa of the right arm, or it may need to be described in more generic terms, for example 5 cm proximal to the medial epicondyle of the right arm.

Secondly, we may need to examine a lesion or a wound, both of which have characteristics that warrant their own archetypes; or some other abnormality that we need to describe from first principles; or record an assessment such as five keratoses plus one naevus being present in the region. There are many other ways a clinician may want to record skin examination findings. This adaptation of the XYZ pattern tries to provide a pattern flexible enough to support all of these styles of recording.
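As a rough illustration of that adaptation – again as a Python pseudostructure, with every archetype identifier, element name and example value assumed – the skin cluster carries the examined area, described precisely or in free text, plus a slot into which lesion or wound clusters can be nested:

```python
# Illustrative sketch only; archetype IDs and element names are assumptions.

def exam_skin(body_site=None, body_site_description=None, findings=None):
    """Hypothetical 'Examination of skin' CLUSTER: the examined area can be
    named precisely or described in free text, and lesion/wound clusters
    can be nested into the findings slot."""
    return {
        "archetype_id": "openEHR-EHR-CLUSTER.exam_skin.v0",   # assumed ID
        "Body site": body_site,                  # e.g. "Right cubital fossa"
        "Body site description": body_site_description,
        "Examination findings": findings or [],  # SLOT for nested CLUSTERs
    }

lesion = {
    "archetype_id": "openEHR-EHR-CLUSTER.exam_lesion.v0",     # assumed ID
    "Clinical description": "Solitary pigmented naevus, 4 mm diameter",
}

record = exam_skin(
    body_site_description="5 cm proximal to the medial epicondyle, right arm",
    findings=[lesion],
)
print(record)
```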

  • Examination of the fetus in utero

A fetus in utero can be examined as part of an abdominal examination or as part of a vaginal examination during labour.


Examination of the fetus – recorded as part of an abdominal examination

In this first example above, the ‘Palpation of the fetus’ archetype is inserted into the ‘Examination of the abdomen’ archetype. Note that the ‘Vaginal findings’ cluster is greyed out, as it is not active for this particular clinical scenario.


Examination of the fetus – recorded as part of a vaginal examination, perhaps during labour.

And in this second example, the ‘Palpation of the fetus’ archetype is inserted into the ‘Palpation of the cervix’ archetype, ensuring that the recording of fetal findings on examination is kept within the context of a vaginal examination. In this example, the ‘Abdominal findings’ cluster has been made inactive and the ‘Vaginal findings’ cluster will be used instead.
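A minimal sketch of the two scenarios, where the archetype identifiers, element names and example values are all assumptions made to mirror the description above: the same fetal palpation cluster is nested under a different parent in each template, and whichever findings cluster is not needed is left inactive:

```python
# Illustrative sketch only; archetype IDs, element names and the
# deactivation mechanism are assumptions mirroring the description above.

def palpation_of_fetus(active):
    """Hypothetical 'Palpation of the fetus' CLUSTER containing both an
    'Abdominal findings' and a 'Vaginal findings' cluster; the template
    keeps only the one that is active for the scenario."""
    all_clusters = {
        "Abdominal findings": {"Lie": "Longitudinal",
                               "Presentation": "Cephalic"},
        "Vaginal findings": {"Station": "-2",
                             "Presenting part": "Vertex"},
    }
    return {
        "archetype_id": "openEHR-EHR-CLUSTER.exam_fetus.v0",   # assumed ID
        **{name: data for name, data in all_clusters.items() if name in active},
    }

# Scenario 1: nested in 'Examination of the abdomen' - vaginal findings inactive.
abdominal_exam = {
    "archetype_id": "openEHR-EHR-CLUSTER.exam_abdomen.v0",     # assumed ID
    "Examination findings": [palpation_of_fetus(active={"Abdominal findings"})],
}

# Scenario 2: nested in 'Palpation of the cervix' - abdominal findings inactive.
cervix_exam = {
    "archetype_id": "openEHR-EHR-CLUSTER.exam_cervix.v0",      # assumed ID
    "Examination findings": [palpation_of_fetus(active={"Vaginal findings"})],
}

print(abdominal_exam)
print(cervix_exam)
```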

Fractal exam findings I

The fractal and complex nature of the clinician’s physical examination is an obvious benchmark test for the capability of any modelling paradigm. If you can’t model the clinical requirement for something as fundamental, yet complex and diverse as physical examination, then you need to revisit the approach entirely.

In a number of previous blog posts I’ve used the phrase ‘fractal examination findings’. Others identified the concept, and it describes the dilemma that @ianmcnicoll and I have faced in trying to work out how to represent physical examination in archetypes. In fact it has taken over 5 years of experimentation, together with collaboration from our Norwegian colleagues @siljelb and @jtvaland, to arrive at our latest solution. It has withstood our collective testing to date and is worth sharing more broadly at this point.

Whilst the solution appears mind-numbingly simple to me now, the journey of discovery has been considerably longer and more painful than anticipated. Because of this, I’d like to share the patterns as we have them now – it is preferable to try to prevent others from making the same mistakes and to build further on these learnings.

Let’s first identify the high level principles that frame the complexity surrounding structured representation of data for physical examination findings:

  • Different health professionals record the same clinical concepts to different levels of detail or granularity;
  • Different clinical purposes or contexts require recording of the same clinical concepts to different levels of detail or granularity; and
  • Each of the above statements is further compounded by the individual, clinician-to-clinician variation within the same profession, clinical context or purpose.

There will never be only one single way to record any clinical data, and clinical examination is an extreme example: it requires a robust semantic foundation with strong governance that can then be expressed in a flexible, mix’n’match approach to cater for each of the differing requirements identified above.

This is where the dual level modelling paradigm simply shines:

  • archetypes provide the solid, foundational building blocks as the source of ‘clinical truth’; and
  • templates express the variation in the patterns of archetype aggregation and constraint for any given clinical scenario.

Using this approach we can represent the clinical data requirements for physical examination findings across all professional, contextual and purpose-driven scenarios.
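To illustrate the principle only – the element names echo the examination pattern described later in this post, and the constraint mechanics are deliberately simplified Python rather than real ADL or operational template syntax – the two levels might be sketched like this:

```python
# Illustrative sketch only: names and constraint mechanics are simplified
# to show the dual-level principle, not real ADL or template syntax.

# Level 1: the archetype is the governed 'clinical truth' - the maximal
# agreed set of data points for a clinical concept.
archetype = {
    "concept": "Examination of a body part",
    "elements": {
        "No abnormality detected": "0..1",
        "Clinical description": "0..1",
        "Detailed findings": "0..*",
    },
}

def make_template(archetype, keep, mandatory=()):
    """Level 2: a template aggregates and constrains archetypes for one
    clinical scenario - here by hiding elements and tightening occurrences."""
    elements = {}
    for name, occurrences in archetype["elements"].items():
        if name not in keep:
            continue                     # element excluded for this scenario
        elements[name] = "1..1" if name in mandatory else occurrences
    return {"concept": archetype["concept"], "elements": elements}

# A generalist's quick-entry template versus a specialist's detailed one,
# both derived from the same governed archetype.
quick = make_template(archetype, keep={"No abnormality detected"},
                      mandatory={"No abnormality detected"})
detailed = make_template(archetype, keep=set(archetype["elements"]))
print(quick)
print(detailed)
```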

A single, foundation archetype

Underpinning the approach is a single ‘Physical examination findings’ archetype – OBSERVATION.exam – which is effectively a very simple framework with minimal clinical content. The primary purpose of the OBSERVATION archetype is to anchor all the other CLUSTER archetypes, which carry the detailed clinical data elements. The interchangeability and nesting capability of the CLUSTER archetypes, used in different configurations, is the key to representing the fractal detail of exam findings.


The baseline archetype for any and all physical examination – OBSERVATION.exam

In the example above, the ‘Physical Examination Findings’ archetype (OBSERVATION.exam) is the default root level archetype that provides the following data nodes:

  • ‘Description’ – to allow a simple text description of all examination findings, or to provide a single place for unstructured data from existing systems to be captured;
  • ‘Examination detail’ SLOT – this is where the ‘magic’ happens! Into this SLOT any and all clinically relevant CLUSTER archetypes can be added, or even nested within each other, to enable representation of the level of granularity and the relationships between each of the examination archetypes. In the example below, three CLUSTER archetypes representing components of an ear examination have been inserted into this SLOT.
  • ‘Interpretation’ – allows for one or more evaluative statements to be made about all examination findings. For example – ‘normal exam’ or other specific summary statements.
  • ‘Confounding factors’ – allows for statements to be made about any factors that contributed to the findings and subsequent interpretation. For example, the child was crying throughout the examination or the patient was uncooperative.
  • ‘Device details’ SLOT – allows representation of any devices that were used as part of the examination. For example, the dermatoscope used to visualise a mole on the skin or the otoscope used to take a video image of the ear drum.
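
A minimal sketch of this root structure, expressed as a Python pseudostructure rather than real ADL (the archetype version shown and the default values are assumptions):

```python
# Illustrative sketch only - a Python approximation of the data nodes,
# not real ADL; SLOT contents are filled in at template time.

def physical_examination_findings(description=None, examination_detail=None,
                                  interpretation=None, confounding_factors=None,
                                  device_details=None):
    """Hypothetical stand-in for OBSERVATION.exam: a thin framework whose
    'Examination detail' slot anchors the detailed CLUSTER archetypes."""
    return {
        "archetype_id": "openEHR-EHR-OBSERVATION.exam.v1",   # version assumed
        "Description": description,
        "Examination detail": examination_detail or [],      # SLOT: CLUSTERs
        "Interpretation": interpretation or [],
        "Confounding factors": confounding_factors or [],
        "Device details": device_details or [],              # SLOT: devices
    }

# Simplest possible use: a single narrative statement plus an interpretation.
quick_note = physical_examination_findings(
    description="Full physical examination performed.",
    interpretation=["Normal exam"])
print(quick_note)
```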

The baseline OBSERVATION.exam with three cluster archetypes which represent the rich detail required for an ear examination.

The detailed exam CLUSTER pattern

We have also identified a simple pattern that is applicable to nearly all models representing the detail of the physical examination domain:

  • ‘XYZ examined’ – clear identification of the specific XYZ examination being done or the XYZ body part being examined;
  • ‘No abnormality detected’ – to enable recording of the common shortcuts used by clinicians when recording exam findings – for example, ‘no abnormality detected’ or ‘PERLA’ as an acronym for ‘pupils equal and reactive to light and accommodation’;
  • ‘Clinical description’ – to allow recording of a textual description of all findings for this specific examination, either as the simplest type of structured data collection or as a placeholder for non-structured findings for this specific examination found in existing legacy data systems;
  • Positive statements about detailed findings for each specific examination – findings observed as ‘present’, as well as findings that it is clinically important to record as ‘not present’;
  • ‘Examination findings’ SLOT – allows other physical examination CLUSTER archetypes to be nested within the context of this CLUSTER archetype. This is the key to the fractal representation;
  • ‘Multimedia representation’ SLOT – allows insertion of a purpose-specific CLUSTER to add digital representations of the findings, including annotated diagrams, photos, videos, audio or device data;
  • ‘Clinical interpretation’ – allows one or more evaluative statements to be made about these examination findings – for example, ‘normal exam’ or other specific summary statements;
  • ‘Comment’ – the ‘get out of jail free’ data node that allows recording of any additional data that does not fit the current structured data elements; and
  • ‘Examination not done’ SLOT – allows insertion of a purpose-specific CLUSTER to make a clear statement that a specified examination was not performed.

This has become the default pattern – CLUSTER.exam_xyz – on which all recent examination CLUSTER archetypes have been built, with the ability to adapt it for specific use cases:


The simple pattern used as the basis for developing any detailed CLUSTER examination archetype.
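
As a hedged sketch of the generic pattern – a Python pseudostructure rather than ADL, with the recursive ‘Examination findings’ slot simplified to a plain list and all default values assumed:

```python
# Illustrative pseudostructure of the generic CLUSTER.exam_xyz pattern;
# not real ADL - the recursive slot is simplified to a plain list.

def exam_xyz(xyz_examined, no_abnormality_detected=False,
             clinical_description=None, detailed_findings=None,
             nested_findings=None, multimedia=None,
             clinical_interpretation=None, comment=None,
             examination_not_done=None):
    """Generic 'Examination of XYZ' cluster: the 'Examination findings'
    slot takes further exam_xyz-style clusters, which is what makes the
    pattern fractal."""
    return {
        "XYZ examined": xyz_examined,
        "No abnormality detected": no_abnormality_detected,
        "Clinical description": clinical_description,
        "Detailed findings": detailed_findings or {},      # concept-specific
        "Examination findings": nested_findings or [],     # SLOT (recursive)
        "Multimedia representation": multimedia or [],     # SLOT
        "Clinical interpretation": clinical_interpretation or [],
        "Comment": comment,
        "Examination not done": examination_not_done,      # SLOT
    }

# Nesting one cluster inside another is what gives the fractal behaviour.
inner = exam_xyz("Nail beds", clinical_description="No clubbing or cyanosis")
outer = exam_xyz("Right hand", nested_findings=[inner])
print(outer)
```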

And in the real world this can be used to represent the ear examination as per the template below, where the detail for ‘Examination of the tympanic membrane’ (or ear drum) is displayed within the ‘Physical examination findings’ OBSERVATION:


Representing an ear examination using the OBSERVATION.exam plus three CLUSTER archetypes, with the detail for examination of the tympanic membrane on display.
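
Continuing the illustration, a simplified instance view of that ear examination template might look like the sketch below. CLUSTER.exam_tympanic_membrane is named above; the other cluster names, the element names and the exact nesting arrangement are assumptions:

```python
# Illustrative only - a simplified instance view of the ear examination
# template; cluster names other than CLUSTER.exam_tympanic_membrane, the
# element names and the nesting arrangement are assumptions.
from pprint import pprint

ear_exam = {
    "archetype": "OBSERVATION.exam",
    "Examination detail": [                       # the SLOT doing the work
        {"archetype": "CLUSTER.exam_ear (assumed)",
         "Ear examined": "Right ear",
         "Examination findings": [                # nested, fractal-style
             {"archetype": "CLUSTER.exam_ear_canal (assumed)",
              "Clinical description": "Canal clear, no discharge"},
             {"archetype": "CLUSTER.exam_tympanic_membrane",
              "No abnormality detected": True,
              "Clinical description": "Pearly grey, intact"},
         ]},
    ],
    "Interpretation": ["Normal right ear examination"],
}

pprint(ear_exam)
```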

More in Fractal exam findings II

The Archetype Journey…

I’m surprised to realise I’ve been building archetypes for over 7 years. It honestly doesn’t feel that long. It still feels like we are in the relatively early days of understanding how to model clinical archetypes, to validate them and to govern them. I am learning more with each archetype I build. They are definitely getting better and the process more refined. But we aren’t there yet. We have a ways to go!

Let me try to share some idea of the challenges and complexities I see…

We can build all kinds of archetypes for different purposes.

There are the ones we just want to use for our own project or purpose, to be used in splendid isolation. Yes, anyone can build an archetype for any reason. Easy as. No design constraints, no collaboration, just whatever you want to model and as large or complex as you like.

But if you want to build them so that they will be re-used and shared, then a whole different approach is required. Each archetype needs to fit with the others around it, to complement but not duplicate or overlap; to be of the same granularity; to be consistent with the way similar concepts are modelled; to have the same principles regarding the level of detail modelled; the same approach to defining scope; and of course the same approach to defining a clinical concept versus a data element or group of data elements… The list goes on.

Some archetypes are straightforward to design and build, for example all the very prescriptive and well recognised scales like the Braden Scale or Glasgow Coma Scale. These are the ‘no brainers’ of clinical modelling.

Some are harder and more abstract, such as those underpinning a clinical decision support system of orders and activities to ensure that care plans are carried out, clinical outcomes are achieved and patients don’t ‘fall through the cracks’ during transitions of care.

Then there are the repositories of archetypes that are intended to work as a single, cohesive pool of models – each archetype representing a single clinical concept, sitting closely aligned to the next, while minimising any duplication or overlap.

Archetype ecosystem

That is a massive coordination task, and one that I hugely underestimated when we embarked on the development of the openEHR Clinical Knowledge Manager. This is especially true, more recently, of the very active development and coordination required to manage the model development, collaboration and management process within the Australian CKM – where the national eHealth program and the jurisdictions are working within the same pool of models, developing new ones for specific purposes and re-using common, shared models for different use cases and clinical contexts.

The archetype ecosystems are the hard ones: numbers of archetypes that need to work together intimately and precisely to enable accurate and safe modelling of clinical data. Physical examination is the perfect example, and one that has been weighing on my mind for some time now. I’ve dabbled with small parts of this over the years, as specific projects needed to model a small part of the physical exam here and there. My initial focus was on modelling generic patterns for inspection, palpation, auscultation and percussion – four well identified pillars of the art of clinical examination. If you take a look at the Inspection archetype, clinicians will recognise the kind of pattern that we were taught in first year of our medical or nursing degrees. And I built huge mind maps to try to anticipate how the basic generic pattern could be specialised or adapted for use in all aspects of recording the inspection component of clinical examination.

Over time, I convinced myself that this would not work, and so the ongoing dilemma was how to approach it to create a standardised, yet extraordinarily flexible, solution.

Consider the dilemma of modelling physical examination. How can we capture the fractal nature of physical examination? How can we represent the art of every clinician’s practice in standardised archetypes? We need models that can be standardised, yet we also need to be able to respond to the massive variability in the requirements and approach of each and every clinician. Each profession will record the same concept in different levels of detail, and often in a slightly different context. Each specialty will record different combinations of details. Specialists need all the detail; generalists only want to record the bare basics, unless they find something significant, in which case they want to drill down to the nth degree. And don’t forget the ability to just quickly note ‘NAD’ as you fly past to the next part of the examination; the need for rheumatologists to record a homunculus; or the requirement to add photos or annotated diagrams! Ha – modelling physical examination IS NOT SIMPLE!

I think I might have finally broken the back of the physical examination modelling dilemma just this week. Seven years after starting this journey, with all this modelling experience behind me! The one sure thing I have learned – a realisation of how much we don’t know. Don’t let anyone tell you it is easy or we know enough. IMO we aren’t ready to publish standards or even specifications about this work, yet. But we are making good, sound, robust progress. We can start to document our experience and sound principles.

This new domain of clinical knowledge management is complex; nobody should be saying we have it sorted…