Imagine these three scenarios.
1. A patient stops by a busy urgent care center with concerning flu-like symptoms. Rather than waiting a couple of hours just to start being seen by the nurse, the patient sits at a kiosk and interacts with an artificial intelligence (AI) assistant that asks questions tailored to her signs and symptoms. Following easy on-screen instructions, she can even take her own temperature, capture pictures of her eardrums using a digital otoscope and record her heart and lung sounds. The AI system records this information, infers a probable diagnosis of influenza and sends it to the clinician's electronic medical record (EMR) with the Subjective Objective Assessment Plan (SOAP) note mostly filled out. By the time the patient sees the clinician, most of the documentation is done and the visit can focus on discussing the diagnosis and treatment plan.
2. Another busy patient is on a ranch and has developed a rash. It's an hour's drive to the nearest doctor or urgent care, so instead the patient uses his phone to set up a telehealth visit with a doctor right then. The doctor can review the patient's history pulled from the medical record, look at a picture of the rash uploaded by the patient and converse with the patient over video. The documented note is then sent back to the EMR to join the rest of the patient's record, and the patient's regular primary care provider can access this encounter for follow-up.
3. A busy doctor is seeing 30 patients a day in the clinic. Documentation and order entry are time-consuming, so this doctor uses her phone and a special app to record the patient-physician encounter and turn the recording into a SOAP note using natural language processing and machine learning. In addition, orders for labs and x-rays are captured during the encounter and sent to the EMR along with the SOAP note.
These scenarios are not science fiction. They represent new modalities of capturing patient information outside of the typical workflow, which I will call external apps for now. These modalities are working toward interoperability with the rest of the patient's health information, which is typically stored in medical record and claims databases. As their prevalence grows within the health IT ecosystem, it is important to understand how standards are being leveraged to integrate these applications.
Some data sets are easier to capture and integrate into these external apps. Here are some of the types of patient information being sent from and received by these external applications:
HL7 V2 is often used to extract basic patient demographics and other patient administration (ADT) data for use in these external applications. For example, by the time the doctor walks into the patient's room to record the encounter, the external apps described above have already pulled visit and demographic information from the medical record. These external apps also want to be able to book appointments and be notified of visits through the clinicians' schedules, typically maintained in the EMR.
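As a rough illustration, the sketch below (using made-up message content and plain string handling rather than a production HL7 library) shows how an external app might pull demographics out of a V2 ADT message it receives from the EMR:

```python
# Minimal sketch (not a full HL7 v2 parser): extract a few demographic
# fields from the PID segment of an ADT^A01 message. Field positions follow
# HL7 v2 PID conventions; the sample values are illustrative.

SAMPLE_ADT = "\r".join([
    "MSH|^~\\&|EMR|HOSPITAL|KIOSK_APP|CLINIC|202301151200||ADT^A01|MSG0001|P|2.5.1",
    "PID|1||12345^^^HOSPITAL^MR||DOE^JANE||19850210|F",
    "PV1|1|O|URGENTCARE^^^HOSPITAL",
])

def parse_pid_demographics(message: str) -> dict:
    """Return basic demographics from the first PID segment found."""
    for segment in message.split("\r"):
        fields = segment.split("|")
        if fields[0] == "PID":
            last, _, first = fields[5].partition("^")
            return {
                "mrn": fields[3].split("^")[0],     # PID-3: patient identifier
                "name": f"{first} {last}".title(),  # PID-5: patient name
                "birth_date": fields[7],            # PID-7: date of birth
                "sex": fields[8],                   # PID-8: administrative sex
            }
    raise ValueError("No PID segment found")

print(parse_pid_demographics(SAMPLE_ADT))
# {'mrn': '12345', 'name': 'Jane Doe', 'birth_date': '19850210', 'sex': 'F'}
```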
When exchanging more clinically oriented data, additional challenges arise in the semantic interoperability of these apps with EMRs. Structured and often codified information such as the patient's problems, procedures, medications and allergies is shared and frequently runs into these semantic interoperability challenges. However, there are several standard terminologies that can be used for coding to mitigate them. The examples below highlight the role of standards in ensuring semantic interoperability.
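As a simplified sketch of what that mapping step can look like, an external app might translate its internal labels into standard terminology codes before handing data to the EMR. The local labels, the mapping table and the specific codes below are illustrative placeholders, not a vetted value set:

```python
# Illustrative terminology mapping: app-local labels -> standard codes.
# Codes shown are examples only and should be validated against the
# relevant code systems before real use.

LOCAL_TO_STANDARD = {
    # local label      (code system,                code,       display)
    "body temp":   ("http://loinc.org",        "8310-5",   "Body temperature"),
    "heart rate":  ("http://loinc.org",        "8867-4",   "Heart rate"),
    "pcn allergy": ("http://snomed.info/sct",  "91936005", "Allergy to penicillin"),
}

def to_coded_element(local_label: str) -> dict:
    """Translate an app-local label into a FHIR-style CodeableConcept."""
    system, code, display = LOCAL_TO_STANDARD[local_label]
    return {"coding": [{"system": system, "code": code, "display": display}]}

print(to_coded_element("body temp"))
```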
For the free text notes generated by each of these systems, the HL7 CCD document format serves as a syntactic bridge, allowing notes to be extracted from these external apps and sent to the EMR repository, as well as received by these apps.
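To make that concrete, here is a minimal sketch of reading a section's narrative out of a CCD on the receiving side. The XML is a trimmed, illustrative fragment, not a complete or conformant CCD document:

```python
# Pull the narrative text out of one section of a (trimmed, illustrative)
# CCD/CDA document. CDA documents use the urn:hl7-org:v3 XML namespace.
import xml.etree.ElementTree as ET

CDA_NS = {"cda": "urn:hl7-org:v3"}

SAMPLE_CCD = """<ClinicalDocument xmlns="urn:hl7-org:v3">
  <component><structuredBody><component>
    <section>
      <title>Assessment and Plan</title>
      <text>Probable influenza. Supportive care and follow-up in 48 hours.</text>
    </section>
  </component></structuredBody></component>
</ClinicalDocument>"""

root = ET.fromstring(SAMPLE_CCD)
for section in root.iter("{urn:hl7-org:v3}section"):
    title = section.find("cda:title", CDA_NS).text
    narrative = section.find("cda:text", CDA_NS).text
    print(f"{title}: {narrative}")
```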
There are several interface options when trying to sync an external app with the EMR. Where does FHIR come into play? These application vendors are beginning to use FHIR to transmit and receive information from various healthcare IT applications, such as medical records and payer data repositories. However, FHIR has not yet reached ubiquitous use across all EMRs, so proprietary APIs are still the predominant means of interfacing. Many major vendors offer app store-style marketplaces that these external vendors work with to achieve interoperability. These marketplaces may use FHIR interfaces, but proprietary EMR APIs still appear to be more prominent.
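For the FHIR path specifically, the interaction is a plain REST call. Below is a minimal sketch assuming a hypothetical FHIR R4 endpoint and patient ID, and ignoring the authorization handshake (for example, SMART on FHIR/OAuth 2.0) that a real EMR integration would require:

```python
# Fetch a Patient resource over FHIR's REST API.
# The base URL and patient ID are placeholders for illustration only.
import requests

FHIR_BASE = "https://ehr.example.com/fhir"  # placeholder EMR endpoint
PATIENT_ID = "12345"                        # placeholder logical ID

response = requests.get(
    f"{FHIR_BASE}/Patient/{PATIENT_ID}",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
response.raise_for_status()

patient = response.json()
name = patient["name"][0]
print(" ".join(name.get("given", [])), name.get("family", ""))
```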
Another potential channel that external apps are interested in using is the nationwide health information exchange services. Some of these initiatives already aggregate patient information from EMRs and can serve as a data source for these applications.
These examples of new technology and healthcare interactions are just the tip of the iceberg: we will increasingly see all kinds of patient data being captured through novel interfaces external to the traditional EMR. As standards like FHIR mature and are more widely adopted, the hope is that a single interface will be enough to interoperate with any EMR.
The views and opinions expressed in this blog or by commenters are those of the author and do not necessarily reflect the official policy or position of HIMSS or its affiliates.
Originally published January 15, 2019.