Google Glass for EMS
By Christopher Matthews
Reprinted with permission from The Unwired Medic
My esteemed colleague, Greg Friese, has drawn my attention to an article about a new partnership for augmented reality and CPR with an app called CPRGlass. He asked what benefit I would see in using Google Glass in EMS. I immediately started running through the scenarios I had been mulling over in the months since I first heard of this consumer-grade device.
Google Glass isn’t the first viable wearable computer:
At the past two Consumer Electronics Shows (CES) in Las Vegas, I have visited the Verizon booth and demoed two headset computers with camera systems and voice-only interaction. Like Google Glass, these don’t actually have cellular capability of their own; they rely on you connecting to a WiFi hotspot or tethering via Bluetooth to your 4G smartphone. This is great, since we don’t want the RF radiation from a cell phone right on our heads (unrealized risks of cancer, and all). These systems were designed with the military and public safety in mind, and they are considerably more robust (technically and physically) and more expensive than the consumer-oriented Google Glass. I tried out two different wearable computers: the Kopin Golden-i (at CES 2012) and the Motorola HC1 Wearable Computer (at CES 2013), which the Motorola and Verizon reps tell me is a later evolution of the Golden-i. The Golden-i was the device I was most excited about due to its lighter weight, and it has undergone a couple of revisions to make it less military and more industrial, although both models were fantastic and have great potential in public safety and telemedicine. Both models even had interchangeable cameras, so you could switch to thermal imaging or add a second camera type to adapt the system to the mission.
Back to the point:
Well, Google Glass in EMS is the subject of this blog post, so let me get back to it. What uses can I envision for Google Glass in EMS? Let’s start with using it as a heads-up display (HUD) for driving the ambulance to a call or transporting a patient to an ER. One of the distractions we face in EMS is having to navigate to a call or hospital while driving, sometimes without the assistance of a crew member in the passenger seat. Why not feed a GPS map or CAD info into Glass and have it route you without having to take your eyes away from the road? Every second spent looking down at an MDT, a tablet, or even a GPS unit is another second you aren’t watching the road for dangers. You could have the vehicle feed info to your eyes to show your current speed and compass direction or GPS coordinates (useful when requesting a medevac), or have a smartphone feed GPS data when trying to arrive on foot at a SAR operation. If your agency is tech and safety savvy, then maybe they’re outfitting your rigs with vehicle-mounted FLIR, thermal imaging, or night vision cameras, so that video feed can be piped directly into the headset.
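To make the GPS readout idea concrete, here is a minimal sketch. Glass runs Android, so the standard location APIs apply; the HudService class and the updateHudLine() call are hypothetical stand-ins for however a real app would actually render to the display (a GDK LiveCard, for instance).

```java
import android.app.Service;
import android.content.Intent;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import android.os.IBinder;

// Hypothetical sketch: listen for GPS fixes and format a one-line HUD readout.
public class HudService extends Service implements LocationListener {

    @Override
    public void onCreate() {
        super.onCreate();
        LocationManager lm = (LocationManager) getSystemService(LOCATION_SERVICE);
        // Request a fix every second; a real app must check location permissions first.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 1000, 0, this);
    }

    @Override
    public void onLocationChanged(Location loc) {
        double mph = loc.getSpeed() * 2.23694;   // getSpeed() reports meters/second
        float heading = loc.getBearing();        // degrees from true north
        String hud = String.format("%.0f mph  %03.0f\u00B0  %.5f, %.5f",
                mph, heading, loc.getLatitude(), loc.getLongitude());
        updateHudLine(hud);                      // hypothetical: push the text to the display
    }

    private void updateHudLine(String text) { /* render via LiveCard, etc. */ }

    @Override public void onStatusChanged(String p, int s, Bundle b) { }
    @Override public void onProviderEnabled(String p) { }
    @Override public void onProviderDisabled(String p) { }
    @Override public IBinder onBind(Intent i) { return null; }
}
```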
What about arrival on scene?
You walk into a residence and see medication bottles scattered everywhere. Why not tell Google Glass to use its camera to catalog the meds? With a simple voice command, you tell it to capture as you look at each bottle label. The image is then scanned with OCR (optical character recognition, which “reads” the image for text), and the Rx is added to your ePCR (electronic patient care report) under patient meds. You then issue a voice command to cross reference the meds for interactions and discover that two of them have serious interaction potential, something that wouldn’t necessarily have been caught by a pharmacist, since the meds came from two completely different pharmacy chains. You then locate the patient’s ID and insurance cards, and they are captured and uploaded into the ePCR too. You may have just saved hundreds of keystrokes, not to mention the time it would take to input this info. You could also dictate your report narrative to Glass and have it add the text to the ePCR.
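As a rough sketch of that capture-and-cross-reference pipeline: the OcrEngine, InteractionChecker, and Epcr interfaces below are hypothetical stand-ins for whatever OCR library, drug-interaction database, and ePCR system an agency actually licenses.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical pipeline sketch: photo -> OCR -> medication list -> ePCR + interaction check.
public class MedCatalog {

    // Stand-ins for a real OCR library, drug-interaction database, and ePCR client.
    interface OcrEngine { String readText(byte[] jpegImage); }
    interface InteractionChecker { List<String> findInteractions(List<String> meds); }
    interface Epcr { void addMedication(String med); }

    private final OcrEngine ocr;
    private final InteractionChecker checker;
    private final Epcr epcr;
    private final List<String> meds = new ArrayList<>();

    MedCatalog(OcrEngine ocr, InteractionChecker checker, Epcr epcr) {
        this.ocr = ocr;
        this.checker = checker;
        this.epcr = epcr;
    }

    // Called once per "capture" voice command, with the camera frame.
    public void captureBottle(byte[] jpegImage) {
        String label = ocr.readText(jpegImage);
        String med = parseDrugName(label);
        if (med != null) {
            meds.add(med);
            epcr.addMedication(med);
        }
    }

    // Called by the "cross reference" voice command.
    public List<String> crossReference() {
        return checker.findInteractions(meds);
    }

    // Naive parse for illustration: assume the drug name leads the label's second line.
    private String parseDrugName(String label) {
        String[] lines = label.split("\n");
        return lines.length > 1 ? lines[1].trim().split("\\s+")[0] : null;
    }
}
```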
Let’s say you are quite a distance from your trauma center and you have a vehicle collision with rollover and a known ejection. You can start streaming scene footage to the trauma center as you pull up on scene. The trauma team now sees the MOI (mechanism of injury), the extent of damage to the vehicle, and where the patient is located in relation to it. They’ll see your assessment. You can consult right over the air with your trauma surgeon and decide whether a trauma team activation is warranted. The team will see all your vitals and interventions, and the doctor can provide online medical control. While you were walking around the scene, maybe the vehicles’ license plates were captured, and their data is being retrieved to help identify potential victims and cross reference that info with previous patient records.
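Most of that is heavy lifting for the video and telemetry vendors, but one small piece, pushing a vitals snapshot ahead to the trauma center, might look like this sketch. The endpoint URL and JSON field names are invented for illustration; a real system would use the hospital’s secure, authenticated telemetry service.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: POST a vitals snapshot to a trauma-center telemetry endpoint.
public class VitalsUplink {

    // Invented endpoint for illustration only.
    private static final String ENDPOINT = "https://traumacenter.example.org/telemetry";

    public static void sendVitals(int hr, int sbp, int dbp, int spo2) throws Exception {
        String json = String.format(
            "{\"hr\":%d,\"bp\":\"%d/%d\",\"spo2\":%d,\"ts\":%d}",
            hr, sbp, dbp, spo2, System.currentTimeMillis());

        HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(json.getBytes(StandardCharsets.UTF_8));
        }
        if (conn.getResponseCode() != 200) {
            throw new RuntimeException("Telemetry push failed: " + conn.getResponseCode());
        }
    }
}
```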
You could also use the camera to collect info about a potential hazmat scene or MCI (mass casualty incident) and have the data relayed back to your communications center or PSAP (public safety answering point), so they can put resources on standby and provide the data to the incident commander for up-to-the-minute situation reports.
Incident Command:
Glass could give an IC the chance to view sitreps, stream a scene flyover from a drone, track accountability, access WebEOC and other critical incident management systems (CIMS), view CAD data, track patients and triage statuses, see hospital bed availability, and much more.
Connecting it with other technologies and devices:
Modern digital cameras can see into the IR spectrum. They may not see it as intensely as a dedicated IR camera does, but they do pick up the IR range (try looking at a TV remote control through your cell phone camera as you push buttons on the remote, and you should see the LED in the front of the remote illuminate or flicker). Maybe switching to an IR filter would alert you to a dangerous hotspot. The Golden-i and HC1 had interchangeable cameras with a military/law enforcement-grade spec that used thermal imaging and IR to detect heat signatures behind light barriers, residential walls, and even vehicle doors. This could alert you to the presence of safety risks (aggressors, fires, IEDs) and unrecognized patients, like a person who was ejected and is outside human visual range. There is even a functional project to detect radiation via Android and iOS smartphone cameras.
Glass could also be tethered via Bluetooth or short-range WiFi to hazmat sensors; when a dangerous level of, say, carbon monoxide is detected, the sensor would push an alert to the display, as in the sketch below. Many defib/monitors have Bluetooth capability today, so Glass could receive a 12-lead, or flash an alert when a lethal dysrhythmia, ST-segment changes, or an out-of-range vital sign is detected.
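The sensor side could be as simple as a threshold loop. In this sketch, the SensorFeed interface is a hypothetical stand-in for whatever Bluetooth/WiFi API the sensor vendor provides, and the alarm threshold follows OSHA’s 50 ppm permissible exposure limit for CO (check your local guidance):

```java
// Hypothetical sketch: watch tethered sensor readings and raise a HUD alert
// when a threshold is crossed. The sensor interface and alert call are illustrative.
public class SensorWatch {

    // Stand-in for the vendor's Bluetooth/WiFi sensor API.
    interface SensorFeed { double nextReading() throws InterruptedException; }

    // OSHA's permissible exposure limit for CO is 50 ppm (8-hour average).
    private static final double CO_ALARM_PPM = 50.0;

    public static void watchCarbonMonoxide(SensorFeed co) throws InterruptedException {
        while (true) {
            double ppm = co.nextReading();   // blocks until the tethered sensor reports
            if (ppm >= CO_ALARM_PPM) {
                alert(String.format("CO %.0f ppm - EVACUATE / DON SCBA", ppm));
            }
        }
    }

    private static void alert(String msg) {
        // Hypothetical: flash the message on the Glass display and vibrate.
        System.out.println("ALERT: " + msg);
    }
}
```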
On the educational front:
Google Glass can capture and stream video and audio of EMS interns going through scenarios. Using this tool, an instructor can review the call and provide guidance, both positive and negative, so the intern can grow into a better practitioner. Other interns can critique the call and provide team feedback. The camera goes wherever the intern is looking: maybe they never looked at a crucial part of the scene or patient, so they went down the wrong track. You can use the footage to teach situational awareness and to point out tunnel vision during patient assessment.
I’ve even considered having the instructor wear it to control simulation aids like technology-enabled manikins. That seems less practical, though, since the instructor would have to speak the manikin’s next action aloud, telegraphing to the interns what they would need to watch for next.
In closing:
We’re in a sort-of Star Trek generation. Things that sci-fi has speculated about for decades are becoming reality today. This is a fantastic opportunity to expand the role of technology in EMS and frontline medicine. I know many are opposed to the idea of adding even more tech to what we do, but none of this replaces the human practitioner; nothing should ever replace the human element. What tech does is provide more tools to get the job done, with more comprehensive data and sensory feedback, and it improves the odds of a better outcome for our patients. I’m all for that. Maybe next, we’ll be beaming our patients to the ER (or more likely, the ER triage waiting room).
I’d love to hear what you think about the possibilities Google Glass presents for us; even if you’re against the idea of using it, tell me why. Leave a comment, or write your own blog post and link to mine. Thanks for reading and considering!