Facial Recognition and Machine Learning Improve ICU Patient Safety

Japanese scientists have developed an automated system that uses facial recognition technology to predict unsafe behaviour in intensive care unit (ICU) patients with moderate (75%) accuracy.

This automated risk-detection tool has great potential as a continuous monitor of patient safety. Hospital care units are often staffed below required levels, so such a tool could be a significant help. The research was presented at the annual meeting of the European Society of Anaesthesiology.

"Using images we had taken of a patient's face and eyes, we were able to train computer systems to recognise high-risk arm movement", says Dr Akane Sato from Yokohama City University Hospital, Japan, who led the research.

"We were surprised about the high degree of accuracy that we achieved, which shows that this new technology has the potential to be a useful tool for improving patient safety, and is the first step for a smart ICU which is planned in our hospital."

Critically ill patients in the ICU are routinely sedated to prevent pain and anxiety, allow invasive procedures, and improve patient safety. Providing patients with an optimal level of sedation is a major challenge: inadequately sedated patients are highly likely to display high-risk behaviour.

Twenty-four postoperative patients admitted to the ICU between June and October 2018 were included in the study. The average age of these patients was 67 years.

"The proof-of-concept model was created using pictures taken by a camera mounted on the ceiling above patients' beds. Around 300 hours of data were analysed to find daytime images of patients facing the camera in a good body position that showed their face and eyes clearly," according to EurekAlert!.

The researchers applied machine learning to 99 of these images, a process analogous to how the human brain learns new information. This produced a model that could warn of high-risk behaviour from a subject's facial appearance with high accuracy.
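The article does not specify which learning algorithm the team used, so the following is only an illustrative sketch of the general idea: a from-scratch logistic regression classifier trained on 99 synthetic feature vectors (hypothetical stand-ins for face/eye features extracted from images), labelled safe versus high-risk. All names and data here are invented for illustration.

```python
# Hypothetical sketch of supervised risk classification; the actual model
# in the study is not described, and real inputs would be image features.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(features, labels, lr=0.1, epochs=200):
    """Fit logistic-regression weights by stochastic gradient descent."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (high-risk) or 0 (safe) for a feature vector x."""
    return 1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0

# Synthetic stand-in for 99 labelled images: two invented features per
# sample (e.g. an arm-position score and an eye-openness score).
random.seed(0)
data = ([([random.gauss(1.0, 0.3), random.gauss(1.0, 0.3)], 1) for _ in range(50)]
        + [([random.gauss(-1.0, 0.3), random.gauss(-1.0, 0.3)], 0) for _ in range(49)])
X, y = [d[0] for d in data], [d[1] for d in data]
w, b = train(X, y)
accuracy = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

In practice a system like the one described would more likely use a deep convolutional network on raw camera frames, but the training loop above captures the same supervised principle: labelled examples in, a decision function out.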

"Various situations can put patients at risk, so our next step is to include additional high-risk situations in our analysis, and to develop an alert function to warn healthcare professionals of risky behaviour. Our end goal is to combine various sensing data such as vital signs with our images to develop a fully automated risk prediction system", says Dr Sato.

The researchers note limitations in using images of patients in different positions, and say the method's generalisability to real-life settings needs to improve. They also suggest that accuracy could be improved by monitoring patients' level of consciousness, to distinguish high-risk behaviour from voluntary movement.
