Eyeris' emotion recognition AI technology: Adopted for Toyota Concept-i

Extract from TU-Automotive Japan 2017

2017/12/28

Summary

Driver monitoring is particularly important when transitioning between different levels of autonomous driving. (Source: Eyeris)

The TU-Automotive Japan 2017 event was held in Tokyo in October 2017. This report outlines the symposium lecture given by the founder and CEO of Eyeris, Mr. Modar Alaoui, entitled "Human Behavior Understanding AI inside Autonomous & Highly Automated Vehicles (HAVs)".

Eyeris was founded in 2013 to develop its EmoVu deep learning-based emotion recognition software, which reads facial micro-expressions. Integrating the EmoVu software with cameras embedded in a vehicle to monitor the driver's emotions and state of alertness greatly improves the vehicle's human machine interaction (HMI) performance.
Note: Eyeris uses the expression "human machine interaction" for the acronym HMI, which more commonly stands for "human machine interface".

EmoVu plays an active role in scenarios such as the following:

  1. At SAE automation Levels 2 to 4, there will be transitions from automated to manual operation and vice versa. When switching from autonomous mode to manual operation, the system must verify that the driver is able to operate the vehicle safely. Conversely, when the system determines that the driver is too distracted to operate the vehicle safely, it can switch to automated driving mode and take control of the vehicle.
  2. Improving the in-vehicle ambient intelligence (AmI) of highly automated vehicles: In fully automated driving, the system is in complete control of the vehicle and a human is no longer required to drive, so the vehicle's occupants are free to engage in other activities such as reading, working, and sleeping. According to Mr. Alaoui, since cars are becoming more of a commodity, it will become increasingly important to improve the productivity and comfort of the vehicle cabin. In this respect, EmoVu contributes to understanding the occupant environment to improve the in-vehicle experience.

In addition to being adopted for the Toyota Concept-i, EmoVu is being adopted or evaluated for adoption by a number of OEMs such as Honda. Toyota's MOBILITY TEAMMATE CONCEPT is aimed at developing advanced autonomous driving technologies in which humans and cars cooperate with one another. To that end, Toyota appears to consider EmoVu one of the most important elements for constructing a highly integrated HMI architecture.


Related reports:
Toyota's autonomous driving (2): Striving for "trillion-mile reliability" (March 2017)
Toyota's autonomous driving (1): More sophisticated ADAS and fully autonomous driving (February 2017)


NHTSA: Indicates the importance of driver monitoring in automated driving systems

At the beginning of his lecture, Modar Alaoui introduced the views of NHTSA (National Highway Traffic Safety Administration). In September 2016, NHTSA announced its "Federal Automated Vehicles Policy" guidance for automated driving technologies. In September 2017, NHTSA released its second federal guidance, "Automated Driving Systems 2.0: A Vision for Safety".

According to NHTSA, the interaction between the vehicle and the driver, the HMI (human machine interface), has always played an important role in the automotive design process, but the agency recognizes that new complexity is introduced as automated driving systems take on driving functions.

The HMI is critical when transitioning between automation Levels 2 to 4. At Level 3 (Conditional Automation) in particular, the system controls many of the vehicle's functions, so the driver does not have to monitor the driving environment constantly. However, the driver must be ready to take control of the vehicle whenever desired, or whenever autonomous operation is not feasible, such as when the vehicle encounters unexpected road conditions, bad weather, or new road signs.

However, a driver's ability to take back control is limited by how well they can stay alert to the driving task, and thus remain capable of quickly taking over, while not actually performing the driving task until prompted by the vehicle. NHTSA therefore encourages entities involved in the development and manufacture of autonomous vehicles to introduce driver monitoring systems that can assess a driver's awareness and readiness to perform the full driving task. Legislation mandating the adoption of driver monitoring systems is expected to be enacted in the near future.

In highly automated driving systems, if the system determines that the driver's state makes manual driving unsafe due to drowsiness, distraction, etc., it should be capable of switching to automated driving mode and taking control of the vehicle.
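As an illustration only, the handover logic described above might be structured like the following Python sketch; the state names and thresholds are hypothetical, not Eyeris' or NHTSA's actual criteria.

    from enum import Enum, auto

    class DriveMode(Enum):
        MANUAL = auto()
        AUTOMATED = auto()

    def decide_mode(current, alertness, distraction):
        """Sketch of the handover decision; scores in [0, 1], thresholds illustrative."""
        # Driver judged unsafe (drowsy or distracted): the system takes control.
        if current is DriveMode.MANUAL and (alertness < 0.3 or distraction > 0.7):
            return DriveMode.AUTOMATED
        # Driver verified alert and attentive: control may be handed back.
        if current is DriveMode.AUTOMATED and alertness > 0.8 and distraction < 0.2:
            return DriveMode.MANUAL
        return current

    print(decide_mode(DriveMode.MANUAL, alertness=0.2, distraction=0.9))
    # -> DriveMode.AUTOMATED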

Driver monitoring is critical when transitioning between autonomous driving Levels 2 to 4 (left diagram). (Source: Eyeris)



EmoVu: Software that recognizes human emotions

Eyeris developed its EmoVu software to read facial micro-expressions and recognize human emotions. The EmoVu system pairs the software with any small embedded camera, such as one installed in the dashboard, to scan the driver's face and recognize their emotions and state of alertness while driving.

Eyeris has collected over 3 million facial expression samples captured during driving. The data spans five races, four age groups, both genders, 10 lighting conditions, 13 frontal and non-frontal head poses, and various camera configurations (mounting position, resolution, etc.), and includes accessories such as eyeglasses, hats, and sunglasses, as well as a variety of facial occlusions, for example a driver holding a cellphone to their ear. Eyeris developed the EmoVu software from this dataset of over 3 million facial expressions using deep learning methods.

Emotions are hard-wired into the human brain at birth, and emotional states manifest in facial expressions even before a person is aware of them. Research has long supported the universality of human emotional states regardless of age, gender, or race. By analyzing facial micro-expressions, it is therefore possible to derive much information about the driver's emotions and behavior. The EmoVu system classifies human facial expressions as joy, surprise, sadness, disgust, fear, and anger.
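Eyeris has not published EmoVu's internals, but emotion classifiers of this kind typically end in a softmax over the class labels. The following minimal Python sketch shows that final step with made-up network outputs; it is not EmoVu's actual code.

    import math

    EMOTIONS = ["joy", "surprise", "sadness", "disgust", "fear", "anger"]

    def softmax(logits):
        # Subtract the max logit for numerical stability before exponentiating.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Made-up outputs for one face image; a real deep network would produce these.
    logits = [2.1, 0.3, -1.0, -0.5, 0.1, 1.4]
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    print(EMOTIONS[best], round(probs[best], 2))  # joy 0.52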

In addition to facial expressions, EmoVu monitors head position, posture, and movement, as well as the eye region (eye openness, gaze direction, drowsiness, blinking) to determine how alert the driver is to the surrounding environment. Frequent blinking is associated with drowsiness, and yawning can indicate fatigue. According to Eyeris, combining these signals allows the EmoVu software to achieve an unprecedented level of facial analysis accuracy.
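EmoVu's exact alertness metrics are not disclosed. A common camera-based drowsiness measure built from the same eye signals is PERCLOS, the fraction of time the eyes are mostly closed over a window; the sketch below computes it from a synthetic per-frame eye-openness stream.

    def perclos(eye_openness, closed_threshold=0.2):
        """Fraction of frames whose eye openness falls below the threshold.

        eye_openness holds per-frame values in [0, 1], with 1.0 fully open;
        the values and threshold here are illustrative.
        """
        closed = sum(1 for o in eye_openness if o < closed_threshold)
        return closed / len(eye_openness)

    # Synthetic 10-frame window: mostly open eyes with two near-closures.
    window = [0.9, 0.8, 0.85, 0.1, 0.15, 0.9, 0.88, 0.92, 0.87, 0.9]
    print(f"PERCLOS = {perclos(window):.2f}")  # 0.20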

In addition, what is ultimately important for judging the condition of the driver is the system's ability to recognize 1) Driver Inattention (carelessness and indifference), 2) Cognitive Awareness (the ability to recognize and judge situations), and 3) Emotional Distraction (distraction that occurs when confused or upset). Humans perceive these states naturally, but until now they have been difficult for computers to recognize.

Image of deep learning | Eyeris has collected over 3 million facial expressions during driving | Emotions judged by adding head pose, eye state, etc. to facial expressions

Source: Eyeris

EmoVu is embedded into existing systems such as cameras

EmoVu was designed with the user's embedded systems in mind. It requires minimal processing power and is optimized for integration into most embedded platforms, such as microcontroller units (MCUs), systems-on-a-chip (SoCs), and digital signal processors (DSPs). EmoVu is compatible with any camera.
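As a rough illustration of this camera-agnostic design, the sketch below grabs a frame from whatever camera the operating system exposes (using OpenCV) and hands it to a placeholder analysis function; the analyze() function is invented, since EmoVu's actual API is not public.

    import cv2  # OpenCV: reads from any camera the platform exposes

    def analyze(frame):
        # Placeholder for the embedded analysis call; EmoVu's real API is not public.
        height, width = frame.shape[:2]
        return {"frame_size": (width, height)}

    cap = cv2.VideoCapture(0)  # index 0: whichever camera is attached
    try:
        ok, frame = cap.read()
        if ok:
            print(analyze(frame))
    finally:
        cap.release()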

Eyeris provides only the facial recognition software, which is one of the reasons it is being adopted by OEMs and Tier 1 suppliers: it can be rapidly adapted to a user's embedded equipment and systems. In addition, the software can be easily customized for the various needs of individual users and usage environments.

The EmoVu software includes algorithms for continuous learning, which is important for improving the system's performance on an everyday basis. Eyeris is also planning to upgrade its system with over-the-air (OTA) update capability.

Data is not sent to the cloud but processed locally on embedded systems

Data collected from the sensors embedded in a vehicle is processed locally by the EmoVu system rather than sent to the cloud for processing. The speed of data analytics is particularly important: there is no time to send data on the driver's situation to the cloud for processing, because an instantaneous judgment is required when the car needs to switch between manual and autonomous modes. That requires local processing, which in turn demands optimizing the solution for a very narrow set of signals.
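To illustrate why local processing matters, here is a sketch of a per-frame loop that checks analysis time against a latency budget; the 30 fps budget and the stub analyzer are invented for illustration and are not Eyeris' published figures.

    import time

    FRAME_BUDGET_S = 1 / 30  # hypothetical budget: keep pace with a 30 fps camera

    def analyze_frame(frame_id):
        # Stand-in for on-device inference; no round trip to the cloud.
        time.sleep(0.005)  # pretend the embedded model takes about 5 ms
        return {"alertness": 0.9}

    for frame_id in range(3):  # a real loop would read frames from the camera
        start = time.perf_counter()
        result = analyze_frame(frame_id)
        elapsed = time.perf_counter() - start
        assert elapsed < FRAME_BUDGET_S, "analysis must fit within the frame budget"
        print(f"frame {frame_id}: {elapsed * 1000:.1f} ms, alertness={result['alertness']}")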



Increasing the "safety", "productivity" and "comfort" of the vehicle interior in highly automated vehicles

Research has indicated that the driving experience will change significantly when fully automated driving is realized because a human will no longer be required to drive the vehicle, freeing the vehicle occupants to engage in other activities such as eating, working, using the telephone, and sleeping. Eyeris has developed its software to determine what the occupants are doing by analyzing body movements as well as facial expressions.

According to Modar Alaoui, cars are becoming more of a commodity regardless of whether they are driven in automated or manual modes, so it is important to provide technologies that let occupants use the time spent in a vehicle more effectively and comfortably. As vehicles become more and more autonomous, the focus will shift from "driver monitoring" to "occupant monitoring".

Occupant monitoring technologies for highly automated vehicles (HAVs) aim to optimize 1) the safety, 2) the productivity (making the vehicle interior a highly productive space), and 3) the comfort of the vehicle's interior. With the exception of vehicles equipped with driver monitoring systems, few models on the market have in-vehicle cameras. But by integrating AI-based facial recognition software with in-vehicle cameras, it is now possible to achieve an unprecedented level of occupant monitoring and improve the in-vehicle experience. In the future, the EmoVu software is expected to be combined with entertainment and infotainment systems, using multiple cameras installed in the vehicle interior, for occupant monitoring tasks such as watching over child and infant passengers.

Note: This function is referred to as "ambient intelligence". In developed countries, people are already surrounded by intelligent, intuitive interfaces embedded in the everyday objects and physical environments around them. These interfaces recognize and respond to the presence and behavior of an individual in a personalized, relevant way to provide safer, more secure, and more comfortable living environments.
EmoVu is being evaluated for adoption in a wide range of applications, including smart homes and robots that support homes and individuals, as well as in-vehicle applications.

Image of highly automated vehicle interior | Occupant activities in highly automated vehicles | AI targeting the interior of highly automated vehicles

Source: Eyeris



Toyota's way of thinking on autonomous driving: MOBILITY TEAMMATE CONCEPT

The "MOBILITY TEAMMATE CONCEPT" (bottom left diagram) gives insight into Toyota’s way of thinking about autonomous driving. Toyota is aiming for autonomous driving where humans and cars cooperate with one another.

This direction is also reflected in Toyota's approach to SAE Level 3. The SAE defines Level 3 as meaning that "autonomous driving systems are required to constantly monitor for situations where the driver must take control of the vehicle, and revert control to the driver in such situations." However, it is believed to be extremely difficult for drivers to take back control in time-critical situations after having allowed the system to take over, leading some OEMs, such as Ford and Volvo, to skip Level 3 and instead aim straight for the development of Level 4 and 5 autonomous driving systems.

Toyota aims to realize Level 3 autonomous driving by establishing a cooperative, trusting relationship between humans and automobiles, and will aim to develop fully autonomous driving at Levels 4 and 5 only after that relationship has been established.

Toyota's autonomous driving concept | Approach to development of autonomous driving technology

Source: Toyota



Toyota Concept-i with AI that understands people

Toyota unveiled the Concept-i, a future mobility concept with AI that understands people, at the International CES in January 2017, and expanded on the concept at the Tokyo Motor Show in October 2017. Toyota plans to conduct public road tests of vehicles equipped with some of the Concept-i functions in Japan around 2020.

The Concept-i technology is supported by an AI system that recognizes the driver's emotions by interpreting their facial expressions and tone of voice. EmoVu appears to have been adopted by Toyota for the facial expression recognition. The Concept-i system supports the driver with visual and tactile prompts, and even aromas, to rouse them into an alert state when they feel drowsy or to help them calm down when they are excessively stressed or upset. If necessary, the system can switch to automated driving mode and operate the vehicle for the driver.

Toyota's Concept-i technology aims to realize the "favorite car" of the new era of mobility, in line with its concept of "more than a machine, a partner", in which the vehicle better understands the driver and grows with them as an irreplaceable partner.

Toyota Concept-i (Tokyo Motor Show 2017) | The core value provided by the Concept-i series technology is to "understand the driver" (Source: Toyota)


------------------
Keywords
Toyota Concept-i, Eyeris, EmoVu, Emotion recognition, Driver monitoring

 <Automobile Industry Portal MarkLines>