Toyotani Lab.

IoT innovation in the medical and welfare fields

Lung interview system

Educational interview VR system

In our laboratory, we are conducting joint research with professors in the Faculties of Medicine and Pharmacy to develop next-generation education systems and a drug navigation app for foreigners. Using a virtual reality (VR) system, the shape of an actual person's lungs and their movement during inhalation and exhalation can be animated on a computer in real time. By placing the relevant breath sounds at various locations on the model, the system helps learners understand why a particular sound is heard at a given spot and why the stethoscope should be placed there. The following is an example screen of a next-generation educational interview system that operates interactively by voice, with prompts such as "Please take a deep breath."
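As a rough, entirely hypothetical sketch of the idea of attaching breath sounds to positions on the lung model (not the laboratory's actual implementation), the Python snippet below associates each auscultation point with a sound file and returns the sound to play when the virtual stethoscope comes close to one of them. The coordinates, file names, and search radius are all assumptions made for illustration.

```python
# Hypothetical sketch: map auscultation points on a virtual torso to breath
# sounds, and pick the sound for whichever point the stethoscope is nearest.
import math

# Assumed auscultation points (x, y, z in the model's coordinate frame)
# and the breath sound heard at each one.
AUSCULTATION_POINTS = {
    (0.05, 0.30, 0.10): "vesicular_breath.wav",
    (0.05, 0.10, 0.10): "bronchial_breath.wav",
    (-0.05, 0.30, 0.10): "vesicular_breath.wav",
}

def sound_at(stethoscope_pos, radius=0.04):
    """Return the sound file for the auscultation point closest to the
    virtual stethoscope, or None if nothing lies within `radius`."""
    best, best_dist = None, radius
    for point, sound in AUSCULTATION_POINTS.items():
        dist = math.dist(stethoscope_pos, point)
        if dist < best_dist:
            best, best_dist = sound, dist
    return best

print(sound_at((0.04, 0.29, 0.11)))   # -> vesicular_breath.wav
```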



AI for the visually impaired

Visually impaired people have trouble at intersections and pedestrian crossings because most traffic lights make no sound, so they cannot tell whether the light is green or red. In collaboration with external organizations, our laboratory is developing a system that recognizes pedestrian signals with artificial intelligence (AI) and conveys the result to visually impaired users. The following image shows the result of recognizing pedestrians and a red pedestrian light.


Signal recognition by AI

The result above, on the far left, is obtained by recognizing the signal from its color information alone, so every region that contains a red component is detected in addition to the red light itself. If a cascade classifier or YOLO (You Only Look Once), a machine-learning method, is trained on many images to learn the shape of a traffic light, the signal can be recognized correctly, as shown on the right.
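To illustrate why color information alone over-detects, the following Python / OpenCV sketch (not the laboratory's code) thresholds the red hue range in HSV and draws a box around every red-ish region it finds; the file names are placeholders.

```python
# Naive colour-based detection: every red-ish blob becomes a candidate,
# which is exactly the over-detection seen in the left image.
import cv2

img = cv2.imread("crossing.jpg")               # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red wraps around the hue axis in HSV, so two ranges are combined.
mask1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
mask2 = cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
mask = cv2.bitwise_or(mask1, mask2)

# Tail lights, signs, and red clothing are all flagged along with the signal.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite("candidates.jpg", img)
```

A shape-based detector such as a cascade classifier or YOLO avoids this because it is trained on the appearance of the whole signal housing, not just its color.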


AI signal recognition for commercialization

For actual commercialization, on roads with three or more lanes in each direction the pedestrian signals appear very small, as shown in the photo above, and an ordinary AI model cannot recognize them. In addition, LED signals flicker very rapidly, so the light is off in roughly half of the captured frames; the same problem affects dashcam footage at the time of an accident. We therefore combined image-processing techniques with careful design of the AI / machine-learning pipeline to make pedestrian signals recognizable. The picture below shows the system correctly recognizing a green light.
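As a rough illustration of two of these ideas, under our own assumptions rather than the laboratory's published method, the sketch below takes the per-pixel maximum over a short burst of video frames so that a rapidly flickering LED still appears lit, and then crops and enlarges the small region around the pedestrian signal before it is handed to a detector; the ROI coordinates and file names are placeholders.

```python
# Hypothetical pre-processing for flickering LEDs and tiny pedestrian signals.
import cv2
import numpy as np

cap = cv2.VideoCapture("crossing.mp4")    # hypothetical video

frames = []
for _ in range(10):                       # about 1/3 s at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)

# (1) Per-pixel maximum: an LED that is off in half of the frames
#     is still bright in the accumulated image.
accumulated = np.max(np.stack(frames), axis=0)

# (2) Crop the assumed region of interest and enlarge it so the tiny
#     signal is big enough for an ordinary detector.
x, y, w, h = 800, 200, 120, 120           # placeholder ROI
roi = accumulated[y:y + h, x:x + w]
roi_large = cv2.resize(roi, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)

cv2.imwrite("roi_for_detector.jpg", roi_large)
```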


Level difference recognition by depth camera

Many visually impaired people fall on steps and get injured, so there is a need for technology that recognizes steps. In our laboratory, we are conducting research that calculates distance using two cameras, one on the left and one on the right, and recognizes steps that are dangerous for visually impaired people from changes in the distance to the surroundings. The photo shows an example implementation, with red representing far distances and blue representing close distances.
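A minimal sketch of this distance-based approach, assuming a calibrated and rectified stereo pair (the parameters, file names, and threshold are placeholders, not the laboratory's implementation):

```python
# Compute a disparity map from left/right images and flag sudden distance
# changes straight ahead as possible step edges.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block-matching stereo: larger disparity means a closer object.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Look at a vertical strip in front of the walker and follow how the
# disparity changes from the bottom (near) to the top (far) of the image.
h, w = disparity.shape
strip = disparity[:, w // 2 - 20: w // 2 + 20]
profile = np.nanmean(np.where(strip > 0, strip, np.nan), axis=1)

# A sudden jump between neighbouring rows suggests a step edge.
jumps = np.abs(np.diff(profile))
if np.nanmax(jumps) > 5.0:                # placeholder threshold
    print("Possible step ahead")
```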

MRI video

MRI image animation

The following shows part of a system screen that makes the whole picture easier to grasp by displaying MRI images as an animation.
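As a minimal sketch of this kind of display (not the actual system), the snippet below animates a stack of MRI slices with matplotlib; the data file is a placeholder, and any (slices, height, width) array would work.

```python
# Animate a stack of MRI slices so the whole volume is easier to grasp.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

volume = np.load("mri_slices.npy")        # hypothetical (N, H, W) array

fig, ax = plt.subplots()
image = ax.imshow(volume[0], cmap="gray")
ax.set_axis_off()

def update(i):
    # Swap in the next slice; blitting keeps the animation smooth.
    image.set_data(volume[i])
    return [image]

anim = FuncAnimation(fig, update, frames=len(volume), interval=100, blit=True)
plt.show()
```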


Medicine navigation for foreigners

The figure below shows a smartphone drug navigation app for foreigners to use at pharmacies. For example, when we go abroad and get sick, the pharmacy shelves are full of products we do not know, the labels contain no Japanese, and the English consists of technical terms, so deciding which product to buy is a real struggle. Foreigners in Japan face the same problem, so we are co-developing a smartphone app to solve it together with teachers and students in the Faculty of Pharmacy.