Toyotani Lab.

IoT innovation in the medical and welfare fields


Educational interview VR system

In our laboratory, we are conducting joint research with professors in the Faculties of Medicine and Pharmacy to develop next-generation education systems and a drug navigation app for foreigners.

Medical AI system

In addition, we are developing medical systems that apply AI to laparoscopic surgery, among other uses. We are currently conducting joint research with the School of Medicine, the College of Science and Engineering, the College of Engineering, the College of Humanities and Sciences, and other institutions.


AI for the visually impaired

Visually impaired people face difficulties at intersections and pedestrian crossings because most traffic lights make no sound, so they cannot tell whether the signal is green or red. In collaboration with external organizations, our laboratory is developing a system that recognizes pedestrian signals with artificial intelligence (AI) and conveys the result to the visually impaired. The following image shows the results of recognizing pedestrians and a red pedestrian signal.


Signal recognition by AI

The leftmost result above was obtained by recognizing the signal from color information alone: every region containing a red component, not just the red signal, is detected. If we instead use a cascade classifier and YOLO (You Only Look Once), a machine-learning object detector, to learn the shape of a traffic light from many images, the signal is recognized correctly, as shown on the right.
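The failure mode of the color-only approach can be illustrated with a toy sketch. The pixel values, threshold, and function names below are our own illustrative assumptions, not the lab's actual implementation: any sufficiently red region, whether a signal, a car, or a brick wall, passes a pure color filter, which is why the detector must also learn shape.

```python
# Toy illustration: an "image" is a list of (R, G, B) pixels.
def is_red_dominant(pixel, ratio=1.5):
    """Flag a pixel whose red channel clearly dominates green and blue."""
    r, g, b = pixel
    return r > ratio * max(g, b, 1)

def find_red_regions(pixels):
    """Return indices of pixels a pure color filter would report as 'red signal'."""
    return [i for i, p in enumerate(pixels) if is_red_dominant(p)]

# A red pedestrian signal (230, 30, 20), but also a red car (200, 40, 50)
# and a brick wall (180, 60, 40): color alone cannot tell them apart.
scene = [(230, 30, 20), (200, 40, 50), (180, 60, 40), (30, 200, 60)]
print(find_red_regions(scene))  # flags all three reddish pixels: [0, 1, 2]
```

A shape-based detector such as YOLO avoids these false positives because it is trained on the appearance of the whole signal housing, not on channel ratios of individual pixels.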


New technology for environmental investigation

Environmental sounds must be recorded at work sites, for example in medical and welfare facilities and during corporate environmental surveys. However, when noise is recorded, the content of other people's conversations is also saved, compromising their privacy. To address this issue, we are developing technology that renders conversation content unintelligible by processing the audio as it is recorded.
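One simple way such processing can work, sketched below under our own assumptions (the lab's actual method is not described here), is to shuffle and reverse short segments of the recording: the multiset of samples, and hence level statistics such as RMS that a noise survey needs, is preserved exactly, while word order and phoneme structure are destroyed.

```python
import random

def obscure_speech(samples, segment_len=441):
    """Shuffle and reverse short segments so speech becomes unintelligible
    while the overall sound level is preserved (same samples, new order)."""
    segments = [samples[i:i + segment_len]
                for i in range(0, len(samples), segment_len)]
    random.shuffle(segments)                      # destroy word order
    return [s for seg in segments for s in reversed(seg)]  # destroy phonemes

def rms(samples):
    """Root-mean-square level, the quantity a noise survey reports."""
    return (sum(x * x for x in samples) / len(samples)) ** 0.5

original = [0.01 * i for i in range(-500, 500)]
scrambled = obscure_speech(original)
print(round(rms(original), 6) == round(rms(scrambled), 6))  # levels match
```

Because only the ordering changes, any level-based acoustic measurement computed from the scrambled stream matches the original.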


Level difference recognition by depth camera

Many visually impaired people trip on steps and are injured, so there is a need for technology that recognizes steps. In our laboratory, we are conducting research that calculates distance using a pair of left and right cameras and recognizes steps dangerous to the visually impaired from changes in the distance to the surroundings. The photo shows an example of the output, with red representing far distances and blue representing near distances.
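The two ideas involved, triangulating distance from the disparity between the two cameras and flagging abrupt distance changes as step edges, can be sketched as follows. The focal length, baseline, and jump threshold are illustrative assumptions, not the lab's calibrated values.

```python
def distance_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Stereo triangulation: Z = f * B / d, where f is the focal length in
    pixels, B the camera baseline in meters, and d the disparity in pixels."""
    return focal_px * baseline_m / disparity_px

def find_steps(distances, jump_m=0.15):
    """Flag positions where the distance to the ground changes abruptly;
    a sudden jump along a downward scan line suggests a step edge."""
    return [i for i in range(1, len(distances))
            if abs(distances[i] - distances[i - 1]) > jump_m]

# Distances (m) sampled along a scan line toward the ground ahead:
profile = [2.0, 1.9, 1.8, 1.4, 1.3, 1.2]   # 0.4 m jump at index 3
print(find_steps(profile))                  # [3] -> a step edge
```

In a real system the distance profile would come from a dense disparity map (e.g. block matching between the rectified left and right images) rather than a hand-written list.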


MRI image animation

The following is part of the screen of a system that displays MRI images as an animation, making the whole volume easier to grasp.


Medicine navigation for foreigners

The figure below shows a smartphone drug navigation app for foreigners visiting pharmacies. Consider going abroad and falling ill: at the pharmacy none of the products are familiar, the labels contain no Japanese, and the English is full of technical terms, so it is very hard to know which one to buy. Foreigners in Japan face the same problem, so we are co-developing smartphone apps to solve it together with teachers and students in the Faculty of Pharmacy.


AI Signal Recognition for Commercialization

On wide roads, pedestrian signals appear extremely small, as shown in the photo above, to the point where conventional AI cannot recognize them. In addition, the LEDs in the signals flicker rapidly, so in roughly half of the captured frames the light appears off, making it difficult to determine whether the signal is red or green. The same problem arises in dashcam footage of accidents. It is therefore necessary to use information from preceding frames or signals to decide whether the signal is red or green.
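One standard way to use preceding frames, sketched here as an illustrative assumption rather than the lab's actual method, is a sliding-window majority vote over per-frame classifications: "off" frames caused by LED flicker are outvoted by neighboring frames where the light was captured lit.

```python
from collections import Counter, deque

def smooth_labels(frame_labels, window=15):
    """Majority-vote over a sliding window of per-frame labels.
    Frames read as 'off' (LED caught dark by the exposure) are ignored
    unless the whole window is 'off'."""
    history = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        history.append(label)
        votes = Counter(l for l in history if l != "off")
        smoothed.append(votes.most_common(1)[0][0] if votes else "off")
    return smoothed

# Roughly half the frames read "off" because the exposure caught the LED dark:
frames = ["red", "off", "red", "off", "off", "red", "off", "red"]
print(smooth_labels(frames, window=5))  # every frame resolves to "red"
```

The window length trades stability against latency: a longer window suppresses more flicker but delays the reported transition when the signal actually changes.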


Publications

・Murayama, Uemura, Toyotani et al., "Determination of Biphasic Menstrual Cycle Based on the Fluctuation of Abdominal Skin Temperature during Sleep", Advanced Biomedical Engineering, vol.12, no.1, pp.28-36, February 2023.
・Miura, Omae, Kakimoto, Toyotani et al., "Three-State Classification of Pulmonary Artery Wedge Pressure from Chest X-Ray Images Using Convolutional Neural Network", ICIC Express Letters, Part B: Applications, vol.14, no.3, pp.271-277, March 2023.
・Kakimoto, Omae, Toyotani, Hara, Takahashi, "A Seat Assignment Model for Restaurants to Suppress the Risk of COVID-19 Infection" (in Japanese), Journal of Japan Industrial Management Association, vol.74, no.2, pp.77-89, July 2023.
・Yuki Saito, Yuto Omae, Saki Mizobuchi, Hidesato Fujito, Masatsugu Miyagawa, Daisuke Kitano, Kazuto Toyama, Daisuke Fukamachi, Jun Toyotani, Yasuo Okumura, "Prognostic Significance of Pulmonary Arterial Wedge Pressure Estimated by Deep Learning in Acute Heart Failure", ESC Heart Failure, December 2022. doi:10.1002/ehf2.14282
・Yuto Omae, Yohei Kakimoto, Yuki Saito, Daisuke Fukamachi, Koichi Nagashima, Yasuo Okumura, Jun Toyotani, "Dimensionality Reduction of CNN Features Based on Feature-Map Anomaly Scores Using the Wasserstein Distance" (in Japanese), IEICE Technical Report (ME and Bio Cybernetics), vol.122, no.291, pp.29-31, November 2022.
・Yuto Omae, Yuki Saito, Yohei Kakimoto, Daisuke Fukamachi, Koichi Nagashima, Yasuo Okumura, Jun Toyotani, "GUI System to Support Cardiology Examination Based on Explainable Regression CNN for Estimating Pulmonary Artery Wedge Pressure", IEICE Transactions on Information and Systems, December 2022. doi:10.1587/transinf.2022EDL8059
・Yuto Omae, Makoto Sasaki, Jun Toyotani, Kazuyuki Hara, Hirotaka Takahashi, "Theoretical Analysis of the SIRVVD Model for Insights into the Target Rate of COVID-19/SARS-CoV-2 Vaccination in Japan", IEEE Access, vol.10, pp.43044-43054, April 2022. doi:10.1109/ACCESS.2022.3168985
・Yuki Saito, Yuto Omae, Daisuke Fukamachi, Koichi Nagashima, Saki Mizobuchi, Yohei Kakimoto, Jun Toyotani, Yasuo Okumura, "Quantitative Estimation of Pulmonary Artery Wedge Pressure from Chest Radiographs by a Regression Convolutional Neural Network", Heart and Vessels, vol.37, no.8, pp.1387-1394, February 2022.
・Yuto Omae, Yohei Kakimoto, Makoto Sasaki, Jun Toyotani, Kazuyuki Hara, Yasuhiro Gon, Hirotaka Takahashi, "SIRVVD Model-Based Verification of the Effect of First and Second Doses of COVID-19/SARS-CoV-2 Vaccination in Japan", Mathematical Biosciences and Engineering, vol.19, no.1, pp.1026-1040, 2022. doi:10.3934/mbe.2022047