KAIST Succeeds in Commercializing the First-Ever Voice Sensor that Mimics the Human Ear


The KAIST research team’s f-PAS installed on smartphones and AI speakers. [Source: KAIST]


A Korean research team has developed the first-ever voice recognition sensor that imitates the human ear, opening the pathway toward commercializing the technology. The team, led by Professor Keon Jae Lee and Hee Seung Wang, Ph.D. of the Department of Materials Science and Engineering at KAIST, announced on February 15th that they had developed a highly accurate, ultrasensitive, AI-based speaker identification and voice security technology, and succeeded in creating products equipped with the sensor by applying the technology to smartphones and AI speakers.


The trapezoid-shaped basilar membrane in the human cochlea generates numerous resonances across the audible frequency range, amplifying sound and allowing humans to hear from far away. To mimic and maximize this effect in a voice sensor, the research team used an extremely thin, flexible piezoelectric film that imitates the basilar membrane, producing a resonance-type voice sensor whose multiple resonance channels make it ultrasensitive in identifying sounds.
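The idea of multiple resonance channels can be sketched in a few lines of code. The snippet below is an illustrative model only (not the team's actual signal chain): each channel is a simple second-order digital resonator tuned to a different frequency, so the channel whose resonance matches the incoming tone responds most strongly, just as different regions of the basilar membrane respond to different pitches.

```python
import math

def resonator_energy(signal, f0, fs, r=0.99):
    """Second-order digital resonator tuned to f0 Hz; returns output energy.
    Poles at r*exp(+/-j*w0) give a sharp resonance near f0."""
    w0 = 2 * math.pi * f0 / fs
    a1, a2 = 2 * r * math.cos(w0), -r * r
    y1 = y2 = 0.0
    energy = 0.0
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        y2, y1 = y1, y
        energy += y * y
    return energy

fs = 16000  # sample rate (Hz)
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(4000)]  # 440 Hz test tone

# A bank of channels, each tuned to a different resonance frequency,
# stands in for the sensor's multiple resonance channels.
channels = [110, 220, 440, 880, 1760]
energies = {f: resonator_energy(tone, f, fs) for f in channels}
best = max(energies, key=energies.get)
print(best)  # the channel tuned to 440 Hz responds most strongly
```

Because each channel amplifies only a narrow band, reading out the whole bank gives a frequency-resolved picture of the sound without any explicit Fourier transform, which is what makes the mechanical resonance approach attractive for low-power sensing.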


In 2018, the research team was the first in the world to conceptualize this resonant flexible piezoelectric acoustic sensor (f-PAS). In this recent study, the team revealed the theory behind the resonance frequencies and the role of the piezoelectric film in the sensor structure, while also shrinking the device and improving its performance. The f-PAS is expected to contribute greatly to customized consumer services by converging with future IoT technology that accurately controls smart devices from a distance and with security technology that encrypts voice. This biomimetic f-PAS has an excellent signal-to-noise ratio, giving it exceptional voice recognition performance, and its multiple channels allow for higher speaker-identification accuracy even with minimal AI voice service data.


Tests comparing this voice sensor's performance with that of a commercial capacitive microphone under the same conditions showed that the f-PAS achieved a significantly higher recognition rate in voice analysis and speaker identification, reducing the error rate by 60-95% depending on the conditions.
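To make the relative figure concrete: a 60-95% reduction scales the baseline error rate by a factor of 0.40 down to 0.05. The baseline rate below is purely hypothetical (the release reports only the relative reduction, not absolute rates):

```python
def reduced_error_rate(baseline, reduction):
    """Apply a relative error-rate reduction to a baseline rate."""
    return baseline * (1.0 - reduction)

# Hypothetical 10% baseline error rate for the commercial capacitive microphone.
baseline = 0.10
low = round(reduced_error_rate(baseline, 0.60), 4)   # 60% reduction
high = round(reduced_error_rate(baseline, 0.95), 4)  # 95% reduction
print(low, high)  # 0.04 0.005 -> between 4% and 0.5% after the reduction
```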


The prototype developed by the research team was unveiled at CES 2020 through FRONICS, the company founded by Professor Lee. The company currently showcases high-quality AI voice technology and is seeking opportunities to collaborate with leading IT companies in Silicon Valley through its US branch.


Professor Keon Jae Lee said, “This mobile voice sensor that we have successfully applied to devices has high sensitivity, and its size has also been dramatically reduced, so it can be applied as a key sensor to drive the future of artificial intelligence. It will soon be applied in our daily lives, as the mass production and commercialization process is nearing completion.” This research was sponsored by the National Research Foundation of Korea, and the results were published in the international scientific journal “Science Advances” on February 12.