Sab-AI.Lab.Japan
Our Voice and Speech Analytics Portfolio
Natural language processing enables machines to understand and, to some extent, interpret human language as it is written or spoken. There are still large gaps between the way humans interpret spoken language and the way machines do. We aim to bridge these gaps by bringing the acoustic features and logical cues of speech into natural language processing models. The acoustic features of speech are highly unstructured, particularly in high-entropy cases such as free speech, yet they contain valuable information.
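As a concrete illustration of the kind of acoustic features meant here, the sketch below computes two classic frame-level features, short-time energy and zero-crossing rate, over a synthetic waveform using NumPy. It is a minimal, hypothetical example for readers unfamiliar with acoustic features; it is not code taken from our packages, and the frame sizes are just common defaults for 16 kHz speech.

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Compute two simple frame-level acoustic features per frame:
    short-time energy and zero-crossing rate (fraction of sign changes)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = float(np.mean(frame ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

# Synthetic "speech-like" signal: one second of a 200 Hz tone (voiced-like)
# followed by one second of noise (unvoiced-like), sampled at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 200 * t)
noise = 0.5 * np.random.default_rng(0).standard_normal(sr)
features = frame_features(np.concatenate([tone, noise]))
print(features.shape)  # one (energy, zcr) pair per frame
```

In a real system such feature sequences, rather than the raw waveform, would be aligned with the transcript and fed to the language model alongside the text.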
Here we introduce some of the human-machine communication systems, built around spoken language, that were designed and developed in our lab. Some of these systems are Python libraries; you can install them from PyPI or download them directly from GitHub. Others are standalone executables: simply download the packages from the linked GitHub repositories.
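The two installation routes above look like this; the package and repository names below are placeholders, not real identifiers, so substitute the names given in each GitHub repository.

```shell
# From PyPI (replace <package-name> with the name listed in the repository):
pip install <package-name>

# Directly from GitHub, without a PyPI release:
pip install "git+https://github.com/<org>/<repo>.git"
```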
Full technical descriptions of each system are available on GitHub; please follow the links associated with each system.
Our work
Our portfolio presents a few small software applications that demonstrate what we are working on and what we can do for our clients, drawn from the many custom applications we have developed so far. We have designed human-machine interaction packages built around spoken language, mainly targeting educational institutions at our clients' request. However, they can be customised and adapted for other purposes.
Below is a list of simplified versions of the packages. Please feel free to download and try them out. Each link takes you to a GitHub repository, where you can read the technical documentation and download the package.
Contact us
Office
〒466-0834 Hirojichō Umezono, Nagoya City, Aichi, Japan
sabailabo@gmail.com
Sab-AI Lab 愛知県 名古屋市 昭和区 広路町字梅園 10-4