Using AI audio processing to streamline 911 calls
Currently, 911 dispatchers have no efficient way to distinguish prank calls from legitimate calls when callers use code words to relay their situation to the police. Because prank calls are frequent, dispatchers often hang up without recognizing the real intent a caller is trying to convey.
911 calls placed during an active crime often produce barely intelligible audio: loud background noise and muffled, choppy speech from the caller. Dispatchers struggle to comprehend this degraded audio, causing delays and inefficiency in service.
Dispatchers often need more context about a crime than what the caller tells them directly. Dangerous background sounds such as gunshots and car crashes may not be audible to the dispatcher over a generally noisy background or a loud voice.
We use AI to address many of the challenges dispatchers face in responding quickly and efficiently.
We use Python web servers to ingest, process, and feed live audio to our neural networks. Inference is warm-started, so per-call processing stays quick and effective.
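As a minimal sketch of the warm-start pattern described above (the class and function names here are illustrative, not our actual implementation): the model is loaded once at server startup, so each incoming audio chunk pays only inference cost, never load cost.

```python
import time

class DummyNoiseClassifier:
    """Stand-in for a real neural network; pretends weight loading is slow."""
    def __init__(self):
        time.sleep(0.1)  # simulate expensive one-time model loading

    def predict(self, audio_chunk):
        # Toy rule for the sketch: chunks with high mean amplitude are "alert".
        return "alert" if sum(abs(s) for s in audio_chunk) / len(audio_chunk) > 0.5 else "normal"

# Warm start: construct the model once, before any traffic arrives.
MODEL = DummyNoiseClassifier()

def handle_audio(audio_chunk):
    """Called per request by the web server; reuses the already-loaded model."""
    return MODEL.predict(audio_chunk)

print(handle_audio([0.9, 0.8, 0.7]))  # loud chunk -> "alert"
print(handle_audio([0.1, 0.0, 0.1]))  # quiet chunk -> "normal"
```

In a real deployment the `handle_audio` function would sit behind a web framework's route handler; the key point is only that model construction happens at module load, not per request.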
Our deep neural networks are trained on various 911 audio and text datasets. These datasets are accurately labeled and contain real 911 calls, making the resulting models suitable for deployment in real systems.
We build our solution on recent, research-backed architectures. We use a variety of architectures for both classification and audio reconstruction, including image-inspired architectures that operate on spectrograms. View the “Documentation” page for more details.
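The reason image-inspired architectures apply here is that audio can be converted into a 2-D spectrogram, which a CNN-style model consumes like an image. Below is a pure-Python sketch of that conversion (frame sizes and hop length are illustrative; a real pipeline would use an FFT library such as librosa or torchaudio):

```python
import math

def frame_audio(samples, frame_size=4, hop=2):
    """Slice 1-D audio into overlapping frames (the time axis of the 'image')."""
    return [samples[i:i + frame_size]
            for i in range(0, len(samples) - frame_size + 1, hop)]

def dft_magnitudes(frame):
    """Magnitude spectrum of one frame (the frequency axis of the 'image')."""
    n = len(frame)
    mags = []
    for k in range(n // 2 + 1):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(samples):
    """2-D array: one row per time frame, one column per frequency bin."""
    return [dft_magnitudes(f) for f in frame_audio(samples)]

spec = spectrogram([0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0])
print(len(spec), len(spec[0]))  # prints "3 3": 3 time frames x 3 frequency bins
```

Once audio is in this time-by-frequency grid, standard image classifiers can be trained on it directly, which is what makes image-domain research transferable to 911 audio.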
911 operators use Computer-Aided Dispatch (CAD) systems to respond to callers. We built our service so that it can easily be integrated with CAD systems by reading telephone audio and displaying feedback on the screen.
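To make the integration concrete, here is a hypothetical sketch of packaging model outputs into a payload a CAD console could render alongside the call. The field names are purely illustrative and do not come from any real CAD vendor API:

```python
import json

def cad_payload(call_id, transcript, detected_sounds, prank_score):
    """Bundle model outputs into a JSON message for a CAD display (illustrative fields)."""
    return json.dumps({
        "call_id": call_id,
        "transcript": transcript,            # reconstructed/cleaned caller speech
        "background_events": detected_sounds,  # e.g. ["gunshot", "car crash"]
        "prank_likelihood": round(prank_score, 2),
    })

msg = cad_payload("911-0042", "he is in the house", ["gunshot"], 0.05)
print(msg)
```

Because the payload is plain JSON, a CAD system only needs a small adapter to subscribe to these messages and show them next to the active call.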