Artificial intelligence is entering industry after industry, and public safety is no exception - the innovations are coming fast and furious.
The advantages of using AI in law enforcement, whether embodied in an actual robot or deployed as software, seem obvious: it can help keep humans out of dangerous situations while doing jobs that once fell to police officers or firefighters (chief among them search and rescue).
But there are also more intelligence-based, administrative, and even commonplace activities where AI can help out, and public safety agencies all over the country are investigating the possibility of incorporating more artificial intelligence innovations into their daily operations.
According to a recent study by Stanford University called “Artificial Intelligence & Life in 2030,” public safety and law enforcement are two of the eight areas most likely to have an explosion in AI technology over the next decade. Here are some of the areas where AI will become more vital than ever.
Robots

This isn’t a new area of AI use and investment for law enforcement - in fact, agencies have spent over $55 million on military-style robots since 2010.
But today, we’re seeing a new level of sophistication in these robots. For example, we’re familiar with the idea of robots being sent in to investigate potentially explosive devices, but that’s merely one potential use. Experts predict that these robots will increasingly be used to deactivate explosives, and in Dallas in 2016, police used a robot to take down an active shooter.
Drones

Again, it’s not new for either the military or law enforcement to use drones for any number of tasks. But improvements in drone technology can now help with crowd control (using speakers so that officers can address crowds) and enable facial recognition that would allow law enforcement officials to identify suspects before crimes even occur.
Social Media Monitoring
As organizations like ISIS and Al Qaeda, and even drug cartels, have begun conducting more and more of their business through social media, local and national law enforcement agencies have had to step up their own monitoring of various social media outlets like Facebook, Twitter and Instagram.
AI development for social media monitoring is focused on two areas. The first involves algorithms that scan hashtags and general online activity for signs of the sale or purchase of illegal drugs. Once a target is identified, the system can pass that information to law enforcement teams so they can launch investigations.
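To make the idea concrete, here is a deliberately minimal sketch of how such hashtag and keyword flagging might work. This is not any agency's actual system: the watchlist terms, function names, and posts below are all invented for illustration, and real deployments would rely on large curated lexicons and machine-learned classifiers rather than literal string matching.

```python
# Hypothetical watchlist of slang terms; purely illustrative.
WATCHLIST = {"#plug", "fire oz", "next day delivery"}

def flag_post(text: str) -> bool:
    """Return True if the post contains any watchlist term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in WATCHLIST)

posts = [
    "Great day at the park!",
    "DM me, fire oz available #plug",
]

# Keep only the posts that match at least one watchlist term.
flagged = [p for p in posts if flag_post(p)]
print(flagged)  # only the second post matches
```

Simple substring matching like this produces many false positives, which is exactly why the article's point about handing flagged results to human investigators, rather than acting on them automatically, matters.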
The other area of development is comprehensive scanning of social media for individuals who might have become radicalized. The Stanford study mentioned above discusses how public safety agencies are using AI to analyze conversations on different media platforms to watch for signs of domestic or foreign terror groups communicating with individuals susceptible to radicalization.
One such monitoring tool has been dubbed iAWACS, an umbrella term for the military’s information-gathering intelligence systems. These systems monitor online activity that might indicate active shooter situations or other scenarios that have a high likelihood of involving extremists.
There are potential privacy concerns to be weighed against public safety with some of these AI-related systems, and that discussion, while justified, could slow the advance of artificial intelligence and robotic law enforcement solutions. But even if that slowing does occur, it’s apparent that tech-based innovations will greatly affect the future of public safety.
To learn more, read our post “The Latest Developments in Public Safety Technology.”