Issue link: http://vanguardcanada.uberflip.com/i/1136584
JUNE/JULY 2019 www.vanguardcanada.com

Artificial Intelligence

Peace, War and Public Safety

Both public safety and national security already rely on specialized technologies to manage critical data with extraordinary integrity and assurance. Military and policing stand to benefit greatly from AI, especially for intelligence analysis, adaptive offensive and defensive responses, deployment of resources and correlation of integrated information – but not without calculated risks.

In the United States, the Department of Defense (DOD) has begun to roll out new strategies, partnerships and budgets to develop and adopt advanced generative technology for military use. Laying the groundwork for trust, DOD officials have been putting AI's use in context, communicating and assuring that humans will remain the decision-makers for any lethal actions. That is a good start, but more challenges will inevitably percolate to the top, whether between humans and machines or between government, the public, the private sector or other governments.

Policing will see similar concerns, but on a more dynamic level. Police services are wrestling with long-standing problems around best practices at the psycho-social level and with managing and converting analogue and electronic data into usable intelligence, which makes these advanced generative technologies attractive. Since data is preeminent in policing, integrated data – whether about individuals or statistically informative – has enormous value. As in the military, the sheer volume of data and uncoordinated data schemas make these arduous, budget-draining tasks. AI's generative processes become a game-changer: by correlating these large amounts of data and applying 'human-like' thinking through heuristics, predictive and adaptive outputs can be produced.
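The kind of correlation described above can be illustrated, in a deliberately toy form, as joining heterogeneous data sources on a shared key and applying a simple heuristic rule. The record layouts, field names and threshold below are hypothetical, not drawn from any real police schema.

```python
from collections import Counter

# Hypothetical, simplified records standing in for temporal crime data
# and forensic evidence; real police data schemas are far more complex.
incidents = [
    {"id": 1, "offence": "theft",   "hour": 23, "zone": "A"},
    {"id": 2, "offence": "assault", "hour": 2,  "zone": "A"},
    {"id": 3, "offence": "theft",   "hour": 14, "zone": "B"},
]
forensics = {1: "prints", 3: "dna"}  # evidence keyed by incident id

# Correlate the two sources on the shared incident id.
correlated = [
    {**inc, "evidence": forensics.get(inc["id"])} for inc in incidents
]

# A trivial 'heuristic' output: flag zones with repeated night-time incidents.
night_zones = [i["zone"] for i in incidents if i["hour"] >= 22 or i["hour"] < 6]
hotspots = [zone for zone, n in Counter(night_zones).items() if n >= 2]
print(hotspots)  # zones flagged under this toy rule
```

Even this sketch shows where the article's concerns arise: the heuristic threshold is an opaque design choice, and once models replace hand-written rules, that choice disappears into the black box discussed below.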
A possible gateway for AI may be evidence-based policing (EBP) practices, which scientifically gather qualitative and quantitative evidence from operational practices and analyze it in a controlled framework to improve policies and procedures. Once data is catalogued and collated, AI could enhance its usability with other data, including temporal crime data, victim/offender characteristics, spatial and GPS data, body-worn camera video, biometrics, intelligence or field data, evidence and forensics.

Opening the Black Box

The more 'neural' the technology becomes, the more ethics and privacy issues will arise from AI's ambiguous processes. For all its potential, the lack of root-cause insight and the inability to examine the inner workings of advanced generative technologies leave a gaping hole in the credibility of autonomy and pose risks of unknown vulnerabilities.

With much hope pinned on other critical areas – such as the diagnosis and pathology of diseases and the prediction and regulation of economies, markets and societal problems – black-box oversight will be a balance between functional need and acceptance of some ambiguity and error, only where it makes sense. Keeping in mind that we do not fully understand human memory, advanced generative technologies will further leverage cognitive psychology and neuroscience to improve our understanding of these subjective, experiential and unique processes.

As our physical, biological and social systems collide and these technologies become more like us, transparency will not serve all layers of trust. All stakeholders will need to be at the table – technologists and scientists, government, industry and the public – in order to form a deeper, critical perspective on how we use advanced generative technologies and which black boxes must be opened.

References:
1. YouTube: https://www.youtube.com/watch?v=hsLJdpjSJcM
2. The Verge: https://www.theverge.com/tldr/2018/2/20/17033982/boston-dynamics-spotmini-door-opening-video-interrupt-test
3. MIT Technology Review: https://www.technologyreview.com/s/604324/nvidia-lets-you-peer-inside-the-black-box-of-its-self-driving-ai/
4. Breaking Defense: https://breakingdefense.com/2019/04/rush-to-military-ai-raises-cyber-threats/
5. MIT Media Lab: https://www.media.mit.edu/publications/review-article-published-24-april-2019-machine-behaviour/

Valarie Findlay is a member of the American Society of Evidence-Based Policing and a research fellow for the Police Foundation (USA) with two decades of senior-level expertise in cybersecurity and policing initiatives. She has worked extensively on federal cyber initiatives and is a member of the Canadian Association of Chiefs of Police eCrimes Cyber Council and AFCEA DC. She has a Master's in Sociology and a Master's in Terrorism Studies, with her dissertation addressing the impacts of terrorism on law enforcement in Western nations.