Vanguard Magazine

Vanguard June/July 2019

benefits appeared to be tremendous for many, if not all, sectors. But these earlier generations of AI had inner workings that were easy to verify and validate, and their constructs and conclusions were understood. Unlike today, the algorithms that replicated human thought processes were developed and controlled by development teams. Engineers and testers ensured that objectives, requirements, functions and success criteria, defined by humans, were met throughout a traditional development lifecycle. In production, the engineers knew what the inputs were, what decisions were made, and what activities formed each output. Already an elaborate logic model, this machinery became far more complicated when deep learning and machine learning launched off the simpler (if there is such a thing) generative AI. The traditional development lifecycle morphed into an infinite development continuum that continued outside the labs and without human control.

That brings us to the current-day problem and theoretical conundrum. Advanced generative technologies use extensible algorithmic logic and compounded learning processes that evolve in a black box shielding their complex conclusive analysis, a black box that no one, not even its engineers, can efficiently peer into. Not being able to isolate this cause and effect has created an uneasy stir in regulated industries driven by accountability and the preponderance of evidence; such accountability is not only a necessity, it is often a legal right. The very part of AI that was a technological Holy Grail has become a tenuous issue and a potential barrier to adoption for some.

When Technology Runs Rogue

Nvidia's experimental autonomous vehicle, BB8, is a perfect example of the technological veil that shields the inner workings of these technologies. Rather than following sets of instructions programmed by engineers, BB8 relies on an algorithm that learns how to drive by observing a human. Evolving through heuristic learning, BB8's reasoning and decision-making are obfuscated and so complex that its engineers have struggled to deconstruct them. Built on human-like memory and learning structures, similar to Neural Turing Machines (NTM), BB8's behaviours are not copied but developed and learned autonomously. Incrementally learning and continuously refining, its memory cells retrieve complete vectors using heuristic-based patterns and assign priority, much like human memory.

That's why, on the surface, BB8 operates as if a human were driving it, seemingly making all of the right decisions. But does it? Always? Once out "in the wild," disentangling the decision processes behind the behaviours of deep neural networks is a forensic nightmare. With no way to verify intent and causality, if the vehicle crashed into a tree, it might be impossible to determine why (to their credit, Nvidia engineers have made some progress).

Some Experts Are Sounding the Alarm

Recently, at an Atlantic Council conference on AI, Frederick Chang, former director of research at the National Security Agency, stated: "There has not been a lot of work at the intersection of AI and cyber," and "Governments are only beginning to understand some of the vulnerability of these systems," resulting in an increased attack surface.
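The article does not spell out what those vulnerabilities look like in practice. One widely studied example, offered here purely as an illustration, is the adversarial input: a tiny, deliberately crafted perturbation that flips a model's decision. The short Python sketch below uses an invented three-feature linear classifier with made-up weights and labels; it stands in for no real system discussed in this piece.

# Illustrative only: a toy linear classifier and a fast-gradient-sign-style
# perturbation. Every number and label here is invented for the example.
import numpy as np

weights = np.array([2.0, -3.0, 1.0])   # hypothetical model parameters
bias = 0.0

def predict(features):
    score = float(np.dot(weights, features) + bias)
    return ("obstacle ahead" if score > 0 else "clear road"), score

x = np.array([0.5, 0.3, 0.2])          # an input the model handles correctly
label, score = predict(x)
print("original input :", x, "->", label, f"(score {score:+.2f})")

# For a linear model the gradient of the score with respect to the input is
# simply `weights`, so nudging each feature slightly in the direction that
# lowers the score is enough to change the outcome.
epsilon = 0.1                          # maximum change allowed per feature
x_adv = x - epsilon * np.sign(weights)
label_adv, score_adv = predict(x_adv)
print("perturbed input:", x_adv, "->", label_adv, f"(score {score_adv:+.2f})")
# The input changed by at most 0.1 per feature, yet the decision flipped.

Real deep networks can be attacked in the same spirit, with the perturbation derived from the network's own gradients, which is one concrete sense in which wider deployment of these systems enlarges the attack surface.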
At the same conference, Omar Al Olama, the minister of state for artificial intelligence for the United Arab Emirates, was more direct in his warnings, charging that the "ignorance in government leadership" is leading to the adoption of AI without impartial scrutiny and that "sometimes AI can be stupid." So, there you go. Suddenly the foreboding predictions on AI made by the late Stephen Hawking and by Elon Musk don't seem so farfetched.

As always, there are two sides to the debate. A faction of academic and industry researchers doesn't see what the big deal is: black boxes are not new and have been studied in other sciences for decades. Nick Obradovich, a researcher at the MIT Media Lab, observed that "We've developed scientific methods to study black boxes for hundreds of years now [and] can leverage many of the same tools to study the new black box AI systems." Obradovich's paper proposes studying AI systems through empirical observation and experimentation, as science has done with animal and human studies.

Not entirely fantastic examples, in my view, since it has not been uncommon for human and animal studies to prove heavily flawed and sometimes be fully retracted as the sciences advance (e.g. David Mech's alpha wolf study, which piggy-backed on Schenkel's flawed work, was retracted by Mech years later but has persisted for decades and is still cited today).

The point put forth by the former group is that not knowing the problem to be solved, or adopting AI without extensive risk analysis, raises the stakes substantially. With advanced generative technologies already tasked with solving critical problems using image captioning, voice recognition, language translation and video intelligence (think 'deep fakes'), larger questions arise that require expansive foresight.

How would AI decisions that affect societies and individuals be overturned? What would be accepted as an evidentiary challenge: the analysis of other AI systems? Who decides what is ethical or moral, and where do ideological and cultural values fit in? More importantly, as the cyber civil space increases, will decision-makers be capable of providing technological governance and garnering societal trust?

[Photo: Nvidia's experimental autonomous vehicle, BB8]
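Obradovich's suggestion above amounts to treating a deployed model the way an experimentalist treats a study subject: hold the conditions fixed, vary one stimulus at a time, and record the responses. The Python sketch below is a minimal illustration under that assumption; the "black box" is an invented scoring function standing in for a real model that can only be queried through its inputs and outputs.

# Illustrative only: empirical probing of an opaque model, in the spirit of the
# observation-and-experimentation approach described above.
import numpy as np

rng = np.random.default_rng(0)
_hidden = rng.normal(size=(3, 8))      # internals we pretend we cannot inspect
_readout = rng.normal(size=8)

def black_box(features):
    # Opaque scorer: from the outside we only see inputs and outputs.
    return float(np.tanh(features @ _hidden) @ _readout)

# Experiment: hold two features fixed, sweep the third, record the response.
baseline = np.array([0.2, -0.1, 0.4])
for value in np.linspace(-1.0, 1.0, 5):
    probe = baseline.copy()
    probe[2] = value
    print(f"feature_2 = {value:+.2f} -> output = {black_box(probe):+.3f}")

Sweeps like this never expose the model's internals, but repeated across many features and baselines they do yield the kind of reproducible, documented evidence that regulated industries ask for.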
