Vanguard Magazine

Vanguard AugSep 2018

Preserving capacity, General Tom Lawson, Chief of the Defence Staff, Keys to Canadian SAR

Issue link: http://vanguardcanada.uberflip.com/i/1017188


www.vanguardcanada.com | AUGUST/SEPTEMBER 2018 | 45

THE LAST WORD

Public Safety, Privacy and Ethics
By Valarie Findlay

It's safe to say that artificial intelligence (AI) is not a trend. Beyond robot dogs slipping on banana peels, the innovative capabilities AI will bring to business, government and the sciences will be transformative.

Previously only imagined in science fiction, AI is poised to challenge and blur our concepts of computing and of the "natural" human. This will require governments and sectors to develop expansive foresight and a critical understanding of the impacts of digitization and emerging technologies. As both a sociologist and a technologist, I find it fascinating to consider how our social systems will adapt as these complex technologies automate human processes and collide with our "natural" world. Although we are in the early phase of a transition to what is referred to as a new Industrial Revolution, postulations on how the future will form are already swirling.

More than automation and computation, for the first time ever, physical, biological and social systems are converging with the digital replication of the most complex system in the known universe: the human brain. The complexities are exponential, with AI's technological intelligence and decision-making processes replicating intricate higher brain functions, as seen with Google's DeepMind probabilistic programming techniques.

For those reasons, the ethical and moral boundaries that are expected to manage and mitigate AI's potential negative effects are fueling critical debate among academics and practitioners, beyond questions of responsibility and liability.

Started from the bottom, now we're here…

Within these debates, more contentious questions are surfacing: Who – or which nation – decides what is ethical or moral? Where do ideological and cultural values fit in?
What happens when technology governance cannot be agreed upon – are other technologies, such as hyper-meshnets, employed to create cyber-barriers, or do we rely on social or economic sanctions as we do now? For nations that do agree to a common set of technology ethics, would that signal a move toward a World Government model, which raises its own concerns? As cyber's civil space increases and physical spaces decrease, will governments and decision-makers be capable of providing the technological governance needed to maintain political and societal trust?

The answers are important. Downstream, they will form the basis for AI algorithms and databases that will develop autonomous learning processes and decisions – without a human in control – from visual, auditory, patterning, recognition and interpretation data. With "free" AI databases already popping up that carry no assurance or certification of accuracy or integrity, the issue is imminent. Literally humanity-altering, misapplication, misuse or poor design could inflict longstanding damage on the safety, security, quality and well-being of human life.

The new dynamic risks

In practical terms, if the integrity or definition of AI databases and algorithm characteristics is not accurate, the output data will be unreliable at best. A simple example in object identification: if a few photos of pears are thrown in among the several thousand photos of apples used to define the scope of an apple's acceptable features, there is a serious problem – especially if "apples" are a public safety threat or a military target.

On the flip side, high-integrity definitions are powerful. A high-integrity apple schema and a high-integrity pear schema allow AI to "learn" what an apple is compared to a pear. That learned data can then be extended and associated to other objects, based on the features they have and do not have – in effect, a process of elimination through association.
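The apple-and-pear scenario above can be made concrete with a small sketch. The following is a hypothetical illustration, not anything from the article or from a real AI system: a toy nearest-centroid classifier with invented feature values, showing how a few mislabeled "pear" samples in the "apple" training set drift the learned apple definition until a pear-like object is accepted as an apple.

```python
# Hypothetical sketch: a toy nearest-centroid classifier showing how
# mislabeled training samples ("pears thrown in with the apples")
# widen a learned class definition. Feature vectors are invented:
# (roundness, width/height ratio).

def centroid(points):
    """Mean feature vector of a list of (roundness, ratio) samples."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(sample, centroids):
    """Label a sample by its nearest class centroid (squared distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Clean training data: apples are round, pears are elongated.
apples = [(0.90, 1.00), (0.95, 0.98), (0.88, 1.02), (0.92, 0.99)]
pears  = [(0.60, 0.75), (0.55, 0.70), (0.62, 0.72), (0.58, 0.74)]
clean = {"apple": centroid(apples), "pear": centroid(pears)}

# Pollute the apple set with three mislabeled pears.
polluted = {"apple": centroid(apples + pears[:3]), "pear": centroid(pears)}

sample = (0.70, 0.82)  # a pear-like object
print(classify(sample, clean))     # -> pear
print(classify(sample, polluted))  # -> apple: the learned schema drifted
```

The same borderline object flips from "pear" to "apple" once the polluted centroid drifts toward the pears, which is exactly why the article's point matters when "apples" stand in for a public safety threat or a military target.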
Public safety and national security sectors that rely on specialized technologies and critical data will benefit greatly from AI, but not without taking on certain risks. Whether developing adaptive and extensible responses, building offensive and defensive cyber-warfare countermeasures, correlating massive amounts of integrated information, or performing facial recognition using human and physiological factors, the devil will be in the details.

Right now, government and defence verification, validation and certification pro-
