Vanguard Magazine

Vanguard December 2019/January 2020

Preserving capacity, General Tom Lawson, Chief of the Defence Staff, Keys to Canadian SAR

Issue link: http://vanguardcanada.uberflip.com/i/1194327


that such AI-enabled tools not only perform these HR-related tasks at significantly higher volumes than humans do, but they also do so in a fraction of the time. In turn, the resulting labour savings allow HR staff to focus on strategic initiatives and higher-value activities within the HR function, while so-called "digital workers" handle mundane and repetitive operational tasks.

Finance functions in defence departments are another area that can benefit greatly from the adoption of AI technologies. For example, payments to service members, employees, contractors and vendors can be better managed with the help of intelligent automation and advanced analytics reporting techniques, similar to what has been described previously for the HR function. Short- and long-term financial forecasts for budgets of upcoming military projects can also be made more precise by using machine-learning models that encompass variables such as global market conditions, commodity prices and overhead costs. Additionally, internal audit teams can leverage AI, machine-learning and data-mining algorithms to more reliably detect anomalies and fraudulent transactions across an entire general ledger, instead of relying on existing and less reliable heuristic sampling methods. The potential of adopting AI here lies in using intelligent systems to enhance the accuracy of conventional accounting practices and financial planning estimates for defence organizations, which typically operate under large and complex fiscal budgets.
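The ledger-wide audit idea can be illustrated with a short sketch. This is not a description of any department's actual system: the synthetic payment data, the feature choice (transaction amount only) and the contamination rate are all illustrative assumptions, and an isolation forest is just one of several anomaly-detection techniques that fit the approach described above.

```python
# A minimal sketch of screening an entire general ledger for anomalies
# with an isolation forest, rather than auditing a heuristic sample.
# All data below is simulated for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated general ledger: 10,000 routine payments plus a handful of
# unusually large entries standing in for potentially fraudulent ones.
routine = rng.normal(loc=500.0, scale=50.0, size=(10_000, 1))
suspect = rng.normal(loc=5_000.0, scale=200.0, size=(5, 1))
ledger = np.vstack([routine, suspect])

# Fit on every entry rather than a sampled subset of the ledger.
model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(ledger)  # -1 = flagged as anomalous

flagged = ledger[labels == -1]
print(f"{len(flagged)} entries flagged for audit review")
```

In practice an audit team would use far richer features than amounts alone (counterparty, timing, approval chain), but the structural point stands: the model scores every transaction, so nothing is excluded by sampling.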
Military supply-chain management functions for matériel lend themselves naturally to AI enablement as well; their impact on smooth and efficient military operations makes them a prime candidate. While predictive analytics methods for inventory analysis or supplier risk aimed at reducing working capital may be less relevant in military supply chains than in commercial ones, opportunities to embed AI certainly exist from a tactical planning and optimization perspective. Specifically, AI-based planning tools can analyze historical deployment strategies against current matériel availability and mission goals to develop large-scale execution plans in relatively short periods of time. Moreover, predictive maintenance systems that use image recognition and IoT data alongside machine-learning techniques can be layered on top to optimize fleet management and ensure the necessary systems are available for deployment.

Above all, information management and policy are two areas foundational to the long-term success of AI use cases developed to support military operations within any other organizational function. The collective goal of these two functions must be centered on enabling business units to sustainably deploy AI solutions within their functional streams while governing their responsible and ethical use. This includes not only ensuring that the right mix of people, processes and technology is in place to support these advanced solutions, but also ensuring that current and impending regulatory requirements related to data security and privacy are proactively adhered to. Although the operational risks and rewards of adopting AI reside predominantly within individual business functions, the strategic imperatives and organizational goals related to AI are well suited for governance by information management and policy functions.
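The predictive-maintenance idea described above can be sketched in a few lines. The telemetry features, the synthetic failure rule and the choice of a random-forest classifier are all illustrative assumptions; a fielded system would train on real IoT sensor histories and recorded failures, and might combine tabular telemetry with image-recognition inputs as the text suggests.

```python
# A minimal sketch of predictive maintenance: score fleet assets for
# failure risk from (simulated) IoT telemetry so maintainers can
# prioritize inspections before deployment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2_000

# Synthetic telemetry per vehicle: engine hours since overhaul,
# vibration level, and peak operating temperature.
X = np.column_stack([
    rng.uniform(0, 1_000, n),    # engine hours
    rng.normal(1.0, 0.3, n),     # vibration (g RMS)
    rng.normal(90.0, 10.0, n),   # temperature (deg C)
])

# Toy ground truth: high hours combined with high vibration fails.
y = ((X[:, 0] > 600) & (X[:, 1] > 1.1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Rank held-out assets by predicted failure probability for inspection.
risk = clf.predict_proba(X_test)[:, 1]
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The output of interest is the ranked risk score rather than a hard label: fleet managers can work down the list until inspection capacity is exhausted, which is what "optimizing fleet management" amounts to operationally.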
Implementing and adopting responsible AI

As organizations begin to design and deploy AI systems more pervasively across their back-office functions, they tend to encounter challenges around trust and accountability. These challenges often stem from a lack of interpretability in AI decisions and can consequently expose an organization to increased operational, reputational and financial risks if not properly addressed. For defence and public-sector organizations, avoiding such risks is critical given the importance of national trust and security. To mitigate them effectively, organizations must be proactive in building high-quality, transparent, explainable and ethical AI applications that instill trust and confidence among end users and stakeholders alike.

AI-related risks may generally be divided into two distinct categories: business-level risks and national-level risks. Business-level risks include those surrounding performance (e.g. errors, bias, concept drift), security (e.g. adversarial attacks, privacy, cyber intrusion) and controls (e.g. lack of human agency, inability to control rogue AI models). National-level risks revolve around ethical (e.g. lack of values, moral misalignment), economic (e.g. job displacement, liability) and societal (e.g. reputational damage, non-compliance with regulatory requirements) factors. While the relevance of each may vary greatly from one organization to another, defence organizations need a holistic understanding of these key risks as they strive to design and deploy AI systems that are responsible, trustworthy, fair and stable within their business functions.

Five key dimensions underpin the implementation and adoption of responsible AI within an organization. First and foremost, the foundation is robust end-to-end enterprise governance.
AI governance encompasses strategy through to operationalization and helps organizations answer critical questions around accountability, model development, and processes and controls for AI systems through their entire life cycle. Layered directly on top is the second key dimension: adherence to ethical and regulatory considerations. Organizations should endeavor to design, implement and adopt AI-based solutions that are morally responsible and ethically defensible. Third, AI solutions should be designed to be interpretable and explainable to various stakeholders. These may be C-suite executives, business sponsors, data scientists, regulators, lawyers and end consumers, to name a few, each of whom may require tailored and coherent explanations of how AI-based decisions have been derived for their particular application or business problem. Fourth, AI systems must be robust and secure. This includes developing AI solutions that are easy to validate, monitor, assess, maintain and verify for sustained performance. And fifth, there must be a focus on designing AI solutions that mitigate bias through fair decision-making. Organizations should place significant emphasis on ensuring human prejudices do not enter into the design of these complex and
