
AI: Service Or Servitude?

For those with an eye and an ear for these things, few topics have elicited as much anticipation and trepidation as AI.

This article won’t cover the science of AI or go into too much detail about its current use. Rather, we hope to give a brief overview of its potential for great good, or abject misuse, by individuals and businesses. As is often the case, AI may be a science that society is not quite ready for. History is littered with technological advances that humanity readily embraced before it had reached the level of societal sophistication needed to assume complete control of, and accountability for, them.

The machine gun was said to be the weapon that would end all war because, as a perfect killing machine, contemplating its use was unthinkable. The same was said of nuclear energy. Capable of near-limitless power generation when properly harnessed, it first made headlines after a fission device was dropped over Japan, instantly killing tens of thousands, an act that helped bring about the end of the bloodiest conflict in recorded history.

AI’s worst-case scenarios are only possible if it remains unregulated and misunderstood.

AI suffers, or benefits, from the same dichotomy, and without a clear understanding of its impact and potential misuse we are left to imagine all sorts of dire or idealistic outcomes.

A Brief History of AI – Fact and Fiction 

The quest for artificial intelligence is nothing new. Scientists and mathematicians have toiled over it, spending lifetimes prying out its secrets and debating its viability, vulnerabilities, and potential to change human history. Alan Turing, the legendary computing pioneer portrayed in the blockbuster The Imitation Game (search: German Enigma Cipher Machine), devised the Turing Test: an assessment of whether a machine can interact with a human in a way that is indistinguishable from another human. That benchmark has yet to be attained. Writers have also long been obsessed with AI; readers, even more so. Isaac Asimov, the heralded science-fiction author, penned his classic I, Robot, in which he outlined the now-famous Three Laws of Robotics1:

First Law 

A robot may not injure a human being or, through inaction, allow a human being to come to harm. 

Second Law 

A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 

Third Law 

A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law. 

The paradox is obvious. Using these laws as inviolable programming rules would seem to make things clear. But Asimov overlooked one important detail: the Three Laws of Robotics necessarily preclude robots from having true AI. True artificial intelligence would have to exhibit self-awareness and free will, the latter of which wholly contradicts the intent of the Three Laws of Robotics. Ultimately, true AI may merely be an ideal, never fully realizable.

Fiction is filled with humanity’s close calls with AI: Mary Shelley’s Frankenstein (which pointed at, but did not directly refer to, AI), the Terminator film franchise, The Matrix, and, in possibly the darkest and most plausible portrayal of machine learning and AI gone bad, the HAL 9000 computer in 2001: A Space Odyssey. All have left indelible marks on our zeitgeist and may have inadvertently set back AI, at least in its PR efforts. Incidentally, the letters H, A, and L each immediately precede the letters I, B, and M in the alphabet.

In real life, however, AI has made considerably less ambitious progress. IBM’s Deep Blue2 beat world chess champion Garry Kasparov, while its later computer, Watson3, won Jeopardy!, handily beating two of the show’s most notable past champions. But without self-awareness and free will, these are not true examples of AI, simply more small steps towards it.

The largest companies are the most likely to have an AI strategy, yet only 50% of them have one.4

Definition and Governance 

AI is defined in slightly different ways by various experts and governing bodies. Once a standard definition is settled and what qualifies as AI is determined, artificial intelligence will likely need to fall under strict regulation to control its use. Ironically, regulation may also need to prevent it from actually becoming intelligent and self-aware, because of the inherent moral dilemmas mentioned above.

Multilateral governance of AI has become a hot topic, and following a recommendation by Canada and France, an international G7-led Global Partnership on AI5 was formed, supported in part by Montreal’s very own International Centre of Expertise for the Advancement of Artificial Intelligence6.

Despite these preliminary efforts, the regulation of AI, including the actual defining of its philosophy, use, legality, and potential, is nascent and uncoordinated. Governments are discussing, academia is debating, and advocates of AI are exploring and discovering what the limits may need to be.

And at the same time, business forges ahead, deploying and depending on AI as never before. So, who’s minding the store?

Application and Responsibility 

Proper governance of AI can preclude any apocalyptic outcome. Meanwhile, its role in healthcare, justice and law enforcement, education, finance, and, perhaps most ominously, the development of AI itself is growing at an increasing pace.

If AI is to have as promising a future as intended, trust will need to be built in, whether in the form of Asimovian Laws of Robotics, multilateral international policing organizations, absolute transparency, enhanced reliability, or, more likely, a combination of these and other measures. But it does have a future. And we’re all destined to play a part in it.

By 2025, 90% of all new enterprise apps will employ embedded AI7.

The COVID-19 pandemic was among the first global tests of AI, as industry and commerce had to reimagine how they operated, how they used and accessed manpower, and how they engaged consumers and service users. In this way, the pandemic has been a force multiplier, accelerating AI adoption and making it imperative to plan for responsible uses, and for ways to curb malevolent ones.

Business is also charged with this. KPMG cites six things businesses can do to adopt a proactive stance*.

• Develop AI principles, design criteria, and controls. 

• Design and implement end-to-end AI governance and operating models. 

• Assess current governance and risk frameworks. 

• Implement governance committees and frameworks. 

• Develop and integrate an AI risk management framework. 

• Establish criteria to maintain continuous control over algorithms without stifling innovation and flexibility. 

Fewer than 50% of consumers understand AI.8 

“AI is not likely going to reduce the active workforce, but it will change it! AI will allow industry to automate many of the activities currently being done by people, giving them opportunity to be redeployed to more critical roles, as well as steering the incoming workforce to focus their education and training to areas that will require human interaction.” 

Blair Richardson, VP, Client Success, Cofomo

Business is on the cusp of an AI revolution, with as much potential for impact as the advent of the internet, or more. Some might argue we’re already in the Golden Age of AI. Netflix’s recommendation engine alone was, in 2016, thought to be worth US$1B a year9, while ever-smarter chatbots are projected to help businesses cut costs by US$8B annually10. Clearly, AI is not all talk: 73% of global consumers are open to AI if it makes life and work easier11.

AI ambivalence is rife. While 87% of Americans polled seem to applaud the advent of the era of self-driving cars, 35% said they’d never sit in a fully autonomous vehicle12.

As with any revolutionary technological leap forward there will be pushback. 

The explosive growth of AI will engender endless moral dilemmas for decades to come. How will employment be affected? How will the wealth created by AI be distributed? Can we avoid the occasional, inevitable, but potentially disastrous “artificial stupidity” mishaps? Can we protect our AI from adversaries, or from becoming an adversary itself? And can we program AI to be sufficiently free of bias? On the last point, search online for “schoolgirl”. The results are largely photos of adult women wearing provocative schoolgirl outfits, while searching for “schoolboy” produces photos of ordinary boys in school13. AI delivered those results based on what it learned from our behaviour.

In the end, the measure of AI’s success may not be how well it imitates human thought, but how well it improves upon the biases and shortcomings of human thought. While keeping us safe. And employed. And healthy. And not battling an army of AI robots hell-bent on world domination.

We live in interesting times.

——————————————

Information for this article was compiled from several sources: 

  1. https://en.wikipedia.org/wiki/Three_Laws_of_Robotics 
  2. https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer) 
  3. https://en.wikipedia.org/wiki/Watson_(computer) 
  4. https://sloanreview.mit.edu/projects/reshaping-business-with-artificial-intelligence/ 
  5. https://gpai.ai 
  6. https://ceimia.org/en/ 
  7. https://www.idc.com/research/viewtoc.jsp?containerId=US45599219 
  8. https://dataprot.net/statistics/ai-statistics/ 
  9. https://www.businessinsider.com/netflix-recommendation-engine-worth-1-billion-per-year-2016-6 
  10. https://www.juniperresearch.com/resources/analystxpress/july-2017/chatbot-conversations-to-deliver-8bn-cost-saving 
  11. https://www.pega.com/system/files/resources/pdf/what-consumers-really-think-of-ai-infographic.pdf 
  12. https://dataprot.net/statistics/ai-statistics/ 
  13. https://www.change.org/p/google-google-must-change-school-girl-results-from-sexualized-images-to-actual-girls-in-school 

 A. Blair Richardson, VP Client Success, Cofomo, Ottawa 
