Artificial Intelligence (D100)

From New Media Business Blog




Artificial Intelligence (AI) is all around us, integrated increasingly throughout our daily lives and at an alarming rate. Most people likely use their smartphones as an alarm clock to wake up in the morning. Though simple, that is still AI. In contrast, we might find it strange that Watson [1] (from IBM) and its predictive capabilities belong under the same label. This undoubtedly needs clarification. Our common visual association with AI is robots, but the mechanical parts are only a shell within which the AI operates [2]. In other words, the physical, tangible parts of what we see would not necessarily be viewed by experts as AI. Rather, it is the programming and the capabilities of the program that count. This brings back a familiar question: what is AI? Arguably, it is the "development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently" [3]. It is important to note that these are actions normally done by people. We, as humans, perform incredibly mundane tasks such as opening doors, yet we are also capable of incredibly complex things such as feeling emotions. Both happen daily, but they sit at two different ends of human capability. Replicating that range has been a major endeavour of the computing science world since John McCarthy first used the term "Artificial Intelligence" at the 1956 Dartmouth Conference [4]. He challenged the computing community to create programs that not only replicate human behaviour, but might also do it better. Some argue that we are far from that goal, as the characteristics of Watson and an alarm clock are strikingly dissimilar: these programs were generated for different reasons, and their programming is designed to handle only one situation or to solve a certain type of problem.
This brings forth the notion of how AI, once created, might have its limitations.

What is Artificial Intelligence?

For decades, robots with cognitive abilities that would one day surpass the human brain have been widely portrayed in classic science fiction stories and movies. Because of this depiction of what the future might look like, humans' imagination of the possibilities of artificial intelligence existed long before the technology itself. The scientific expectations that stemmed from this imagination have led to a generally skewed understanding of the true definition of Artificial Intelligence. This narrowed perception often leads to the belief that Artificial Intelligence is something that will happen in the future, far from today's technology, and that when the technology finally "arrives", it could possibly "ruin the world" [5]. In reality, AI is a broad concept that already encompasses several well-adopted technologies in our everyday lives.

In general, there is very little standardization around the definition and boundaries of Artificial Intelligence. In a Stanford study, Computer Scientist Nils J. Nilsson defines AI as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” [6]. The same study suggests that this ambiguity in the definition of Artificial Intelligence has led to its growth and evolution in various directions, as it is not restricted by any form of boundaries.

The abstract description of Artificial Intelligence, combined with the definition portrayed in fiction, created a phenomenon known as the "AI effect" or "Odd Paradox" [7]. The Odd Paradox proposes that once a technology becomes widely adopted, it is no longer considered intelligent because everyone has grown so accustomed to it. This feeds the impression that AI is always a step away from us, near but not quite here. However, it is important to realize that AI has been around for over 50 years and will continue to grow and evolve in incremental steps [8].

Weak vs. Strong Artificial Intelligence

Artificial Intelligence is commonly placed on a spectrum from Weak to Strong AI. This classification provides a form of measurement for the types of technology that belong to the Artificial Intelligence family. Contrary to the belief that AI is meant to replace humans, many AI systems are built with no intention of mimicking human reasoning. These systems are generally associated with applying mechanisms to perform tasks that would normally require human intelligence [9]. This is classified as "Weak AI". Designed for specialization, weak AI is programmed to perform very narrow functions and is not meant to complete tasks beyond set boundaries.

IBM's Deep Blue is a common example of Weak AI in application. In 1997, Deep Blue beat world chess champion Garry Kasparov in a six-game match [10]. To Kasparov, the match felt extremely real and human-like, as if Deep Blue were "experiencing" the game [11]. Although computers such as Deep Blue can appear to behave like humans, they do not mirror the way humans think, and certainly do not play chess the way a human would. Rather than reasoning about a position, Deep Blue searched through enormous numbers of candidate moves and scored the resulting positions using knowledge "taught" to it by human experts, allowing the software to analyze the right moves to make in various situations [12]. Furthermore, Deep Blue was programmed to master the techniques of chess specifically. It cannot "learn" any functions outside of this field without further programming and development, and therefore does not generate any new value on its own.
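
The search-and-score approach behind chess engines like Deep Blue can be illustrated with the classic minimax procedure: look a fixed number of moves ahead, score the resulting positions with an evaluation function, and assume each side picks its best option. This is a hedged, generic sketch with an invented interface, not IBM's actual engine, which also relied on chess-specific hardware and pruning techniques.

```python
# Minimal minimax sketch for any two-player, turn-based game.
# The caller supplies the game rules: an evaluation function, a move
# generator, and a move applicator (all invented names for illustration).

def minimax(state, depth, maximizing, evaluate, moves, apply_move):
    """Return the best achievable score from `state`, searching `depth` plies."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # static evaluation at the search horizon
    if maximizing:
        # The maximizing player picks the move leading to the highest score.
        return max(minimax(apply_move(state, m), depth - 1, False,
                           evaluate, moves, apply_move) for m in legal)
    # The minimizing player picks the move leading to the lowest score.
    return min(minimax(apply_move(state, m), depth - 1, True,
                       evaluate, moves, apply_move) for m in legal)
```

As a toy example, take a "game" where each move adds 1 or 2 to a running total and the score is the total itself: a two-ply search from 0 returns 3, the maximizer's best outcome assuming the minimizer replies with its own best move.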

Between the extremes of Weak and Strong AI, certain technologies in this field can be classified as "In-Between AI". Taking cognitive abilities to the next level, In-Between AI is built on the inspiration of human reasoning but does not aim to replicate human functions [13]. This classification does not always stand apart, as it is very similar in character to Weak AI. A classic example is IBM's Watson. From winning Jeopardy! and analyzing statistics in the healthcare industry, to analyzing human emotions and creating movie trailers, Watson can recognize patterns, understand information, and draw conclusions from it. Reinforcing the concept of In-Between AI, Watson's method of learning mimics a human's, but it cannot truly "feel" the way humans can and thus lacks true understanding. In the cases of Weak and In-Between AI, the systems can often behave like humans without reflecting the way humans think. Simply put, these artificial systems do not have to think the way a human would to reach a certain conclusion; they just need to be "intelligent" [14].

The concept of Strong AI suggests a system with a "genuine understanding" of cognition that replicates the human thought process [15]. This perception is the one most associated with the topic of Artificial Intelligence and the type most commonly depicted in science fiction. Strong AI describes machines that have reached true understanding and cognitive capabilities matching or exceeding a human's. Currently, there are no examples of Strong AI, and much controversy exists over whether it can truly be created. Among those who argue strong AI will exist one day, some believe it could emerge in the next few years, while others estimate it is at least hundreds of years away due to countless uncertain variables [16]. The idea of strong AI also ties in with the theory of technological singularity, which states that the invention of artificial superintelligence will one day surpass human capabilities, causing society to reform in ways the human mind cannot currently comprehend [17].

Trends in Artificial Intelligence Development

A major theme in AI development is how initial expectations become exaggerated and then unfulfilled when current technology fails to meet them. The unfulfilled expectations then go on to dampen interest and funding for AI development, leading to a cyclical rise and fall of AI. These cycles resemble the first half of the hype cycle, last approximately 15 years, and focus on different advancements in AI each time [18]. A major example is the "AI Winter" of the 1980s, in which AI researchers failed to deliver on industry and government expectations. Research into AI waned, with DARPA slashing AI investment by 34% and membership in the Association for the Advancement of Artificial Intelligence falling to approximately 4,000 members [19].

Currently, AI is facing a resurgence in mainstream interest, with investment in AI startups increasing to $309.2 million in 2014, a 20-fold jump from 2010's $14.9 million [20]. This can be attributed to several factors. First, technological advancements have made computation cheap and abundant, enabling greater AI capabilities [21]. Combined with the various cloud services available, this allows for lower costs and widespread deployment [22]. Second, thanks to Big Data, AI now has the data necessary to learn on an entirely new scale. Third, the widespread adoption of machine learning, in which algorithms analyze data and make decisions, allows AI to teach itself in groundbreaking ways [23]. This has fueled the use of deep learning, a method of implementing machine learning with models inspired by the neural networks of the brain [24]. AlphaGo, the Go-playing AI from Google's DeepMind that won against a professional Go player, uses deep learning in its algorithms [25]. These factors have all encouraged major technology companies to adopt AI and push for its widespread adoption.
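
The "neural network" idea behind deep learning can be sketched in a few lines: artificial neurons compute weighted sums of their inputs and pass them through a nonlinearity, and stacking layers of such neurons is what makes a network "deep". The weights below are invented for illustration; real systems learn millions of parameters from data rather than having them written by hand.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def forward(inputs, layers):
    """Feed inputs through a stack of layers. Each layer is a list of
    (weights, bias) pairs, one per neuron; the outputs of one layer
    become the inputs of the next."""
    for layer in layers:
        inputs = [neuron(inputs, w, b) for (w, b) in layer]
    return inputs
```

Training (which this sketch omits) consists of nudging the weights and biases so the final layer's outputs move closer to known correct answers, typically via backpropagation and gradient descent.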

Selected Artificial Intelligence Topics


Cybersecurity

According to IBM, "the number one challenge for security leaders today is reducing average incident response and resolution times" [26]. One of the largest applications of AI in cybersecurity is its ability to filter through data at speeds a human security expert could never hope to achieve. Because cybersecurity work centres on identifying and removing threats, the use of AI in this field is largely categorized as weak.


MIT's Computer Science and Artificial Intelligence Laboratory released a paper on AI2, an artificial intelligence developed to predict cyber-attacks using machine learning combined with human analysis [27]. AI2 first identifies security risks by "clustering the data into meaningful patterns using unsupervised machine-learning" and presents them to a human expert, who identifies which risks are genuine threats. AI2 then incorporates this feedback into its next review and repeats the process. Through this loop, AI2 narrowed the roughly 200 most abnormal events it surfaced on its first day down to 30 or 40 events per day [28].
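
The loop described above — unsupervised ranking of the most abnormal events, analyst labels, and labels folded into the next pass — can be sketched roughly as follows. The scoring scheme and function names here are illustrative assumptions, not the statistical models from MIT's actual paper.

```python
# Hedged sketch of a human-in-the-loop triage loop in the spirit of AI2.
# Events are reduced to a single numeric feature for simplicity.

def anomaly_score(event, mean, std):
    """Score an event by how far it sits from the observed baseline."""
    if std == 0:
        return 0.0
    return abs(event - mean) / std

def triage(events, labeled_malicious, top_k=5):
    """Rank events by anomaly score (the unsupervised pass), but always
    surface anything an analyst previously labeled malicious (the
    feedback pass), so the system keeps learning from human review."""
    mean = sum(events) / len(events)
    std = (sum((e - mean) ** 2 for e in events) / len(events)) ** 0.5
    ranked = sorted(events, key=lambda e: anomaly_score(e, mean, std),
                    reverse=True)
    flagged = [e for e in ranked if e in labeled_malicious]
    return flagged + [e for e in ranked[:top_k] if e not in labeled_malicious]
```

Each day's analyst verdicts grow `labeled_malicious`, so over time fewer raw anomalies need human review — mirroring how AI2 shrank its daily queue.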

While a fully automated threat detection AI will take more time to mature, human input can act as a shortcut in developing expertise, greatly accelerating the adoption of AI in security threat analysis. Furthermore, it ensures that a human element remains in how an artificial intelligence learns to remove threats.

AI in Traditional Security Competitions

Capture the Flag (CTF) is a type of computer security competition in which teams attack opponents' machines through vulnerabilities while defending their own, similar to security challenges in the real world [29].

The Defense Advanced Research Projects Agency (DARPA) [30] held the first completely autonomous CTF challenge on August 4th, 2016 [31]. Among seven competitors, Carnegie Mellon University's autonomous AI Mayhem took first place [32]. However, when Mayhem was entered in a non-autonomous CTF competition at DEF CON 2016, it finished last [33]. While Mayhem lost, its ability to compete on the same level demonstrates the possibility that AI could defeat non-autonomous opponents in the future. Furthermore, CTF challenges are the closest competitive representation of the security issues faced in the real world. An AI defeating a human opponent in this format would signal the beginning of viable widespread adoption of AI in this area, where AI could replace many of the interdisciplinary skills of a competitor.


A more consumer-focused application of AI in cybersecurity is antivirus software. SparkCognition's DeepArmour makes use of "neural networks, advanced heuristics, and complex data science" [34] to identify and remove malicious threats. While DeepArmour is still in beta, the use of artificial intelligence in antivirus software may re-instill consumer trust.

Enterprise Security

When companies invest in security measures to safeguard the barriers into a network, security within the network itself can often be overlooked. In an age where entry points are multiplying and a single misplaced USB drive can spell disaster, safeguarding within the security perimeter is a necessity. This is especially true with the rise of advanced persistent threats (APTs), which are custom-designed to infiltrate a specific organization.

A number of startups are using artificial intelligence to address security issues within an organization, past its outer defenses. Researchers from Cambridge are developing Darktrace, an autonomous threat detection system. Darktrace bills itself as an enterprise immune system, reflecting how its algorithms are modeled on the way the immune system removes threats [35]. After taking some time to learn how a network normally operates, Darktrace identifies suspicious activities as they stray outside those normal limits, all in real time. Just as an immune system isolates unusual occurrences, Darktrace can also seal off sensitive information when suspicious activity is identified [36].
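
The "learn what normal looks like, then flag deviations" principle can be sketched with a simple statistical baseline. This is only an illustration of the approach, with invented class and method names; Darktrace's actual models are proprietary and far more sophisticated.

```python
# Toy anomaly detector: fit a per-metric baseline during a learning
# period, then flag observations far outside it.

class BaselineDetector:
    def __init__(self, tolerance=3.0):
        self.tolerance = tolerance  # how many standard deviations = "abnormal"
        self.samples = []

    def learn(self, value):
        """Record one observation of normal activity (e.g. bytes/minute
        for a given host) during the learning period."""
        self.samples.append(value)

    def is_anomalous(self, value):
        """Flag values more than `tolerance` standard deviations from the
        learned normal. Assumes a learning period has already run."""
        if not self.samples:
            return False
        n = len(self.samples)
        mean = sum(self.samples) / n
        std = (sum((s - mean) ** 2 for s in self.samples) / n) ** 0.5
        return abs(value - mean) > self.tolerance * max(std, 1e-9)
```

A real system would maintain many such baselines per device and user, update them continuously, and correlate across metrics rather than judging each one in isolation.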

Creative Artificial Intelligence

Creativity has long been thought to be a uniquely human skill. However, creative AI is forcing society to reevaluate what it means to be truly creative. Creative AI is currently in its early stages of development, but has massive potential and implications for many industries ranging from art and entertainment to marketing and cuisine.

Film and Television

"Morgan" movie trailer by Watson (IBM) [37]


One of the most notable ventures into creative AI was the trailer for the horror/suspense movie "Morgan". IBM's Watson was employed to participate in making the trailer, suggesting scenes for the editors to put together. The AI's process to complete the trailer involved three steps [2]:

  1. Visual Analysis: each scene in the movie was tagged with appropriate emotions, people, and objects from a bank of over 22,000 descriptors.
  2. Audio Analysis: the musical score and ambient sounds were associated with each scene.
  3. Scene Composition Analysis: the types of scenes normally found in horror/suspense movies were categorized in terms of emotion, location, and image framing.
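
The steps above can be caricatured as a tag-and-rank pipeline: scenes carry human-applied tags, and candidates are ranked against a target profile distilled from past trailers. The tag names and weights below are invented for illustration; Watson's real analysis spanned visuals, audio, and scene composition.

```python
# Illustrative scene-suggestion sketch: rank tagged scenes against a
# hypothetical "horror trailer" emotion profile.

HORROR_PROFILE = {"fear": 3.0, "suspense": 2.5, "tenderness": 1.0}

def score_scene(tags):
    """Sum the profile weights of the emotions tagged on this scene;
    untagged or off-profile emotions contribute nothing."""
    return sum(HORROR_PROFILE.get(tag, 0.0) for tag in tags)

def suggest_scenes(scenes, n=3):
    """Return the n scenes whose tags best match the trailer profile.
    As in the Watson workflow, a human editor still picks the final
    selection and order."""
    return sorted(scenes, key=lambda s: score_scene(s["tags"]),
                  reverse=True)[:n]
```

Note that everything interesting here — the tags and the profile weights — comes from human judgment, which is exactly the bias discussed below.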

From this process, Watson was able to suggest a number of scenes for an editor to stitch together into a trailer that emulated ones made in the past by humans. Having Watson suggest scenes cut a normally lengthy and expensive process down from 10-30 days to a 24-hour period. The implications of this are huge, both for efficiency and for employment: Canada's $6.3 billion movie industry alone pays out almost $700 million in salaries to film employees [3]. As one IBM article says, "reducing the time of a process from weeks to hours -- that is the true power of AI." [4]

This process, however, revealed the limitations of creative AI, especially in imitating human understanding of emotion. To receive scene recommendations from Watson, each scene had to be tagged with certain emotions, and the scenes were refined and stitched into a particular order by a human editor [5]. Tagging scenes with specific emotions inevitably carries an implicit bias, as human opinion plays a role in the perception of emotions. This translates into a biased filter through which Watson computes and suggests the best scene choices. Even when editing footage, a seemingly objective process, filmmakers are influenced by their own stylistic preferences, creating yet another biased filter in the data Watson is fed. Therefore, Watson is only able to imitate filmmakers' distinct styles; many can argue that it is not yet truly creative. [6]

McCann Japan’s AI Creative Director


McCann Erickson Japan ran an advertisement competition between a human creative director and a creative AI. The Japanese public was shown both commercial clips and asked to vote blindly on which ad was more effective. The creative AI ran on a data analysis program that followed steps similar to Watson's movie trailer analysis, in which scenes had to be tagged and analyzed for the program to suggest a cohesive ad.

The limitations of creative AI were even more evident here, as humans did most of the work of analyzing and tagging the scenes for the program. The AI was fed commercials to analyze from the All Japan Radio & Television Commercial Confederation's annual CM Festival. It was designed mainly to mine "the Festival's database and creatively direct the optimal commercial for any given product or message" rather than to construct the commercial creatively itself. [2]

Art and Music

Music and art have been among the most compelling realms for AI researchers to experiment in. Many projects over the past decade or two have explored the potential for creativity through algorithms.

David Cope, a professor at UC Santa Cruz, has long advocated that composers use algorithms to compose music, because of the massive increase in efficiency and, therefore, in possibilities to experiment with their work. With the risk of experimentation lowered, it is arguable that the user could guide the AI to be even more creative in its compositions [3].

Tools for musical analysis and breakdown make it possible to translate text into music that matches a composer's new idea with surprising accuracy. Google Magenta [4] is a project team [5] training deep learning models, built on Google's TensorFlow framework, to write music according to a user's mood, composing pieces designed specifically to counteract and reduce stress. The team hopes the program will become a tool that opens composers to a new realm of compelling creative thinking, aided by machines. As Magenta produces content, it faces the question: is this truly generative, or just another iteration of an existing piece of work?

Melomic apps [6] use AI to generate personalized music that matches a listener's mood or pain and composes to counteract those feelings. These are already available to the public, and when tested on a sample group, there were significant reductions in pain perception as the music distracted patients. AI that affects common human characteristics like mood has the potential to integrate into people's everyday lives as an industry that has yet to emerge.

As these tools develop and become more sophisticated in terms of generating creative content, the applications are endless. A creative AI puts machines a step closer to thinking like humans, and with unique music and art already being created by AI systems, it is possible we will see AI becoming truly creative in the future.

Artificial Intelligence and Ethics

While the creators of AI may aspire to create "sentient beings," this raises questions about the moral and ethical implications of these "beings." While weak AI, such as an alarm clock, can easily be dismissed as inhuman, the lines blur as we approach stronger AI. In this section, we consider questions raised by some recently developed AI, followed by our stance on other ethical questions asked since the actualization of AI itself. Recent developments include Microsoft's Twitter personality Tay [7], autonomous cars [8], and Google's search (in)capabilities [9]. These examples present unique situations that humans' bounded rationality likely overlooked. They may bring implications for everyday life and, more importantly, business; they also resurface, from a different perspective, questions asked by the AI community since its dawn.

Should Artificial Intelligence Have Identifiable Character?

Strong AI, it is hoped, can be characterized to the extent of having a persona or personality attached to it. What distinguishes it as strong AI is that its personality adapts to its surroundings. It is important to note that the personality is not pre-programmed; rather, the program "codes" the personality as the computer encounters unique situations. Simple enough: society could socialize this personality into a fine, upstanding AI being. However, it may not be as simple as one might anticipate. There is an assumption that the computer code encompasses the cognitive capabilities of the human mind; in other words, the AI personality is assumed to handle exceptions to norms the way a human mind might, and to be easily molded into a personality. And once programmed, how much more programming does it take to replicate a more intuitively nuanced mind? Take, for example, Microsoft's Tay [10], a personality designed to learn from and interact in real time with real Twitter users. Within a matter of hours, the program had turned to racism and political incorrectness. Granted, Tay had no historical context in which to distinguish "good" from "bad" references, but the situation asks two related questions:

  1. How can a program be created to react in a way we as humans do? And,
  2. If we are able to program this type of personality and learning to accommodate political spheres, which ethical framework do we use?

Are We Imposing Ethnocentric Ethical Frameworks?

Tay's personality was unbounded, as Tay had no understanding of the social environment it was immersed in. Since human rationality is itself bounded, can we truly create a being that is cognizant of the political and social atmospheres in which it operates? Considering Tay alone, the answer for now is a definite "no". If something like Tay could pick up slang and cultural biases in a matter of hours, how can we be sure that, even with more or better programming, Tay's successors won't meet the same outcome? This concern parallels warnings from Stephen Hawking, Elon Musk, and Bill Gates back in 2014 that, on a widespread scale, AI could effectively wipe out the human race if we are not careful [11]. It should give pause that such prominent thinkers are wary of the development of AI. However, other great thinkers, like Ray Kurzweil, urge us to explore this side of our existence, directly opposing Hawking and Musk [12]. While these are two polarized views, it is important to note that our endeavours into the AI unknown present existential risks to humans. The hope is that the human development of AI is not done blindly, merely in search of the bliss and excitement of unexplored territory. Humanity and its capacity for abstract thought have an uncanny way of achieving what the rational side of our minds dare not dream of. It is hopeful that this may be the case, but when it does happen, whose view of the world do we use?

What's more, Tay has no physical presence: no body, no address, no place to call home. Tay has no peers, no culture, no religion, no gender, and no family. These are often the tangible and intangible forces that socialize humans into the people and great thinkers they are; at the other extreme, sometimes the neglect of these factors shapes people into the great yet terrible people they are--but we digress. The point is, Tay has no social framework.

Imagine a community composed purely of AI individuals, AI families, and AI neighbourhoods; AI buildings, roads, and places of worship. Perhaps what is imagined is reminiscent of the typical American suburb, with a white picket fence and green lawns. These are ideals; they are representative of our values. Or perhaps that wasn't what was imagined, but rather an urban jungle, or the countryside. In any case, these alone are major factors that shape our individual perceptions of the world. This reiterates the question: which ethical framework do we use? More broadly, once we've decided on the framework, what does it look like? In mid-October 2016, Mercedes stated that its autonomous car would do what it can to save the vehicle's occupant rather than any pedestrians [13]. This classic dilemma (the trolley problem) is not new, but its application to this situation is, and it is very real. With autonomous cars imminent across the globe [14] [15], the AI community races toward a solution to this question of ethical standards. The question itself, however, is disquieting. Most business ethics courses are cautious about taking a stance on which ethical framework is best; rather, there is agreement that no set of rules is best, and the focus then lands on the application of a given set of guidelines. Perhaps this should be the focus of the community set on creating the standard [16]. After all, a question that asks which ethical framework should be used worldwide undeniably silences all that might be taught in a business ethics course.

A Redirection of Ethics

It seems, then, that rather than providing rules and frameworks for AI, it might be suitable for the AI program's code to capture the substance, the principle, of the matter. But is that where it stops? Certainly not. AI right now is still incapable of understanding the implications and biases its algorithms might hold. Google, a behemoth in the world of technology and a member of the AI community, demonstrates this in its image search engine. Searching Google for images of "professional hairstyles" yields photos of Caucasian women with what could be characterized as "natural" looks; change the search to "unprofessional hairstyles" and the vastly different outcome presents predominantly African-American women with what could be characterized as more "bizarre" hairstyles [17]. Granted, these biases are sadly not far off from views people hold in the world right now, especially in the USA, aware of it or not. Regardless of the importance an AI might place on anyone's life, the use of raw data by simple but smart search engines amplifies the biases within that data. What's more, it calls into question whether our ethical frameworks hold biases in and of themselves. To draw this clearly: the notions put forward by classical ethical thinkers such as Aristotle and Machiavelli share one thing with more contemporary thinkers like Gates, Musk, and Hawking: their posits and subsequent arguments are shaped by their own personal experiences. This may lead to two conclusions in response to the ethical framework question. First, it may render any consideration of frameworks futile, as all guidelines will be biased, and bias is often what we seek to remove when we use technology. Or it simply recirculates on itself: which ethical framework is better? The answer is not here, nor is it anywhere in sight. Only the development of this discourse will provide momentum towards the answer, should there be one.

The Future of Artificial Intelligence

It would be difficult to describe the future of AI as definitively positive or negative. We believe that AI will be beneficial for certain people but, on the other hand, could replace whole sectors of the economy that employ millions of people. It has progressed much more rapidly over the last 10 years than researchers predicted and has already been applied in numerous areas of society. Businesses will need to adapt to the dynamic changes AI will bring to the economy and to ways of working, using AI more intelligently and effectively to maintain competitive advantage in their industries. In some sectors, AI has the potential to replace masses of employees; we have observed that AI is already so integrated into many industries that dependence on it could grow until reliance on employees is no longer required. [18]

That said, some industries are not entirely replaceable, at least in the near future. This includes the art and entertainment industry, which would require strong AI to automate fully, a concept that has not been significantly developed yet. It is possible that singularity will never be achieved unless a significant breakthrough occurs in the research of human cognition and what defines us as humans.

There also lies the possibility of creating new industries using artificial intelligence in a similar fashion to the modern smartphone and the many companies it has produced. With the rise of artificial intelligence in businesses in all sectors, it is predicted that total wealth will increase, but as society slowly adjusts to reliance on AI, wealth may become skewed towards the upper class that controls these successful businesses before it is eventually distributed down.

Artificial intelligence is proliferating into more industries, technologies, and aspects of daily life than ever before. Researchers will be delving into not just the technology and coding behind AI, but the level of integration the public will allow and how that will impact society as a whole in the very near future. It will be up to consumers and both big and small businesses to determine the direction of Artificial Intelligence in the future.


Maycko Macapugas
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada

