Artificial Intelligence (D100)
From New Media Business Blog
Introduction
Artificial Intelligence (AI) is all around us, integrated into our daily lives at an alarming rate. Most people use their smartphones as an alarm clock to wake up in the morning; though simple, that is still a form of AI. In contrast, it might seem erroneous that IBM’s Watson [1] and its predictive capabilities belong under the same label. This undoubtedly needs clarification. Our common visual association with AI is robots, but the mechanical parts are only a shell within which the AI operates [2]. In other words, experts would not necessarily view the physical, tangible parts of what we see as AI; rather, it is the programming and the capabilities of the program that count. This brings back a familiar question: what is AI? Arguably, it is the “development of computers that are able to do things normally done by people -- in particular, things associated with people acting intelligently” [3]. It is important to note that these actions are normally done by people. As humans, we perform incredibly mundane tasks, such as opening doors, and we are also capable of incredibly complex things, such as feeling emotions. Both happen on a daily basis, yet they sit at opposite ends of human capability. Replicating that range has been a major endeavour of the computing science world since John McCarthy first used the term “Artificial Intelligence” at the 1956 Dartmouth Conference [4]. He challenged the computing community to create programs that not only replicate human behaviour, but might also do it better. Some argue that we are far from that goal, as the characteristics of Watson and an alarm clock are strikingly dissimilar: these programs were created for different reasons, and their programming is designed to handle only one situation or to solve a certain type of problem.
This brings forth the notion of how AI, once created, might have its limitations.
What is Artificial Intelligence?
For decades, the idea of robots with cognitive abilities that would one day surpass the human brain has been widely portrayed in classic science fiction stories and movies. Because of this depiction of what the future might look like, the human imagination around artificial intelligence existed long before the technology was developed. These expectations, formed well before the technology itself, have led to a generally skewed understanding of the true definition of Artificial Intelligence. This narrowed perception often leads to the belief that Artificial Intelligence is something that will happen in the future, far removed from today’s technology, and that when the technology finally “arrives”, it could possibly “ruin the world” [5]. In reality, AI is a broad concept that already encompasses several well-adopted technologies in our everyday lives.
In general, there is very little standardization around the definition and boundaries of Artificial Intelligence. In a Stanford study, Computer Scientist Nils J. Nilsson defines AI as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment” [6]. The same study suggests that this ambiguity in the definition of Artificial Intelligence has led to its growth and evolution in various directions, as it is not restricted by any form of boundaries.
The abstract description of Artificial Intelligence, combined with the definition portrayed in fiction, has created a phenomenon known as the “AI effect” or “Odd Paradox” [7]. The Odd Paradox proposes that once a technology becomes widely adopted, it is no longer considered intelligent, as everyone has grown accustomed to it. This feeds the impression that AI is always a step away from us, near but not quite here. However, it is important to realize that AI has been around for over 50 years and will continue to grow and evolve in incremental steps [8].
Weak vs. Strong Artificial Intelligence
Artificial Intelligence is commonly placed on a spectrum from Weak to Strong AI. This classification provides a form of measurement for the types of technology that belong to the Artificial Intelligence family. Contrary to the belief that AI is meant to replace humans, many AI systems are built with no intention of mimicking human reasoning. These systems generally apply mechanisms that substitute for tasks normally requiring human intelligence [9]. This is classified as “Weak AI”. Designed for specialization, Weak AI is programmed to perform very narrow functions and is not meant to complete tasks beyond set boundaries.
IBM’s Deep Blue is a common example of Weak AI in application. In 1997, the IBM computer Deep Blue beat chess grandmaster Garry Kasparov in a six-game match [10]. To Kasparov, the match seemed extremely real and human-like, as if Deep Blue were “experiencing” the game [11]. Although computers such as Deep Blue can appear to behave like humans, they do not mirror the way humans think and, of course, do not play chess the way a human would. Rather than planning specific steps, Deep Blue’s moves follow techniques previously “taught” to the computer by humans, so that the software can analyze the right move to make in various situations [12]. Furthermore, Deep Blue is programmed to master chess specifically. It cannot “learn” any function outside this field without further programming and development, and therefore does not generate any new value on its own.
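Programs of Deep Blue’s kind analyze moves by searching a tree of possible continuations and scoring the resulting positions with a human-supplied evaluation function, a technique known as minimax search. As a rough illustration of the principle only, here is a minimal minimax on a toy token-taking game rather than chess; the game and all names are hypothetical:

```python
def minimax(n, maximizing):
    """Score a position in a toy Nim-like game: players alternately take
    1 or 2 tokens from a pile of n, and whoever takes the last token wins.
    Returns +1 if the maximizing player wins with best play, -1 otherwise."""
    if n == 0:
        # No tokens left: the previous player took the last one and won,
        # so the player now to move has lost.
        return -1 if maximizing else 1
    scores = [minimax(n - take, not maximizing) for take in (1, 2) if take <= n]
    return max(scores) if maximizing else min(scores)

def best_move(n):
    """Pick the number of tokens to take that leads to the best outcome."""
    return max((take for take in (1, 2) if take <= n),
               key=lambda take: minimax(n - take, False))
```

The real system differed in scale rather than structure: a far deeper search over a vastly larger game, with an evaluation function tuned using human chess knowledge instead of a simple win/loss check.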
Between the extremities of Weak and Strong AI, certain technologies in this field can be classified as “In-Between AI”. Taking cognitive abilities to the next level, In-Between AI is inspired by human reasoning but does not aim to replicate human functions [13]. This classification is not always used, since it overlaps heavily with the characteristics of Weak AI. A classic example is IBM’s Watson computer. From winning Jeopardy! and analyzing statistics in the healthcare industry, to analyzing human emotions and creating movie trailers, Watson can recognize patterns, understand information, and draw conclusions from it. Reinforcing the concept of In-Between AI, Watson’s method of learning mimics a human’s, but it cannot truly “feel” the way humans can and thus lacks true understanding. In the cases of Weak and In-Between AI, the systems can often behave like humans without reflecting the way humans think. Simply put, these artificial systems do not have to think the way a human would to reach a certain conclusion; they just need to be “intelligent” [14].
The concept of Strong AI suggests a system with “genuine understanding” of cognition that replicates the human thought process [15]. This perception is the one most associated with the topic of Artificial Intelligence and is also the type commonly depicted in science fiction. Strong AI describes machines that have reached true states of understanding, with cognitive capabilities that match or exceed a human’s. Currently, there are no examples of Strong AI, and much controversy exists over whether it can truly be created. Among those who argue Strong AI will exist one day, some believe it could emerge in the next few years, while others estimate it is at least hundreds of years away due to countless uncertain variables [16]. The idea of Strong AI also ties in with the theory of technological singularity, which states that the invention of artificial superintelligence will one day surpass human capabilities, causing society to reform in ways the human mind cannot currently comprehend [17].
Trends in Artificial Intelligence Development
A major theme in AI development is how initial expectations become exaggerated and then unfulfilled when current technology fails to meet them. The unfulfilled expectations then reduce interest and funding for AI development, leading to cyclical rises and falls of AI. These cycles resemble the first half of the hype cycle, last approximately 15 years each, and focus on different advancements in AI each time [18]. A major example is the “AI Winter” of the 1980s, in which AI researchers failed to deliver on industry and government expectations. Research into AI waned, with DARPA slashing AI investment by 34% and membership in the Association for the Advancement of Artificial Intelligence falling to approximately 4,000 members [19].
Currently, AI is experiencing a resurgence in mainstream interest, with investment in AI startups increasing to $309.2 million in 2014, a 20-fold jump from 2010’s $14.9 million [20]. This can be attributed to several factors. First, technological advancements have made computation cheap and abundant, enabling greater AI capabilities [21]; combined with the various cloud services available, this allows for lower costs and widespread deployment [22]. Second, thanks to Big Data, AI now has the data necessary to learn on an entirely new scale. Third, the widespread adoption of machine learning, in which algorithms analyze data and make decisions, allows AI to teach itself in groundbreaking ways [23]. This has fueled the use of deep learning, a method of implementing machine learning modeled loosely on the neural networks of the brain [24]. Google’s DeepMind used deep learning in the algorithms behind AlphaGo, the Go-playing AI that won against a professional Go player [25]. These factors have all encouraged major technology companies to adopt AI and push for its widespread adoption.
Selected Artificial Intelligence Topics
Cyber-Security
According to IBM, “the number one challenge for security leaders today is reducing average incident response and resolution times” [26]. One of the largest applications of AI in cyber-security is the ability to filter through data at speeds a human security expert could never hope to achieve. Because cyber-security work centres on identifying and removing specific threats, the use of AI in this field is largely categorized as weak.
AI2
MIT’s Computer Science and Artificial Intelligence Laboratory released a paper on AI2, an artificial intelligence developed to predict cyber-attacks using machine learning combined with human analysis [27]. AI2 first identifies security risks by “clustering the data into meaningful patterns using unsupervised machine-learning” and presents them to a human expert, who identifies which risks are genuine threats. AI2 then incorporates this feedback into its next review and repeats the process. Through this cycle, AI2 narrowed the 200 most abnormal events it flagged on its first day down to 30 or 40 events per day [28].
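The cycle described above (unsupervised flagging, expert review, feedback) can be sketched in a few lines. This is a minimal illustration, not AI2’s actual method: a simple z-score stands in for the clustering step, the analyst is just a callback, and all names are hypothetical.

```python
from statistics import mean, stdev

def flag_outliers(events, threshold=2.0):
    """Unsupervised stage: flag events that sit far from the average.

    A plain z-score stands in for AI2's clustering; real systems build
    much richer behavioural features than a single number per event.
    """
    mu = mean(events)
    sigma = stdev(events) or 1.0  # floor avoids division by zero on flat data
    return [e for e in events if abs(e - mu) / sigma > threshold]

def review_loop(events, analyst, known_threats):
    """One pass of the human-in-the-loop cycle: flag, review, remember."""
    for event in flag_outliers(events):
        if analyst(event):               # human expert confirms the threat
            known_threats.add(event)     # feedback folded into the next review
    return known_threats
```

Each pass narrows what the analyst has to look at, and the confirmed labels accumulating in `known_threats` are what a supervised model could later train on, which is the feedback step the paper credits for AI2’s improvement.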
While fully automated threat detection AI will take more time to mature, human input can act as a shortcut in developing expertise and greatly accelerate the adoption of AI in security threat analysis. It also ensures that a human element remains in how an artificial intelligence learns to remove threats.
AI in Traditional Security Competitions
Capture the Flag (CTF) is a type of computer security competition in which teams attack opponents’ machines through vulnerabilities while defending their own, similar to security challenges in the real world [29].
The Defense Advanced Research Projects Agency (DARPA) [30] held the first completely autonomous CTF challenge on August 4th, 2016 [31]. Among seven competitors, Carnegie Mellon University’s autonomous AI, Mayhem, took first place [32]. However, when Mayhem entered a non-autonomous CTF competition at DEF CON 2016, it finished in last place [33]. While Mayhem lost, its ability to compete at all demonstrates the possibility that an AI could defeat non-autonomous opponents in the future. Furthermore, CTF challenges are the closest competitive representation of the security issues faced in the real world. An AI defeating a human opponent in this format would signal the beginning of viable, widespread adoption of AI in this area, where it could replace many of the interdisciplinary skills of a human competitor.
DeepArmor
A more consumer-focused application of AI in cybersecurity is antivirus software. SparkCognition’s DeepArmor makes use of “neural networks, advanced heuristics, and complex data science” [34] to identify and remove malicious threats. While DeepArmor is still in beta, the use of artificial intelligence in antivirus software may re-instill consumer trust.
Enterprise Security
When companies invest in security measures to safeguard the barriers into a network, security within the network itself can often be compromised. In an age when entry points are multiplying and a single misplaced USB drive can spell disaster, safeguarding the inside of a security perimeter is a necessity. This is especially true with the rise of advanced persistent threats (APTs), which are custom-designed to infiltrate a specific organization.
A number of startups are trying to use artificial intelligence to address security issues inside an organization, past its outer defenses. Researchers from Cambridge are developing Darktrace, an autonomous threat detection system. Darktrace bills itself as an enterprise immune system, reflecting how its algorithms are modeled on the way the immune system removes threats [35]. After taking some time to learn how a network normally operates, Darktrace identifies suspicious activities as they stray outside normal limits, all in real time. Just as an immune system isolates unusual occurrences, Darktrace can also seal off sensitive information when suspicious activity is identified [36].
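The learn-then-flag behaviour can be illustrated with a toy per-host baseline monitor. This is a sketch of the general “immune system” idea only, not Darktrace’s algorithm; the mean/standard-deviation model, the class, and all names are illustrative assumptions.

```python
from collections import defaultdict, deque

class BaselineMonitor:
    """Learn a per-host activity baseline, then flag observations that fall
    outside normal limits. A toy stand-in for 'enterprise immune system'
    style detection; the statistics here are deliberately simplistic."""

    def __init__(self, window=100, tolerance=3.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.tolerance = tolerance  # how many standard deviations is "normal"

    def observe(self, host, bytes_sent):
        """Record one observation and report whether it looks suspicious."""
        past = self.history[host]
        suspicious = False
        if len(past) >= 10:  # only judge once some baseline exists
            mu = sum(past) / len(past)
            var = sum((x - mu) ** 2 for x in past) / len(past)
            sigma = var ** 0.5 or 1.0  # floor so a flat baseline isn't zero-width
            suspicious = abs(bytes_sent - mu) > self.tolerance * sigma
        past.append(bytes_sent)
        return suspicious
```

A real deployment would model many signals per device (connections, timing, protocols) rather than one number, but the shape is the same: a learning window builds the notion of “normal”, after which each new observation is judged against it in real time.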
Creative Artificial Intelligence
Creativity has long been thought to be a uniquely human skill. However, creative AI is forcing society to reevaluate what it means to be truly creative. Creative AI is currently in its early stages of development, but has massive potential and implications for many industries ranging from art and entertainment to marketing and cuisine.
Film and Television
A Movie Trailer by IBM's Watson [37]
Maycko Macapugas
Beedie School of Business, Simon Fraser University, Burnaby, BC, Canada
mmacapug@sfu.ca
References
- ↑ https://www.youtube.com/watch?v=8cWHxd3k4gs&feature=youtu.be
- ↑ http://adage.com/article/creativity/check-commercial-mccann-japan-ai-creative-director/304320/
- ↑ http://newatlas.com/creative-artificial-intelligence-computer-algorithmic-music/35764/
- ↑ https://magenta.tensorflow.org/welcome-to-magenta
- ↑ http://www.businessinsider.com/google-wants-artificial-intelligence-to-be-creative-2016-5
- ↑ http://newatlas.com/creative-artificial-intelligence-computer-algorithmic-music/35764/
- ↑ http://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
- ↑ http://fortune.com/2016/10/15/mercedes-self-driving-car-ethics/
- ↑ http://thenextweb.com/artificial-intelligence/2016/09/30/artificial-intelligence-is-quickly-becoming-as-biased-as-we-are/#gref
- ↑ http://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160
- ↑ http://observer.com/2015/08/stephen-hawking-elon-musk-and-bill-gates-warn-about-artificial-intelligence/
- ↑ http://time.com/3641921/dont-fear-artificial-intelligence/
- ↑ http://fortune.com/2016/10/15/mercedes-self-driving-car-ethics/
- ↑ http://www.bloomberg.com/news/features/2016-08-18/uber-s-first-self-driving-fleet-arrives-in-pittsburgh-this-month-is06r7on
- ↑ https://www.theguardian.com/technology/2016/aug/24/self-driving-taxis-roll-out-in-singapore-beating-uber-to-it
- ↑ https://www.fastcompany.com/3064196/mind-and-machine/tech-giants-team-up-to-devise-an-ethics-of-artificial-intelligence
- ↑ https://www.thenextweb.com/artificial-intelligence/2016/09/30/artificial-intelligence-is-quickly-becoming-as-biased-as-we-are/
- ↑ http://news.harvard.edu/gazette/story/2016/09/what-artificial-intelligence-will-look-like-in-2030/