Artificial Intelligence

From New Media Business Blog



History and Evolution

Traces of AI can be dated back to 1642, but modern AI began around the 1950s. In 1950, Alan Turing published Computing Machinery and Intelligence. In the paper, Turing proposes to answer the question 'Can machines think?' and introduces the Turing Test to determine whether a computer can demonstrate the same intelligence as a human. [1]

Then in 1956, Allen Newell, Cliff Shaw, and Herbert Simon created the Logic Theorist, which is considered to be the first artificial intelligence program, as it mimicked the problem-solving skills of a human. It was presented at a conference hosted by John McCarthy, who is known as the father of Artificial Intelligence.[2]

John McCarthy defined AI as “the science and engineering of making intelligent machines”.[3] He also created Lisp, a programming language used in robotics and for different types of Internet services. He started working on self-driving cars by proposing that such a car's system would require a computer equipped with camera input, using the same visual information available to a human driver.[4] This model has become quite popular and is the basis for modern automated cars equipped with sensors and control systems.

From the 1960s to 2000, AI started to make huge advancements. One of the major highlights was the Mark 1 Perceptron, built by Frank Rosenblatt. It was the first computer based on a neural network that 'learned' through trial and error[5]. Another key highlight was the introduction of IBM's Deep Blue, which beat the world chess champion.

In the 2000s, supercomputers that utilize some form of neural networks and machine learning started entering the market.

IBM Watson was one of the most popular supercomputers in the 2000s. It became famous when it beat two world champions in a game of Jeopardy!. Watson could hold and process large sets of data and information, storing about one million books' worth of information. These days Watson is used for business and AI decision-making. [6]

Baidu's Minwa supercomputer used a special kind of deep neural network to identify and categorize images with a higher rate of accuracy than the average human.[7] Microsoft launched Project Oxford, which utilizes advanced machine learning and detects facial, text, and speech recognition patterns to identify a person. Its original goal was to be integrated with smartphone technology. [8]

Another popular system was Google DeepMind, which combined machine learning with the pursuit of neuroscience. Google DeepMind was responsible for building some of the best general-purpose learning algorithms in the industry. DeepMind's AlphaGo program made DeepMind popular and showcased its abilities: powered by a deep neural network, it beat the world champion Go player in a five-game match. The victory was very impressive due to the high number of possible moves in the game.[9]

By 2014, AI started to appear in consumer goods like smartphones and smart homes. Large companies like Microsoft and Apple released voice recognition software such as Cortana and Siri. Amazon released its Alexa-powered speakers, and Google released its own home speakers. Both use voice interaction and AI technology to perform tasks like playing music, providing weather, traffic, and sports updates, making to-do lists, streaming podcasts, setting alarms, and providing other real-time information. [10]

By 2016, many humanoid robots started appearing. Hanson Robotics launched Sophia, one of the first robots with human-like socializing capabilities and facial expressions. Sophia is able to hold conversations and generate facial expressions using a complex network of neural technology in its skull. This helps Sophia take note of the speaker's tone of voice and mirror its expressions accordingly [11].

Today, the key trends in AI revolve around Machine Learning, automated vehicles, chatbots, virtual assistants, decision making, and medical technology.

What is Artificial Intelligence?

AI's early use was to solve complex decision-making problems and automate mundane tasks, but as the technology has evolved, it has become important and popular across industries and in all aspects of life. The modern definition of artificial intelligence is the use of computers and machines to mimic human intelligence and perform problem-solving and decision-making tasks as a human would [12]. In other words, it refers to the capability of a computer or machine to gain key characteristics of the human brain and perform tasks as well as a human, or better.

There are two main types of AI:

General AI

AI systems that can perform complex tasks requiring a high level of cognitive ability are known as General AI. These systems replicate human intelligence and can perform tasks like humans. General AI is also referred to as strong AI. General AI systems are capable of learning, evolving, and performing complex decision-making tasks. [13] Currently, there aren't many General AI systems available, but as AI capabilities evolve, such systems may become widespread.

Narrow AI

AI systems that perform single, limited, and focused tasks are known as Narrow AI. Despite being limited, Narrow AI has powerful applications[14]. Almost all AI systems used today are narrow, such as self-driving vehicles, speech recognition, natural language processing, and AI-powered virtual assistants.

Applications of A.I

There are many applications of AI in several industries. From finance to healthcare, AI could be found everywhere. Some of the applications are:

Speech Recognition: With the rise of voice assistants and smart speakers, speech recognition has become one of the most important applications. Speech recognition utilizes NLP to process human speech [15]. For example, Siri, Cortana, Alexa, and other virtual assistants use this technology.

Medical Technology: AI is also being used in the medical industry for diagnosis, treatment, surgical procedures, and maintaining Electronic Health Records. AI has allowed the medical industry to save costs and be more efficient.

Finance: AI is also being utilized by the finance industry for automating trading, robo-advising, and detecting and flagging suspicious banking activities.

Computer vision: Computer vision allows systems to gain insights and meaningful data from visual input like images and videos [16]. Its goal is to comprehend the tasks that the human visual system can do and then try to replicate them[17]. It is really useful in self-driving, social media, and medical diagnosis.

AI also has applications in gaming, business decision-making, marketing, the military, agriculture, and many other industries.

Future of A.I

As AI evolves, more general AI systems will start to emerge, and there is also the possibility of AI evolving into superintelligence. According to philosopher Nick Bostrom, superintelligence is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest"[18]. Currently, this may not seem feasible, or may appear to be science fiction, but many countries are rapidly researching and trying to achieve superintelligence capabilities, specifically for warfare.

Another interesting field for the future of AI is Augmented Eternity. Researchers at the MIT Media Lab and Ryerson University in Toronto believe that by applying artificial intelligence to the data we produce each day, we may be able to transfer our thoughts and feelings to a virtual system capable of learning new information, and therefore live forever. [19]

Risks of A.I

Despite the various benefits of AI, there are some major potential risks associated with it. Many countries are in a race to develop AI technology for military uses, which can have a detrimental effect and potentially lead to wars. Governments and political parties can use AI to curb speech against them and oppress the less powerful. Businesses can also assert power over workers and consumers[20]. AI systems can malfunction and algorithms can become unpredictable, which can be detrimental; for example, autonomous cars not working as intended could lead to accidents [21]. Due to its lack of human emotions and its drive to achieve its goal efficiently, AI may become harmful instead of helpful. [22]

Therefore, developers and governments should be really careful while implementing this technology. They should understand it, regulate it, and ensure that it is not being used to cause more harm than benefit to humankind.

Decision Making Under Imperfect Information


After conquering chess and Go, A.I. researchers started to dive into games with imperfect information. In 2015, a team from the University of Alberta created a poker A.I. named DeepStack that specializes in heads-up No-Limit Hold’em. In 2019, a team from Carnegie Mellon University and Facebook developed Pluribus, a poker A.I. that can beat human pros in a six-player game format.[23]

Solving Imperfect Information Game

Due to the nature of imperfect information games, in heads-up play the A.I. does not know its opponent’s hand strength, so it cannot calculate the absolutely correct bet sizing. It is also uncertain whether the human player is willing to give up their hand at all. Unlike the way A.I. solves chess and Go, a poker bot needs to formulate an unexploitable strategy that is profitable in the long run rather than in one specific hand. The strategy aims to break even at first, and then passively exploits a human opponent who deviates from the baseline strategy; it does not actively seek leaks in its opponent’s strategies. Using rock-paper-scissors as a simplification, the poker A.I. would always randomize its choices at 33.33% each, even if it faces an inexperienced opponent whose frequencies on certain options are too predictable. The fixed approach also prevents the A.I. from being manipulated by its human opponent, as there exists no counter-exploit against a game theory optimal strategy. [24]
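As a minimal sketch of why the uniform strategy is unexploitable, the snippet below (plain Python; the standard +1/0/−1 rock-paper-scissors payoffs are the only assumption) computes the best payoff any pure counter-strategy can earn against a fixed mixed strategy:

```python
# Rock-paper-scissors payoff for player 1: +1 win, 0 tie, -1 loss.
PAYOFF = {
    ("R", "R"): 0, ("R", "P"): -1, ("R", "S"): 1,
    ("P", "R"): 1, ("P", "P"): 0, ("P", "S"): -1,
    ("S", "R"): -1, ("S", "P"): 1, ("S", "S"): 0,
}

def best_response_value(strategy):
    """Best expected payoff an opponent can guarantee against a fixed mix."""
    return max(
        sum(prob * PAYOFF[(opp, move)] for move, prob in strategy.items())
        for opp in "RPS"
    )

uniform = {"R": 1 / 3, "P": 1 / 3, "S": 1 / 3}
biased = {"R": 0.5, "P": 0.25, "S": 0.25}

print(best_response_value(uniform))  # 0.0: no counter-strategy profits
print(best_response_value(biased))   # 0.25: a biased player can be exploited
```

Against the uniform mix, every counter-strategy breaks even; against the rock-heavy mix, always playing paper wins a quarter of a unit per round.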

Beyond Zero Sum Game and Potential Applications

However, with three or more players involved, it is not possible for the A.I. to reach a Nash equilibrium. Pluribus still managed to beat top pros by playing against itself trillions of times, tracing back through its moves to make better decisions in the future. Its success is more significant than that of previous A.I. programs, as most real-world situations involve multiple parties with hidden information. Researchers believe that this self-reinforcing capability is transferable to real-world problems such as automated negotiations, medicine, and fraud detection.[25] The self-learning algorithm is also applied in the medical industry, where the A.I. learns from input via human assistance before performing independently.[26]
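The self-play idea can be sketched with regret matching, one of the algorithm families behind poker A.I.s, on rock-paper-scissors. This is an illustrative toy, not Pluribus's actual implementation: two copies of the learner play each other, accumulate regret for the moves they did not make, and their average strategy drifts toward the 1/3-1/3-1/3 equilibrium.

```python
import random

MOVES = (0, 1, 2)  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if a beats b, -1 if a loses, 0 on a tie (paper > rock > scissors > paper)."""
    return [0, 1, -1][(a - b) % 3]

def get_strategy(regret):
    """Play each move in proportion to its positive accumulated regret."""
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regret = [[0.0] * 3, [0.0] * 3]
    strategy_sum = [[0.0] * 3, [0.0] * 3]
    for _ in range(iterations):
        strats = [get_strategy(regret[p]) for p in (0, 1)]
        moves = [rng.choices(MOVES, weights=strats[p])[0] for p in (0, 1)]
        for p in (0, 1):
            me, opp = moves[p], moves[1 - p]
            for alt in MOVES:  # regret: how much better would alt have done?
                regret[p][alt] += payoff(alt, opp) - payoff(me, opp)
            for a in MOVES:
                strategy_sum[p][a] += strats[p][a]
    total = sum(strategy_sum[0])
    return [s / total for s in strategy_sum[0]]  # average strategy

avg_strategy = train()
print(avg_strategy)  # each probability approaches 1/3
```

Any single game may still be won or lost; it is the time-averaged strategy that converges toward the unexploitable mix.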

Robotics, A.I and Consciousness

The topics of Artificial Intelligence (AI) and robotics have attracted a vast amount of interest over the last decade. Bloomberg considers artificial intelligence “to be the most disruptive force in technology in the coming decade” [27].

Origins and Definition

Karel Capek - R.U.R

The term robot originated from the Czech word robota, which translates to forced labor [28]. The term also has Slavic linguistic roots, as “rab” means “slave”. It was first used in the play R.U.R. by Karel Capek, in which robots are exploited by their factory owners until they eventually revolt [29].

An updated definition of robotics rests on three dimensions: sense, think, and act. A robot gathers information about its environment using its sensors, thinks by processing that information and following instructions, and acts to complete tasks [30].

Processing Power – Computers and the Human Brain

Computer scientists have been predicting the imminent rise of machine intelligence since the 1950s. However, after decades of technological innovation, we have yet to make real progress towards true artificial intelligence. Scientists and philosophers have long pondered the nature of the brain and its relation to the mind – typically framed as the mind-body problem [31]. These long-running debates are crucial for considering the possibility of building machines that reproduce human capabilities like imagination, emotion, or consciousness.

Many researchers argue that true artificial intelligence is around the corner – with new iterations of electronics achieving better, faster, and more efficient capabilities.

Moore's Law and Processing Power

Moore’s Law

Moore’s Law – the observation that the speed and capability of computers doubles roughly every 18 months – can be used to illustrate this argument [32]. Suppose it’s 1940, Lake Michigan has somehow been emptied, and your job is to fill it up using the following rule: you start with one ounce of water, and every 18 months you can double the amount added [33]. The year 1940 is picked because that is when the first programmable computer was invented, and 18 months is chosen as the doubling rate because of Moore’s Law. Lake Michigan is chosen because its size in fluid ounces is approximately equivalent to the computing power of the human brain measured in calculations per second. For the first 70 years, it will seem as if nothing has happened, yet within a span of only 15 further years, Lake Michigan will be filled. The exponential curve of Moore’s Law suggests it will take until 2025 to build a computer with the same processing power as the human brain [34]. Over the years, computers have moved from a trillionth of the power of a human brain all the way to a thousandth [35].
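The arithmetic of the analogy can be checked directly. The lake volume and brain estimate below are rough order-of-magnitude assumptions (about 10^17 fluid ounces and calculations per second), not precise figures:

```python
import math

# Rough assumptions: Lake Michigan holds on the order of 1e17 fluid ounces,
# comparable to one estimate of the brain's calculations per second.
LAKE_OZ = 1.0e17
START_YEAR = 1940
DOUBLING_YEARS = 1.5  # the 18-month doubling period of the analogy

def amount_in(year):
    """Ounces accumulated by a given year, starting from 1 oz in 1940."""
    return 2 ** ((year - START_YEAR) / DOUBLING_YEARS)

# For the first 70 years almost nothing seems to happen...
print(amount_in(2010) / LAKE_OZ)  # fraction filled by 2010: about 0.1%

# ...yet the lake fills only shortly afterwards:
fill_year = START_YEAR + DOUBLING_YEARS * math.log2(LAKE_OZ)
print(round(fill_year))  # about 2025
```

Under these assumptions, by 2010 the lake is only about a thousandth full, yet the doubling finishes the job around 2025, matching the text's claim.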


The Fugaku Supercomputer

In computing, floating-point operations per second (FLOPS, or flop/s) is a measure of computing performance [36]. A petaflop is one thousand trillion, or one quadrillion, operations per second [37]. The Fugaku supercomputer, developed by Fujitsu and Japan’s national research institute Riken, is currently the world’s fastest supercomputer. Fugaku holds the top spot with a score of 442 petaflops [38]. It has also topped other categories, like performance in artificial intelligence and big data processing capacity [39]. The supercomputer has many applications, including in the automobile industry, where it is helping automakers develop more resilient vehicle structures by using A.I to study collision impacts [40].

Artificially Intelligent Robots

Bridge between A.I and Robotics

Artificial intelligence and robotics are two entirely separate fields of technology and engineering. However, there is one small area in which the two fields overlap and that is artificially intelligent robots or robots controlled by A.I [41].

Most robots are not artificially intelligent [42]. For instance, all industrial robots – robots used for manufacturing processes – can be programmed to carry out a repetitive series of movements that do not require artificial intelligence [43]. However, non-intelligent robots are quite limited in their functionality, which is why AI algorithms are necessary to allow a robot to perform more complex tasks, like understanding human emotions.



CIMON is an AI-powered robot astronaut assistant aboard the International Space Station [44]. The development of CIMON-1 first began back in August of 2016 when the German Aerospace Center, Airbus, and IBM partnered together to build an A.I robot that would help astronauts during their mission and reduce their exposure to stress.

Soon after, its successor, CIMON-2, was launched into space on one of SpaceX’s rockets in December 2019. Researchers were able to augment the robot with a heightened ability to analyze human emotion, allowing CIMON-2 to be more of an empathic companion rather than just a scientific assistant [45].

The robot was built entirely using a 3D printing process [46]. It has an LCD screen displaying its face and visual aids. CIMON weighs roughly 5 kilograms, has 14 fans to maneuver itself, a total of 5 cameras used for documentation and facial recognition, and 7 microphones for detecting sounds and voice recognition [47].

IBM Watson

IBM Watson defeated two of Jeopardy’s greatest champions: Ken Jennings (pictured on the left) and Brad Rutter (pictured on the right)

Success and Mission

Watson is a computer system developed by IBM Research capable of answering questions presented in natural language [48]. While the challenge driving this project was to win Jeopardy!, the broader goal of Watson was to create a new generation of technology that is more effective at interacting with natural language [49].


Whereas IBM’s stock is down more than 10% since Watson’s 2011 win on Jeopardy!, its competitors like Amazon, Microsoft, and Google have emerged as leaders in cloud computing and A.I and have thus multiplied their share value over the years [50]. Watson’s downfall can be attributed to IBM’s emphasis on big and difficult initiatives. For instance, Martin Kohn, a former chief medical scientist with IBM Research, recalls urging IBM to use Watson for credibility demonstrations, like predicting whether patients will have an adverse reaction to a specific drug, rather than recommending cancer treatments [51]. IBM’s pursuit of praise and revenue seems to be the core reason for Watson's downfall.

Self-aware Robots

One of the barriers to having self-aware robots is that robots currently do not have the ability to mimic humans. This is because they lack proprioception – the sense of awareness of one's muscles and body parts [52].

Hod Lipson, a robotics engineer, and his Ph.D. student Robert Kwiatkowski, both of Columbia University, are working on a task-agnostic self-modeling machine – a robot that learns what it is from scratch, without prior knowledge of physics or geometry [53].

The machine is comparable to an infant in the sense that it has no knowledge of its own body or the physics of motion. However, using deep learning, it is able to repeat thousands of movements, take note of the results, and build a model from them. Then, using machine-learning algorithms, the robot is able to strategize about future movements based on its prior motion.
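A toy version of this babble-model-plan loop might look like the following sketch. The one-joint "robot" and its response gain are hypothetical inventions for illustration, not the Columbia team's actual system: the robot issues random commands, records what happened, fits a self-model, and then uses that model to plan a move.

```python
import random

# Hypothetical one-joint robot: its true (unknown) response is
# displacement = 0.7 * command, which it must discover for itself.
TRUE_GAIN = 0.7

def actuate(command, rng):
    """The 'real world': noisy motion in response to a motor command."""
    return TRUE_GAIN * command + rng.gauss(0, 0.01)

rng = random.Random(42)

# 1. Babble: issue random motor commands and record the outcomes.
data = []
for _ in range(200):
    command = rng.uniform(-1.0, 1.0)
    data.append((command, actuate(command, rng)))

# 2. Fit a self-model: least-squares slope through the origin.
learned_gain = (sum(c * d for c, d in data) /
                sum(c * c for c, _ in data))

# 3. Plan: use the learned self-model to pick a command for a target move.
target = 0.35
planned_command = target / learned_gain
print(learned_gain)  # close to the true 0.7 gain
```

The real robot fits a deep network over many joints rather than a single slope, but the cycle of random movement, model fitting, and planning is the same.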

Small advances like the one at Columbia University are enabling modern robots to behave increasingly like humans.

Current Applications

Amazon - Robotic Fulfilment Centres

Amazon employs hundreds of thousands of people to run its massive warehouse network, and more than 200,000 mobile robots work inside this network [54]. Robotic fulfillment centers are an example of warehouses operated by different kinds of robots. In 26 of these fulfillment centers located worldwide, robots and people work together to pick, sort, transport, and stow packages [55].

The central goal of these warehouses is to speed up the delivery process and minimize the time needed to fulfill an order [56]. An example of a robot helping Amazon achieve this mission is the Fanuc 6-axis robot, a robotic arm that can lift pallets weighing roughly 3,000 pounds 24 feet into the air [57].

Amazon Truck Drivers

Amazon is using AI-equipped cameras in delivery vans to monitor drivers while they’re on the job, with the aim of improving safety. The technology provides drivers with real-time alerts to help them stay safe when they are on the road. The cameras are equipped with A.I software capable of detecting 16 different safety issues, including failing to stop at a stop sign, distracted driving, speeding, hard braking, and whether the driver is wearing a seatbelt [58].

Automation Impact

As per a 2019 report, Amazon plans to move toward increasing reliance on automation, which may lead to a loss of 1,300 jobs across 55 of its facilities in the U.S. [59]

In Midwest manufacturing industries, robots have decreased employment for certain worker demographic groups, like young, less-educated men and women [60]. According to The Century Foundation, a progressive think tank headquartered in New York City, the estimated impact of robotization is as follows:

  • For an increase of one robot per thousand workers, the employment-to-population ratio falls by an estimated 3.5 percentage points [61].

Rehabilitation - Brain Machine Interfaces (BMI)

The principle behind BMI technologies is that they allow a computer or another digital device to communicate with the brain. There are two main components: a robotics-based assistive device coupled with a brain signal recorder [62]. When paired together, brain signals from the recording device are converted and programmed into the assistive device [63]. This may then function as a neuroprosthetic, allowing rehabilitation and movement to be regained in body parts lacking motor or cognitive function due to Central Nervous System (CNS) disorders, accidents, or stroke.
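A purely illustrative sketch of that record-decode-actuate loop is shown below. The signal values, smoothing window, and threshold are invented for the example; real BMIs decode multi-channel neural data with trained models, not a single threshold.

```python
# Toy decoder: turn a recorded "brain signal" into assistive-device commands.

def moving_average(signal, window=3):
    """Smooth the raw recording to suppress sample-to-sample noise."""
    return [sum(signal[max(0, i - window + 1):i + 1]) /
            len(signal[max(0, i - window + 1):i + 1])
            for i in range(len(signal))]

def decode(signal, threshold=0.5):
    """Map each smoothed sample to a command for the assistive device."""
    return ["MOVE" if s > threshold else "REST"
            for s in moving_average(signal)]

# Quiet baseline, then a burst of activity (an imagined movement), then quiet:
recording = [0.1, 0.0, 0.2, 0.1, 0.9, 1.0, 0.8, 0.9, 0.1, 0.0]
print(decode(recording))  # the burst decodes to a run of MOVE commands
```

The same structure scales up: the recorder supplies the signal, the decoder converts it, and the assistive device (a Lokomat, exoskeleton, or avatar) executes the commands.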

Application and studies surrounding BMI

Lokomat Rehabilitation Robot

BMI and the studies surrounding its applications and expansions are rapidly growing. It is the focus of many studies, experiments, and technology firms, all aiming to create devices and better technology that may help improve the quality of life for patients affected by paralysis, such as paraplegics and stroke patients.

A 2016 study used BMI-focused learning techniques with 8 subjects to train on and use robotic devices such as a Lokomat, a lower-limb exoskeleton, and a virtual reality avatar body. Using 12 months of data, the authors concluded that all subjects experienced improved physical sensation and voluntary control of affected muscles and limbs [64].

Another study, from 2020, involving 51 stroke patients with one-sided weakness of the upper extremities, used Brain-Computer Interface techniques to perform therapy and noted significant functional improvements in affected areas. Increased neural plasticity – the ability of the CNS to respond to stimuli by reorganizing and restructuring its function – was credited for the improved limb function and movement in both studies [65].


Pager playing Pong using only his mind, rewarded with a smoothie via a metal straw

Neuralink is a technology firm whose main focus is research, development, and advancement in Brain-Machine Interfaces. Its main product currently under development is called the Link. This neural implant will allow the user to control electrical devices via multiple electrodes inserted into the areas of the brain that control motor function [66]. Evidence of this technology can be seen in a recent video released by the company, in which a nine-year-old macaque monkey named Pager plays a video game using only his mind. Using the implanted neural device, he is able to play without a manual handheld controller [67].

With continued testing and ongoing research, the company hopes not only to help patients with spinal cord injuries but also to restore eyesight and hearing and to help Parkinson's disease patients by replacing affected parts of the brain with the implant [68]. The process of inserting the device into the skull is another field under development, as Neuralink hopes to implant the device using a robot and without anesthesia, making the process minimally invasive and time-efficient [69].

Research - Artificial Consciousness

Artificial consciousness or machine consciousness refers to the ability of a machine to be aware of its own existence. More specifically, consciousness in machines deals with measuring things like perceptions, sensations, feelings, thoughts, and memories [70].

Hanson Robotics

Sophia - Social Humanoid Robot

In 2016, Hong Kong-based company Hanson Robotics introduced the world to Sophia, a robot with human-like socializing capabilities and facial expressions. Created by David Hanson, Sophia is able to hold conversations and generate facial expressions using a complex network of neural technology in its skull. This helps Sophia take note of the speaker's tone of voice and mirror its expressions accordingly [71].

Although research and development surrounding Sophia's technological advancement is still a work in progress, the robot has captured worldwide attention. Sophia has done multiple interviews with media outlets, based on pre-designed and programmed conversations, and was also granted citizenship by Saudi Arabia in 2017, making it the world's first robot citizen of a country [72].

Another example is a robot called BINA48. BINA48 is owned by Martine Rothblatt and is modeled after her wife, Bina [73].

Machine Learning

What's Machine Learning?

According to IBM [74], machine learning stems from artificial intelligence. Its main purpose is to use raw data and algorithms to imitate the way humans learn and to improve its accuracy over time. The data being processed can be made up of various variables, such as words, images, and numbers of clicks, which are stored and fed into its algorithms [75]. The main goal of this method of analysis is to make decisions with little to no human intervention, based on patterns and recognition. Many of the applications and services we have incorporated into our daily lives use machine learning on stored data in order to predict recommendations for their users.

How Machine Learning Works

Machine learning breaks down data sets and makes sense of them by finding algorithmic trends. First, it needs to determine how it is going to use the given data to decide or predict. As data is often complex, it is classified as labelled or unlabelled. Unlike labelled data, unlabelled data carries no characteristics in the form of a label or tag, so the system is unsure how to classify it. Often, it is made up of samples that contain just the data itself and nothing else.

Although machine learning has been around for a long time, it has become a lot more complex as big data applications have developed, with larger and newer data sources being introduced rapidly. Labelling in machine learning takes unlabelled data and tries to understand what it means by attaching a tag to it; this is the process of identifying what the data means by attaching more information [76]. Labelling often involves some human intervention [77], in which we tell the system what it is trying to identify rather than having it guess. Secondly, an error function is used to evaluate the model's predictions and assess their accuracy. The third component is the model optimization process, in which the algorithm repeats this cycle and adjusts its weights autonomously until a threshold of accuracy has been met. [78]
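These components can be sketched in a few lines. The toy data and learning rate below are assumptions chosen to keep the example small, but the structure (decision process, error function, optimization loop) follows the description above:

```python
# Toy data roughly following y = 2x.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]

w = 0.0  # the model's single weight, adjusted autonomously

def predict(x):
    """1. Decision process: produce a prediction from the input."""
    return w * x

def mean_squared_error():
    """2. Error function: assess the accuracy of the predictions."""
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# 3. Optimization: repeat and adjust the weight until accuracy is acceptable.
for _ in range(200):
    gradient = sum(2 * (predict(x) - y) * x for x, y in data) / len(data)
    w -= 0.05 * gradient  # step the weight against the error gradient
    if mean_squared_error() < 0.01:  # accuracy threshold reached
        break

print(w)  # close to 2, the slope hidden in the data
```

Real systems fit millions of weights instead of one, but the predict-measure-adjust loop is the same.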

Supervised, Unsupervised, Semi-Supervised Machine Learning

In supervised learning, we present the computer with already-labelled data to train algorithms to predict outcomes as accurately as possible. An example of supervised learning in our daily lives is how a computer is able to assess and flag spam mail in our inboxes [79].

Unsupervised learning uses algorithms to analyze unlabelled datasets: without the need for a human, it is able to find patterns and/or groupings in the data [80]. It is most often used for clustering of information and dimensionality reduction.
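Clustering can be sketched with k-means on toy unlabelled numbers; no tags are provided, yet the algorithm discovers the two groups on its own:

```python
import random

# Unlabelled 1-D data with two obvious groups (around 1 and around 8).
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.8]

def kmeans(points, k=2, iterations=10, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # start from random points
    for _ in range(iterations):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

print(kmeans(data))  # centers settle near 1.03 and 8.03
```

The output is a grouping, not a label: deciding that one cluster means "ham" and the other "spam" would still require a human.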

There is also semi-supervised learning, a combination of supervised and unsupervised learning in which a smaller set of labelled data is used to guide classification and feature extraction from a larger pool of unlabelled data. Semi-supervised machine learning is often used when there is not enough labelled data to train a supervised learning algorithm.

[Figures: unsupervised learning and supervised learning diagrams]

Machine learning has enhanced many aspects of our daily lives, but there are also challenges that come with such a complex automated system. Although machine learning is autonomous, it is highly susceptible to errors. The data collected must be unbiased and of good quality in order to generate the desired results. Machine learning also requires enough time for the algorithms to develop with as little error as possible, which often requires expensive software and computers. [81]

Real World Application of Machine Learning

Artificial intelligence has become more and more integrated into our everyday lives, often without our being aware of it; machine learning alone has enhanced countless processes, from business operations to daily tasks. The following are some examples of how machine learning is widely applied in our daily and professional lives.

Image Recognition

Image recognition is very often used in facial recognition applications, such as tagging people on social media, or when our phones identify and categorize images based on an individual's face [82]. Although most often used for facial recognition, it can also analyze X-rays or identify writing based on size, shape, and patterns.

Medical Diagnosis

Machine learning has become so advanced that it can help diagnose diseases without a doctor having met the person, simply by using software that scans patients and recognizes rare diseases, often ones that the naked eye would not be able to distinguish. [83] Some real-life examples of the medical field using machine learning in practice are oncology and pathology scanning for cancer. In addition, doctors are now able to use AI and machine learning to help create treatment options for patients.

Statistical Arbitrage

Machine learning is used widely across the financial industry, as it can predict trends based on given data and algorithms more effectively than humans possibly could; it is often used to enhance arbitrage strategies and to manage large volumes of secure financial data. Many professionals and banks use algorithmic trading, which quickly analyzes large data sets. Based on consumer confidence or a stock trader's predicted trend, the algorithm can identify arbitrage opportunities in real time.[84]

Natural Language Processing

Natural Language Processing (NLP) involves machines that understand text and spoken words in the same way that humans can [85]. Computer programs can translate text from one language to another, respond to spoken commands, and summarize large volumes of text rapidly [86].

There are several NLP tasks that break down human text and voice data in ways that help the computer make sense of what it is ingesting [87]:

Speech recognition: Reliably converts voice data into text data
Part of speech tagging: Determines the part of speech of a particular word or piece of text based on its use and context
Word sense disambiguation: The selection of the meaning of a word with multiple meanings to determine the word that makes the most sense in the given context
Named entity recognition: Identifies words or phrases as useful entities
Co-reference resolution: Identifies if two words refer to the same entity
Sentiment analysis: Extracts subjective qualities (attitudes, emotions, sarcasm, etc.) from the text

Use Cases

Spam Detection

Spam detection NLP models scan emails for language that often indicates spam or phishing [88].

Some indicators include:

  • Overuse of financial terms
  • Characteristic bad grammar
  • Threatening language
  • Inappropriate urgency
  • Misspelled company names
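The indicators above can be sketched as a simple rule-based scorer. The keyword lists below are illustrative assumptions, not those of any real filter; production spam detection uses trained NLP models rather than fixed word lists:

```python
# Illustrative keyword groups matching some of the spam indicators above.
INDICATORS = {
    "financial terms": ["wire transfer", "account", "$$$"],
    "urgency": ["act now", "immediately", "urgent"],
    "threats": ["suspended", "legal action"],
}

def spam_score(email):
    """Count how many indicator groups the email triggers."""
    text = email.lower()
    hits = [name for name, words in INDICATORS.items()
            if any(w in text for w in words)]
    return len(hits), hits

score, reasons = spam_score("URGENT: your account will be suspended. Act now!")
print(score, reasons)  # 3 indicator groups matched
```

A real model would learn these cues (and thousands more) from labelled examples instead of hand-written lists, but the scoring intuition is the same.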

Machine Translation

A widely known example of machine translation is Google Translate. Truly useful machine translation involves more than replacing words in one language with words of another [89]. Effective translation must accurately capture the meaning and tone of the input language and translate it to text with the same meaning and desired impact in the output language [90].

Virtual Assistants and Chatbots

Several technologies available today are used as virtual assistants and chatbots. Virtual assistants such as Apple's Siri and Amazon's Alexa use speech recognition to recognize patterns in voice commands. Chatbots work the same way in response to typed text entries. The best chatbots learn to recognize contextual clues about human requests and use them to provide better responses or options over time [91].

Social Media Sentiment Analysis

With the use of sentiment analysis, NLP models can analyze the text in social media posts, responses, and reviews to extract attitudes and emotions from the writers [92]. By gathering the sentiment of the text, businesses can use this information to understand the perception of customers and develop a strategy to improve on certain business operations. An example of sentiment analysis is explained later in 'A.I in Business'.
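A minimal lexicon-based version of the sentiment analysis described above can be sketched as follows. Real sentiment models are trained on labelled data and handle negation, sarcasm, and context; the word lists and scoring rule here are toy assumptions for illustration only.

```python
# Toy lexicon-based sentiment classifier: count positive vs negative words.
# The lexicons are invented for demonstration.

POSITIVE = {"great", "love", "fast", "helpful", "friendly"}
NEGATIVE = {"slow", "rude", "broken", "terrible", "waiting"}

def sentiment(post):
    """Classify a post by comparing positive and negative word counts."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

posts = [
    "Love the friendly staff, great service!",
    "Terrible experience, kept waiting and the app is broken.",
]
for p in posts:
    print(sentiment(p), "-", p)
```

Aggregating labels like these over thousands of posts is what lets a business gauge overall customer perception.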

Text Summarization

Text summarization involves consuming huge volumes of digital text and creating a summary to provide a shorter version of the entire text [93].

There are two approaches to text summarization:[94]

  • Extraction-based Summarization: Selects the main information from a source text. A subset of words that represent the most important points is extracted from the text and combined to make a summary. However, the results generated may not be grammatically accurate.
  • Abstraction-based Summarization: Paraphrases and shortens the original text using advanced deep learning techniques. It may generate sentences that do not appear in the original document, and its results avoid the grammatical errors that extraction-based summaries can contain.
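Extraction-based summarization can be sketched with a classic word-frequency heuristic: score each sentence by how often its words occur across the document and keep the top-scoring sentences. The stopword list and example text below are illustrative assumptions.

```python
# Sketch of extraction-based summarization via word frequency.
# Sentences whose words are frequent across the document are kept.

import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "it"}

def summarize(text, n_sentences=1):
    """Return the n highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        # Stopwords score 0 because they were excluded from the counts.
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in top)

doc = ("Machine learning models need data. "
       "Data quality affects machine learning results. "
       "The cafeteria serves lunch at noon.")
print(summarize(doc, 1))
```

Because the summary reuses whole sentences from the source, it stays grammatical within each sentence but, as noted above, the stitched-together result may not read smoothly.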

A.I in Business

Amazon's Alexa

Amazon's Alexa uses speech recognition to recognize patterns in voice commands and natural language generation to respond with appropriate or helpful comments. This technology helps organizations and employees get more work done: they can use Alexa as an intelligent assistant to be more productive in meeting rooms, at their desks, and even with the devices they have at home.

Alexa for Business can perform the following[95]:

  • Reserve meeting rooms and start conference calls
  • Link employee emails and calendars with Alexa
  • Join online meetings
  • Schedule calendar meeting events
  • Inform meeting participants
  • Automatically release unattended meeting room reservations after a chosen time period
  • Track meeting room metrics such as attendance rate, recovered bookings, and most and least used rooms

Steps [96]:

1. Users make requests from shared or personal devices

2. Alexa uses speech recognition to interpret the request

3. Alexa for Business provides context and additional information

4. Alexa responds and performs the requested actions
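The four steps above can be sketched as a tiny intent-dispatch loop. Assume speech recognition (step 2) has already produced a text transcript; the intents, trigger phrases, and handler names below are hypothetical, not Amazon's actual API.

```python
# Toy intent dispatch for a voice assistant. Speech-to-text is assumed to
# have already happened; handlers and keywords are invented for illustration.

def handle_reserve_room(request):
    return "Meeting room reserved."

def handle_join_meeting(request):
    return "Joining your online meeting."

INTENTS = [
    ("reserve", handle_reserve_room),  # steps 2-3: match transcript to intent
    ("join", handle_join_meeting),
]

def respond(transcript):
    """Step 4: run the matching handler, or fall back if nothing matches."""
    text = transcript.lower()
    for keyword, handler in INTENTS:
        if keyword in text:
            return handler(transcript)
    return "Sorry, I didn't understand that."

print(respond("Alexa, reserve a meeting room for 3 pm"))
print(respond("Alexa, join my online meeting"))
```

A real assistant replaces the keyword match with a trained intent classifier and adds context (step 3) such as which room the device is in.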

Sentiment Analysis in Banking

A bank in South Africa was concerned about its perception in the market. Facing intense competition, the bank wanted to make sure that its customers did not turn to other banks. Using Repustate's AI-powered sentiment analysis API, the bank extracted insights from the 2 million texts collected from a social media campaign over the course of three months [97]. The analysis showed that most complaints were about receiving no service at particular branches during lunch time, and that the biggest sources of negative comments were phone support and online banking. In response, the bank scheduled more tellers during high-volume hours and ensured that no teller stations sat empty at peak traffic times. With these new systems in place, the bank saw a reduction in employee turnover and an increase in new customers. With the use of NLP, the bank understood where to focus its efforts to improve its business.

A.I in Mental Health


Woebot is a relational agent for mental health that was founded in 2017 by Alison Darcy [98]. The app was developed in hopes of building trusting relationships to meet the need for a new generation of mental healthcare [99]. A large-scale study involving 36,070 users discovered that Woebot was capable of establishing a therapeutic bond with users[100].

The key findings from the study [101]:

  • The bond that Woebot formed appeared to be non-inferior to the bond between human therapists and patients
  • The bond is established extremely quickly, in just 3-5 days
  • The bond does not seem to diminish over time

Advantages of Woebot: [102]

  • Available 24/7
  • Free of cost and password-protected
  • Users can decide when and how often they need to use it
  • Empathetic and validating responses
  • Evidence-Based lessons and skills

Disadvantages of Woebot: [103]

  • Can be repetitive/Get in a feedback loop
  • Unable to provide much correction in response to free text
  • Users typically need to scroll to find past examples
  • Cannot replace a human therapist

Woebot can quickly form a bond with users and deliver human-like therapeutic encounters that are psychologically grounded, responsive to a person's dynamic state of health, and targeted using multidisciplinary tools [104]. Woebot is not a replacement for an in-person therapist, but it can be helpful to tech-savvy people who are new to therapy and to those in remote areas without access to traditional therapy. [105]

Challenges to Natural Language Processing

Natural Language Processing is a powerful tool that can be beneficial, but it has several limitations and problems: [106]

  • Contextual Words and Phrases: The same words and phrases can have different meanings according to the context of a sentence and many words have the same pronunciation but different meanings. NLP language models may have learned all the definitions, but they find it difficult to differentiate between two words that are pronounced the same.
  • Ambiguity: Sentences and phrases that may have two or more interpretations.
  • Errors in text and speech: Misspelled or misused words can be a problem for text analysis, and different pronunciations, accents, and stutters can make speech difficult for machines to understand. These issues can be minimized by growing the language databases and having users train the models to overcome the errors.
  • Lack of Research and Development: Machines that incorporate NLP require a lot of training data to function and become smarter. As machine learning techniques and custom algorithms advance, continued research and new techniques must be developed for ongoing improvement.


Ayush Joshi, Run Cai, Mnorath Mann, Denise Zhen, Karina Yan
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada


  74. https://
  75. https://
  76. https://
  77. https://
  78. https://
  79. https://
  80. https://
  81. https://
  82. https://
  83. https://
  84. https://