Edge Computing

From New Media Business Blog




What is Edge Computing and Why Does it Matter

Edge computing presents a new way of reshaping how data is handled, processed and delivered. Gartner defines edge computing as “a part of a distributed computing topology in which information processing is located close to the edge - where things and people produce or consume that information” [1]. It can also be described as computing done at or near the source of the data, instead of relying on the cloud at one of a dozen data centres to do all the work [2]. It is often referred to as Mobile Edge Computing (MEC), which provides execution resources and networking within or at the boundary of operator networks. Edge computing brings computation and data storage closer to the devices where data is being gathered, rather than relying on a central location that can be thousands of miles away [3].

How Edge Computing Works [1]

The figure to the left depicts how the edge computing framework operates. The process begins by obtaining data from internet of things (IoT) devices, followed by an analysis of the data sets at the edge of the network. The device then transmits the processed data to a centralized data centre or the cloud [1]. Edge computing alleviates many of the issues with transmitting data by providing a local source of processing and storage for these systems [1]. Edge-based processing occurs in real time, so data doesn't suffer the latency issues that could affect the performance of an application. The framework was developed in response to the exponential growth of IoT devices, which connect to the internet to receive information from or deliver it to the cloud [1]. Since edge infrastructure can be managed or hosted by communication service providers, it can be placed at enterprise premises in various locations, including factories, buildings, homes and vehicles such as trains, cars and airplanes [3].
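The pipeline above - collect at the device, analyze at the edge, forward a compact summary to the cloud - can be illustrated with a short sketch. This is purely illustrative Python; the function and field names are invented for the example and do not come from any vendor SDK.

```python
import statistics

def process_at_edge(raw_readings, threshold=100.0):
    """Filter and aggregate raw IoT sensor readings locally, so only
    a small summary (not every data point) travels to the cloud."""
    # Discard obviously invalid readings at the source.
    valid = [r for r in raw_readings if 0.0 <= r <= threshold]
    # Aggregate locally; real-time alerts could fire here with no
    # round trip to a remote data centre.
    return {
        "count": len(valid),
        "mean": statistics.mean(valid) if valid else None,
        "max": max(valid, default=None),
    }

# A burst of temperature readings from an IoT sensor, two of them invalid.
readings = [21.5, 22.0, -5.0, 21.8, 150.0, 22.3]
summary = process_at_edge(readings)
print(summary)  # one small dict goes upstream instead of six raw readings
```

The design choice is the one the paragraph describes: analysis happens next to the sensor, and only the reduced result crosses the network.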

For small companies and startups, cost saving is a big driver towards implementing edge computing. Most companies that are well-versed in cloud computing have found that bandwidth costs are significantly higher than they expected. The biggest benefit of edge computing is its ability to process and store data faster, allowing more efficient real-time applications that can enable critical use cases for companies [1]. For example, before edge computing, a smartphone scanning a face for facial recognition would require running the algorithm through a cloud-based service, which would typically take a long time to process. With edge computing, the algorithm can be run locally on an edge server, giving quicker and more efficient answers - in this case, faster facial recognition [1].

What is Driving Edge Computing

In the context of a global economy, organizations are moving quickly towards digitizing all of their systems and processes. Early investments in edge infrastructure can give organizations the capability to significantly improve customer engagement. Technically, the expansion of computing, storage and networking has given innovators the tools they need to develop systems that used to be theoretical [4]. 5G is a great complement to the growth edge computing is expected to see in the next few years, and pairing the two technologies will be important since 5G is a critical driver of edge computing's growth.

The rate at which devices collect data is growing exponentially, and there won’t be enough bandwidth to transfer all of that data to the cloud for processing [4]. The amount of big data produced today is growing exponentially with the increasing commercialization of IoT devices. According to Cisco Systems, network traffic is going to reach 4.8 zettabytes by the year 2022 [5]. There is a need for high-performance, low-latency platforms that can consume data and instantaneously generate insights. However, the cost of managing such data sets continues to increase, because not all of the data travelling to the cloud and back may be relevant [6].

For specific industries where network latency is critical, such as autonomous vehicles, it can be life-threatening to rely on a traditional centralized cloud model. Decisions will be made too slowly, which can cause serious problems in time-sensitive use cases such as factory automation and field inventory control. Consumers and businesses do not want to rely on a system where “network latency is too high, and reliability is too low to be practical” [4]. With the evolution of computing capabilities, users expect improved and immersive experiences to be available at all times. As of 2018, leading semiconductor manufacturers continued to develop new integrated circuits that keep pace with Moore’s Law [7]. For context, the computing power of an iPhone X is far superior to that of the computer used in the Apollo 11 mission [8]. As technology companies continue to push the boundaries of what’s possible, consumers will continue to demand greater capabilities from the technology they use.

Another major factor that has affected the edge computing landscape is the COVID-19 pandemic. The lockdowns associated with the pandemic have fundamentally changed the way we conduct business, education, and entertainment. Companies are quickly finding that investing in technology is the key to ensuring employees stay connected and maintain access to the content, documents, and data they need to do their jobs. Some of these changes will likely outlive the lockdowns, including more remote work, virtual collaboration, and an increase in streaming services [9]. Among the companies committing to edge computing during the pandemic, Verizon has pledged to boost its investments in network infrastructure by $500 million to accommodate the demands of more telecommuting and online learning [10], and data center provider Equinix has said it will accelerate the infrastructure investments it would otherwise have carried out over the next year or two [11].


Market Landscape

Current Market Size

The current global market size of the edge computing market is estimated at 3.5 billion USD, a noticeable increase from its 2017 valuation of 975 million USD [12]. Edge computing is being adopted in all major regions around the world; however, North America accounted for the largest revenue share, around 46% of the global market, in 2019 [12]. Within North America, the USA holds the largest market share due to the adoption of edge computing solutions in industries such as surveillance, automotive, and healthcare [13]. Europe and the Asia-Pacific regions account for the second and third largest markets, respectively. Increasing investment by major industry players, including investment from adjacent industries, is a big reason for the growth seen in the market so far and the continued growth that is expected. To improve the growth of other advanced technologies like natural language processing, augmented reality, and virtual reality, organizations are focusing on how they can improve the customer experience through minimal latency and advanced connected solutions [14]. Nearly 75% of potential edge players in industries including media, gaming, telcos, and security companies already have plans to integrate edge into their IT environments by the end of 2020 [9].

Current Edge computing market by industry. [12]

The two largest industries in edge computing by market share are energy and utilities and the industrial sector. The large market share of energy and utilities can be explained by the vast amount of data generated by distributed energy stations [12]. This data needs to be processed and analyzed quickly to facilitate data-driven decision making in a low-latency environment [12]. There is huge potential for edge computing in this space, some of which can be attributed to legacy infrastructure that needs to be replaced. The potential applications of edge computing in the energy and utilities industry are plentiful; however, many of the systems in this space are outdated, so some of the recent investment decisions in this area may be attributed to the need for an upgrade rather than a drive toward innovation.

The industrial sector is seeing an increase in edge computing investment as manufacturers employ smart machines to expand product lines. The advent of Industry 4.0, known as the fourth industrial revolution, has been a driving force behind the need for digitization and automation in the smart manufacturing sector [15]. One example of a smart factory was Elon Musk's attempt to automate the Gigafactory; he quickly determined that “humans are underrated” and decided some things are better left manual [16]. Nevertheless, manufacturers around the world are seeing the benefits of advanced control, sensing, and even simulation capabilities and are quickly investing to automate their factories. When machines on factory floors can produce real-time, actionable insights, combined with the speed improvements expected from the introduction of 5G, there are a number of growth avenues for “smart factories” [12].

Emerging Industry Players

We’re seeing a move to an integrated ecosystem of rugged and portable edge-enabled devices, such as Amazon’s AWS Snowcone and Microsoft's Azure Stack Edge. These companies aim to create competing hybrid computing platforms by marrying edge and native cloud computing platforms. Such an integrated ecosystem of devices enables running cloud services and virtualized apps on-premises and then extending their reach to remote locations.

Microsoft Azure Stack Edge: Azure Stack Edge is an AI-enabled edge computing device with network transfer capabilities. It is the newest addition to the Azure family of hybrid computing devices. As a cloud-managed device, it contains a built-in field-programmable gate array (FPGA) that enables accelerated AI inferencing at the edge, and it possesses all the capabilities of a traditional network storage gateway [17]. Its datasheet cites technical specifications of 2 x 10-core Intel Xeon CPUs, 128 GB of RAM and 12 TB of NVMe flash storage [18]. The device benefits from hardware-accelerated machine learning capabilities, which can be used to build and train machine learning models in the Azure cloud and to analyze video feeds and machine sensor data at the edge. It promotes edge and remote-site computation by running applications at remote locations to address latency and bandwidth constraints [19][3]. Azure Stack Edge enables IoT scenarios by providing the ability to preprocess data - aggregating, modifying or creating subsets of data at its source - prior to sending it to the cloud for storage or for further processing such as retraining machine learning (ML) models [17][19]. It is available in both commercial and rugged form factors, with the latter being a portable device tailored for operations in harsh environmental conditions in fields such as disaster relief, defence, security, energy and geological surveys. The device also helps prevent regulatory compliance violations by using ML models to alert users about potentially sensitive data collected at edge locations, enabling action to be taken locally before such data sets are transmitted to the cloud [19].

AWS Snowcone: AWS Snowcone is the smallest member of the AWS Snow Family of edge computing, edge storage and data transfer devices. It weighs 4.5 pounds (2.1 kg) and has 8 terabytes of usable storage. Snowcone is a secure, ruggedized and purpose-built edge device that enables use cases outside of traditional data centers [20]. In terms of computing power, the device provides 2 vCPUs with 4 GB of RAM and an 8 TB hard disk drive, which is capable of running machine learning workloads at the edge. It has Wi-Fi access, two gigabit Ethernet ports and two USB-C ports, which are used to provide power and transfer data, respectively [21]. The device works with IoT sensors as a centralized IoT hub, application monitor and data aggregation point, and includes a lightweight analytics engine. Snowcone’s Wi-Fi or wired 10 GbE networking enables data collection and processing at the edge using AWS IoT Greengrass or Amazon EC2 instance AMIs. It offers two data transfer options: shipping the device containing the data to AWS for offline transfer, or online transfer with AWS DataSync. It is designed to withstand harsh environments, including free-fall shock and operational vibration, and is dust-tight and water-resistant when sealed. To ensure productivity in harsh external environments, the device supports a wide operating temperature range, from freezing to extremely hot, desert-like conditions. For security, the device uses a hardware-based Trusted Platform Module (TPM) to ensure encrypted storage of device-specific keys, making them inaccessible to software and helping ensure device integrity. Any data stored on the device is encrypted with two layers of at-rest encryption, which helps secure stored data during shipping. Encryption keys are never stored on the Snowcone device and are securely managed using the AWS Key Management Service (KMS) [20].

Potential Market Size and Growth

With the expectation that the cost of investing in edge computing will decrease, companies are finding ways to reduce their costs by implementing edge infrastructure. The edge market is poised for growth in the next 5-10 years [22]. Edge computing investments are increasing across all industries and all regions of the world. One complement market to edge computing is 5G, and the expected increase in consumer demand opens large technology growth avenues for telecom providers. Telecom providers are expected to embrace new opportunities in the Multi-access Edge Computing (MEC) market space [22]. MEC gives organizations the ability to reduce latency and mitigate network congestion by bringing processing tasks closer to the end consumer - in this case, the cellular customer [22].

Growth rates by region[23].

The global edge computing market can be categorized into five major geographic regions: North America, Asia Pacific, Europe, the Middle East, and Africa. The Asia Pacific region is expected to grow at the fastest rate, based on CAGR, between 2020 and 2027 [12]. Asia has seen a significant increase in the adoption of advanced technologies such as IoT and cloud computing, both of which are driving factors in the expected growth of edge computing in the Asia Pacific region [12]. In addition, the edge computing space can be segmented by component, which includes services, solutions, platforms, software, and hardware [24]. The hardware component of the edge market is estimated to hold the largest market size during the forecast period of 2020 - 2025 due to “computing operations and decentralizing storage for the large-scale distribution of hardware components” [24]. There is an anticipated wave of micro edge data centers that will be deployed to help improve application performance, ranging from small clusters located on streetlights to a few racks located at the base of cellular towers [22].

The Asia-Pacific region is estimated to have the largest growth rate from 2020 - 2025 [25].

The global edge computing market is projected to grow from 3.6 billion USD in 2020 to 15.7 billion USD by 2025, a Compound Annual Growth Rate (CAGR) of 34.1% over the forecast period [25]. The emergence of markets that are still in their infancy, such as autonomous vehicles and the connected-car infrastructure necessary to meet consumers' performance and safety standards, will continue to drive the opportunities available to edge computing vendors [26]. Organizations are innovating fiercely to meet customer demands that include low-latency connectivity and automated decision making. The technology we use day-to-day requires advanced capabilities that allow consumers to use it conveniently. For instance, Sony, Bose, Apple, and Microsoft are competing in the active noise cancelling (ANC) space [27]. Automating the decision to increase or decrease the level of ANC at any time requires fast decision making, with AI at the forefront of these automated decisions [28]. Consumers use their headphones every day with the expectation that they will register when they’re in a loud area, and companies are required to meet this expectation. Investments in edge computing will create endless opportunities for users to have significantly improved experiences with their everyday tech.
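As a quick sanity check on the figures above, the CAGR implied by growth from 3.6 billion USD to 15.7 billion USD over five years can be computed directly:

```python
def cagr(start_value, end_value, years):
    """Compound Annual Growth Rate between two market sizes."""
    return (end_value / start_value) ** (1 / years) - 1

# 3.6B USD in 2020 to 15.7B USD in 2025 (5 years), per the forecast above.
rate = cagr(3.6, 15.7, 5)
print(f"{rate:.1%}")  # roughly 34%, consistent with the cited CAGR
```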


Large companies are coming out with innovative edge solutions, and the benefits of reduced data transmission and storage costs through localized processing are promising. However, companies will have to invest heavily in the required architecture to create comprehensive edge computing solutions [29]. Some of the hardware that will significantly increase companies' capital expenditure includes edge nodes, other edge devices, and edge data centers [29]. There are also potential security risks that have yet to be addressed by early investors in edge computing; researching and developing ways to improve the security of the entire network, including the devices on it, will require large investments [29]. Due to this, some industries may be reluctant to invest in edge computing solutions. It is always important to understand whether investments in emerging technologies fundamentally align with the goals of the business. For example, many industries would benefit from reduced latency and improved application performance; however, if consumers don't see the value added by this reduction, there is no point in investing in the technology. Live and on-demand video streaming companies may benefit from edge computing, although their need to invest in this technology remains unclear because of consumer demands and the potentially low pay-off [29]. The edge computing market is still young, and continuous R&D and advancements in this space are expected to reduce the cost of edge computing infrastructure [29].


Benefits of Edge Computing

Some of the benefits of edge computing include low latency, improved security, reduced infrastructure requirements, improved reliability, increased speed and high bandwidth. Each is discussed in its respective section below.

Low Latency, High Bandwidth and Increased Speed

Latency is the duration of the round trip a request makes from your device through the network and back [30]. Low latency is a benefit of edge computing, as it shortens the time between performing an action and receiving the result. With high latency, the requested results take longer to display; therefore, the lower the latency, the better [30].

Modern IoT-based devices typically work at fast speeds when connected to the internet. Latency usually goes unnoticed by the average user, but in some scenarios it can cause severe damage [31]. For example, edge computing enables autonomous vehicles to benefit from low-latency connections, since the vehicle is required to make real-time decisions when identifying objects such as surrounding cars and people [31]. High latency within an autonomous vehicle would put the driver and their surroundings in danger, as the time it takes to generate a prediction is delayed.

An example of where low latency matters from a gaming perspective is fast-paced games like Overwatch or Call of Duty. In this case, high latency creates lag, leading to significant delays between the player's input and the character's action. In other words, the enemy could have already shot you while you are still trying to fire, but on your screen you would not know until the connection catches up [30].

Bandwidth refers to the maximum capacity of an internet connection; it does not refer to the actual speed [30]. An example of bandwidth being an important factor is streaming videos online. Streaming involves downloading content from servers, and bandwidth plays a big role in both video and audio streaming. Low bandwidth often leads to longer buffering times as the connection tries to keep up with the size of the content [30]. Edge computing also helps reduce bandwidth-related costs, as it allows data analysis on the device itself, optimizing what is sent to the cloud and increasing productivity [32]. When considering speed, downtime or latency in data communication can cost a business thousands of dollars; edge gives the network the capability to increase speed by reducing latency [33]. It does this by reducing the distance information travels, processing data closer to its source. Because data is not sent to a far-away cloud server, both speed and overall responsiveness improve [33].
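The latency benefit described above can be illustrated with a toy simulation. The delay figures below are invented for illustration only; real network round trips vary widely:

```python
import time

def fetch(process_locally, network_round_trip_s=0.120, edge_processing_s=0.005):
    """Simulate the latency of one request. The delay values are
    invented for illustration, not measurements of a real network."""
    start = time.perf_counter()
    if process_locally:
        time.sleep(edge_processing_s)      # handled at the edge, near the user
    else:
        time.sleep(network_round_trip_s)   # round trip to a distant cloud
        time.sleep(edge_processing_s)      # plus the same processing time
    return time.perf_counter() - start

cloud_latency = fetch(process_locally=False)
edge_latency = fetch(process_locally=True)
print(f"cloud: {cloud_latency * 1000:.0f} ms, edge: {edge_latency * 1000:.0f} ms")
```

The point of the sketch is simply that removing the long network leg, rather than speeding up the processing itself, is where edge computing wins time.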


Improved Security

It is interesting to note that although edge computing brings security challenges (as discussed in the next section), it also addresses the traditional data security concerns associated with cloud computing. Security here refers to the possibility of information stored in the cloud being hacked, compromising the user's or customer's data [31]. Traditional cloud computing is centralized, putting it at risk of Distributed Denial of Service (DDoS) attacks and power outages [34]. Edge computing, however, distributes processing and storage across devices and data centres, which makes it difficult for disruptions to take down the network. As well, edge computing allows a device to filter the large amount of data it collects, so only relevant and contextual information is sent to the cloud. There are also times when devices are not linked to a network; if the cloud somehow becomes compromised, it will not affect all of the users' data, mitigating some of the potential risks a hack could bring [31].

Reduced Infrastructure

With less data transferred to the cloud, there is less to intercept during transmission. Rather than expanding servers to create enough capacity for information, edge computing lets you store data directly with the user, meaning you can expand without paying for additional infrastructure [31]. This offers companies scalability, as edge computing can be used to scale an IoT network without needing to worry about storage requirements [33].


Improved Reliability

Edge computing also improves reliability: because it does not depend on a steady connection with a server or the internet, it is less affected by slow connections or network failures [31].

Since data storage is local, IoT applications consume less bandwidth and are able to work even when the connection to the cloud is affected. Edge computing operates even with limited connectivity, so business operations can carry on without worrying about data loss. Data can be routed along different paths to give users access to information whenever and wherever they need it [34]. Edge computing is typically used for operations in remote areas or locations that lack a reliable, high-quality network connection [31].


Challenges of Edge Computing

Some of the challenges of edge computing include security, additional hardware, cost, and varying device requirements. The upcoming sections discuss each of these in greater detail.

Security and Authentication

Starting with security and authentication: like any emerging technology, edge faces cybersecurity concerns [31]. The data collection process poses a challenge if an attack compromises it at such a critical juncture, since a hacker can manipulate the device into misinterpreting the collected data [31]. Edge-powered devices must physically shield the data they store; if they fail to do so, a hacker can tamper with it [31]. Additionally, data at the edge can be troublesome when handled by different devices that may not be as secure as a centralized or cloud-based system [35]. To combat this challenge, it is important to ensure that data is encrypted correctly [35]. As an extra layer of security, different devices can be assigned different authentication expiration times depending on their function [35]. For critical devices such as a locker or heart monitor, the expiration times can be very low, so the device is re-authenticated just before each use [35].
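The idea of tying authentication expiration times to device criticality can be sketched as follows. The device classes and lifetime values are invented for illustration, not taken from any real deployment:

```python
import time

# Shorter token lifetimes for more critical devices (values are illustrative).
TOKEN_LIFETIME_S = {
    "smart_lock": 30,      # critical: re-authenticate almost every use
    "heart_monitor": 60,
    "thermostat": 3600,    # lower risk: a longer-lived session is acceptable
}

def issue_token(device_type, now=None):
    """Issue an authentication token whose expiry depends on device type."""
    now = time.time() if now is None else now
    return {"device": device_type,
            "expires_at": now + TOKEN_LIFETIME_S[device_type]}

def is_valid(token, now=None):
    """A token is only valid before its device-specific expiry."""
    now = time.time() if now is None else now
    return now < token["expires_at"]

token = issue_token("smart_lock", now=0)
print(is_valid(token, now=10))   # True: within the 30 s window
print(is_valid(token, now=45))   # False: the lock must re-authenticate
```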

More Hardware

Edge will require more storage on the device as storage and computing hardware evolve in sophistication and compactness, and the additional hardware required can become a major factor when you initiate the development of an IoT device [31]. Inside an edge computing device there is a limited amount of CPU, RAM, GPU/APU, and networking support [35]. Each real-time application needs some share of these resources to perform its task within the prescribed deadline [35], so devices may have to wait to operate until capacity can be secured from the edge computing network. As an example, consider an edge device analyzing both a video stream from an AR device and data from a thermostat [35]. The thermostat needs analysis of the video stream to determine how many people are in a room and adjust its setting accordingly. If the edge device can only support the analysis of a single video stream due to the demanding nature of such processing, it will have to make a scheduling decision and prioritize one of the two workloads [35]. If the edge computing paradigm becomes popular, contention for resources at edge devices is guaranteed, and the assumption of instant resource availability that many applications make today may cause failures of the timeliness guarantees of many real-time services. One possible approach is to use the cloud as a fallback for applications that can tolerate delays [35]. Also, where application data is not designed to persist, the information can be stored on the user device, the cloud, or a combination of the two [35].
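The scheduling decision described above can be sketched with a simple priority queue: when the edge node can only run one analysis at a time, it pops the most urgent pending workload first. The workload names and priority values are invented for illustration:

```python
import heapq

class EdgeScheduler:
    """Pick which pending workload runs when edge capacity is limited.
    Lower priority number = more urgent (heapq pops the smallest first)."""

    def __init__(self):
        self._queue = []
        self._counter = 0  # tie-breaker preserves insertion order

    def submit(self, priority, workload):
        heapq.heappush(self._queue, (priority, self._counter, workload))
        self._counter += 1

    def run_next(self):
        if not self._queue:
            return None
        _, _, workload = heapq.heappop(self._queue)
        return workload

sched = EdgeScheduler()
sched.submit(priority=2, workload="thermostat occupancy count")
sched.submit(priority=1, workload="AR headset video analysis")
print(sched.run_next())  # the AR stream runs first; the thermostat waits
```

A real edge runtime would also weigh deadlines and resource footprints, and could fall back to the cloud for workloads that tolerate delay, as the paragraph suggests.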


Cost

Edge computing devices that possess data processing functions have expensive operating costs, and operating older technological architectures that lack such processing capabilities mandates additional equipment, which results in extra costs. Similarly, in the implementation and post-implementation phases of systems development, the configuration, deployment, and maintenance of an edge computing framework are equally expensive. Implementing an edge infrastructure can thus be quite expensive, although it can lead to more efficiency [36]. Edge computing platforms charge a premium for the advantages they make possible. For instance, a request processed via AWS's Lambda@Edge costs approximately three times the price of a request processed via AWS Lambda, which is already a costly service. Given the nascency of the technology, which requires a significant period of time before mainstream market adoption, such pricing would constrain edge computing to a niche product serving only a limited set of applications that are extremely sensitive to network latency and bandwidth constraints [37].
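To illustrate how a per-request price premium compounds at scale, the sketch below assumes prices of $0.20 per million requests for a regional serverless function and $0.60 per million at the edge - the roughly three-to-one ratio mentioned above. These are illustrative assumptions, not current AWS list prices:

```python
def monthly_request_cost(requests, price_per_million_usd):
    """Request-only cost for a month of serverless invocations."""
    return requests / 1_000_000 * price_per_million_usd

requests_per_month = 500_000_000  # half a billion requests, for illustration
regional = monthly_request_cost(requests_per_month, 0.20)
edge = monthly_request_cost(requests_per_month, 0.60)

# The same traffic costs three times as much at the edge.
print(f"regional: ${regional:.2f}, edge: ${edge:.2f}, ratio: {edge / regional:.0f}x")
```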

Varying Device Requirements

Differing device requirements for processing power, electricity and network connectivity speeds can significantly impact the performance reliability of an edge device [1]. If a single node fails, then redundancy and failover management become crucial for devices that process data at the edge to ensure that such data is delivered and processed correctly [1].

Current Trends and Use Cases

There have been vibrant discussions around edge computing and its potential applications for enterprises in recent years. Combined with the development of cloud computing, IoT, AI and 5G networks, companies started to see the value of edge computing and the potential in the fast-evolving technology. Currently, there are many applications or use cases of edge computing adopted in a wide range of industries and seven of them are listed below as examples.

Testing the face recognition live at Startup Autobahn Expo Day [38].

Automotive Industry

During the Startup Autobahn program in 2019 [39], Porsche collaborated with the US-based edge computing start-up FogHorn on a driver recognition project using edge AI to enable keyless and offline unlocking of Porsche vehicles [38]. The driver’s identity is authenticated based on multiple security factors: a camera for facial recognition, an infrared camera for spoofing detection and a Bluetooth sensor to detect the proximity of the driver’s mobile phone [38]. Within the 100-day program, the algorithms were trained on real camera images of a few people and were able to recognize them offline during the demonstration. If implemented on a larger scale, this technology would enhance the driver experience by providing frictionless, instant entry to the car and eliminating the inconvenience of misplacing a key [38].

Retail Industry

Shell, the oil and gas company, is piloting a machine vision system built on Microsoft Azure at its retail gas stations to better protect its customers. The machine vision system is essentially a computer that can process and analyze the footage it captures and guide machine actions and human decisions accordingly [40]. At the retail gas stations, the system will be able to predict and detect unsafe actions onsite and alert staff to intervene [40]. For instance, if a person starts smoking at the pump, an onsite camera will capture the footage and identify that there is a safety issue [40]. The system will then pop up an alert on the computer dashboard of the store manager, who can disable the pump immediately to avoid potential incidents [40]. Shell chose Azure IoT Edge to take advantage of the low-latency, near real-time responses that the edge solution offers, which is especially valuable in making quick safety decisions to avoid hazards. As a next step, the company is looking into leveraging edge technologies in the safety management of vital assets such as pipelines and wells.

Bühler's LumoVision grain sorter [41]

Agriculture Industry

Bühler, a Swiss plant equipment manufacturing company, worked with Microsoft to develop a grain sorter called LumoVision [42]. The sorter uses UV light and AI-enabled cameras to identify and expel defective grains from the production line, at a rate of 10 to 15 tons per hour [43]. This will bring the efficiency and quality control of grain sorting to the next level as it was previously impossible for human eyes to find small defects in such a large-scale and fast-moving production line. Specifically, the machine can minimize toxic contamination in maize and improve yield by identifying and removing 90% of cancer-causing, aflatoxin-infected grains [44]. The innovation not only saves time and money for food production companies but also helps reduce waste in the global food supply chain [42].

Healthcare Industry

An example of the LHSS dashboard [45].

FogHorn, the US-based edge AI software developer, is offering a new product line called the Lightning Health & Safety Solution (LHSS) [46]. The suite contains real-time analytics capabilities and provides machine learning that is pre-trained for actions such as temperature detection, cough detection, hand washing monitoring and social distancing monitoring [45]. For instance, using infrared cameras, video analytics and biometrics, the system can determine a person’s temperature and carry out mask detection [45]. This use case is extremely helpful and relevant during a pandemic such as COVID-19.

The FogHorn LHSS also offers a dashboard that allows customization for different industrial environments. For example, if a company requires hard hats and safety vests to be worn on-site, the system can monitor the use of the safety equipment and trigger alerts on the manager’s dashboard when workers fail to comply [45].

The AI-led Mayflower Autonomous Ship is expected to cross the Atlantic Ocean on its own in September 2020 [47].

Transportation Industry

In March 2020, the marine research organization Promare and IBM announced that they had developed and would be testing an autonomous ship called the Mayflower. With an AI Captain, the Mayflower Autonomous Ship (MAS) will be able to self-navigate across the Atlantic Ocean [47]. During the two years before the trial, the team trained the AI Captain with over a million nautical images [47]. The AI Captain will use cameras, AI and edge computing systems to navigate around the ships, buoys and other ocean hazards it is expected to face during its transatlantic voyage in September 2020 [47].

Edge computing is especially critical for an autonomous ship like Mayflower because it will have no access to high-bandwidth connectivity throughout its journey. The ship will have to sense the environment, make smart decisions and act on these insights all by itself within a short period of time, and this can be accomplished with the help of edge computing [47].

Noble launched the world’s first digital drilling vessel [48].

Oil and Gas Industry

In 2018, the offshore drilling company Noble launched the world's first digital drilling vessel, the Noble Globetrotter I. Before this digitized rig was implemented, the protocol for a rig failure was extremely inefficient and involved a lot of back and forth. The crew members onsite had to troubleshoot locally, phone the experts onshore to discuss their findings, follow the experts' directions for further troubleshooting, and call back for additional instructions [48].

The new system solves this problem by collecting information from the actual rig and creating a virtual twin of the rig that lives in an edge processor [48]. If the drawworks fail prematurely on the virtual twin, the same malfunction is anticipated to occur on the physical rig in the near future. The system collects and analyzes data from the physical rig and notifies the crew on-site and the experts onshore, who work together to fix the issue before it occurs [48]. By predicting potential failures as far as two months in advance, Noble can avert breakdowns almost entirely, saving the company $80,000 to $465,000 USD per day [48].
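The digital-twin idea can be sketched as comparing live sensor readings from the physical rig against the values the virtual twin expects, and flagging drift as a predicted failure. The sensor names and the 15% tolerance below are illustrative assumptions, not Noble's actual model.

```python
# Hedged sketch of digital-twin anomaly detection: flag sensors whose
# live readings drift beyond a tolerance from the twin's expected values.
# Sensor names and the default tolerance are illustrative assumptions.

def predict_failure(live_readings, twin_expected, tolerance=0.15):
    """Return sensors whose readings drift more than `tolerance`
    (relative) from the virtual twin's simulated values."""
    drifting = []
    for sensor, expected in twin_expected.items():
        actual = live_readings.get(sensor, expected)
        if expected and abs(actual - expected) / abs(expected) > tolerance:
            drifting.append(sensor)
    return drifting
```

In practice the twin's expected values come from a physics-based simulation updated with rig telemetry; the edge processor runs this comparison continuously so warnings arrive before the physical component fails.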


Similar to the private sector, government agencies have started to see the benefits of edge computing and incorporated the technology within their operations. Below are some examples of the US government implementing edge computing in their business processes.

The Federal Emergency Management Agency (FEMA) is using edge-enabled drones to collect visual data before deploying human rescue teams. It is also using facial recognition to gather information about disaster survivors onsite [49].

Edge computing has helped the US Air Force to save almost $1 million USD a week in tanker refueling costs as it enables better predictive logistics for refueling planes when they are in the air [49].

With edge computing, US soldiers are able to run more complex applications, such as 3D-rendering or geospatial apps, at the tactical edge and dynamically change those applications based on unforeseen factors [49]. For instance, soldiers may start out on a monitoring mission; when they reach the area, something may have changed that turns the mission into a reconnaissance one. Edge computing allows them to reconfigure their edge devices and bring up apps better suited to the new mission, whether they are connected to the network or not [49].

The Agriculture Department is using edge computing to perform onsite soil sample analysis [49].

Future Trends

Smart Cities

A smart city is one that pursues its objectives through communication technology solutions: a framework of information and communication technologies used to develop, deploy, and promote sustainable development practices [50]. The main objective is to enhance residents' quality of life by addressing growing urbanization challenges. Investing in the technology that will make smart cities a reality is important because an estimated 60% of the world's population will live in cities by 2050 [51].

Smart Grids

The current grid suffers from issues such as unpredictable outages, undetectable consumer fraud, and inflexible electricity prices [52]. With the introduction of smart grids, there is two-way communication between the electricity provider and the consumer. By frequently measuring a consumer's electricity usage through a smart meter, smart grids can improve efficiency by constantly monitoring the grid's status, integrating renewable energy sources, and predicting energy demand [52].

Smart grids consist of a large number of sensors that continuously collect high-resolution data. For this data to be analyzed effectively, the analysis needs to happen in real time [52], and this is where the edge is necessary for the success of these systems. The cost savings and efficiencies created by smart grids will improve drastically with the integration of edge computing. By analyzing the data closer to homes and electricity grids, the edge helps reduce latency and improve reliability [52]. Reliability is crucial in smart grids because real-time data analysis is vital to properly facilitate technologies such as smart meters and microgrids, so that all stakeholders can reap the benefits of this proposed system.

Future Smart meters [53]

All of this is possible through smart meter technology, where real-time information is exchanged between stakeholders. Through edge computing, smart meters installed in residential homes are capable of analyzing data and then sending only the relevant information to the grid [52]. This ensures information is exchanged efficiently rather than relying on inefficient data transmission to a centralized server. With the rise of smart appliances, smart meters can connect with these appliances and schedule the most efficient times for them to run [52]. For instance, a smart meter may schedule your washer and dryer to run during off-peak times so that your bill is cheaper at the end of the month. Dynamic pricing models can give consumers real insight into how they consume electricity. Currently, our energy bills are inflexible and may lead to higher expenses for consumers and energy producers [52]. The main objective of these models is to increase the price of electricity in real time during peak times, when the load on the power grid is high, and decrease it during slower times [52]. Through smart grids, energy providers can obtain data from homes in real time and adjust their prices accordingly. Smart meter technology is already being used, and consumers who opt for suppliers with smart meters no longer have to rely on estimated utility bills [54]. This can help producers allocate their resources more efficiently and reduce the chance of costly outages.
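The appliance-scheduling idea above can be sketched as a small optimization: given an hourly price forecast from a dynamic pricing model, pick the cheapest consecutive window for an appliance's run. The prices and the two-hour washer-and-dryer cycle below are illustrative assumptions.

```python
# Sketch of off-peak appliance scheduling on a smart meter, assuming the
# meter has an hourly dynamic-price forecast. Prices are hypothetical.

def cheapest_start(prices, run_hours):
    """Return (start_hour, total_cost) minimizing cost for a run of
    `run_hours` consecutive hours, given per-hour prices."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - run_hours + 1):
        cost = sum(prices[start:start + run_hours])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

hourly_prices = [0.30, 0.28, 0.12, 0.10, 0.11, 0.35]  # $/kWh, assumed forecast
start, cost = cheapest_start(hourly_prices, 2)  # 2-hour washer/dryer cycle
# the 2-hour cycle is scheduled to begin at hour 3, the cheapest window
```

Because the meter computes this locally at the edge, only the chosen schedule (not the raw usage stream) needs to travel back to the grid.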

Edge computing also enables microgrids. A microgrid is a local energy grid, which means it can disconnect from the traditional grid and operate autonomously [55]. An example of a microgrid is the use of solar panels on building roofs to self-generate electricity when possible [56]. The microgrid is connected to the larger power grid and is capable of communicating when it needs more electricity from the grid, when it can sustain itself, and even when it has extra electricity to sell [55]. Pilot projects have tested microgrids' ability to support peer-to-peer trading of energy [56]. In the future, homeowners' smart meters may communicate directly with other smart meters and with power grids, and real-time data processing on the edge will be an important consideration in running an effective microgrid.
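The microgrid's three states described above (drawing from the main grid, sustaining itself, or selling surplus back) can be sketched as a simple threshold rule on local generation versus demand. The rule below is an illustrative assumption, not an actual grid controller.

```python
# Hedged sketch of a microgrid's communication decision, assuming the
# edge controller knows current local generation and demand in kW.

def microgrid_action(generated_kw, demand_kw):
    """Decide whether to sell surplus, buy the shortfall,
    or run self-sufficiently."""
    surplus = generated_kw - demand_kw
    if surplus > 0:
        return ("sell", surplus)    # offer extra electricity to the grid
    if surplus < 0:
        return ("buy", -surplus)    # draw the shortfall from the grid
    return ("self-sufficient", 0)
```

A real controller would also account for battery storage, forecasted generation and dynamic prices, but the core decision is this local, real-time comparison, which is why edge processing matters here.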

Drone Automation

An automated drone system increases efficiency by eliminating the need for a drone operator while providing seamless access to routine, frequent and real-time data [57]. UAVs (unmanned aerial vehicles) are increasingly used in many areas, and this is just the beginning. Left unmanaged, unmanned aircraft systems could lead to mid-air collisions, injury to people, and damage to property or other aircraft [57]. This is why it is important to have reliable technology that can facilitate the real-time analysis necessary to use UAVs responsibly. Edge nodes shorten the communication loop between the drone and the air traffic control tower, allowing faster data-based decision making [58]. There is simply not enough fiber in the ground to send all of this data to the centralized internet; processing has to happen quickly at the edge. Drones eliminate almost all of the labor involved in collecting such data and can do so cost-effectively [57].

Drone Automation via DHL delivery drones [59]

Traffic Management

Ground Traffic Management: With UAVs monitoring traffic during rush hour, data can be exchanged between street cameras, drones, and first responders to navigate congested roads effectively. Drones will have enough visibility of traffic to autonomously make decisions that help reduce congestion. For instance, during peak times, drones responsible for certain areas can be launched to help manage ground traffic; they can communicate with each other, cross-reference their data with street cameras, and then decide to control certain traffic lights [60]. By obtaining real-time data through UAVs, first responders on the ground gain a bird's-eye view of roads and can adjust routes and speeds according to real-time recommendations from these devices. Going further, Saguna Networks gives firefighters the ability to call upon and deploy drones to gain visibility in emergency situations [61]. By deploying these drones, firefighters can access a live camera feed for areas they would not otherwise be able to see [61]. With the rapid deployment of UAVs, we may also see them used in ways consumers do not want. For example, what if UAVs could tell when someone was driving over the speed limit, completed a rolling stop, or was texting while driving? Drones may gain the capability to read license plates and autonomously send out tickets for infractions. This may rub people the wrong way and raises questions about privacy.

Air Traffic Management: Typically, air traffic has been monitored and managed by air traffic controllers ensuring planes do not collide. With the introduction of drones, however, there will also be a need to manage low-altitude air traffic [60]. Drones need the ability to communicate with each other very quickly so that mid-air accidents do not occur. It is simply not good enough for this traffic to be monitored through a centralized cloud platform, because the consequences of accidents in airspace can be fatal. Using edge computing, latency can be reduced to such a degree that a drone travels two inches before the next update is provided, as opposed to twelve feet if the computation were to happen in the cloud [60]. In the near future, we will see drones in the air for delivery, traffic management, and industrial uses, which will significantly increase the amount of air traffic needing to be managed. If we have flying cars in the future, this will further complicate air traffic control and necessitate unmanned traffic management systems.
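A back-of-the-envelope calculation shows how the two-inch versus twelve-foot figures could arise: the distance a drone travels between updates is just its speed multiplied by the round-trip latency. The 40 mph drone speed and the 3 ms edge versus 200 ms cloud latencies below are illustrative assumptions chosen to reproduce the figures in [60], not measured values.

```python
# How far does a drone travel before the next update arrives?
# distance = speed x round-trip latency. Inputs are illustrative.

def distance_per_update_inches(speed_mph, latency_ms):
    feet_per_second = speed_mph * 5280 / 3600
    return feet_per_second * (latency_ms / 1000) * 12  # convert feet to inches

edge_gap = distance_per_update_inches(40, 3)     # ~2.1 inches (edge)
cloud_gap = distance_per_update_inches(40, 200)  # ~141 inches, ~11.7 feet (cloud)
```

The roughly 70x difference in latency translates directly into a 70x difference in "blind" travel distance, which is why collision avoidance between drones demands edge processing.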

Automated Delivery

It is difficult to determine how many drones we will see flying around with packages in the future, but companies are already experimenting with promising technology to make drone delivery a reality. DHL is working with EHang, an intelligent autonomous aerial vehicle company, to tackle the delivery challenges associated with the last mile in congested urban areas of China [59]. With the huge demand for B2C delivery in China, DHL will be able to meet time-sensitive delivery demands and create a competitive advantage through its logistics network. The drones allow DHL to avoid the traffic congestion associated with road transportation and will reduce the energy consumption and carbon footprint of past delivery methods [59]. Drone delivery also promises high cost savings, with DHL stating that its current drones may be able to reduce costs by up to 80% per delivery [59]. Drone delivery systems also have the potential to reach consumers in more remote areas, where drones may be a necessity for citizens to obtain their packages.

Case Study: Edge Computing and Computer Vision Technology

What is Computer Vision

The field of computer vision primarily focuses on designing computer systems that can capture, analyze, and interpret the important visual information contained within image and video data [62]. Using contextual knowledge provided by human input, computer vision systems translate such data sets into actionable insights that drive decision making. The core principle is to transform raw image and video data into higher-level concepts that can be interpreted and acted upon by humans, with or without other computer systems [62]. The technology has been a subject of increasing interest and rigorous research for decades [63]. Computer vision's global market size has been predicted to be worth between $17.4 billion and $48.32 billion by 2023, and Deloitte has reported that 57% of US-based survey respondents said their organizations had already adopted computer vision technology [64]. Research in the field aims to develop AI-based machines that can closely emulate the human visual system and automate use cases that require visual cognition [63]. Because of the significantly greater amount of multi-dimensional big data that requires intensive analysis, deciphering images and videos to generate context is more complex than understanding other forms of binary information, which makes developing AI systems that are competent at recognizing visual data a complicated task [63].

Evolution of Computer Vision

Until recently, the use of computer vision systems was constrained to handcrafted algorithms for individual purposes or specific use cases [65]. Prior to 2012, training such systems involved processing images at their smallest granular units of visual data: the pixels. The system would evaluate digital images on the basis of minute differences in factors such as brightness and darkness, colour saturation and pixel density, which would determine the structure and thus the identity of the larger object [62]. Early computer vision systems relied extensively on manually built, rule-based classification techniques to identify and classify individual objects. This involved engineers explicitly codifying and programming machines to memorize the individual features understood to constitute a whole image [62].

Such systems were very adept at identifying specific features and images in laboratory settings and simulated environments. However, their performance in the real world deteriorated quickly when input data strayed from design assumptions, such as changing lighting conditions, erratic weather patterns, shifting camera angles and other issues caused by unknown externalities [65]. Researchers spent years developing and tailoring algorithms for individual use cases to ensure continuous operation under differing external conditions. Despite incremental progress, however, cameras and video recorders that used such algorithms were still not robust enough, which historically limited the usefulness and commercial adoption of computer vision technology [65].

Deep learning workflow for computer vision technology [62].

However, advances in machine learning, and deep learning in particular, are making computer vision algorithms more effective for real-world applications [62]. Computer vision technology is now powered by deep learning algorithms that use convolutional neural networks (CNNs), a special kind of neural network, to derive value from image and video data. While traditional machine vision techniques take a top-down approach to analyzing image composition, deep learning models flip the entire process [62]. The deep neural network training process uses massive data sets and iterative training cycles to teach the machine with a bottom-up approach. The figure illustrates the difference between traditional machine learning processes for object detection and image recognition and a deep-learning-based approach [62].

The core differentiator is that traditional vision systems involve humans telling machines which specific features an object should be composed of, whereas a deep learning algorithm automatically extracts object features and memorizes them to identify objects with similar elements. This bottom-up approach is more effective at solving certain kinds of visual analysis problems [62].

These neural networks are trained using thousands of sample images, which helps the algorithm understand and break down all the elements contained in an image [63]. Such networks scan individual image pixels to identify patterns and “memorize” them. In supervised learning, the network also memorizes the ideal output for each input image, or classifies components of images by scanning characteristics such as colours and contours [63]. This stored memory is then used as a reference point while scanning more images, and with each successive iteration the AI system becomes more efficient and adept at providing the right output [63]. During training, the algorithm automatically extracts relevant feature information, producing a model that can then be applied to previously unseen images to generate an accurate object classification [62].
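The feature-extraction step these networks perform can be illustrated with a single hand-coded convolution pass: a small kernel slides over the pixels and responds strongly wherever a pattern (here, a vertical edge) appears. A real CNN learns thousands of such kernels from its training data instead of hand-coding them; this sketch uses the classic Sobel kernel purely for illustration.

```python
# One convolution pass over a tiny grayscale "image" with a vertical
# edge-detection kernel, in pure Python. CNN layers stack many learned
# kernels like this one; here the kernel is fixed for illustration.

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding), as applied in CNN layers."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out

# A 4x4 image with a sharp vertical edge between dark (0) and bright (1):
image = [[0, 0, 1, 1]] * 4
sobel_x = [[-1, 0, 1],   # Sobel kernel: responds strongly where
           [-2, 0, 2],   # brightness changes from left to right
           [-1, 0, 1]]
features = convolve2d(image, sobel_x)  # large values mark the edge location
```

Every cell of the output map here is strongly activated because the edge runs through the whole window; on a larger image the activations would trace the edge's position, and deeper layers would combine such maps into contours and object parts.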

Benefits of Edge AI-based Computer Vision Technology

Consumers have an insatiable appetite for technological advancement in the consumer IoT segment, an all-encompassing term for the communication, entertainment, security and home automation devices that improve user experience, convenience and safety [66]. This is evident from interface mechanisms that have evolved from purely tactile, touch-based solutions to a wider range of biometric, voice, gesture, video-based and computer vision capabilities [66]. However, several concerns over latency, physical limitations and privacy in traditional cloud-based connected devices have constrained further development in this field [66]. Often the significant delay in sending video or image data to a centralized cloud computing location does not meet the timeliness required for critical real-time decisions; such delay reduces the efficacy of processing real-time information because it cannot be acted on to generate immediate insights [67]. Sometimes the network's physical limitations do not provide enough bandwidth to communicate all of the unstructured data to cloud server locations. A final consideration against future adoption of cloud-based models for computer vision is data privacy regulation, as sending data across regional or geographic boundaries to a centralized location may violate privacy laws [67]. In the longer term, lessening the dependence on the cloud-based model by shifting the burden to the edge is becoming a priority strategy across the IoT ecosystem [68].

The constraints of cloud-based computer vision systems have given rise to edge-based processing [66]. Edge computing has had a significant impact on modern computer vision systems, which are increasingly deployed to enable time-sensitive, business-critical decisions made on the basis of visual data. Innovations in edge AI-enabled computer vision are poised to revolutionize existing IoT connectivity structures and business models [65]. The technology will expand existing computer vision use cases by driving an evolution beyond the first phase of IoT adoption, which primarily focused on connecting different IoT devices that require direct commands under a single hub, aggregating data to build up big data platforms [65] [66]. In the second phase, the focus is shifting toward intelligent, perceptive IoT devices that can infer user intent through technologies like deep learning and computer vision, generating a diversity of new forms of actionable data [65] [66].

The human-machine interface (HMI) is a critical element in enhancing the user experience in an era of interconnected smart IoT devices. Machines that can understand and predictively respond to user behaviour, commands or touch without a constant dependence on the cloud are poised to revolutionize how IoT delivers unprecedented levels of productivity, privacy and convenience in consumers' lives [68]. Internet access will still be necessary for several IoT scenarios, such as streaming media content or requesting real-time updates, but this new era of hybrid cloud/edge IoT will be facilitated by local intelligence that lessens the dependency on, and ultimately the cost and risk of, communicating user vision or voice data to the cloud for processing. AI-driven neural networks processed at edge locations, whether directly on-device or in close physical proximity, are pivotal in addressing current challenges in performance and robustness, including the data privacy concerns that constrain the cloud computing model [68]. Until recently, commercial use of smart edge processing was reserved for expensive devices like smartphones, owing to intensive computational power requirements that were out of reach for low-cost devices and appliances. However, newer-generation SoCs can now offer secure neural network acceleration at price points that target mainstream consumers [68]. Such cost-effective AI-based edge solutions can improve performance by creating a more intuitive, human-like technological experience. This will enable a range of truly smart IoT devices that use multi-sensor, multi-modal inputs and always-on listening features to learn an individual user's behavioural patterns and associate those patterns with device interactions [68].

Future Use Case 1: Synaptics: Media and Entertainment Industry

Synaptics is a leading developer of human-machine interface hardware and software solutions that advance consumer IoT use cases through secure inferencing performed on the edge device [69]. Its Smart Edge AI platform, the VideoSmart VS600, is a family of high-performance multimedia system-on-a-chip (SoC) solutions targeted at device manufacturers, integrating an NPU, CPU and GPU into a single software-enriched chip. The SoC is based on SyNAP (Synaptics Neural Network Acceleration and Processing), a full-stack framework that enables on-device processing of deep learning modules for advanced features such as user identification and behavioural prediction through computer vision, video and voice technology. This allows smart IoT devices to perform ambient computing for intuitive user interactions [69]. The VS600 family is specifically designed with human perceptive intelligence to create new generations of smart home devices such as smart cameras, smart displays, voice-enabled devices, video soundbars and emerging computer vision IoT products [69]. The chip's energy efficiency and small form factor allow it to fit in battery-powered consumer IoT products, and it can run sophisticated AI and machine learning algorithms locally, sparing users the bandwidth, latency, privacy and cost challenges of a cloud-based model [70].

The Smart Home Edge AI platform enables ambient computing, a term that refers to the symbiotic combination of software, hardware, user experience and human-machine interaction and learning that lets consumers use devices almost subconsciously. It is the collective use of IoT devices as extensions of one another to offer a seamless overall customer experience [71]. Ambient computing relies on a minimal level of user interaction and does not require continuous active user participation [70]. Using AI and deep learning, the platform can power an entire integrated ecosystem of IoT devices that learn about users, their preferences and their environments, offering hyper-personalized services through continuous machine learning that predicts optimal actions or responses [70]. Such perceptive intelligence is enabled by vision technology and sensors embedded in our environments. This advanced level of intelligence results from advancements in AI, machine learning and deep neural networks that shift the paradigm from sensing to perception and, ultimately, the recognition of user intent [70]. Using Smart Edge AI, IoT devices can collect, label and analyze video and audio data and respond intelligently in near real time rather than transferring the data to central servers [70]. A cloud connection then becomes necessary only when streaming media or music content or receiving real-time updates such as news and weather [72].

Synaptics Smart Edge AI platform using ambient computing technology [73].


Synaptics aims to advance biometric technology beyond its current use as an authentication and verification tool by combining computer vision and voice technology. The ability to firewall and store all sensor-related information within a single device capable of running complex machine learning algorithms creates a new spectrum of applications that were previously impractical [68]. Smart Edge AI enables features such as:

Automatic Content Personalization: Hyper content personalization is achieved by automatically creating and pairing customer profiles with their individual media history using easily identifiable biometric features such as voice and face ID. The IoT device uses edge processing and doesn’t require explicit user interaction or biometric training to complete the user registration or verification process. The range of additional modalities in combination with AI will enable human-machine interaction models to better learn and adapt to individual users’ behaviours and become contextually aware [68].

Voice Identification: Smart Edge AI distinguishes voice from other users through machine learning at the edge to deliver personalized content preferences that are catered to individual user profiles [74]. The company's performance and feature breakthroughs using a far-field voice interface help to bring a more natural user convenience to voice-enabled devices [68].

Face Identification: Smart Edge AI uses cameras and integrated computer vision intelligence on the device to offer facial recognition and emotion identification that distinguish between users [68]. If multiple users are within the camera's field of vision, it uses machine learning to display a content menu related to programs those users previously watched together, based on their media history [74].

Event Detection: Another Smart Edge AI feature uses machine learning to let media users train their television programming to capture specific moments in certain events. For instance, the algorithms can be trained to capture baseball pitches during a game, enabling users to smart fast-forward to the desirable content for an enhanced viewing experience [72]. Locally analyzing user media content with machine learning models enables the device to better match user preferences by personalizing its interface [68].

Logo and Content Detection: When streaming media such as videos from OTT sources, the rights associated with premium content prohibit sending any video or audio segment to cloud servers for content analysis [68]. Smart Edge AI uses on-device computer vision processing to recognize logos and media content displayed within the viewer's on-screen programming; it was found to recognize certain content, such as the BMW and CNN logos, with a 99% accuracy rate. Media preference data is then sent to service providers, who use it to deliver highly targeted advertising at scale or to offer paid programming based on individual user preferences [74]. This addresses digital rights management challenges by enabling machine learning-based content rights analysis and securing video content in a trusted environment [68].

Multi-Factor Biometrics: Smart Edge AI enables multi-factor biometrics as different levels of user authentication may be appropriate for different functions. The technology recognizes the need for alternative authorization forms for varying use cases. For instance, passive face identification is satisfactory for generating TV show recommendations, whereas a user might need multi-factor biometrics to authorize movie purchases using both face and voice recognition [75].

Hacking Protection: All biometric registration and authentication information is stored inside the edge device, away from the cloud, addressing privacy concerns that currently impede widespread adoption of, and consumer confidence in, always-on IoT devices. Localizing sensor data increases device manufacturer and consumer confidence in using audio and visual sensors in an always-on mode by providing an acceptable guarantee of security and data privacy. Such devices can use their machine learning capabilities to become more contextually aware of input video and voice data. For instance, edge-enabled devices can run voice biometrics, natural language understanding and a large vocabulary in always-on listening mode, allowing the device to constantly monitor and differentiate between users talking around it and determine whether its participation is required by analyzing speech content [68].

Future Use Case 2: Marketing and Retail Industry


Graymatics is an industry-leading computer vision company whose platform allows automatic real-time indexing, analysis and classification of visual data, including videos and images. The company has a suite of search, recommendation, curation and advertising tools [76] [77]. Its core technology platform includes ‘context connect’, which analyzes content in a company's digital assets to link it to other related content or e-commerce partners. The technology helps place advertisements within other content, such as images or videos, while ensuring the best match between the type of content and the nature of the product [76]. Graymatics' retail computer vision system provides several benefits that aid employee performance and increase efficiency in individual retail store management. For retail applications, the company builds and trains sophisticated AI using algorithms trained to identify specific objects, activities and human behaviours from visual feeds, such as customers interacting with products and conversing with employees [77]. The platform is built to correct the management inefficiencies that result from traditional post-event video analysis by creating proactive, actionable insights from existing data. It utilizes existing still image and video data from video inputs and other sensors, such as infrared cameras, to add a layer of objective performance management to the retail store floor [77]. The retail benefits are outlined below:

Graymatics Computer Vision System [76].

Objective Performance Indicator: Traditionally, employees’ performance is measured using standardized manual performance review systems that provide an inefficient, surface-level measurement of key performance indicators, or KPIs. Graymatics’ technology uses existing data from CCTV cameras to aid employee performance analysis by tracking individual employee actions and movements. The technology can track individual employee-customer interactions and offers objective performance indicators, such as average customer servicing times, that can supplement traditional employee review processes [77].

Emotion Identification: Graymatics’ technology uses emotion identification to analyze the emotions customers express during their interactions with store employees across a video timeline. Using computer vision, the system can be trained to identify and measure customer satisfaction levels by gauging facial expressions from real-time video inputs. This aids in differentiating between individual employee performance levels based on the satisfaction of the customers they interact with [77].

Increased Security: Traditionally, from a security perspective, store CCTV cameras are used to provide post-event proof in cases of damage to the store, larceny, or other illegal activity. Graymatics’ platform can automate activity analysis, from customer shoplifting to other internal activities. Such features can save companies massive amounts of time while also identifying potential areas of insecurity and danger that human scanning could never identify pre-emptively [77].

Targeted Training: The technology also provides benefits from a human resource perspective. It provides the ability to identify poor performers and assist in performance management by imparting targeted training based on their real challenges rather than on perceived areas of improvement [77].

Gorilla Technology

Gorilla Technology is a private company that specializes in video intelligence and IoT technology. It supports a wide range of video-centric and content management applications for enterprise, surveillance and retail-based applications. Its machine learning and deep learning video analysis algorithms can identify, analyze and extract information from digital content to drive business process automation and business intelligence solutions [78]. In a whitepaper titled Gorilla Edge AI, Gorilla Technology’s IVAR (Intelligent Video Analytics Recorder) is described as a real-time video analytics solution designed for CPU efficiency, using AI optimization to process large sets of data from video applications. As an Intel-certified, performance-driven edge AI and computer vision software, it provides benefits in business applications and security intelligence. IVAR was developed to be fully integrated with existing camera and surveillance systems; for example, it has integration capabilities with Video Management Systems (VMS) and System Access Control. This is particularly integral as companies seek smart video analytics technologies that integrate with existing infrastructure ecosystems [79].

Gorilla Smart Retail is a comprehensive, portable in-store analytical solution that uses edge-AI-based computer vision technology to address the marketing and operational issues facing individual and multi-store retailers. It correlates camera and IoT analytical data to provide store operation overviews of traffic, revenue counts and shopper conversion rates, delivering actionable insights for better staffing management, advertising strategy and business outcomes [80]. It provides a single platform that can be applied to different verticals within the retail space, from casinos and shopping malls to retail banking [81]. Gorilla’s smart retail technology creates real-time, actionable business insights by applying market segmentation analysis, traditionally reserved for digital mediums, to the offline retail environment.

Gorilla’s edge-based computer vision helps generate innovative forms of business insights based on customer activity in retail stores. In a recent case study at Senao, a national mobile phone retailer in Taiwan, Gorilla’s Smart Retail service cameras were embedded outside the store to generate data on customer conversion rates. The system provides summarized analytical reports displaying total customer traffic within the store’s radius, including passersby and shoppers entering the store. Computer vision cameras set up within the store further provide analytics on shopper activity based on customers’ demographic, psychographic and behavioural traits [82]. These segmentation strategies are outlined in the sections below:

Demographic Analysis: Gorilla’s computer vision technology is able to identify the demographic characteristics of shoppers such as age, gender, and ethnicity using video cameras placed at store entrances when customers enter the retail premises. It then uses smart signage placed outside the entrance to display store product information targeting the individual shopper based on identifiable demographic factors [82].

Gorilla Smart Edge AI tracks individual shoppers using computer vision technology [80].

Psychographic Analysis: Gorilla has combined infrared and computer vision technology to provide real-time merchandise activity analysis by identifying and measuring points of customer product interactions. Heat maps can identify individual customer product interactions and tag them with a red, green and blue heat field that showcases heavy, normal and minimal activity with specific products, respectively. It provides real-time daily statistics and the ability to average such data types to identify changing customer trends per day or longer duration periods. Path analysis is another feature which utilizes heat maps to track the direction which shoppers take from point of entry to exit. The data is then analyzed to predict which directions shoppers are more likely to chart [82].
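The heat-field idea reduces to counting tracked shopper positions per floor cell. A minimal sketch (with invented coordinates, cell size and tier cut-offs, not Gorilla’s actual implementation) might look like this:

```python
from collections import Counter

def build_heat_map(positions, cell_size=2):
    # Bin tracked shopper positions (x, y) into grid cells and count visits.
    counts = Counter()
    for x, y in positions:
        counts[(x // cell_size, y // cell_size)] += 1
    return counts

def activity_tier(count, heavy=5, normal=2):
    # Map a cell's visit count to the red/green/blue heat tiers
    # (heavy, normal and minimal activity respectively).
    if count >= heavy:
        return "red"
    if count >= normal:
        return "green"
    return "blue"
```

Averaging such per-cell counts over days would give the trend view described above; path analysis would additionally track the order in which cells are visited.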

Behavioural Analysis: Gorilla’s AI-based dwell-time analysis is another feature that uses heat maps to measure the time shoppers stay in a defined area inside the retail store. The data is visualized using a custom dashboard which is compared against average customer occupancy rates. The system is designed to manage customer traffic on the retail floor by triggering a notification if occupancy exceeds the optimum threshold [82]. Such systems can prove critical to help companies better adhere to social distancing norms during the COVID-19 pandemic.
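Dwell-time measurement and the occupancy trigger can likewise be sketched in a few lines; the frame rate, counts and threshold below are invented for illustration:

```python
def dwell_time(in_zone_flags, fps=1):
    # Seconds a tracked shopper spends inside a defined zone, given a
    # per-frame in-zone flag (1 or 0) from the vision pipeline.
    return sum(in_zone_flags) / fps

def occupancy_alerts(counts_per_minute, threshold):
    # Minutes in which the detected shopper count exceeds the optimum
    # occupancy threshold, triggering a notification to staff.
    return [minute for minute, count in enumerate(counts_per_minute)
            if count > threshold]
```

For example, `occupancy_alerts([3, 8, 5, 9], threshold=6)` flags minutes 1 and 3 as over-capacity, the kind of signal that could support distancing limits on the retail floor.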

Advanced Data Analytics: Gorilla Smart Retail's metrics dashboard enables managers to increase efficiency in decision making by creating actionable data insights that can be acted upon immediately to capture market momentum and prevent potential losses. Customized widget settings enable management to review high priority information that is relevant to the individual store and product/behaviour correlation analysis to create tailored marketing campaigns. Their metrics dashboard platform offers integration capabilities with external databases such as POS or CRM to produce in-depth comparison analysis insights across the different datasets [80].

Concerns of Edge-based Computer Vision Technology

Privacy and Public Safety Concerns

Ambient computing enables systems to passively and unconsciously identify, collect and use consumer data insights with little to no user acknowledgement or intervention. However, there may be situations or environments, such as commercial retail spaces, where individuals do not consent to their emotions or behaviour being analyzed, or to being paired with easily exposed personal biometric features such as their face or voice, without the explicit enrollment required by solutions like Synaptics’ Smart Edge AI [8]. For instance, Clearview AI, a controversial computer vision-based facial recognition app, enables mass identification by comparing pictures of unknown individuals against a database of more than three billion images claimed to be scraped from employment, learning and social media websites such as Facebook, LinkedIn, Twitter, Instagram, YouTube and others [83].

A chart from marketing materials that Clearview provided to law enforcement [83].

Although major internet platforms and tech companies have sent cease-and-desist letters ordering Clearview to stop illegally scraping data, as it violates their terms of service and community guidelines, the company has defended its behaviour by asserting a First Amendment right to collect public information [84].

Clearview’s algorithm doesn’t require photos of people looking straight at the camera and can correctly identify individuals from images with face coverings such as hats or glasses, profile shots or partial views [83]. There are concerns over the technology’s potential use by law enforcement agencies to identify and arrest protestors, including during the recent BLM movement. Such concerns include the public eradication of privacy, the stifling of free speech and the encroachment of civil liberties. There have been additional concerns about whether the company follows due process in considering a law enforcement agency’s "history of unlawful or discriminatory policing practices" prior to selling the technology, and about its parameters regarding free trials [85]. Although Clearview AI initially maintained that its tool was only meant for restricted use by law enforcement agencies and some private companies, the company was later found to have consistently misrepresented the nature and extent of its work and the breadth of its possible use cases [84].

Clearview’s marketing materials regarding its facial recognition technology [83].

The company was allegedly found selling its technology to clients in countries such as Saudi Arabia and the UAE, which have concerning records on civil liberties and human rights [84]. Such technology can potentially grant state actors the ability to penalize individuals and communities or be used as a tool of coercion against the general public. As for possible use cases, the computer code underlying Clearview’s app includes programming to pair it with augmented-reality glasses, which could enable individual users to identify every person in their field of vision without their knowledge or consent and reveal intimate personal details such as names, addresses, occupational information and lists of associates [83]. Clearview AI has also found a potential revenue source and is currently in talks with US federal authorities and three states to use its facial recognition software to assist COVID-19 contact tracing efforts. This news elicited a response from US Senator Ed Markey, who was concerned that the company’s involvement in the pandemic could normalize the invasive technology and "spell the end of our ability to move anonymously and freely in public" [86].

Recently in Illinois, a class-action lawsuit was filed against Macy’s, alleging the department store chain violated the state’s Biometric Information Privacy Act by using Clearview software to identify shoppers from security-camera footage. The plaintiff argued that the technology allowed Macy’s to stalk or track customers and to profit off stolen data [87]. Meanwhile, the UK recorded the world’s first landmark legal victory against police use of an automated facial recognition technology that scans 50 faces per second to compare individuals against a police watchlist. A London Court of Appeals ruled that its deployment breached data protection laws and human rights, since there were "fundamental deficiencies" and no clear guidance around the broad discretion it potentially gave police officers [88]. The US, by contrast, remains a patchwork of local laws governing biometric use at the discretion of individual states; Congress has been unable to pass even a basic federal online privacy law, and existing bans on the technology’s public-sector use, based as they are on its current discriminatory and inaccurate implementations, probably won’t be sustainable as the technology evolves [89].

Hacking and Perception Problems

The hacking of artificial intelligence is an emerging security crisis: user input data can be intentionally tweaked to trick a neural network and fool systems into "seeing something that isn't there, ignoring what is, or misclassifying objects entirely" [90]. Researchers have demonstrated the ease with which an AI system can be fooled into misreading a stop sign by carefully positioning stickers on it, or into seeing a penguin in a pattern of wavy lines [91]. Such minute alterations to inputs are typically imperceptible to humans yet can confuse the most advanced neural networks. Identifying such adversarial weaknesses could grant hackers the ability to hijack an online AI-based system and influence it to run their own algorithms [91]. To pre-empt hackers who attempt to hijack AI systems by tampering with datasets or their physical environment, researchers have turned their focus to adversarial machine learning. While such adversarial examples are now described as constituting a system attack, they were first perceived as "an almost philosophical blind spot in neural network design", since we assumed machines saw and processed objects using criteria similar to our own [90]. Two incorrect assumptions underlie the problem. First, machine-learning developers assume that training and testing data for AI will be identical, whereas attackers are free to manipulate data inputs for their benefit. Second, the assumption that neural networks process data the way humans do is wrong: neural networks are "extremely crude approximations of the human brain", and treating their decision-making processes as human-like is the origination point of such adversarial attacks [90].

Deep neural networks can be easily hacked [91].

Deep neural networks, or DNNs, are powerful because their many layers of processing enable them to identify patterns across many different elements and features of an input image when attempting to classify it [91]. However, an AI trained to recognize an aircraft might weigh image features such as texture, colour patches or background details as strongly as the features humans would consider salient markers, such as the aircraft’s wings. Such processing, which treats a wide set of image features as salient markers, has potential ramifications [91]. Hackers can make very small changes to a group of pixels within the image, called noise, which confuse the AI’s classification process and cause the algorithm to categorize the image as an entirely different object [91].
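The pixel-noise idea can be made concrete with a deliberately tiny, hypothetical sketch. Instead of a real DNN, the "model" below is a single linear scorer with invented weights; a small perturbation of each input value, chosen against the sign of the corresponding weight in the style of the fast gradient sign method (FGSM), flips the predicted label even though the input barely changes:

```python
def predict(weights, pixels):
    # Toy linear "network": positive score -> "aircraft", otherwise "bird".
    score = sum(w * p for w, p in zip(weights, pixels))
    return "aircraft" if score > 0 else "bird"

def adversarial_noise(weights, pixels, epsilon):
    # FGSM-style sketch: shift each pixel slightly against the sign of its
    # weight, lowering the score just enough to flip the classification.
    return [p - epsilon * (1 if w > 0 else -1) for w, p in zip(weights, pixels)]

weights = [0.5, -0.2, 0.1]   # invented model parameters
clean = [0.2, 0.1, 0.3]      # invented "image"
noisy = adversarial_noise(weights, clean, epsilon=0.15)
# predict(weights, clean) -> "aircraft"; predict(weights, noisy) -> "bird"
```

Real attacks on DNNs use the network’s gradients rather than hand-picked weights, but the principle is the same: many small, targeted input changes add up to a different decision.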

This idea was first described in 2014 by Google researchers in a paper titled Intriguing Properties of Neural Networks, which described "how adding a ‘perturbation’ to an image meant the network saw it incorrectly which they dubbed ‘adversarial examples’ " [90]. These small distortions, which fooled neural networks into misreading or misclassifying objects, raise questions about the intrinsic blind spots in the design of neural networks and the non-intuitive characteristics of their learning process. Such adversarial examples illustrate that researchers still have a very limited understanding of deep learning operations and their limitations [90]. Entire spectrums of attacks are possible depending on the phase of the machine-learning pipeline a hacker targets. A training-time attack occurs during the development stage of a machine learning model, using malicious data to train the system, while an inference-time attack involves showing specially crafted inputs to the trained model using a range of algorithms [90].

Examples of adding noise to fool deep neural networks [91].

Another example of human trickery and adversarial behaviour defeating computer vision comes from Walmart. Recently, a group of its employees expressed frustration with high rates of false positives at self-checkout and rising shrinkage issues, including public health concerns, allegedly resulting from the company’s use of Everseen, an anti-theft AI-enabled computer vision system adopted in 2017. Everseen was created to resolve issues with self-checkout and uses AI to analyze real-time footage from surveillance cameras; when a customer places an item in their bag without scanning it, the system automatically alerts store associates [92].

However, a video created by the employees showcases multiple instances of the system being easily fooled by humans and failing to flag unscanned items. Further, the employees believe that the tech frequently misinterprets innocent behaviour as potential shoplifting which led to customer frustration. In an internal communication from Walmart, a corporate manager expressed concerns that employees were being put at risk by the additional customer contact necessitated by false positives. Such instances merit a better understanding of the potential of adversarial attacks that are easily able to confuse machine learning-based systems [92].

AI Bias and Black Box Problem

Current issues with AI bias include exclusion overhead, a term describing the cost of systems that fail to take the diversity of human characteristics, such as skin colour, hair and other ethnically diverse features, into account in the datasets used to train AI for various use cases [93]. Research has uncovered large gender and racial bias in AI systems developed by tech giants like Amazon, Microsoft and IBM. Recent research on the facial identification processes of these companies’ AI systems revealed error rates of around 1% for light-skinned men; for darker-skinned women, however, errors soared to 35%. In fact, AI systems from leading companies have failed to correctly classify the faces of famous personalities such as Michelle Obama, Oprah Winfrey and Serena Williams. The under-sampling of people of colour and women in the input data that shapes AI has led to the under-representation of such groups in the design, implementation and governance of AI, and to the creation of technology that is optimized for a small portion of the world [93].
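Auditing for this kind of disparity is straightforward to sketch: given labelled evaluation results tagged by demographic group, compute the error rate per group and compare. The snippet below is illustrative only; the group labels and record counts are invented to mirror the 1% vs. 35% gap cited above:

```python
def error_rate_by_group(records):
    # records: (group, predicted_id, actual_id) tuples from an evaluation run.
    # Returns the fraction of misidentifications per demographic group.
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Invented evaluation data reproducing the reported gap:
records = ([("lighter-skinned men", 1, 1)] * 99
           + [("lighter-skinned men", 1, 0)]
           + [("darker-skinned women", 1, 1)] * 65
           + [("darker-skinned women", 1, 0)] * 35)
rates = error_rate_by_group(records)
# rates -> {"lighter-skinned men": 0.01, "darker-skinned women": 0.35}
```

Disaggregating error rates this way, rather than reporting a single overall accuracy figure, is what exposed the disparities described in the research above.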

Microsoft’s AI system incorrectly identifies a picture of Michelle Obama [93].

Ever since widespread protests over racial inequality took centre stage, companies such as IBM and Amazon have announced the cancellation of their facial recognition programs to advance racial equity in law enforcement and to encourage stronger regulations governing its ethical use [94]. In 2014, Amazon found that an AI algorithm developed to automate headhunting had taught itself a bias against female candidates, and in 2019 MIT researchers reported that facial recognition software is less accurate at identifying individuals with darker pigmentation. Recently, a study by the National Institute of Standards and Technology (NIST) found evidence of racial bias in nearly 200 facial recognition algorithms [94]. MIT also removed its highly cited dataset, "80 Million Tiny Images", which trained AI systems to use misogynistic and racist terms to describe certain communities and groups of people [95]. The dataset was created in 2008 to develop advanced object detection techniques by teaching machine learning models to identify objects and people in still images. The damage of such datasets is compounded by the fact that they have been fed into neural networks, teaching them to associate images of persons of colour with racist words and female anatomy with sexist and derogatory language [95].

This occurrence is an example of the garbage in, garbage out principle: any AI model using this dataset actively perpetuates sexism and racism, producing racially biased software and sexist or racist chatbots. Earlier this year, racial bias in facial recognition software led to the wrongful arrest of Robert Williams in Detroit after a system mistook him for another black man [95]. Such instances show that AI is equally susceptible to replicating the inherent biases of the real world and of the content shared online. AI needs to mature from computer-science-owned AI to consumer-science-enabled AI; there is a pressing need to establish a field of AI whose purpose is to understand human behaviour patterns, combining an interdisciplinary approach drawing on gender studies, psychology, sociology and neuroscience [94].

This is compounded by the black box problem, which refers to the inability to make sense of the decision-making process of complex machine learning algorithms. Rather than attempting to create ML models that are inherently interpretable, there has been an increased shift toward ‘explainable ML’, where a second (post hoc) model, often an unreliable one, is created in an attempt to explain the workings of the first, black-box model [96]. A belief that there is always a trade-off between accuracy in decision making and interpretability has led most researchers to abandon attempts to develop interpretable models. In many cases, companies hide the details of their AI systems for commercial reasons, such as protecting intellectual property and remaining competitive. While such models help maximize company profit, they do not minimize the damage that adopting unexplainable models poses for society, for instance when a prisoner receives an excessively long sentence or individuals are refused loans owing to racial or gender bias in the data models [96]. The use of black-box models for high-stakes decisions represents a conflict of responsibility: companies profiting from these models are not necessarily liable for the quality of individual predictions. This trend has great repercussions in fields such as banking, health care, and the criminal justice system. Since such algorithms are accessible only to their developers, there is minimal opportunity to conduct impartial investigations into their inner workings, and researchers must rely primarily on flawed black-box explanation methods to understand the decision-making process [96].
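The post hoc approach can be illustrated with a toy surrogate: fit the simplest possible interpretable rule (a single threshold) to an opaque model’s yes/no decisions and measure how often they agree. Everything here is invented for illustration; the point is that even a surrogate with high agreement can silently disagree with the black box on particular cases, which is exactly the unreliability described above:

```python
def fit_surrogate(black_box, samples):
    # Find the threshold rule "x >= t" that best mimics the black box's
    # yes/no decisions on the sampled inputs.
    best_t, best_agree = None, -1
    for t in samples:
        agree = sum((x >= t) == black_box(x) for x in samples)
        if agree > best_agree:
            best_t, best_agree = t, agree
    return best_t, best_agree / len(samples)

# An opaque decision rule standing in for a black-box model:
black_box = lambda x: x > 10 or x == 3
threshold, agreement = fit_surrogate(black_box, list(range(20)))
# The surrogate "x >= 11" agrees 95% of the time, yet it silently
# misexplains the black box's behaviour at x == 3.
```

A person affected by the hidden exception (here, x == 3) would receive an explanation that looks plausible but is simply wrong, which is why critics argue for inherently interpretable models in high-stakes settings.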


Conclusion

With the development of 5G networks and IoT, processing and managing the overwhelming amount of data is becoming a growing concern for many companies. For instance, a single much-anticipated autonomous car can generate 4 TB of data per day [97]. Edge computing introduces a new way of handling, processing, and delivering data. As it is still in its infancy, most organizations have only scratched the surface of the benefits it can offer, which include low latency, security, high bandwidth, and increased speed [98]. The current edge computing market is growing at a significant rate and is expected to grow at a similar pace for the next 5-10 years. Organizations that invest early in edge computing infrastructure will continue to learn how shifting data processing closer to connected devices can improve network speeds and enhance the customer experience [98]. While edge computing has the potential to transform businesses for the better, there are also challenges to consider in order to be successful, including security and authentication, additional hardware requirements, cost, and varying device requirements.

Many of the current use cases for edge computing are enabling huge disruptions in their respective industries. It is clear that without edge computing, industries such as retail, automotive, and agriculture wouldn’t be capable of evolving in the ways we’re seeing today. Consumer expectations are rising, even as many people don’t understand how organizations enable these experiences. Edge computing offers an architecture that filters and distributes data to the right places instead of sending everything to the cloud. With more and more internet-connected devices being created, companies will have to be innovative and creative in leveraging the edge computing framework to optimize their business processes in the years to come.


References

  1. https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
  2. https://www.theverge.com/circuitbreaker/2018/5/7/17327584/edge-computing-cloud-google-microsoft-apple-amazon
  3. https://www.ericsson.com/en/digital-services/edge-computing?gclid=Cj0KCQjww_f2BRC-ARIsAP3zarEIqw0z_NgIgvIG6rNZV-ofCqE5XPJY3caOlzmA9TIy8_ltAfJMlxwaAmubEALw_wcB&gclsrc=aw.ds
  4. https://www.forbes.com/sites/forbestechcouncil/2019/11/26/living-on-the-edge-part-ii-whats-driving-edge-computing/#351172f23c40
  5. https://www.apmdigest.com/what-is-driving-edge-computing-and-edge-performance-monitoring
  6. https://medium.com/@mobodexter_inc/key-drivers-of-the-edge-computing-market-a9bdb770878e
  7. https://en.wikipedia.org/wiki/Moore%27s_law
  8. https://www.independent.co.uk/news/science/apollo-11-moon-landing-mobile-phones-smartphone-iphone-a8988351.html
  9. https://www.bain.com/insights/covid-19-lockdowns-may-accelerate-the-shift-to-edge-computing/
  10. https://techcrunch.com/2020/03/12/verizon-increases-network-infrastructure-investment-by-500m/
  11. https://www.equinix.com/newsroom/press-releases/pr/123952/Equinix-to-Expand-Canadian-Operations-with-US-Million-Acquisition-of--Bell-Data-Center-Sites/
  12. https://www.grandviewresearch.com/industry-analysis/edge-computing-market
  13. https://www.marketresearchfuture.com/reports/edge-computing-market-3239
  14. https://www.researchandmarkets.com/reports/4618281/global-edge-computing-market-forecasts-from
  15. https://www.grandviewresearch.com/industry-analysis/smart-manufacturing-market
  16. https://www.cnbc.com/2018/04/13/elon-musk-admits-humans-are-sometimes-superior-to-robots.html#:~:text=was%20a%20mistake.-,To%20be%20precise%2C%20my%20mistake.,felt%20were%20our%20core%20technology%E2%80%A6.
  17. https://docs.microsoft.com/en-us/azure/databox-online/azure-stack-edge-overview
  18. https://azure.microsoft.com/en-us/resources/azure-stack-edge-datasheet/
  19. https://azure.microsoft.com/en-us/products/azure-stack/edge/#benefits
  20. https://aws.amazon.com/snowcone/#Snowcone_Edge_Computing
  21. https://www.forbes.com/sites/janakirammsv/2020/06/22/aws-snowcone-portable-and-ruggedized-edge-computing-device/#3a6acfb48eff
  22. https://www.grandviewresearch.com/press-release/global-edge-computing-market
  23. https://www.mordorintelligence.com/industry-reports/edge-computing-market-industry
  24. https://www.fiormarkets.com/report/edge-computing-market-by-component-services-solution-platform-418067.html
  25. https://www.marketsandmarkets.com/Market-Reports/edge-computing-market-133384090.html
  26. https://www.zdnet.com/article/connected-cars-how-5g-and-iot-will-affect-the-auto-industry
  27. https://www.whathifi.com/us/best-buys/best-wireless-noise-cancelling-headphones-2020
  28. https://www.soundguys.com/noise-canceling-anc-explained-28344/
  29. https://www.grandviewresearch.com/press-release/global-edge-computing-market
  30. https://www.highspeedinternet.com/resources/bandwidth-vs-latency-what-is-the-difference#:~:text=Higher%20bandwidth%20is%20better.&text=Bandwidth%20refers%20to%20the%20maximum,one%20time%20is%20100%20Mbps.
  31. https://medium.com/@winjitmarketing/pros-and-cons-of-edge-computing-1cd789ae999b
  32. https://download.schneider-electric.com/files?p_Doc_Ref=SPD_VAVR-A4M867_EN
  33. https://www.hitechwhizz.com/2020/04/5-advantages-and-disadvantages-risks-and-benefits-of-edge-computing.html
  34. https://codeburst.io/what-is-edge-computing-the-quick-overview-explained-with-examples-bc8e1ec5b9a0
  35. https://cacm.acm.org/magazines/2020/1/241702-dependability-in-edge-computing/fulltext
  36. https://www.dqindia.com/challenges-deploying-edge-computing/
  37. https://blog.cloudflare.com/cloudflare-workers-serverless-week/
  38. https://medium.com/next-level-german-engineering/unlocking-new-potentials-how-foghorn-and-porsche-leverage-edge-computing-for-vehicles-f7f7e1dabd
  39. https://startup-autobahn.com
  40. https://customers.microsoft.com/en-gb/story/shell-mining-oil-gas-azure-databricks
  41. https://www.youtube.com/watch?v=AUFe1et8qu8
  42. https://customers.microsoft.com/en-us/story/buhlergroup-azure-machine-learning-iot-edge-switzerland
  43. https://iotmktg.com/disrupting-production-line-ai-edge-inference/
  44. https://www.buhlergroup.com/content/buhlergroup/global/en/media/media-releases/buehler_lumovisionsavinglivesandimprovinglivelihoodswithrevoluti.html
  45. https://www.edgeir.com/foghorn-offers-edge-based-worker-safety-solution-for-covid-19-20200617/amp
  46. https://www.foghorn.io/edge-ai-solutions/
  47. https://newsroom.ibm.com/2020-03-05-Sea-Trials-Begin-for-Mayflower-Autonomous-Ships-AI-Captain
  48. https://www.ge.com/news/reports/digital-ship-edge-computing-helps-oil-rig-workers-drill-better-maintenance
  49. https://fedtechmagazine.com/article/2019/02/edge-computing-air-force-and-fema-take-advantage-intelligent-edge-perfcon
  50. https://www.thalesgroup.com/en/markets/digital-identity-and-security/iot/inspired/smart-cities
  51. https://www.techrepublic.com/article/smart-cities-the-smart-persons-guide/
  52. https://www.researchgate.net/publication/329169630_Edge_Computing_for_Smart_Grid_An_Overview_on_Architectures_and_Solutions_Design_Challenges_and_Paradigms#:~:text=Similar%20to%20other%20IoT%20domain,continuously%20collect%20high%2Dresolution%20data.&text=To%20address%20this%20issue%2C%20Edge,where%20the%20data%20is%20collected.
  53. https://www.youtube.com/watch?v=AUFe1et8qu8
  54. https://www.uswitch.com/gas-electricity/guides/smart-meters-explained/#:~:text=Smart%20meters%20use%20a%20secure,with%20an%20in%2Dhome%20display.
  55. https://microgridknowledge.com/microgrid-defined/
  56. https://www.brooklyn.energy/about
  57. https://www.airoboticsdrones.com/applications/
  58. https://stlpartners.com/telco-edge-compute-use-case-aerial-drones/
  59. https://www.dhl.com/tw-en/home/press/press-archive/2019/dhl-express-launches-its-first-regular-fully-automated-and-intelligent-urban-drone-delivery-service.html
  60. https://www.analyticsinsight.net/futuristic-smart-cities-powered-uav-drone-technology/
  61. https://www.youtube.com/watch?v=zDmTtuwnk6A
  62. https://www.dynam.ai/what-is-computer-vision-technology//
  63. https://www.forbes.com/sites/cognitiveworld/2019/06/26/the-present-and-future-of-computer-vision/#29b885a8517d
  64. https://venturebeat.com/2020/03/03/computer-vision-joins-the-enterprise-mainstream-but-its-a-hot-potato/
  65. https://www.iotforall.com/computer-vision-iot/
  66. https://www.eetimes.com/edge-ai-solutions-for-smart-homes-can-transform-hmi/#
  67. https://goto50.ai/2020/04/15/what-is-edge-computing-and-how-it-empowers-computer-vision/
  68. https://www.embedded-computing.com/iot/iot-intelligence-moves-toward-the-edge
  69. https://www.globenewswire.com/news-release/2020/01/06/1966410/0/en/Synaptics-Announces-Industry-First-Edge-Computing-Video-SoCs-with-Secure-AI-Framework-at-CES-2020.html
  70. https://www.forbes.com/sites/forbestechcouncil/2020/05/19/neural-networks-and-machine-learning-are-powering-a-new-era-of-perceptive-intelligence/#11edbb6647cf
  71. https://www.condecosoftware.com/blog/what-is-ambient-computing/#:~:text=Ambient%20computing%20is%20a%20term,without%20necessarily%20consciously%20using%20it.
  72. https://www.synaptics.com/company/news/IBC2019
  73. https://www.synaptics.com/company/blog/perceptive-intelligence-forbes
  74. https://www.synaptics.com/company/blog/IBC2019
  75. https://www.biometricupdate.com/201911/biometrics-beyond-the-lock-and-key
  76. https://graymatics.com/#about
  77. https://medium.com/@aryeh_62634/seeing-is-believing-ai-and-computer-vision-for-retail-business-41279f1f048c
  78. https://www.gorilla-technology.com/about-gorilla
  79. https://www.gorilla-technology.com/Edge-AI/whitepapers
  80. https://www.gorilla-technology.com/Retail
  81. https://www.youtube.com/watch?v=-xm-lDwTrzE&feature=emb_logo
  82. https://www.youtube.com/watch?v=pAXlJojyu54&feature=emb_logo
  83. https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
  84. https://www.vox.com/recode/2020/2/11/21131991/clearview-ai-facial-recognition-database-law-enforcement
  85. https://www.cnet.com/news/senator-concerned-clearview-ai-facial-recognition-is-being-used-by-police-in-black-lives-matter-protests/
  86. https://www.nbcnews.com/tech/security/facial-recognition-company-wants-help-contact-tracing-senator-has-questions-n1197291
  87. https://www.bnnbloomberg.ca/macy-s-sued-over-use-of-clearview-facial-recognition-software-1.1476586
  88. https://www.ft.com/content/b79e0bee-d32a-4d8e-b9b4-c8ffd3ac23f4
  89. https://www.wired.com/story/facial-recognition-laws-are-literally-all-over-the-map/
  90. 90.0 90.1 90.2 90.3 90.4 90.5 https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial
  91. 91.0 91.1 91.2 91.3 91.4 91.5 91.6 https://www.nature.com/articles/d41586-019-03013-5
  92. 92.0 92.1 https://www.wired.com/story/walmart-shoplifting-artificial-intelligence-everseen/
  93. 93.0 93.1 93.2 https://time.com/5520558/artificial-intelligence-racial-gender-bias/
  94. 94.0 94.1 94.2 https://techcrunch.com/2020/07/03/we-need-a-new-field-of-ai-to-combat-racial-bias/
  95. 95.0 95.1 95.2 https://thenextweb.com/neural/2020/07/01/mit-removes-huge-dataset-that-teaches-ai-systems-to-use-racist-misogynistic-slurs/
  96. 96.0 96.1 96.2 https://bdtechtalks.com/2020/07/27/black-box-ai-models/
  97. https://newsroom.intel.com/editorials/self-driving-cars-big-meaning-behind-one-number-4-terabytes/#gs.cxh93r
  98. 98.0 98.1 https://www.vxchnge.com/blog/the-5-best-benefits-of-edge-computing?fbclid=IwAR3DSmVPOu8y11Y63l07wQgu32T3Z_s1srxFZVPEdOHqj977HZQ8Pz92vXo


Daichi Keber, Gunit Sethi, Jessica Zhou, Yu-Ning Huang
Beedie School of Business
Simon Fraser University
Burnaby, BC, Canada
daichi_keber@sfu.ca, gsethi@sfu.ca, jpzhou@sfu.ca, yuningh@sfu.ca