Machine Learning Fall 2015

From New Media Business Blog




Introduction

In general, machine learning is a branch of Artificial Intelligence concerned with teaching computers to make predictions from data. Over the past two decades, machine learning has become one of the mainstays of information technology and, with that, a rather central, albeit usually hidden, part of our lives.

What is Machine Learning?

Authoritative textbooks in the field define machine learning from several different perspectives.[1]

  • Mitchell’s Machine Learning[2]

“The field of machine learning is concerned with the question of how to construct computer programs that automatically improve with experience.” - Mitchell’s Machine Learning

  • Elements of Statistical Learning[3]

“Vast amounts of data are being generated in many fields, and the statisticians’ job is to make sense of it all: to extract important patterns and trends, and to understand “what the data says”. We call this learning from data.” – The Elements of Statistical Learning: Data Mining, Inference, and Prediction

  • Pattern Recognition[4]

“Pattern recognition has its origins in engineering, whereas machine learning grew out of computer science. However, these activities can be viewed as two facets of the same field…” – Bishop, Pattern Recognition and Machine Learning

  • An Algorithmic Perspective[5]

“One of the most interesting features of machine learning is that it lies on the boundary of several different academic disciplines, principally computer science, statistics, mathematics, and engineering. …machine learning is usually studied as part of artificial intelligence, which puts it firmly into computer science …understanding why these algorithms work requires a certain amount of statistical and mathematical sophistication that is often missing from computer science undergraduates” – Marsland, Machine Learning: An Algorithmic Perspective

  • A Business Perspective[6]

In a recent interview, the corporate vice president of Machine Learning at Microsoft, Joseph Sirosh, explained the machine learning process. “You take data from your enterprises and make several hypotheses and experiment with them. When you find a hypothesis that you can believe in… you want to put that into production so you can keep monitoring that particular hypothesis with new data.”

History[7] and Relationships to Other Fields

1642: One of the first mechanical adding machines was designed by Blaise Pascal. It used a system of gears and wheels similar to those found in odometers and other counting devices. Pascal’s adder, known as the Pascaline, could both add and subtract and was invented to calculate taxes.

1642-Mechanical Adder

1847: Logic is a method of constructing arguments or reasoning toward true or false conclusions. George Boole created a way of representing this using Boolean operators (AND, OR, NOT), with responses expressed as true or false, yes or no, and represented in binary as 1 or 0. Web searches still use these operators today.
1847-Boolean Operators

1945: The Mark I, built at IBM and designed by Howard Aiken, was the first combined electric and mechanical computer. The Mark I could store 72 numbers and could perform complex multiplication in 6 seconds and division in 16 seconds. While nowhere near as fast as current computers, this is still faster than most humans.
1945-Mark I

1946: The first fully electronic computer, named ENIAC (short for Electronic Numerical Integrator and Computer), was built by John Mauchly and John Eckert. ENIAC was a thousand times faster than the Mark I. It weighed about 30,000 kilograms and filled a wall of panels 3 meters high and 24 meters across; these days computers fit in our pockets.
1946-ENIAC

1952: Arthur Samuel was an IBM scientist who used the game of checkers to create the first learning program. His program became a better player after many games against itself and a variety of human players in a ‘supervised learning mode’. The program observed which moves were winning strategies and adapted its programming to incorporate those strategies.
1952-Checker Program

1957: Frank Rosenblatt designed the perceptron, an early type of neural network.

A neural network is loosely modelled on the brain, which contains billions of cells called neurons connected together in a network. The perceptron connects a web of points where simple decisions are made, and these come together in the larger program to solve more complex problems.
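
To make the idea concrete, here is a minimal perceptron sketch in Python (an illustrative reconstruction, not Rosenblatt’s original program): it learns a linear rule for the logical AND function by nudging its weights whenever it makes a wrong decision.

```python
# Minimal perceptron sketch (illustrative): learn the logical AND function.
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    w = np.zeros(X.shape[1])   # weights, one per input
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - pred
            w += lr * error * xi   # nudge weights toward the correct answer
            b += lr * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])                     # AND truth table
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # expected: [0, 0, 0, 1]
```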
1957-Perceptron

1990s: We began to apply machine learning in data mining, adaptive software and web applications, text learning, and language learning. Advances continued in machine learning algorithms within the general areas of supervised learning and unsupervised learning. As well, reinforcement learning algorithms were developed.
1990-Machine Learning Application

2000s: The new millennium brought an explosion of adaptive programming (programming that changes its behavior based on the current state of its environment).

Anywhere adaptive programs are needed, machine learning is there. These programs are capable of recognizing patterns, learning from experience, abstracting new information from data, and optimizing the efficiency and accuracy of their processing and output.
2000-Adaptive Programming

The relationship among Data Mining, Artificial Intelligence, and Machine Learning is often confused because of their similar features and functions. Data mining discovers previously unknown patterns and knowledge. Machine learning is used to reproduce known patterns and knowledge, automatically apply them to other data, and then automatically apply those results to decision making and actions. Data mining can cull existing information to highlight patterns and serves as a foundation for AI and machine learning; AI is the broad term for using data to offer solutions to existing problems, and machine learning takes the process one step further by providing the data necessary for a machine to learn and adapt when exposed to new data. For example, training a machine depends on both data mining and AI: the machine reads mined data, creates a new algorithm through AI, and then updates its current algorithms accordingly to “learn” a new task.[8]

Forms of Machine Learning

Supervised Learning


Supervised learning is where the algorithm generates a function that maps inputs to desired outputs, which means a “correct” output is always given for each training instance. A model is prepared through a training process in which it is required to make predictions and is corrected when those predictions are wrong. The training process continues until the model achieves a desired level of accuracy on the training data. Supervised learning is therefore commonly used in applications where historical data predicts likely future events: it can anticipate when credit card transactions are likely to be fraudulent or which insurance customers are likely to file a claim.
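
A minimal sketch of supervised learning in Python, using hypothetical transaction data and scikit-learn (the feature names and values are made up for illustration): a classifier is trained on labelled historical transactions and then predicts whether a new transaction looks fraudulent.

```python
# Supervised learning sketch: learn from labelled examples, predict new cases.
from sklearn.ensemble import RandomForestClassifier

# Each row: [amount, hour_of_day, foreign_country]; label 1 = fraud, 0 = normal
X_train = [[20, 14, 0], [3500, 3, 1], [45, 19, 0], [5000, 2, 1], [60, 11, 0]]
y_train = [0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)              # the "correct" answers guide training

print(model.predict([[4200, 4, 1]]))     # likely [1] -> flag as possible fraud
```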





Unsupervised Learning


In unsupervised learning, the input data is not labelled and does not have a known result; the goal is to have the computer learn how to do something without being told how, by analysing the relationships between instances. A model is prepared by deducing structures present in the input data. This may be to extract general rules, to systematically reduce redundancy through a mathematical process, or to organize the data by similarity. It can identify segments of customers with similar attributes who can then be treated similarly in marketing campaigns, or it can find the main attributes that separate customer segments from each other. Popular techniques include self-organizing maps and nearest-neighbour mapping. A form of reinforcement learning can also be used in an unsupervised setting; this kind of training fits the decision-problem framework because the goal is not to produce a classification but to make decisions that maximize rewards, with the agent basing its actions on previous rewards and punishments without necessarily learning any information about the exact ways its actions affect the world.
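
A minimal sketch of unsupervised learning in Python (hypothetical customer data): no labels are provided, and k-means simply groups customers with similar attributes into segments.

```python
# Unsupervised learning sketch: group unlabelled customers into segments.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [age, annual_spend_in_dollars] - no labels are given
customers = np.array([[22, 300], [25, 350], [47, 2500],
                      [52, 2700], [23, 280], [49, 2600]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)    # e.g. [0 0 1 1 0 1]: two customer segments emerge
```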




Reinforcement Learning


With reinforcement learning, the algorithm discovers through trial and error which actions yield the greatest rewards; the agent bases its actions on previous rewards and punishments without necessarily learning any information about the exact ways its actions affect the world. The goal in reinforcement learning is to learn the best policy, and it is often used for robotics, gaming and navigation.
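
A minimal reinforcement-learning sketch in Python (a toy corridor environment, not a real robotics or gaming setup): the agent is never told the rules, it only receives a reward when it reaches the goal, and tabular Q-learning gradually discovers that moving right is the best policy.

```python
# Q-learning sketch: learn by trial and error which action earns the reward.
import random

n_states, actions = 5, [0, 1]      # 0 = move left, 1 = move right; goal = state 4
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(200):               # 200 trial-and-error episodes
    s = 0
    while s != n_states - 1:
        a = random.choice(actions) if random.random() < epsilon else \
            max(actions, key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s2 == n_states - 1 else 0.0        # reward only at the goal
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([max(actions, key=lambda x: Q[s][x]) for s in range(n_states - 1)])
# learned policy is expected to be [1, 1, 1, 1]: always move right
```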

MarI/O

Super Mario

The MarI/O video of Super Mario further illustrates how machine learning works. The technology used is reinforcement learning: learning by interacting with an environment. The agent, MarI/O in the video, learns from the consequences of its actions rather than from being explicitly taught, and it selects its actions on the basis of its past experiences as well as new choices. In that way, reinforcement learning is essentially trial-and-error learning.

A computer like MarI/O is not a pre-programmed AI that masters a game with built-in expert knowledge. Instead, with machine learning, a computer can master a game starting from only some basic information. As long as the computer is allowed to learn through machine learning, it can eventually figure out how to play video games and master them.


Deep Learning


Deep learning is a newer field of neural network research. Artificial neural networks are a group of algorithms loosely based on our understanding of the brain; in theory they can model any kind of relationship within a data set, and in practice deep learning has had tremendous success in areas where many artificial intelligence approaches have failed in the past. It combines advances in computing power with special types of neural networks to learn complicated patterns in large amounts of data. Deep learning techniques are currently state of the art for identifying objects in images and words in sounds. Researchers are now looking to apply these successes in pattern recognition to more complex tasks such as automatic language translation, medical diagnoses and numerous other important social and business problems.
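
A minimal neural-network sketch in Python (scikit-learn’s small multilayer perceptron stands in here for a full deep-learning framework): two hidden layers learn the XOR pattern, a relationship that no single linear model can represent.

```python
# Neural-network sketch: hidden layers let the model learn a non-linear pattern.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]                         # XOR labels - not linearly separable

net = MLPClassifier(hidden_layer_sizes=(8, 8), activation='relu',
                    solver='lbfgs', max_iter=2000, random_state=1)
net.fit(X, y)
print(net.predict(X))                    # ideally [0 1 1 0]
```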




Applications

Business

Microsoft Azure Machine Learning


Microsoft is currently one of the leaders in the machine learning industry. In 2010, Microsoft released Windows Azure, later renamed Microsoft Azure. It is a cloud computing platform and infrastructure for building, deploying and managing applications and services through a global network of Microsoft-managed and partner-hosted data centers. It provides services and supports many different programming languages, tools and frameworks, including both Microsoft-specific and third-party software and systems.

Facial Recognition

Facebook's Facial Recognition


Facebook uses facial recognition to detect users’ faces and help tag friends in photos. However, Facebook sometimes mistakenly detects the pattern on a shirt or a picture in a thumbnail as a face. This is because Facebook’s facial recognition software uses an algorithm that calculates a unique number based on someone’s facial features, such as the distance between the eyes, nose and ears. Since the same features appear on the shirt as on the real face, Facebook decides that the face pattern belongs to the person named Henry; it cannot tell whether it is a shirt or a head. Privacy advocates argue this raises privacy questions and say the company’s technology should only be used with explicit permission, especially because Facebook has the ability to combine facial data with extensive information about users, including biographic data, location data, and associations with friends. Facebook defends its use of facial recognition as only a form of biometrics that enhances the user experience: when someone is alerted that they have been tagged in a photo, it is easier to take action, whether commenting, contacting the person who shared it, or reporting it to Facebook.
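
A toy sketch of the matching step in Python (made-up numbers; not Facebook’s actual system): each face is reduced to a few measurements, and a new detection is matched to whichever stored person has the closest feature vector. If a shirt print reproduces the same measurements, it matches just as strongly, which is exactly the failure described above.

```python
# Toy feature-vector matching: the closest stored vector wins, shirt or not.
import numpy as np

known_faces = {                                 # hypothetical stored profiles
    "Henry": np.array([62.0, 48.0, 31.0]),      # e.g. eye distance, nose/ear gaps
    "Alice": np.array([55.0, 44.0, 28.0]),
}

def match(detection, threshold=5.0):
    name, dist = min(((n, np.linalg.norm(detection - v))
                      for n, v in known_faces.items()), key=lambda t: t[1])
    return name if dist < threshold else "unknown"

print(match(np.array([61.5, 48.2, 30.8])))   # Henry's real face       -> "Henry"
print(match(np.array([62.1, 47.9, 31.1])))   # same pattern on a shirt -> "Henry"
```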

Facial Recognition Used in Walmart

Facial Recognition in Walmart

Walmart has been testing technology that automatically spots shoplifters in a crowd. The system sends security staff notifications that include a profile of the suspect and a corporate directive on how to react. First, a camera takes a photo of a customer’s face. The system then analyses reference points using a facial overlay grid and compares them to faces in a database. If it matches a known thief’s face, it notifies security. Walmart tested the facial recognition software in stores across several states for a number of months, but then discontinued it because it did not deliver a return on investment. One problem is that it is unclear how accurate the technology is; experts have also raised privacy concerns over retailers using such technology on their customers.

Facial Recognition Used by the Australian Government

This technology is also being used to enhance national security. The Australian federal government announced the creation of a database, called the Capability, which aims to draw together official photographs, such as driver’s licence and passport photos, that states and agencies could use to identify criminals. The Capability could also draw on social media sites like Facebook and Instagram. It will first be used by Commonwealth agencies, and then other institutions, even commercial entities, may gain access to it. The benefit is clear: the Capability can be an extremely useful tool for national security and an essential development in keeping up with increasingly high-tech crime. But there were no concrete answers about which agencies could access the data; the more accessible the data is made, the more it can be abused, and critically, there are no real rules on its use. The loss of control over biometric and personal data can lead to feelings of disempowerment and a loss of personal autonomy. There is no restriction on how long our images will be stored and no way to know how certain photos will be used or interpreted.

Speech Recognition


Similar to facial recognition, speech recognition is an application of machine learning. Speech recognition has been brought to the mainstream by Apple’s Siri and Google’s voice search; around 55% of teens and 41% of adults use voice search more than once a day. Household items such as TVs and lights can also take voice commands. A video introducing automatic speech processing accompanies this section. In commercials, Apple advertises Siri as a personal assistant that you can seemingly say anything to, but that is not the case in real life: speech recognition cannot recognize and do everything.

Problems

The first problem is understanding. While computers are based on formal logic and fixed categories, people have many ways to express the same idea. Human language is flexible and dynamic; it carries a certain amount of vagueness and ambiguity and is highly affected by context. Computers try to guess what we mean when we say something, and they usually do not grasp our social and emotional psychology.

The second problem is that speech technologies do not work well for everyone, and failures are more likely if you do not speak English. Most of the available voice data that feeds Siri and Google is in standard American English. Those who do not speak the default voice-recognition language, or who speak with a heavy accent, suffer far more errors when using it. There is also a security concern with speech recognition technology: what happens with your voice data once your request is completed? In certain cases, when personalized voice recognition is enabled, your voice nuances are stored and used to improve the algorithms that help the system better understand what people are saying.

When your device needs help understanding what you are saying, your voice is sent to a data-processing provider and run through a series of programs that both learn from and attempt to interpret your speech. This data collection can be used for good or ill. It can help improve the technology and introduce valuable new features; it can learn your habits and predict things that may help you, such as suggesting an alternative route around traffic. But the data collected can also be used without your knowledge. For example, an LG smart TV was found to be sending information, including viewing habits, channel selections, and the file names on attached USB sticks, back to the Korean company.


Voice Recognition

Besides understanding what you are saying, devices can now recognize your identity by your voice. The latest version of Apple’s mobile operating system learns what your voice sounds like and can identify you when you speak to Siri, ignoring other voices that try to break in. Siri is not the only one that knows your voice: researchers at Google unveiled an artificial neural network that could verify the identity of a speaker saying “OK Google” with an error rate of 2%. Recognizing individual voices is different from understanding what they are saying. The recognition software has been fed massive sets of vocal data, built into a huge model of how people speak, which allows it to measure how much a person’s voice deviates from that of the overall population.
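
A toy sketch of that idea in Python (hypothetical voice features such as average pitch and formant frequencies): a new sample is accepted as the enrolled speaker only if it sits close to that speaker’s stored profile, measured relative to how much voices vary across the overall population.

```python
# Toy speaker verification: compare deviation from a profile in population units.
import numpy as np

# Hypothetical population of voice feature vectors: [pitch, formant1, formant2]
population = np.array([[120, 500, 1500], [210, 560, 1650], [180, 530, 1580],
                       [140, 510, 1520], [230, 580, 1700], [160, 525, 1560]])
pop_std = population.std(axis=0)            # how much voices vary in general

enrolled = np.array([150, 515, 1540])       # this user's stored voice profile

def is_same_speaker(sample, max_z=1.0):
    z = np.abs(sample - enrolled) / pop_std  # deviation in population units
    return bool(np.all(z < max_z))

print(is_same_speaker(np.array([152, 518, 1545])))   # the enrolled user -> True
print(is_same_speaker(np.array([225, 575, 1690])))   # a different voice -> False
```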


Benefits and Problems

This technology is already being used in criminal investigations; police use it to compare a killer’s voice with those of a list of possible suspects. In the future, experts aim to detect a person’s likely diseases or psychological state through voice analysis. As with the examples above, identity recognition suffers from security concerns: since the speech and voice algorithms often are not embedded in the device itself, what you say is sent to a server for analysis and then ported back quickly.

Video Games

Google DeepMind


Google’s DeepMind is currently applying machine learning to video games. So far, DeepMind’s deep learning software can outperform humans in more than 30 different video games. The reinforcement learning used in the algorithm to learn and master games has been called the “first significant rung of the ladder” towards building such a system.[1] Games the AI algorithm has mastered include the classic arcade game Pong and Atari titles like Bank Heist, Enduro, River Raid and Battlezone. On 24 September 2015, DeepMind research scientist Koray Kavukcuoglu showed how the same algorithm can be used to learn different games. With the machine learning algorithm, the AI can improve itself by analysing game data without having the games’ rules programmed into its software. However, the software cannot yet learn more complex games; DeepMind is still working on that and expects the machine learning algorithm to figure out complex games one day. Consider that if the software can drive a car in a racing game, then potentially it should be able to drive a real car in reality.


Industry

Wind Turbines


ALICE (Autonomous Learning in Complex Environments) is a research project funded by Germany’s Ministry of Education and Research to figure out how to optimize wind turbines. The project involves experts from Siemens, IdaLab GmbH, and the Machine Learning group at the Technical University of Berlin, and was completed in June 2014.

The main problem with traditional wind turbines is that they cannot always run at full capacity. That means the turbines deliver far less energy than they potentially could, especially in weak-wind situations or when the wind is only blowing at half strength. To solve this problem, researchers from ALICE found that continuous learning can help wind turbines improve their electricity output.[2]

With machine learning technology, wind turbines can optimize their output by comparing their operating data with weather data. The data includes wind speed, temperatures, electric currents and voltages. By analysing past measurement data, the machine learning software can calculate the optimal settings for various weather scenarios involving factors such as sunshine duration, hazy conditions, and thunderstorms.[2] The analysed data is then transmitted to the wind turbines’ control units, which take it into account from then on as they adjust the turbines’ settings.
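
A minimal regression sketch in Python (hypothetical data and parameter names; the real ALICE project is far more sophisticated): a model trained on past weather and operating measurements predicts the power output of candidate blade-pitch settings, so a control unit could pick the most promising setting for the current forecast.

```python
# Regression sketch: predict output for candidate settings, pick the best one.
from sklearn.ensemble import GradientBoostingRegressor

# Each row: [wind_speed_mps, temperature_c, pitch_angle_deg] -> power output (kW)
X = [[4, 10, 2], [4, 10, 6], [7, 12, 2], [7, 12, 6], [10, 9, 2], [10, 9, 6]]
y = [210, 260, 690, 720, 1450, 1300]

model = GradientBoostingRegressor(random_state=0).fit(X, y)

angles = (0, 2, 4, 6, 8)
candidates = [[5.5, 11, angle] for angle in angles]
best_angle, best_power = max(zip(angles, model.predict(candidates)),
                             key=lambda t: t[1])
print(f"best pitch angle for this forecast: {best_angle} deg")
```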



Robotic System for Recycling

ZenRobotics Recycler

The ZenRobotics Recycler (ZRR) is the first robotic waste-sorting system in the world. In appearance, ZRR looks much like a standard industrial robot; what makes it different is the machine learning technology. With the help of machine learning, ZRR can efficiently reclaim valuable materials from the waste stream. Specifically, machine learning helps the robot recognize different kinds of objects: ZRR’s system learned the distinguishing features of items in the waste stream by analysing thousands of pictures of bottles, cans, bricks and everything else.[3] The features learned by the system include shapes and the way light reflects off certain cartons and even labels.
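
A toy classification sketch in Python (made-up features; ZenRobotics’ real system works from sensor images): an object on the conveyor is described by a few simple measurements, and a classifier trained on labelled examples assigns it to a material category so the robot arm knows which fraction it belongs to.

```python
# Classification sketch: map simple object measurements to a material category.
from sklearn.tree import DecisionTreeClassifier

# Each row: [width_cm, height_cm, reflectance, weight_g]
X_train = [[10, 25, 0.90, 30],   [7, 20, 0.85, 25],      # plastic bottles
           [6, 12, 0.95, 15],    [7, 11, 0.97, 14],      # metal cans
           [20, 10, 0.20, 2000], [22, 11, 0.25, 2100]]   # bricks
y_train = ["plastic", "plastic", "metal", "metal", "brick", "brick"]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[6, 13, 0.96, 16]]))   # likely ['metal'] -> metal fraction
```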

The benefits of using ZRR in recycling facilities include:[4]

  • 24/7 operation for cost efficiency. The machine is highly durable, and minimal downtime or maintenance is required. With automation, waste sorting becomes a more profitable business.
  • Decentralization and a simple process. ZRR can sort waste into different categories as required, so there is no need to source-separate waste streams.
  • Reduced labour costs. ZRR can potentially save many man-hours every year and increase work safety by reducing personnel injuries during manual sorting.
  • Increased profits on recyclables and reduced waste costs. The high-purity fractions produced by ZRR can lead to better prices and easier sales of sorted materials, and the need for incineration and landfill can also be reduced.


Detection of Cognitive Distractions in Drivers

Distraction detection system

On October 26, 2015, Mitsubishi announced that it had developed a technology that can detect absent-mindedness and other cognitive distractions in drivers while their vehicles are travelling straight. The technology used is deep learning, a popular type of machine learning. Mitsubishi expects the technology to be installed in driver-sensing units and sold commercially from around 2019 or beyond.

Mitsubishi had already developed distraction-detection technology before. However, the existing system can only detect visual distraction due to drowsiness or inattentiveness: its basic function is to detect distraction from a driver’s face or eye movement. Cognitive distraction is difficult to detect because symptoms sometimes appear in a driver’s behaviour or biological patterns rather than in their face or eye movements.

Deep learning helps the new system analyse time-series data that includes vehicle information (steering, etc.) and driver information (heart rate, facial orientation, etc.). From this analysis, the system can detect distraction and flag potentially dangerous situations. The basic idea behind the new technology is to predict appropriate driving actions from time-series data: the machine learning algorithm uses a combination of data on what is called “normal driving” and time-series data on the actual vehicle and driver to predict appropriate driver actions.[5] The technology then detects cognitive distraction if the driver’s actions differ significantly from the algorithm’s prediction of what would be appropriate, and if a distraction is detected, the driver is alerted immediately.
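
A minimal sketch of that idea in Python (toy signals; Mitsubishi’s system uses deep learning on much richer time-series data): a model trained on “normal driving” predicts the next steering reading from the last few readings, and a large gap between the prediction and the driver’s actual action is flagged as a possible cognitive distraction.

```python
# Prediction-vs-actual sketch: large deviation from "normal driving" -> alert.
import numpy as np
from sklearn.linear_model import Ridge

# Toy "normal driving" steering trace: smooth oscillation plus a little noise.
rng = np.random.default_rng(0)
steering = np.sin(np.linspace(0, 20, 400)) + 0.05 * rng.standard_normal(400)
X = np.array([steering[i:i + 3] for i in range(len(steering) - 3)])
y = steering[3:]                      # target: the next steering reading

model = Ridge().fit(X, y)             # learn what "normal" looks like

def distracted(last_three, actual_next, tolerance=0.3):
    predicted = model.predict([last_three])[0]
    return abs(actual_next - predicted) > tolerance   # big deviation -> alert

print(distracted(steering[100:103], steering[103]))        # normal   -> False
print(distracted(steering[100:103], steering[103] + 1.0))  # abnormal -> True
```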

Problems with the “Smart” Machine Innovation in Industry

One problem with “smart” machine innovation is technological unemployment, defined as the loss of jobs caused by technological change, typically labour-saving machines or more efficient processes. Innovations such as more efficient wind turbines and robotic recycling systems can directly affect employment: more efficient automated machines also mean a smaller labour force is required.

Another problem is that innovation in autonomous technology could eventually destroy jobs. A more serious scenario is that changing technology destroys jobs faster than new jobs are created; as a result, industrial facilities will offer fewer and fewer jobs.

The problems associated with distraction detection have more to do with the ticketing and enforcement side of the technology. In addition, a driver could become over-dependent on the system, which would affect their driving habits.

Medicine

Predicting and Reducing Hospital Readmissions


Machine learning techniques have been applied to predicting and reducing hospital readmissions. The application reads current data, compares it with past data, and analyses it with machine learning algorithms to return predicted probabilities. Although these probabilities are not absolutely accurate, they are credible, and they give both doctors and patients strong evidence on which to base decisions. It can also reduce doctors’ turnover rates.
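
A minimal sketch in Python (hypothetical patient features): a model trained on past discharges estimates the probability that a new patient will be readmitted within 30 days, giving doctors one more piece of evidence to weigh.

```python
# Readmission-risk sketch: predicted probability from past discharge records.
from sklearn.linear_model import LogisticRegression

# Each row: [age, length_of_stay_days, prior_admissions, num_medications]
X_train = [[65, 3, 0, 4], [78, 9, 3, 12], [52, 2, 0, 2],
           [81, 7, 2, 10], [44, 1, 0, 1], [70, 6, 1, 8]]
y_train = [0, 1, 0, 1, 0, 1]          # 1 = readmitted within 30 days

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba([[75, 8, 2, 11]])[0][1]
print(f"estimated readmission risk: {risk:.0%}")
```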





Efficient DNA-based Treatment


Doctors will routinely use your DNA to keep you well, targeting cancer therapy based on you and your cancer’s genetics; IBM predicts this application will be in use within three years. The new system applies machine learning’s ability to classify DNA sequences, making DNA sequencing accessible to doctors and patients to help tackle cancer and other diseases with a DNA link.[1] Not only can better DNA-based treatment be achieved, but the time required is greatly reduced, so personalized treatment becomes feasible. And because the new system will continuously learn about cancer and the patients who have it, the level of care will only improve: no more assumptions about cancer location or type, or about any disease with a DNA link, like heart disease and stroke.
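
A toy sketch of DNA-sequence classification in Python (made-up sequences and labels, far shorter than real genomic data): each sequence is turned into counts of short subsequences (k-mers), and a classifier learns which patterns are associated with a known mutation.

```python
# Sequence-classification sketch: k-mer counts as features for a classifier.
from collections import Counter
from sklearn.naive_bayes import MultinomialNB

KMERS = ["AA", "AC", "AG", "AT", "CA", "CC", "CG", "CT",
         "GA", "GC", "GG", "GT", "TA", "TC", "TG", "TT"]

def kmer_counts(seq, k=2):
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    return [counts[km] for km in KMERS]

train_seqs = ["ATCGATCGAA", "ATCGTTCGAT", "GGCCGGCCGG", "GGGCCGGGCC"]
labels     = ["benign", "benign", "mutation", "mutation"]

model = MultinomialNB().fit([kmer_counts(s) for s in train_seqs], labels)
print(model.predict([kmer_counts("GGCCGGGGCC")]))   # likely ['mutation']
```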






Not-Too-Distant Future Trends

So where is the future of machine learning headed? According to IBM Research, machine learning will enable cognitive systems to learn and engage with us in a more personalized way. These systems will get smarter and more customized through interactions with data, devices and people. They will help us take on what may have seemed unsolvable problems by using all the information that surrounds us and bringing the right insight or suggestion at the right moment. Over the next two to three years, machine learning applications will lead to new breakthroughs that amplify human abilities, assist us in making good choices, look out for us and help us in powerful new ways.[2]

Personalized Learning: 5 Future Technology Predictions from IBM

Education

The first area is education: in the near future, the classroom will learn you. The future classroom will become a truly personalized environment in which teachers provide each student with an individual learning experience. This is a departure from the traditional classroom, which has focused on a one-to-many interaction between a teacher and a group of students, with all students receiving the same material in a lecture setting. With machine learning, a tremendous amount of data on teaching and learning, including student behaviour, can be analysed over a long period of time. What can such a system help teachers do? It can help them identify students who are struggling with course materials. It can also couple a student’s goals and interests with data on their learning style, so that teachers can determine what type of content to give the student and the best way to present it. The teacher would use this cognitive system to find out a student’s learning style and develop a plan that addresses their knowledge gaps.

Digital Security: 5 Future Technology Predictions from IBM




Digital Security

The second future area of machine learning relates to security. IBM Research predicts that a smarter system, called a digital guardian, may one day protect you in the online environment. A digital guardian will learn how to better secure people’s online lives through machine learning, using big data to learn from online behaviour patterns and work out what to protect; when it detects a possible breach, you will be the first to know. It also incorporates security measures such as fingerprint and facial recognition. The smartest move a digital guardian could make, however, is to take decisions for you. For example, your digital guardian could flag suspicious credit card spending based on your spending habits and the location you were in when the purchase occurred, and decline the transaction on your behalf.

Buy Local: 5 Future Technology Predictions from IBM




Shopping

The last interesting area is a big shift in shopping from online back to local stores. IBM Research is exploring prototype software called the Virtual Stylist that uses data to help retailers more precisely predict clothing a customer will like, based on what complements the existing contents of their closet and their preferences. While many e-retailers today offer personalized recommendations, most are made by looking at the item you purchased and at what other people who bought that item also bought, and then recommending those other products. Rather than basing recommendations on what others buy, the Virtual Stylist would let retailers make recommendations based on your unique taste and style by looking at items you recently purchased or showed interest in. In addition, the best thing about buying locally is that shoppers can bring their favourite items home without extra delivery time; with online shopping, delivery time is still needed, even though e-retailers are finding ways to minimize it.




Three Paradoxes

People often argue over whether a machine learning application is good or bad; ultimately it is up to people to decide. According to an article published by Futurist Speaker, several paradoxes are associated with machine learning.[1]

Paradox One

The first paradox is that optimized humans will become less human. A quote from the author:[1] “This utopian dream of living the easy life certainly has its appeal, but grossly oversimplifies our need for obstacles to overcome, problems to wrestle with, and adversarial challenges for us to tackle.” The implication is that if humans live an effortless life surrounded by smart machines, relying on machine learning applications in daily life may actually be harmful, because people give up the ability to shape things in their own way.

Paradox Two

The second paradox is that originality becomes impossible when every possible option can be machine generated, so it becomes imperative to protect humans’ claim to originality. The author believes it is bad to let machines learn these things: humans place great value on creativity and originality, and letting a machine learn what an artist does could prevent humans from producing creative work and even raise legal issues around artists’ works.

Paradox Three

The third paradox, the author claims, is that perfection eliminates dependencies and will destroy our economy. This implies that a super-intelligent machine is destructive for employment if robots take over human jobs. The recent trend is that the majority of businesses seek emerging technologies to improve their operations, so unemployment may rise as automation technologies mature. This will force economic changes in order to restructure future employment opportunities for displaced workers.


Limitations and Concerns

In terms of limitations, privacy issues are always attached to this field. People feel insecure that their personal lives are under constant surveillance and that sensitive information may leak. For example, machine learning is used to improve the translation accuracy of voice translation systems in different contexts, which means massive amounts of real conversation from people’s personal lives are recorded as input data to train the machine. Secondly, what the machine cannot do is clean the data. There is an increasing need for data scientists to make sure the data is correctly structured and readable by the computer; doing so lets data scientists generate models and get results faster. This requires data scientists with an adequate level of skill to effectively interpret the results generated from machine learning algorithms.[2] Meanwhile, we need to be aware that data scientists are in short supply in the market. Additionally, one limit is that machine learning is not useful in areas where people are not fault tolerant; the areas where machine learning usually disappoints include machine translation, speech recognition, and image recognition.

In terms of concerns about machine learning, people may misuse data, especially personal information connected to bad behaviour. The same concerns apply as for AI more broadly, such as the need to regulate software behaviour and software agents’ responsibility. Such regulation might help prevent the development of bad applications built with machine learning technology.


References

  1. Futurist Speaker. Three Great Machine Learning Paradoxes. Retrieved from: http://www.futuristspeaker.com/2015/02/three-great-machine-learning-paradoxes/
  2. Nesta. Machines That Learn in the Wild: Machine learning capabilities, limitations and implications. Retrieved from: http://www.nesta.org.uk/publications/machines-learn-wild-machine-learning-capabilities-limitations-and-implications