Sharing Economy Fall 2015

From New Media Business Blog



Software

Software sharing has driven technological development for decades, with the movement truly taking hold in the 1980s and 1990s[1]. Free software and freely shared code are a key element for the majority of businesses, with roughly 80% taking advantage of them[2]. They also hold a significant place in private use. To illustrate the impact, as well as the business implications, it is best to consider both open source software and shared source code.

Open Source

Open source, although it can be applied in more general terms, is a term typically used to describe software that has been made available free of charge, most commonly also allowing users access to the source code with the ability to modify and update the software. With the widespread use of open source software, user feedback, and user validation, the hurdle of accepting free software into a business is long past. Today it is a significant source of savings for most companies, as well as a way to develop business processes by incorporating the ideas of the open community.

The primary concern surrounding business use of open source software is the quality and reliability of the available applications. It is therefore important to note that many open source offerings adhere to high standards thanks to community involvement and peer review.

Open Source in Web Development

To demonstrate how widespread the use of open source software is, we can look at a very common business (or personal) undertaking: the development of a website.

It has all but become a requirement for a business to have a website to reach its customers, and website development has become a widespread industry that is largely fueled by open source software. To illustrate this, we can look at one of the earliest free web development platforms, L.A.M.P., or Linux, Apache, MySQL, and PHP. This platform provides free access to web development for anyone with a desire to do so and the knowledge of how to use these programs.

Each of these elements does in fact have a number of alternatives, many of which are costly, but together they made up the first significant free platform for web development.

Linux

“Linux is technically an operating system kernel for the "GNU operating system".”[3] Together, they are typically referred to as GNU/Linux. Linux is far more flexible than other common operating systems such as Windows. In Linux, users can customize the system in virtually any way they choose, as they are given access to the code for the software. In Windows, options for customization are strictly limited, confining users to a narrower range of operation. For web development, Linux provides the platform on which the other elements run and communicate.

Apache

Apache is an open source web server that is developed and maintained by the open community under the Apache Software Foundation. The Apache HTTP Server serves over 50% of web sites and over 50% of servers on the internet[4]. A web server's function is to accept and respond to requests from users[5]. The request is interpreted and then relayed to other elements in the web system. The widespread use of Apache is a clear example of the impact the sharing economy has on business.

MySQL

MySQL is LAMP’s relational database management system, a system that allows for the storage of data in formal tables which can be retrieved or reassembled on command without restructuring the raw data tables themselves[6]. It is commonplace for online databases that need to answer queries and return the requested information in a logical way for the user.

PHP

Lastly, PHP, or PHP: Hypertext Preprocessor, is the programming language typically associated with the LAMP platform. PHP is a server-side scripting language that was intended for web development, but it is also widely used as a general-purpose language. The PHP code is interpreted by the web server and produces the web page.
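
To make that division of labour concrete, here is a minimal sketch of the same request cycle in Python (standing in for PHP, with SQLite standing in for MySQL so the example runs on its own): the web server accepts a request, a server-side script queries the relational database, and HTML is produced for the browser. The products table and its rows are invented for illustration.

```python
# A minimal sketch of the LAMP request cycle using only the Python standard
# library: SQLite stands in for MySQL and this script stands in for PHP.
# The "products" table and its rows are invented for illustration.
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

# Set up an in-memory relational database and seed it with sample rows.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.executemany("INSERT INTO products VALUES (?, ?)",
               [("Keyboard", 49.99), ("Monitor", 199.99)])
db.commit()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "server-side script" runs a query and renders HTML from the rows,
        # just as a PHP page running under Apache would render a page from MySQL.
        rows = db.execute("SELECT name, price FROM products ORDER BY name").fetchall()
        items = "".join(f"<li>{name}: ${price:.2f}</li>" for name, price in rows)
        html = f"<html><body><h1>Products</h1><ul>{items}</ul></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(html.encode("utf-8"))

if __name__ == "__main__":
    # Visit http://localhost:8000 to see the generated page.
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```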

Each of LAMP’s original applications has several equivalents that offer similar outcomes. This is just one example of a plethora of free software available on the world wide web.

Source Code

Source code, in this context, is code that has been written by one or more contributors and made available for free to the community. Users can then review or improve upon the code. Trading code is quite simple, and there are two main sources for a wide range of coding applications: GitHub and Stack Overflow. These platforms offer virtually any element of code a programmer could desire; the programmer then simply puts the code into a 'wrapper' so it can communicate with the other elements of the software in development.

Applying Source Code at Speechtech

To illustrate this, let’s look at a real world example. Speechtech is a Victoria, British Columbia based company that provides software for communication between an airline's control tower and its aircraft. The primary function of its software is to vocalize weather and traffic data for pilots. When customizing the software for new customers, small changes are required that may demand a significant amount of effort from the programmers. For example, the vocalization application has set speeds at which the speech is delivered, and many customers request that this speed be increased or decreased. There is no need for the developers to investigate the mathematics behind tempo changes, as such routines are available from the open community. A quick search on Stack Overflow, based on the desired programming language, C#, delivers precisely what the developers require. This element of programming provides significant cost savings to Speechtech and many other businesses.[7]

The code is then packaged for use in the system so that it can communicate with the other pieces of that system. When incorporating shared source code into a system, developers must stay aware of a few key implications of using prewritten code. They must ensure that the code is maintainable, so that it can be easily updated; clear, so that it can be interpreted by others in the future; and modular, so that it will adequately communicate with other code within the system. When utilized correctly, the possibilities of shared source code are seemingly limitless.
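
As a hedged sketch of that 'wrapper' idea (not Speechtech's actual code), the example below hides a hypothetical routine found online behind a small, documented interface; the function and class names, and the naive rate-based tempo logic, are assumptions for illustration only.

```python
# A sketch of wrapping borrowed code behind a clear, modular interface.
# `_borrowed_change_tempo` stands in for a routine found on a site like
# Stack Overflow; the rest of the system only ever calls SpeechTempo.

def _borrowed_change_tempo(samples: list[float], rate: float) -> list[float]:
    """Hypothetical snippet found online: naive resampling by playback rate."""
    step = max(rate, 0.01)
    return [samples[int(i * step)] for i in range(int(len(samples) / step))]

class SpeechTempo:
    """Wrapper that the rest of the system talks to.

    Keeping borrowed code behind one class makes it maintainable (the
    implementation can be swapped without touching callers), clear
    (documented inputs and outputs), and modular (a single point of
    contact with other components).
    """

    def __init__(self, rate: float = 1.0):
        if rate <= 0:
            raise ValueError("rate must be positive")
        self.rate = rate

    def apply(self, samples: list[float]) -> list[float]:
        return _borrowed_change_tempo(samples, self.rate)

# Usage: speed speech up by 25% without callers knowing how it is done.
faster = SpeechTempo(rate=1.25).apply([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])
```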

Information

Business Sharing of Information

Traditionally in business it is viewed as an advantage to have information that your competitors lack. This may be true, but there are many cases where the sharing of information can benefit multiple parties. A recent example came in June 2014, when Tesla announced that all patents relating to its electric cars would be free for anyone to use[8]. Releasing patents this way goes against what they are normally used for: patents are designed to protect a company's intellectual property. So why would Tesla encourage others to use theirs? History gives us examples as to why. In 1817 William Gilmour invented a semi-automatic power loom and, much like Tesla, allowed anyone to use or improve upon his invention[9]. For two decades the textile manufacturing industry thrived because of this. There were few mechanics skilled enough to build these machines and few entrepreneurs who knew how to operate them, so those who could made large profits. By allowing other mechanics to improve upon Gilmour's designs, machine productivity was doubled over a 20 year period. There are other examples in history with similar stories, including steel making and the steam engine. In more recent history, Microsoft was not granted a patent until the company was 11 years old, and in the early days of Qualcomm the company freely shared an algorithm that is still used in cell phones today.

It makes sense for companies with innovative, disruptive technologies to share their information with others. Tesla needs complementors in order to gain mass appeal: it needs charging stations across the globe and mechanics with the know-how to fix electric cars. Sharing information with other companies helps standards be created. In Elon Musk's own words, he is trying to create a "common, rapidly-evolving, technology platform." By the 1830s most textile manufacturing technology had been patented and firms were less likely to share knowledge. This is common once a technology matures. It may have taken Microsoft 11 years to be issued its first patent, but the company now files 2,000 to 2,500 patents a year.[10]

Government Data Sharing

Zillow is an online housing database that helps users get all the information they need when looking to buy, sell, or rent a home.[11] iTriage is an app that helps people diagnose medical issues and find appropriate medical facilities.[12] Check That Bike! is a UK-based service that checks bikes against a database of known stolen bikes to ensure the one you are buying hasn't been stolen.[13] What these three companies have in common is that they are all built upon open government data.[14] In December of 2015 Zillow had a market cap of over $1.4 billion.[15] In 2009 the United States government decided to make its data open by default. Currently its site, www.data.gov, has over 188,000 data-sets that anyone can download.[16] This information can be valuable to a small business looking for a location, because it can search things like crime rates or pedestrian traffic rates all in one place. British Columbia has a similar site at http://www.data.gov.bc.ca/. These sites contain a huge amount of information with the potential to create new businesses and help existing businesses in their operations.
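
As an illustration of how these data-sets can be pulled into a script, the sketch below searches the data.gov catalogue for crime-related data-sets. It assumes the catalogue exposes the standard CKAN search endpoint at catalog.data.gov/api/3/action/package_search; the query term and row count are arbitrary examples.

```python
# A sketch of querying the data.gov catalogue programmatically.
# Assumes the site exposes the standard CKAN "package_search" endpoint;
# the query term and row count are arbitrary examples.
import json
import urllib.parse
import urllib.request

def search_datasets(query: str, rows: int = 5):
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    url = f"https://catalog.data.gov/api/3/action/package_search?{params}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        payload = json.load(resp)
    # CKAN wraps results as {"success": ..., "result": {"count": ..., "results": [...]}}
    return [ds["title"] for ds in payload["result"]["results"]]

if __name__ == "__main__":
    for title in search_datasets("crime"):
        print(title)
```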

However, there are some concerns. Some of the data made available is useless; Joel Gurin of the Center for Open Data Enterprise estimates that up to four-fifths of the data released is not useful. Just because the government collects this data and makes it available to anyone online does not mean that it is useful to others. Data.gov is a collection of data sets from over 170 different government organizations. With data sets coming from so many sources, each with its own way of tagging them, it can be hard to search and find the best database for your needs. There are also questions around data quality and missing meta-data for much of what is currently available online. Without this meta-data (data describing the main data set), these databases are much harder to use. There are also too few people with the necessary skills to turn these data-sets into something more useful. Having the technical skills to manipulate databases and being able to interpret the data is a rare combination, and it is necessary to create value out of these open data sets.

Users Sharing Information

Crowdsourcing

Crowdsourcing is the practice of obtaining needed services, ideas, or content by soliciting contributions from a large group of people, especially from the online community, rather than from traditional employees or suppliers.[17] A common example of this is the website 99designs.com. If you need graphic design work, you describe what you are looking for and how much you are willing to pay, and graphic designers who use the site will try to fill your needs.[17] At the end you select which design you like best and the graphic designer is paid, with a portion going to 99designs.com. This can be a good way to find cheap graphic design work, but once you start one of these projects you are often stuck with the results whether you like them or not.

Another use of crowdsourcing is the crowdsourcing of information. One of the more common websites for this is Yahoo Answers. Anyone can go onto the site and ask a question in one of 26 categories.[17] Just as anyone can ask a question, anyone can answer someone else's question. There are also more specific information crowdsourcing sites such as CrowdMed. CrowdMed is a website where people who are experiencing medical issues can upload their symptoms and "Medical Detectives" start working on their case.[18] The standard fee is $500, with $300 going to whoever solves the case and the rest going to CrowdMed.[19] Anyone can sign up to solve cases on CrowdMed, and this leads to a wide variety of suggested remedies. In one case, a professional photographer from Pittsburgh was given a list of 15 potential causes and remedies for her symptoms, ranging from an issue with the SCM muscle in her neck that required chiropractic work to chair yoga and trigger point massage. When this particular case was shown to Dr. Lisa Sanders, who writes a weekly column for the New York Times, she thought the cause of the symptoms could be a damaged artery shooting small blood clots into the brain. If left untreated this can be deadly. Although the crowd can be a great thing, it is not always right. When the stakes are low, such as "How do you cook Sweet Potato in the oven with aluminum foil?"[20], there is not much harm in a wrong answer. When the stakes are much higher, like a case of life and death, how much do you trust strangers on the internet who may not be qualified?

Social Media and Sharing

Social media continues to grow throughout the world, and more people are now getting their news through social media websites. In the 12 months leading up to August 2015, Internet users grew by 7.6% while active social media users grew by 8.7% and active mobile social users grew by 23.3%.[21] More people actively using social media in turn leads to more sharing of information. As more users turn to mobile, they can share information anywhere, at any time, through any number of social networking platforms. This is leading to humans consuming more information than ever before. As of 2011 we were consuming 5 times more information a day than in 1986.[22] The information we were consuming daily in 2011 was equal to 174 newspapers. As social media has grown since 2011, it is reasonable to assume our information consumption has increased as well. In 2015, 63% of Facebook and Twitter users reported getting news from these sites.[23] This is up from the numbers reported in 2013. With more people using and getting their news from social media, more power is given to these social media sites.

In November of 2015 it was reported that Facebook was blocking all mentions of a rival upstart social media site, Tsu.co.[24] You could not post the URL to the Facebook Newsfeed, an Instagram comment, or even send it through a Facebook message. The CEO of Tsu even went on to claim that Facebook removed any mention of the site from its archives. When news websites first discovered this story, they could not even share their stories to Facebook. This clearly demonstrates the power that Facebook has over what gets published in our Newsfeed and messages. Facebook claims that it blocked all mentions of Tsu because Tsu promises to pay users a percentage of ad revenue generated on the site and to give a small commission to users who recruit others to the site. Facebook sees this as spam and has an anti-spam policy. The motives behind Facebook blocking all mentions of Tsu can be debated; however, it is undeniable that Facebook blocked posts that in no way violated its terms of service (the news stories reporting on this issue). These kinds of incidents raise questions about what could happen in the future. What if, in the 2016 US presidential election, Twitter decided that it did not support Trump? Twitter could perform sentiment analysis on tweets and only show negative Trump tweets when users look at trending topics. This is not a new problem; something similar was seen in the 2015 Canadian federal election. Postmedia owns numerous newspapers throughout Canada, and its CEO declared that all of them had to support the Conservatives.[25] Many even ran front page ads supporting the Conservatives in the week before the election. As news moves online and people get more of their news from social media, it is important that they check multiple sources and stay vigilant about potential biases in what they are reading.

As more people are on social media, it also means we are putting more of ourselves online, and sometimes we are putting out more than we think. A study found that just by looking at public "likes" on Facebook, researchers could infer things such as sexual orientation, drug use, intelligence, race, and political views.[26] The researchers could correctly predict sexual orientation in males 88% of the time, while only 5% of homosexual participants "liked" gay marriage. They were able to do this by analyzing other likes, such as TV shows, movies, and books, and finding patterns. People might be OK with others knowing their favourite TV show but might not be OK with others knowing their sexual orientation. As we share more of ourselves online, we should be aware that this information could be used for more than what we originally anticipated, and once it is online there is almost nothing we can do about it.
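
The general technique behind such studies can be sketched simply: represent each user as a binary vector of likes and fit a standard classifier. The toy example below uses scikit-learn with entirely fabricated data to show the idea; it is not the researchers' actual model or data set.

```python
# Toy sketch of predicting a private trait from public "likes".
# The like matrix and labels are fabricated; real studies used millions of
# users and dimensionality reduction before a regression step.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are users, columns are pages; 1 means the user "liked" the page.
likes = np.array([
    [1, 0, 1, 0, 1],
    [0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1],
])
trait = np.array([1, 0, 1, 0, 1, 0])  # the private attribute being inferred

model = LogisticRegression().fit(likes, trait)

new_user = np.array([[1, 0, 1, 0, 0]])      # likes pages 0 and 2 only
print(model.predict_proba(new_user)[0, 1])  # probability that trait == 1
```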

Privacy

What Is Privacy?

Privacy and the sharing economy go hand in hand. Internet privacy, in particular, is defined by Techopedia as the level of protection and security of personal data published online. Different techniques and technologies are utilized to protect sensitive data, communications and preferences. Statistically, an average of 294 billion emails are sent daily around the world. Approximately 4 billion high-security messages related to 9,700 banking organizations, securities institutions and corporate customers are sent from 209 countries annually. About 50 million lines of code were written across the globe in 2011. These figures are based on 2011 numbers and are significantly higher in 2015. Given this volume of information shared online, it is essential to provide security and protection not only for highly sensitive information but also for private and corporate data.

Who Is Affected By Privacy?

Anybody can be affected by privacy issues – individuals, businesses and corporations, governments, R&D, competition, and economies, to name a few. Individuals are the single units in the data-sharing privacy environment, and there are multiple ways in which an individual can be affected by private information being stolen or sold to corporations. One example is the prominent case of Facebook. Facebook is one of the most successful project undertakings in the world; approximately 43.3% of the world's Internet users accessed the platform between August 17 and November 17, 2011. Users are the driving force in generating content on the website. This content, in turn, drives and attracts traffic, making the site more popular, and that popularity results in higher prices for advertising on the platform. Facebook then profits from the activity users generate online for free: it sells data collected on users to corporations that later advertise and sell products to those same users. Eventually, Facebook and the corporations profit from the endeavor, while the individuals creating the content are not paid. This type of relationship has been equated to user exploitation, and it happens on all major social media platforms. The type of data Facebook collects includes private data, communications and contacts. What makes matters worse is that users do not know what type of data Facebook collects and sells to corporations.

Users are generally fine with data collection as long as their contextual integrity is not being infringed upon. Contextual integrity involves such data aspects as education, health care, psychoanalysis, voting, employment, the legal system, religion, family, and the marketplace. When this type of data is collected and sold, it can result in adverse personal consequences for an individual [1]. Some propositions that have been made to address the problem include an opt-in option, where a user decides whether he or she wants to be exposed to advertisements and gives permission for such activity. An opt-out option would give an individual the ability to completely ban advertisements on their end, the least likely solution. Making Facebook a non-profit platform is a third option, where Facebook carries no advertisements and is operated on donations, as Wikipedia is; this is also unlikely given the present state of the platform.

Social media and online sharing present a number of other concerns to consumers [2]. Among them are stalking, whereby personal data is seen and/or collected by an undesirable third party, job discrimination, harassment and cyberbullying. Job discrimination can happen for a number of reasons; typically, an employer finds information about an employee online that they deem inappropriate or at odds with their own agenda, and harassment and discrimination can then occur in the workplace. In some cases the online or digital sharing of content has led to suicide, especially among teenagers, when pictures or information damage their reputation or image, or when persistent cyberbullying follows. Studies have found that younger people feel safer when sharing content via mobile devices rather than on the open web. It was also found that written content is more damaging than pictures, because current technologies are not advanced enough to search by picture but are advanced enough to track written content. Pictures, however, once found, are harder to deny.

Digital media is known for four properties: persistence, searchability, replicability, and scalability. This means that once online, content is there permanently; one cannot easily get rid of it, as it is easy to store, retrieve, replicate and reproduce the content [3].

Governments, Economies, Businesses and Corporations, Competition, R&D

Almost the same adverse consequences described for individuals are possible for governments, economies, businesses and corporations, competition, and R&D, just on a bigger scale. Cross-border information flow is of much concern to governments and companies worldwide. The reason in most cases is a clash of privacy cultures, as different countries have different approaches to managing such information. Sharing of data has been shown to affect human rights, intellectual property, democracy, trade, business, sovereignty and security. Some of the cases that affect human rights, democracy, and security are outlined below:

  1. The South Korean National Intelligence Service was found spying on emails sent and received via Gmail.
  2. India, Saudi Arabia, Bahrain, Indonesia, and the United Arab Emirates threatened to have Canadian BlackBerry services suspended if email, instant messaging, and web browsing were not opened to monitoring of private user-generated data.
  3. China imposed its national censorship on users trying to access social media platforms based within the United States' borders.

On a bigger scale, the United States captured data on the failure of Brazil's coffee crop via its satellites. In this case, the United States informed and alerted the Brazilian government, which could then summon up its resources and allocate them to other economic areas. Had this not been communicated in a timely manner, a competing country could have used the situation to its advantage, which could have adversely affected the Brazilian economy [4].

How Is Data Collected Online?

There are many ways to capture data online – different software tools and technologies allow for it. What has gained attention recently are cookies, small tools typically used to collect data on the browsing patterns of consumers. The HTTP cookie is the best known; it can hold up to 4 KB of data and is typically limited to a browsing session. Adobe Flash Player "Flash cookies" are more serious, as they hold up to 100 KB and are persistent, not limited to one browsing session. The cookie of special concern is the Evercookie: it is extremely persistent and can recognize a user even after the browser has been cleared of stored information. Storage under the revised HTML5 standard is even more persistent than the Evercookie and allows 5 MB of storage [5].
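
To make the mechanics concrete, here is a minimal sketch of how a server-side script issues an ordinary HTTP cookie using Python's standard library. The cookie name and value are arbitrary, and the roughly 4 KB ceiling is a browser-enforced convention rather than something set in this code.

```python
# Sketch of issuing an ordinary HTTP cookie from a server-side script.
# The name/value are arbitrary; browsers typically cap a single cookie at
# roughly 4 KB, while HTML5 localStorage allows on the order of 5 MB.
from http import cookies

jar = cookies.SimpleCookie()
jar["visitor_id"] = "abc123"            # identifies the browser on return visits
jar["visitor_id"]["max-age"] = 3600     # persists for an hour instead of one session
jar["visitor_id"]["path"] = "/"

# The header a web server would send with its response, e.g.
# Set-Cookie: visitor_id=abc123; Path=/; Max-Age=3600
print(jar["visitor_id"].output())
```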

The Future of Privacy

Regulation is one way to protect data against infringements. Another solution would be to give users a legally backed option to not be tracked; that way the government would protect data shared online by means of law, although it would still be hard to impose rules on those accessing data online, and hard to prove how the data was accessed. Robust software that manages cookies and online visibility could also prevent undesirable data tracking. Such software can let a user manage who can see their data and browsing history.

The Deep Web

What is the Deep Web?

Size of the Surface Web vs. the Deep Web

Believe it or not, the billions of webpages online right now represent only a tiny portion of the Internet. These websites (such as Google, Facebook, Wikipedia, etc.) and all other content accessible through regular search engines are known as the Surface Web. There are many more websites on the Internet that even the most powerful search engines cannot find; this hidden area of the Internet is called the Deep Web. The amount of information on the Deep Web is enormous compared to the Surface Web. The Deep Web, also known as the "Deep Net" or "Invisible Web", is estimated to be 400 to 500 times larger than the Surface Web, and it is constantly (and quickly) expanding. It contains all content that is invisible online, such as user databases, archives, webmail, password protected webpages, corporate intranets, and much more [27]. It is even theorized that extremely confidential information such as NSA documents or the Vatican Secret Archives are buried in the Deep Web, but due to its immense size and limited accessibility, it is uncertain what can be found.

History of the Deep Web

Ashley Madison website before being hacked

The term Deep Web was first coined in 2000 by a computer scientist named Mike Bergman and has been widely adopted since then [28]. In 2001, researchers at the University of California, Berkeley estimated that the Deep Web contained 7.5 petabytes of data [29]. In 2004, another attempt at calculating the size of the Deep Web estimated that it contained over 300,000 websites. Currently, there is no official size of the Deep Web, but considering that these estimations are over ten years old and well before the expansion of the Internet and "The Internet of Things", the number of websites on the Deep Web could very well be in the high millions. The Deep Web has recently become popular in the media after appearing in headline news for the multinational takedown of the infamous online drug marketplace Silk Road [30] in October 2013 and the Ashley Madison data dump incident in August 2015 [31].

The Dark Web

The terms “Dark Web” and “Deep Web” are often used interchangeably but they do not denote the same thing. The Dark Web actually refers to a small section within the Deep Web. The Dark Web is a collection of anonymously-hosted websites that can be accessed by anyone using a special web browser called Tor [32]. The IP addresses of both the server and visitor are disguised by Tor, thus, making it very difficult to track the user.  

How to Access the Dark Web

Screenshot of Tor.

Accessing the Dark Web is easier than one may think. To enter the Dark Web, one needs to download software called Tor. Tor, which stands for "The Onion Router", was originally a United States Naval project designed in 1995 to protect the Navy's online communication network and the identities of its agents. In 2004, the United States Naval Research Laboratory released the code for Tor, and it has been publicly available and open source since then [33]. Tor is a great program for preserving your privacy on the Internet as well as for accessing the Dark Web. Tor allows users to browse the Internet and the Dark Web anonymously by disguising the IP address behind several layers of encryption and continuously routing traffic through Tor relays around the world [34]. With no real IP address, the user's location is hidden, and the user can bypass the firewalls set up by their Internet Service Provider (ISP) and access the Dark Web.
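
The "several layers of encryption" can be illustrated with a toy sketch of onion routing: the sender encrypts the message once per relay, and each relay peels off exactly one layer. This is only a simplified picture using symmetric keys from the third-party cryptography package, not the actual Tor protocol.

```python
# Toy illustration of onion routing's layered encryption (not the Tor protocol).
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# Three relays, each with its own symmetric key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap(message: bytes, keys) -> bytes:
    # The sender encrypts for the exit relay first, then wraps each earlier
    # relay's layer around it, like the layers of an onion.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def route(onion: bytes, keys) -> bytes:
    # Each relay peels exactly one layer; only the exit relay sees the payload,
    # and no single relay knows both the sender and the destination.
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

packet = wrap(b"request for example.onion", relay_keys)
print(route(packet, relay_keys))  # b'request for example.onion'
```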

What is in the Dark Web?

Silkroad's logo

Due to its anonymity factor and unregulated environment, the Dark Web provides the perfect playground for criminals, thus the majority of the websites on the Dark Web harbour illicit activities. There are thousands of websites on the Dark Web ranging from pirated content to the sale of illicit products and services. These websites look like any other website on the Surface Web; they are designed to be user friendly and some even offer user reviews on their illicit products.

On the Dark Web you can find libraries of pirated books (including banned/prohibited ones), movies, and even illegal content such as child pornography. Many ecommerce websites are also available offering illegal drugs, weapons, stolen items of all sorts, counterfeit bills, and fake government IDs (driver's licenses, passports, etc.) [35]. Known services on the Dark Web include hit men for hire, commercial hacking, underground gambling, and identity theft. Aside from businesses, the Dark Web is also a very popular platform for whistleblowers. Users can anonymously share and release classified information about government secrets or national scandals.

The Currency of the Dark Web

Bitcoin

The currency used for purchases on the Dark Web is none other than the cryptocurrency Bitcoin. All Bitcoin transactions are recorded in a public ledger on the Bitcoin network [36]; however, users on the Dark Web have invented a coin-mixing service to overcome this and continue to preserve their anonymity. A coin-mixing service is essentially a money laundering system for Bitcoins that allows customers from all over the Dark Web to send their Bitcoins into a central pool where they are mixed up with other Bitcoins before being sent on to the recipient [37]. The amount received is not affected; however, the Bitcoins are no longer those of the original buyer but rather a mixed amount from users all over the world who have also sent their Bitcoins to the coin-mixer. Though this process delays the transaction, it makes the Bitcoins extremely challenging to trace back to the buyer and ultimately allows Dark Web users to purchase and sell illicit products and services without having to worry about being tracked (by government agencies).
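
As a toy model of that pooling idea (not any real mixing service), the sketch below collects several deposits, shuffles them into a pool, and pays each recipient the right amount from chunks that largely did not originate with their own buyer; the addresses and amounts are invented for illustration.

```python
# Toy model of a coin-mixing pool: deposits go in, the pool is shuffled,
# and each recipient is paid the right amount from unrelated deposits.
# Addresses and amounts are invented for illustration.
import random

# (sender address, intended recipient, amount in BTC)
orders = [
    ("alice-addr", "vendor-1", 0.5),
    ("bob-addr",   "vendor-2", 1.2),
    ("carol-addr", "vendor-3", 0.8),
]

# Break every deposit into small equal-sized chunks and pool them together.
CHUNK = 0.1
pool = []
for sender, _, amount in orders:
    pool.extend([(sender, CHUNK)] * round(amount / CHUNK))
random.shuffle(pool)

# Pay each recipient from whatever chunks come off the top of the shuffled pool,
# severing the direct link between a specific buyer and a specific payment.
for _, recipient, amount in orders:
    chunks = [pool.pop() for _ in range(round(amount / CHUNK))]
    sources = {sender for sender, _ in chunks}
    print(f"{recipient} receives {len(chunks) * CHUNK:.1f} BTC drawn from {sources}")
```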

Regulating the Dark Web

Illicit activities on the Dark Web continue to prevail because regulating the Dark Web is extremely difficult for governments. Countries have begun initiating projects to catch criminals on the Dark Web. Earlier in 2015, the United States created an advanced search engine designed to find crime lurking on the Dark Web [38]. The United Kingdom has also launched a law enforcement department dedicated to catching cybercriminals [39]. However, with no real IP address for either the users or the hosts on the Dark Web, pinpointing the location of cybercriminals can be problematic.

It is important to keep in mind that using Tor and the Dark Web does not make you invulnerable. It is not impossible for government agencies to track you on the Dark Web. There have been cases where even the most powerful figures on the Dark Web have been caught; the notable example would be the arrest of Silk Road’s owner, Ross William Ulbricht, in 2013 [30].

Future of the Dark Web

It is predicted that the Dark Web will continue to grow. The Dark Web is becoming more advanced at concealing users and their activities in order to stay one step ahead of the authorities. This will ultimately lead to an increase in illicit businesses and transactions [40]. Legal businesses (that do not sell illicit products and services) may even turn to the Dark Web as well if governments impose an income tax on online sales [41]. This would drive these businesses to perform their transactions on the Dark Web, using the anonymity factor and even a coin-mixer to conceal their sales. As a result, these businesses would not need to report their income to the government, and it could not be traced.

Even regular Internet users may begin using Tor as their default browser in order to preserve their privacy on the Internet. Not only does Tor block online advertisements, but it gives users the Internet in its raw form and lets them escape the filter bubble that websites like Google or Twitter are known for. It is also notable that even social media giants such as Facebook have taken notice of the Dark Web and joined in by making their website Tor-compatible [42].

With a predicted growth in the user base, businesses, and illicit activities on the Dark Web, governments will increase their surveillance of this area of the Internet with technological developments and eventually pass new laws specifically related to the Dark Web.

Education

MOOCs

A brief overview of MOOCs [43]

MOOCs, or Massive Open Online Courses, allow users to enroll in online courses for free. All content is contributed by the schools, and courses can be taken at your own pace. There is no penalty for dropping out or failing; passing allows you to apply for a certificate acknowledging your progress [1]. Two well-known and accredited schools, Harvard and MIT, have released a large number of their undergraduate classes for free in this manner under edX [2].

This level of instant verification has tentatively proven helpful for owners of these certificates. When these certificates supplemented a portfolio or resume, employers often took it as a positive sign that the interviewee was willing to learn [3]. However, in almost all cases, it's been noted that the certificates alone were not the driving factor in hiring decisions, but rather additional "extras" that allowed candidates to stand out [4]. They still had their traditional degrees, work experience and references; it's just the extra mile of having the certificate that gave them an edge. It can be assumed that having some education is better than no education, even if it's simply an online certificate; it still shows a willingness to learn.

On the negative side, with software comes software hacking, and online education is no different. A recent attempt at gaming the system was caught, in which answers were harvested from slave accounts and fed into the real master account [5]. This is obviously not the intended way to determine a participant's knowledge of the subject matter, and it only further hurts the reputation and credibility of MOOCs. This kind of cheating can occur in any type of online education, and with huge numbers of students enrolling in these courses, it further reduces the perceived value of these certificates.

Nanodegrees

Nanodegrees [6] are an interesting way of shortening the traditional gold standard that is the bachelor's degree. Completed in anywhere from 6 to 12 months, they offer project-based learning with an online community as well as mentorship. The cost is $200 a month, which can be subsidized with various scholarships. The impressive part is the backing of several big names, including Microsoft, Google, AT&T, and Accenture [7]. Obviously, a nanodegree might not hold as much sway with a company unfamiliar with the term, but it can make an impressive impact given the program's backers. The sharing economy kicks in here, since the curriculum is structured almost solely around what's required in the industry, such as technological shifts requiring different skills in data analysis software, and what one person might learn can differ greatly from another. This rapid feedback structure lends itself well to the sharing of valuable information that can actually be used in the workplace.

Students and the Sharing Economy

Open-book exams are exams where the traditional allowance of only pen and paper is supplemented with any physical reading material. The idea behind this is quite simple: in the real world, you wouldn't be barred from having access to these textbooks. Not only that, but if you didn't know the subject matter, then reading the book on the day of the exam would undoubtedly cause you to fail due to the vast amount of information to sift through. If you didn't know what you needed, then you were already in trouble. However, with the introduction of laptops as a popular learning tool, why not extend open books to open laptops?

There is also the idea of simply allowing students to use Google for answers [8]. One Harvard professor claims that it will prepare them for the working world, which is true in some sense. Workers with easy access to a computer can simply search up whatever they need, and having a time limit pressures the student to know enough to Google what they need efficiently, which is still a skill unto itself. It's also not always straightforward, especially if you pose questions whose answers cannot simply be found on Google but must be assembled in bits and pieces and interpreted [9]. For instance, a student could search for a math problem specifically and get no results; instead, they might search for the method for solving that type of question and apply it to their own problem.

Plagiarism is a concept that is brought up at Simon Fraser University a whole lot. This is in part due to the cheating debacle in the past [10], something that Simon Fraser would gladly be rid of. Defined as taking another's work without acknowledgement, plagiarism is treated as a serious issue, but in a world where combinations of words can be found quite easily and flagged, it is quite a daunting task to produce unique content, especially when plagiarism-detection software is used as a post-hoc investigative tool [11].

Language and the Sharing Economy

Learning a new language is an interesting discussion point. Duolingo, one of many sites that offers a way to learn a new language while taking advantage of social media and accessibility, is essentially an alternative way of learning, with the sharing economy as a driving factor in its success: it has over 8 million active users at any given time [12]. This allows peers to contribute translations, offer assistance, and generally share information with other people.

This was made even more interesting with Duolingo's integration with Uber [13]. Basically, Uber drivers can opt to share their language proficiency within their driver profile. This allows Uber users to feel more confident that they can communicate when talking to their drivers. The whole idea of the sharing economy plays an interesting role here, given that these are two separate apps that found a common thread to link them together; that is, accreditation. If we were to look at the process of verifying a degree with a third party, it would take a lot more effort than simply having developers unify their platforms; having a verified degree show up on an application would take a tremendous amount of effort, versus the sharing economy using this information to increase relevancy, similar to how the Internet of Things only works if everyone is using it.

Educators and the Sharing Economy

Lesson plans for educators everywhere at reasonable prices.

With the whole idea of sharing between students, why not educators? With TeachersPayTeachers [14], teachers can buy and sell lesson plans online in a marketplace. Not only that, but these lesson plans can be refined through discussion and ratings. This allows collaboration between teachers and professors, who can contribute new ideas or problem sets to each individual plan. This also frees up educators who spend their holidays and late nights working on lesson plans for the year [15]. This sharing of information between teachers also has the benefit of rewarding contributors of lesson plans; while the company hosting TeachersPayTeachers gets its cut as the middleman in the transaction, it's still beneficial to all parties involved and lends a more global perspective on things. Due to the competitive nature of the market, prices can be expected to stay reasonable, as too high a price might cause your lesson plan to be ignored in favour of a similar, cheaply priced one, although that can be counteracted with quality over quantity.

Another aspect of this is that it makes it easier to re-use and refine other people's work [16]. Cynics might think that this is a lazy way for educators and teachers to just outsource part of their work, which is true to a degree, but being an educator is much more than just having the materials to work with. This is also a step towards standardization, especially if everyone is copying and refining each other's work, and allows for a core curriculum to be taught universally, with some modifications based on the educator's preferences on what to teach outside of the norm.

Some courses, however, cannot simply rely on a purchased lesson plan due to their dynamic nature. For instance, BUS 466 is an excellent example of a course that requires modification each semester; anyone relying on bought lesson plans would consistently be buying outdated material.

Is Simon Fraser Ready for Online Education?

Specific differences outlined in WebCT vs Canvas [17]

SFU chose to switch to Canvas only out of necessity. According to an article by BCcampus [18], it was due to the shut-down of support for WebCT in December 2012. Had Blackboard Inc. chosen to continue supporting WebCT, there is little doubt that SFU would still be using it to this day. This is mostly speculation, but little information was given on the exact reasons for switching, and from a student's perspective, everything Canvas can do, WebCT was able to do as well. In other words, we can consider Canvas a sidegrade; the chart outlined on the right shows the major differences.

CODE is SFU's Centre for Online and Distance Education, located on the bottom floor of WMX. This small hub serves as the managing centre for all of SFU's distance education courses. Currently, there are well over 90 courses to choose from [19]; this selection seems to be slowly expanding, especially when looking at cached versions of old online offerings. This is a small step in the right direction, but of course, the issue here is that students are still paying premium tuition amounts, versus the aforementioned freedom of MOOCs and the low costs of nanodegrees.

There is also a wide range in how much technology professors are comfortable integrating into their teaching; some ignore all the latest gadgets and gizmos in favour of standing in front of a lectern and talking for hours, while others are much more savvy and can utilize everything in the room to greater benefit. Given the traditional teaching methods that most professors employ, it's easy to say that we are not ready for a larger conversion to online education; rather, professors need to start utilizing it in order to make the transition smoother if it does happen in the end.


Generic differences outlined in WebCT vs Canvas [17]













References

  1. https://www.edx.org/how-it-works
  2. https://www.edx.org/schools-partners
  3. https://medium.com/@sujinlee/i-am-a-mooc-learner-can-i-expect-to-get-a-job-8a139b344831#.3wze5srad
  4. http://www.pcworld.com/article/2071060/employers-receptive-to-hiring-it-job-candidates-with-mooc-educations.html
  5. http://www.thecrimson.com/article/2015/9/3/cameo-cheating-method-mooc/
  6. https://www.udacity.com/nanodegree
  7. http://www.cnn.com/2015/11/23/tech/nanodegrees-google/
  8. http://www.telegraph.co.uk/education/educationnews/11275200/Allow-pupils-to-use-Google-in-GCSE-exams-says-academic.html
  9. http://www.telegraph.co.uk/education/educationnews/11572349/Pupils-should-be-allowed-to-Google-in-exams-says-exam-chief.html
  10. http://www.cbc.ca/news/canada/british-columbia/sfu-disciplines-more-cheating-students-than-ubc-survey-says-1.2550250
  11. http://america.aljazeera.com/opinions/2015/1/plagiarism-mediazakaria.html
  12. https://aws.amazon.com/solutions/case-studies/duolingo/
  13. http://adigaskell.org/2015/10/13/duolingo-and-uber-team-up-to-showcase-the-english-skills-of-drivers/
  14. https://www.teacherspayteachers.com/
  15. http://www.dailymail.co.uk/news/article-2159173/70-teachers-nighter-prepare-lessons-according-survey-teaching-magazine-concludes-hours-rest-us.html
  16. http://www.nytimes.com/2015/09/06/technology/a-sharing-economy-where-teachers-win.html?_r=0
  17. https://canvas.sfu.ca/courses/14686/pages/what-are-the-key-differences-between-canvas-and-webct
  18. http://bccampus.ca/2013/12/18/canvas-is-coming-to-an-sfu-campus-near-you/
  19. http://www.sfu.ca/outlines.html?2015/fall/