Edge Computing: What is it and Why Should we Care?

Edge computing is an emerging computational approach with the potential to remedy a number of problems associated with the exponential expansion of Big Data and cloud computing. Several companies are already pursuing new technologies in this sector, from IBM and Amazon to Google, Microsoft, and NVIDIA.

The advent of edge computing will streamline cloud computing challenges related to processing power, bandwidth, and latency. It will also enhance a company’s business model by promoting better workflow and supply chain management, selective data insights, data compliance practices, and a more robust data infrastructure. Moreover, as 5G networks become more widely implemented in conjunction with the development of new IoT technologies, edge computing will eventually become a necessity at the enterprise level. Companies will need alternative processing solutions to handle the increasing influx of raw data.

There are, however, a number of negative consequences related to edge computing. These include compromises to data privacy and security (which are notorious among IoT devices) as well as potential limitations to user autonomy. The practice of edge computing allows corporations to gain even more control over their products and, subsequently, their users’ experience with them. For those of us who simply use technology to make our lives easier, this may not seem like a pressing concern; for those who care about managing their individual devices (e.g., approving software updates), however, it may warrant consideration.

Throughout this article, I will explain the concept of edge computing in more detail, how it relates to and differs from cloud computing, as well as some of the benefits and consequences of this evolving technology. 

Cloud Computing vs. Edge Computing 

Currently, most cutting-edge companies use cloud-based data infrastructures to help streamline their digital transformation and build scalable data storage, management, and processing tools that help make sense of Big Data sources. Typically, in cloud infrastructures, raw data is obtained at some data source (e.g., traffic sensors that monitor traffic flow) and then transmitted to a network of remotely located servers where data processing, analytics, storage, and management occur.

Cloud infrastructures are centralized, in the aforementioned sense, such that data acquisition and processing are not distributed across various individual devices, but rather, throughout one coherent network. Edge computing, on the other hand, literally functions at the ‘edge’ of this network by imbuing individual devices with the capacity to process data at its source (or as close as possible); it is both a localized and decentralized method of computation. 

However, it is important to note that edge computing is not meant to replace cloud computing, but rather, to increase cloud providers’ abilities to synthesize, manage, and analyze data by curating datasets that contain only relevant information and reducing the processing load placed on remote servers.

Processing Power and Bandwidth 

Cloud infrastructures are robust and highly useful at the enterprise level; however, they still suffer from two primary technical challenges: 1) bandwidth, the overall volume of data that can be relayed over an internet connection in a given amount of time, and 2) latency, the delay that occurs when data is transferred from one point to another.
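To see how these two constraints interact, here is a back-of-the-envelope calculation (all numbers are hypothetical, chosen only for illustration): the time to deliver a message is roughly its size divided by bandwidth, plus the network latency, so latency dominates for small, frequent messages while bandwidth dominates for large batches.

```python
# Rough model of one-way message delivery time (illustrative numbers only):
# transfer time ≈ payload / bandwidth + latency.

def transfer_time_s(payload_bytes, bandwidth_bps, latency_s):
    """Approximate one-way delivery time for a single message."""
    return payload_bytes * 8 / bandwidth_bps + latency_s

# A 1 MB sensor batch over a 100 Mbit/s link with 40 ms of latency:
t_large = transfer_time_s(1_000_000, 100e6, 0.040)  # ≈ 0.12 s (bandwidth-bound)

# A 200-byte alert over the same link:
t_small = transfer_time_s(200, 100e6, 0.040)        # ≈ 0.04 s (latency-bound)
```

The small alert spends almost all of its delivery time waiting on latency, which is why processing close to the data source matters for time-critical signals.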

Since edge computing functions by employing localized processing at the data source, it has the potential to largely eliminate these problems. Let us now turn to a prominent example that highlights both of the aforementioned issues. 

The market for autonomous vehicles is rapidly increasing in size and popularity, especially as sustainability imperatives push governments and individuals to adopt alternative methods of transportation that reduce carbon emissions. However, before this technology becomes widely implemented at scale, a number of safety issues must be addressed, a few of which edge computing can solve. 

For instance, consider the problem of bandwidth in this case. Conducting near real-time data analytics and processing using cloud servers for a limited number of vehicles is possible. However, as consumers purchase more vehicles, and the data influx from these vehicles grows, cloud infrastructure will suffer from congestion: too much data arriving at once. This problem is exacerbated by the development of newer vehicles, which contain more advanced and plentiful sensors, thereby increasing their capacity for data acquisition. 

Edge computing, in this case, would localize a portion of data processing to the vehicle itself through the integration of machine learning analytics. This would streamline network optimization by allowing the vehicle to sift through all of the data it acquires on a daily basis and subsequently classify it in terms of importance or relevance. This essentially reduces the overall volume of data being relayed as well as the time it takes for it to re-enter the cloud and undergo further analysis. 
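A minimal sketch of this kind of on-vehicle filtering might look as follows. Every field name, threshold, and scoring rule here is hypothetical, standing in for whatever machine learning model a real system would use; the point is only that the vehicle ranks readings by relevance and relays a fraction of them.

```python
# Illustrative sketch (field names and thresholds are hypothetical): an
# on-vehicle filter scores each sensor reading and uploads only records
# deemed relevant, shrinking the payload relayed to the cloud.

def relevance_score(reading):
    """Score a reading; higher means more worth uploading."""
    score = 0.0
    if abs(reading["accel_g"]) > 0.5:      # hard braking or acceleration
        score += 1.0
    if reading["obstacle_dist_m"] < 10.0:  # obstacle detected nearby
        score += 1.0
    return score

def filter_for_upload(readings, threshold=1.0):
    """Keep only readings whose score meets the threshold."""
    return [r for r in readings if relevance_score(r) >= threshold]

readings = [
    {"accel_g": 0.1, "obstacle_dist_m": 50.0},   # routine cruising
    {"accel_g": -0.9, "obstacle_dist_m": 8.0},   # emergency braking
    {"accel_g": 0.2, "obstacle_dist_m": 40.0},   # routine cruising
]
uploaded = filter_for_upload(readings)
# Only the emergency-braking record is relayed to the cloud.
```

In this toy run, two of the three readings are classified as routine and never leave the vehicle, which is the bandwidth saving the paragraph above describes.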

Autonomous vehicles, beyond being able to communicate with the cloud, must also retain the ability to communicate with each other (i.e., device-to-device communication) as well as with other integrated IoT devices (e.g., you can unlock a Tesla using only your smartphone). Cloud analytics suffer from latency because servers are often located far from the data source. The time it takes for data to travel to a server creates a delay in processing – when you type in a search query, results may be quick, but they are not instantaneous. In the case of autonomous vehicles, real-time communication is paramount for safety; by analyzing and processing data at its source of acquisition, data volume is reduced without compromising data utility. Vehicles can then transmit important pieces of information to one another directly, without having to go through cloud servers first. 

Autonomous vehicles are just one example of IoT technology. However, as this technology evolves, especially in the realms of urban design and infrastructure, healthcare services, and finance, edge computing will become an increasingly necessary computing tactic. It would be wise for companies to begin considering this addition to their data infrastructure now, as the pace of data expansion will not slow anytime soon. 

Data Privacy and Security 

Cloud platforms are now well-known for their robust security protocols and their ability to reduce the risk of cybersecurity breaches and threats. This is due to a variety of factors: remotely located data servers, automated software patching, constant security and firewall updates, and numerous backup procedures. Additional measures offered through third-party services, such as data encryption and two-factor authentication, can further enhance cloud security.

Seeing as edge computing will primarily be used in IoT analytics before data reaches a cloud infrastructure, security concerns should be adequately understood and scrutinized, especially when dealing with sensitive consumer data. For instance, malware attacks often exploit device-specific deficiencies in software or hardware, which can then be used to gain remote access to a user’s personal information – in its current form, edge computing cannot mitigate these risks. 

Moreover, edge computing requires a certain level of data storage capacity on the device in which it is used. This means that if the device crashes or malfunctions, its data may be lost or become permanently inaccessible. If no one can access that data, this may not seem like a concern, but ask yourself how comfortable you are with the idea of your SSN or bank account number floating around in some digital ether. 

Using encryption at the data source seems like a plausible solution, but it also appears to defeat the purpose of edge computing. Ultimately, we are striving for low latency and high bandwidth – while encryption does not inherently make a file larger, additional procedures used in the process, such as padding, do increase file size, which results in limitations to bandwidth capacity. Moreover, once data reaches a server, it needs to be decrypted for analysis and further processing, a task that will likely increase latency. 
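The padding overhead mentioned above is easy to quantify. Block ciphers such as AES operate on fixed 16-byte blocks, and PKCS#7-style padding always rounds a payload up to the next full block (adding a whole extra block when the input is already aligned). The arithmetic below is purely illustrative of that overhead, not an encryption implementation:

```python
# Illustrative arithmetic only: PKCS#7-style padding rounds a payload up
# to the cipher's block size (16 bytes for AES), so many small messages
# each carry a few extra bytes on the wire.

BLOCK_SIZE = 16  # AES block size in bytes

def padded_length(plaintext_len, block_size=BLOCK_SIZE):
    """Length after PKCS#7 padding: always rounds up to a full block,
    adding a whole extra block when the input is already aligned."""
    return (plaintext_len // block_size + 1) * block_size

overhead_20 = padded_length(20) - 20   # 12 bytes of overhead on a 20-byte message
overhead_16 = padded_length(16) - 16   # a full extra block on an aligned message
```

For large files this overhead is negligible, but for the many small, frequent messages typical of IoT traffic, it compounds across every transmission.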

If, however, the goal of edge computing analytics is to streamline the practice of obtaining aggregate statistics from disparate data sources in a reasonable timeframe, then the differential privacy approach could be more promising. For more information on this concept, visit this AITJ article
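As a rough sketch of what the differential privacy approach involves, consider the Laplace mechanism: a true aggregate (here a count) is perturbed with noise calibrated to the query's sensitivity and a privacy budget epsilon, so individual records cannot be inferred from the released statistic. The query, dataset, and epsilon below are all illustrative choices, not a production design:

```python
# A minimal sketch of the Laplace mechanism for differentially private
# aggregates (the query, data, and epsilon are illustrative only).
import random

def laplace_noise(scale):
    """Draw from Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(records, predicate, epsilon=0.5):
    """Count matching records; counting queries have sensitivity 1,
    so noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1 / epsilon)

# Hypothetical vehicle speed readings aggregated across many sources:
speeds = [62, 71, 55, 80, 68]
noisy = private_count(speeds, lambda s: s > 65)  # true count is 3, released noisily
```

The noisy count remains useful in aggregate while obscuring whether any single reading crossed the threshold, which fits the goal of fast, privacy-preserving edge analytics.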

User Autonomy 

The advent of edge computing will invariably increase the autonomous capacities of emerging, exponential technologies that require real-time data analytics to function. Edge computing will also allow companies to monitor the state of all their products with high degrees of speed and accuracy thanks to strong bandwidth and low latency. This could result in products whose additional digital services are updated, integrated, and delivered regularly and autonomously, but without user consent. In other words, when users purchase a digital device, the purchase represents an implicit agreement with the corporation in question, whereby device functionality is distinct from device ownership (i.e., you own the device, they own the software). 

For instance, when your iPhone is updated, you receive a request asking whether you would like to continue with the software update or postpone it. Under an edge computing model, purchasing the iPhone would also entail agreeing to use any subsequent software, whether you like it or not; the iPhone would update itself when necessary based on whatever localized device indicators are used in edge analytics. 

Edge computing intrinsically threatens user autonomy by increasing the level of influence tech corporations have over the digital products they create – certainly, companies should have some influence over their products, but to what extent should they be able to influence the use of their products at the individual level? The digital ecosystem is already rife with coercive and persuasive digital tactics that aim to further enhance user data profitability (e.g., content curation algorithms, targeted digital marketing, geolocation, etc.). Importantly, these tactics are sustained by the deeply entrenched socio-economic structure of surveillance capitalism, which is unlikely to go anywhere anytime soon. 

We must ask ourselves, especially in the case of edge computing, what tradeoffs we are willing to make with respect to our privacy and sovereignty as exponential technologies become more integrated across all domains of life. 

About Sasha Cadariu

Sasha is currently pursuing an MSc in Bioethics at King’s College, London. Prior to his current studies, Sasha was a Division 1 ski racer at Bates College, where he graduated with a Bachelor’s in Cognitive Psychology and Classical Philosophy. He is deeply interested in applied ethics, specifically with respect to AI-driven exponential technologies and how they might one day affect humanity.