Technology and computing in 2020 and beyond – are you ready to live life on the edge?
15 May 2020
Right now, for many of those in lockdown at home, the answer will no doubt be a resounding ‘oh, yes!’, but does the same hold true from a technology perspective?
Most of us have become pretty familiar with the workings of the cloud, IoT, blockchain and AI, but increasingly people are now talking about ‘edge’ computing. So, what’s it all about?
Over the past few years, computing and data storage have become firmly rooted in the cloud. We all have access to lots of devices – phones, tablets, laptops and desktops – which are typically used to access ‘centralised’ services in the cloud, such as email, CRM and document management. The cloud is also used for functions and services such as data management and analysis, networking, SaaS and servers, all accessed remotely via a network, usually the internet. Examples include Dropbox, Snapchat, Instagram, Netflix, AWS, Google and Azure.
Edge computing is a movement to bring computer processing nearer to the individual device. Edge technologies are less reliant on the cloud: used in its literal sense, the ‘edge’ simply refers to processing taking place nearer to the source of the data. Benefits can include speed, efficiency and cost savings. Let’s take a closer look at the advantages and disadvantages.
Why do we need edge computing?
One big plus is the reduction in latency, which increases the speed at which processing decisions can be made. A simple example is controlling a home assistant such as Alexa. Asking a question requires it to:
- process the request
- ping a compressed version to the cloud, where it is then unpackaged
- wait while the cloud pings another API, if needed, to obtain the necessary information
- unpack the combined, repackaged information pinged back to the device and produce the response.
If more of this processing can take place within the device itself, the reply can be given more quickly. There is also a benefit for cloud providers and their customers, as server costs are reduced.
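To make the latency point a little more concrete, here is a minimal Python sketch comparing the two approaches. The timings and function names are purely illustrative assumptions, not measurements of any real assistant:

```python
# Hypothetical timings, purely illustrative assumptions rather than measurements.
NETWORK_HOP_SECONDS = 0.05       # one trip between the device and the cloud
THIRD_PARTY_API_SECONDS = 0.10   # the cloud calling another API for the answer
LOCAL_PROCESSING_SECONDS = 0.01  # work done on the device itself


def cloud_round_trip() -> float:
    """Device -> cloud -> third-party API -> cloud -> device."""
    total = LOCAL_PROCESSING_SECONDS   # compress the request on the device
    total += NETWORK_HOP_SECONDS       # ping it to the cloud
    total += THIRD_PARTY_API_SECONDS   # the cloud obtains the information elsewhere
    total += NETWORK_HOP_SECONDS       # the response is pinged back to the device
    total += LOCAL_PROCESSING_SECONDS  # unpack and produce the response
    return total


def on_device() -> float:
    """The same request handled entirely at the edge, with no network hops."""
    return 2 * LOCAL_PROCESSING_SECONDS  # process and respond locally


print(f"Cloud round trip: {cloud_round_trip() * 1000:.0f} ms")
print(f"On-device:        {on_device() * 1000:.0f} ms")
```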
In another example, edge computing will become an important enabler for connected and autonomous vehicles. Here, the idea is that the electronic control unit (ECU) in the vehicle can make decisions itself, rather than everything being managed in the cloud. The cloud may not be sufficiently resilient, or quick enough, to make every decision, such as when to make an emergency stop.
In reality, edge computing and cloud computing will need to work together: in the vehicle example, the cloud would still provide traffic, weather and entertainment data, while the ECU would make the more critical, time-sensitive decisions.
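As a rough illustration of that division of labour, the sketch below keeps the time-critical decision on a hypothetical ECU while route planning relies on cloud-supplied data. All names, thresholds and figures are invented for the example:

```python
# An illustrative split of responsibilities between an in-vehicle ECU (edge)
# and the cloud. All names and thresholds here are invented for the example.

EMERGENCY_STOP_DISTANCE_M = 5.0  # assumed threshold, for illustration only


def on_sensor_reading(distance_to_obstacle_m: float) -> str:
    """Time-critical decision made locally on the ECU, with no cloud dependency."""
    if distance_to_obstacle_m < EMERGENCY_STOP_DISTANCE_M:
        return "EMERGENCY_STOP"
    return "CONTINUE"


def plan_route(destination: str, cloud_traffic_data: dict) -> list:
    """Non-urgent planning that can wait for data supplied by the cloud."""
    # Pick the roads the cloud reports as least congested.
    return sorted(cloud_traffic_data, key=cloud_traffic_data.get)[:3]


print(on_sensor_reading(3.2))                                 # EMERGENCY_STOP
print(plan_route("home", {"A1": 0.9, "B2": 0.2, "C3": 0.5}))  # ['B2', 'C3', 'A1']
```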
Privacy and security
The distributed nature of the edge computing model requires a shift in security from those methods employed for cloud computing. It is conceivable that data will move between different edge computing ‘nodes’ before reaching the cloud. Moving away from the centralised ownership/control of data may shift some of the responsibility for privacy and security to the end user. This may be seen as a benefit, for example, by some retailers, who want to closely control their customer, store and sales data.
However, data collected at the edge may not be given the same level of security by the device as it would receive with the resources available in the cloud. More devices at the edge may also mean a larger attack surface for distributed denial of service (DDoS) attacks: attackers can, for example, search for weak login credentials, take control of the devices and use them to overload networks with huge volumes of requests.
It seems likely that software for edge devices will need to update itself constantly, in much the same way that most internet browsers do. Edge computing will need to bring ever-improving and adapting security features to help prevent, or reduce the impact of, DDoS and similar attacks. Penetration testing of devices will also become even more important.
Scalability
Cloud computing is a relatively ‘tried-and-tested’ model with an established infrastructure in place. Edge computing relies on all of one’s devices being able to ‘talk’ to each other over reliable connections. Careful planning will be needed to scale up edge networks without slowing down communication between connected devices.
Efficiency
By bringing the tools closer to the job, more sophisticated tools and methods can be used at the edge. More AI tools can run on the devices themselves, making decisions relatively independently and increasing efficiency. It is easy to see how this might apply in a retail or manufacturing environment: for example, tools that monitor stock or material levels and automatically reorder when they drop below a set threshold.
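As a simple illustration of such a tool, the sketch below shows a hypothetical reordering rule that could run on an in-store device. The item names, thresholds and ordering function are assumptions made for the example:

```python
# A minimal sketch of an edge reordering rule running on an in-store device.
# Item names, thresholds and place_order are assumptions for illustration only.

REORDER_THRESHOLDS = {"flour_kg": 50, "sugar_kg": 20}


def place_order(item: str, quantity: int) -> None:
    # In practice this might call a supplier's ordering API; here we just log the intent.
    print(f"Ordering {quantity} of {item}")


def check_stock(levels: dict) -> None:
    """Runs locally at the edge; no cloud round trip is needed to make the decision."""
    for item, threshold in REORDER_THRESHOLDS.items():
        if levels.get(item, 0) < threshold:
            place_order(item, threshold * 2)  # simple top-up rule


check_stock({"flour_kg": 30, "sugar_kg": 25})  # orders flour only
```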
The flipside is that more processing power and/or other improved hardware is often needed within the device. In addition, IT teams will need access to more expertise and resource to help develop, coordinate and manage these systems, especially to ensure that they communicate safely and seamlessly with other devices.
Contractual protections
We are used to living in a world where our cloud-run systems operate reliably, and there is naturally a learning curve when introducing any new system. If procuring technology involving edge computing, due consideration should be given to warranty protection, penetration testing, security requirements, compatibility, scalability, remedial measures, comfort that associated networks and devices will not be adversely impacted, latency, and the frequency and process for patches.
As with the adoption of any new technology, there’s some inherent risk and lots of intertwined issues to consider, but it seems pretty certain that edge computing will play an important part in our future lives and, with the risks identified and suitably managed, will present some useful benefits.