by Patrick Donovan, Sr. Research Analyst, Data Center Science Center, IT Division, Schneider Electric
Amidst the surge of digital transformation, edge computing has become the gatekeeper for resilient data centre services across the network.
Edge computing has fast become a well-established trend across the IT industry, particularly among those engaged in the delivery of digital and hybrid-cloud-based services.
Edge builds upon and complements the concept of cloud computing, which has in the last ten years completely revolutionised the way in which IT services are deployed. It makes possible new technologies, such as driverless cars, that might previously have been considered science fiction, whilst breathing life back into traditional businesses, such as the retail sector, which has been struggling with increased competition from rapidly growing online companies.
Since its inception, one question has been regularly debated: how should one define the concept of edge data centres? What are their distinguishing characteristics, and are there any special design considerations to be taken into account when building or configuring the hardware components?
Defining ‘the edge’
According to the Infrastructure Masons, “An edge location is a computing enclosure/space/facility geographically dispersed to be physically closer to the point of origin of data or a user base.” For an edge to exist there must be a hub or a core; therefore, dispersion of computing to the periphery would qualify as edge computing, and the physical enclosure, space or facility can be defined as the edge.
This definition is probably one point on which much of the industry can agree, but it also implies that the ‘edge’ is both a subjective and relative term; from a logical perspective there can be no edge without a hub already located away from the point of origin or use.
In a nutshell, edge computing is about moving applications and computing power closer to the source of data production, or nearer to the user. The edge now forms a critical component of an ever-more complex data centre ecosystem.
By contrast, a larger, centralised and highly secure data centre will remain an essential part of the IT ecosystem, providing resilient backup services in addition to other, low-cost applications that are not dependent on speed of response. Edge data centres enable high-bandwidth, low-latency applications, such as video on demand, to be more widely distributed across a greater geographical area, thereby ensuring a faster speed of service for customers.
Edge data centres also help to contain the floods of data traffic generated by emerging applications reliant on Internet connectivity, including Internet of Things (IoT) enabled apps, by creating closed loops between the points where data is originated and where it is consumed. As such, a localised micro data centre solution helps to improve efficiency and reduce traffic congestion across wider networks.
The drivers of edge
Among the factors driving data centres to the edge is the sheer volume of connected devices, including PCs, tablets, smartphones and wearable tech. Cisco expects that by 2020 there will be as many as 50 billion network-attached devices. A further 20.8 billion IoT devices may also be connected within the same time frame, driving a threefold increase in global data centre IP traffic over the next five years.
For technical reasons, the need to process the most critical, latency-dependent data quickly and reliably will require a greater number of data centres to be deployed at the edge. But there are also regulatory requirements pushing data centres to become more geographically dispersed.
Data sovereignty and privacy legislation, such as Europe’s GDPR (General Data Protection Regulation), will put restrictions on where data may be stored in relation to its use. The upshot is that Cisco expects 40 per cent of all data to be stored, processed, analysed and acted upon at the edge, closer to the point of origin, rather than at a centralised destination.
It is therefore likely that the data centre industry will continue to evolve into a more hybrid environment, with large hyperscale facilities at the centre and smaller, more agile facilities providing business-critical services at the edge. Although they will host different applications, many of their design considerations and infrastructure components will have much in common.
Electrical efficiency, for example, will continue to be a critical factor not only because energy costs fluctuate greatly, due to issues beyond the industry’s control, but also because the demand for new data centres, driven by the factors discussed above, may require new sources and methods of power supply.
It follows that all power-hungry components of a data centre, including the IT and cooling equipment, will have to be designed with efficiency as a priority in order to improve energy use and lower both CAPEX and OPEX.
Speed of deployment and standardisation are also vital. Whether rolling out a highly available, low-latency data centre at the edge to support a fast-growing business, or rapidly scaling up a centralised facility to meet customer demand, the time taken to design and deploy additional data centre resources continues to shrink, often with expectations that infrastructure will now arrive on site in weeks or months, rather than years.
This is driving both the use of reference design architectures and the development of standardised, modular and prefabricated components, built to specific industry standards to allow maximum flexibility of deployment at high speed, whilst enabling the user or company to scale up as required.
Software and data driven insights
Software is another enabler of the hybrid data centre environment, allowing evolution between central hubs and the edge. It is, for example, the fundamental difference between converged and hyperconverged infrastructure, allowing IT and data centre components from different manufacturers to be delivered in a single, integrated stack that works as a complete solution from the moment it is deployed.
Standardised designs, integration and collaboration have become essential in this rapidly evolving IT environment, where reliability is paramount for the customer; standardisation may thus be considered the foundation on which such hyperconverged solutions are built and optimised.
With many of today’s IoT-enabled solutions, sensors on individual components provide a constant stream of data on the status and service condition of equipment. They help to automate the management function, permitting timely adjustments to balance loads for maximum efficiency, schedule maintenance operations and respond to potentially harmful incidents. Intelligent analytics and applications can deliver alerts and status updates to smart devices in a proactive and efficient manner.
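The kind of sensor-driven alerting described above can be sketched in a few lines. The following is a minimal, hypothetical illustration only: the sensor names, metrics and thresholds are invented for the example, and a real DCIM platform would load its operating limits from the facility’s management system rather than hard-coding them.

```python
# Minimal sketch of rule-based monitoring: sensor readings are streamed in,
# checked against operating limits, and turned into alerts.
# All names and limits below are illustrative, not from any real product.

from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str   # e.g. a rack inlet temperature probe (hypothetical ID)
    metric: str      # "temp_c", "load_pct", ...
    value: float

# Illustrative limits; a real deployment would source these from the
# facility's management system or applicable thermal guidelines.
THRESHOLDS = {
    "temp_c": 27.0,    # example inlet-temperature ceiling
    "load_pct": 90.0,  # alert before a power branch saturates
}

def evaluate(readings):
    """Return an alert message for each reading that breaches its limit."""
    alerts = []
    for r in readings:
        limit = THRESHOLDS.get(r.metric)
        if limit is not None and r.value > limit:
            alerts.append(
                f"ALERT {r.sensor_id}: {r.metric}={r.value} exceeds {limit}"
            )
    return alerts

# A small batch of simulated telemetry: one temperature breach, one load breach.
stream = [
    Reading("rack-04-inlet", "temp_c", 24.5),
    Reading("rack-07-inlet", "temp_c", 29.1),
    Reading("pdu-02-branch-a", "load_pct", 93.0),
]

for alert in evaluate(stream):
    print(alert)
```

In practice the "respond" step would feed back into the management system (rebalancing loads or scheduling maintenance) rather than merely printing, but the closed loop from sensor data to action is the essence of the automation the article describes.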
It is fair to conclude that edge data centres will yield nothing to larger hyperscale data centres in terms of performance, resilience, security and availability. In many respects they benefit from the same levels of technology as their larger counterparts. It could hardly be otherwise, given that they will be used for critical applications dependent on low latency, such as driverless cars, or for medical applications using virtual or augmented reality to deliver collaborative treatments to patients from afar.
The performance, reliability and efficiency of all data centres, edge or otherwise, will be driven by increased modularity and the consequent scalability of connecting infrastructure devices. The built-in intelligence provided by smart sensors (an example of the data centre industry making good use of the IoT technology whose proliferation it supports) will make management and maintenance both proactive and more efficient, much to the benefit of customers across many of today’s rapidly evolving industries.