Serverless deployment lifts enterprise DevOps velocity


Serverless deployment isn’t just for startups anymore.

Mainstream companies have revved up DevOps velocity and slashed IT operations overhead with the addition of serverless technologies to underpin new apps. These companies, which base modern apps on AWS Lambda, OpenWhisk and Google Cloud Functions, say that serverless deployment abstracts the underlying cloud infrastructure completely from app developers, and lets DevOps teams focus on business logic and application code instead of infrastructure management.

But while serverless deployment can pay off, traditional companies have so far found it easiest to adopt serverless infrastructure when a new project begins. Consumer robot maker iRobot Corp. tried a serverless approach on AWS Lambda in 2015, when the Bedford, Mass., maker of the Roomba vacuum cleaner was unhappy with a turnkey cloud service provider for its IoT application.

The company found the turnkey platform wasn’t scalable or extensible enough once it added sensors to its consumer products, so it decided to replace it. Starting fresh gave iRobot an opportunity to rethink its infrastructure and relaunch the cloud robotics application on AWS, where it now uses more than 30 of the cloud provider’s services, including AWS IoT and Lambda. The serverless deployment for this project falls to a DevOps team separate from the one that handles iRobot’s existing applications.

“It takes single-digit operations headcount to run, and basically runs itself,” said Ben Kehoe, cloud robotics research scientist at iRobot. “It’s been a very successful serverless story, but it’s not one of a transition from a legacy system.”

The cloud robotics project is well-suited to serverless deployment because AWS IoT uses event-driven communication between services, which dovetails nicely with Lambda’s function-as-a-service model, Kehoe said.
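The event-driven pattern Kehoe describes can be sketched as a minimal Lambda-style handler fired by an AWS IoT rule. This is an illustrative sketch only: the field names (`robot_id`, `battery_pct`) and the threshold are invented for this example, not iRobot's actual device schema.

```python
import json

# Hypothetical sketch of a Lambda handler invoked by an AWS IoT rule.
# The payload fields and the battery threshold are invented, not iRobot's.
def handler(event, context=None):
    """Runs once per device message forwarded by an IoT rule."""
    robot_id = event.get("robot_id", "unknown")
    battery = event.get("battery_pct", 0)
    # Event-driven: each message triggers one short-lived function run.
    status = "low_battery" if battery < 20 else "ok"
    return {
        "statusCode": 200,
        "body": json.dumps({"robot_id": robot_id, "status": status}),
    }
```

No servers sit idle waiting for messages; the function only exists, and only bills, while a device event is being processed.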

Serverless deployment also meant IRobot could redeploy the cloud robotics project quickly, and focus on how to make it work for customers, rather than how to build the underlying cloud infrastructure.

“It meant that, as an organization, we didn’t have to develop the expertise in scalable, elastic cloud application development,” Kehoe said. “We were able to completely leapfrog that and work primarily in the [business] problem domain.”

Shipping company taps serverless for mobile app

A 136-year-old shipping company also found serverless deployment best suited for a new cargo container tracking application designed for mobile devices. Matson Inc., in Honolulu, built this app in 2017 to relieve traffic pressure on its website, where customers often search for container IDs and shipment numbers. An uneven traffic load, with occasional spikes during business hours, meant it would be wasteful to try to support the app with a fleet of constantly running Amazon Elastic Compute Cloud (EC2) instances.
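The economics here come down to paying per invocation instead of per hour of idle capacity. A rough back-of-envelope comparison, where every rate and request count is a made-up placeholder rather than Matson's numbers or actual AWS pricing:

```python
# Illustrative arithmetic only: all rates and request counts below are
# invented placeholders, not real AWS prices or Matson's traffic figures.
HOURS_PER_MONTH = 730

def always_on_cost(instances, hourly_rate):
    """Monthly cost of a fleet of instances running around the clock."""
    return instances * hourly_rate * HOURS_PER_MONTH

def serverless_cost(requests, price_per_million):
    """Pay-per-invocation cost: idle time between spikes costs nothing."""
    return requests / 1_000_000 * price_per_million

# A spiky workload with long idle stretches favors per-request billing.
fleet_cost = always_on_cost(instances=2, hourly_rate=0.10)                  # ~146
request_cost = serverless_cost(requests=3_000_000, price_per_million=0.20)  # ~0.6
```

With these placeholder figures, the always-on fleet costs two orders of magnitude more than the per-request model, which is consistent with Townsend's "barely even have a bill" remark below.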

Dave Townsend, principal software engineer at Matson, said he had experimented with serverless in 2016 and decided the logistics app was the perfect opportunity to use it in production. “We’ve been live since November [2017], and we barely even have a bill for the entire back end of running this,” he said.

Townsend’s architecture and innovation team primarily manages the app and its underlying AWS services, such as Lambda, the DynamoDB database service and the Simple Notification Service (SNS). The company’s core IT operations team approves serverless deployments to production, but so far hasn’t been as involved in the application’s design and maintenance as it is with other apps, Townsend said.
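The pattern this architecture suggests, a function that reads a record from a managed table and pushes a notification, can be sketched with injected stand-ins for the DynamoDB and SNS clients. The container IDs, record shape, and method names here are invented; a real implementation would use boto3 clients for both services.

```python
# Hedged sketch of the back-end shape described above: look up a shipment
# record, then notify subscribers. The table and notifier are injected so
# the logic runs anywhere; all names and the record shape are invented.
def track_container(container_id, table, notifier):
    record = table.get(container_id)  # stands in for a DynamoDB GetItem
    if record is None:
        return {"found": False}
    # stands in for an SNS publish to subscribers watching this shipment
    notifier.publish(f"Container {container_id}: {record['status']}")
    return {"found": True, "status": record["status"]}
```

Injecting the table and notifier keeps the business logic testable without AWS credentials, so a small team can exercise the whole flow locally before a production deployment is approved.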

Serverless deployment poised to expand beyond the cutting edge  

Industry analysts said that although serverless deployments are currently most common for new apps, they expect that pattern to change in the next year or so as serverless management tools and services mature.

An IDC survey of 301 enterprise IT shops, conducted in December 2017 and January 2018, found that 35% of respondents have serverless deployments now and another 30% may adopt the technology in the next 12 months.

Once enterprises reach a tipping point of serverless expertise, they will start to move existing applications to serverless architectures as well. Significant challenges remain in converting existing apps to serverless, but some mainstream companies have already started that journey.

Smart Parking Ltd., a car parking optimization software maker based in Australia, moved from its own data centers in Australia, New Zealand and the U.K. to AWS cloud infrastructure 18 months ago. Its next step is to move to an updated cloud infrastructure based on Google Cloud Platform, which includes Google Cloud Functions serverless technology, by June 2018.

“As a small company, if we just stayed with classical servers hosted in the cloud, we were doing the same things the same way, hoping for a different outcome, and that’s not realistic,” said John Heard, CTO at Smart Parking.

“What Google is solving are the big questions around how you change your focus from writing lots of code to writing small pieces of code that focus on the value of a piece of information, and that’s what Cloud Functions are all about,” he added.

Smart Parking’s primary application uses sensors in parking spaces and garages in major cities around the world. When sensor data comes into the Google Cloud infrastructure, it kicks off a Cloud Function that interfaces with other services, such as the BigQuery data warehouse and Bigtable NoSQL big data database service. Smart Parking also uses Google’s Stackdriver monitoring tool to manage Cloud Functions.
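That event flow can be sketched as a Pub/Sub-style background function, the trigger shape Google Cloud Functions uses for event data: the runtime passes a dict whose `data` field holds a base64-encoded JSON payload. The sensor fields below are invented for illustration, and the downstream BigQuery/Bigtable writes are represented here by the returned row dict rather than real client calls.

```python
import base64
import json

# Sketch of a Pub/Sub-triggered background function in the flow described
# above. The payload fields ("bay_id", "occupied") are invented, and the
# BigQuery/Bigtable writes are represented by the returned row dict.
def on_sensor_event(event, context=None):
    """The runtime delivers event["data"] as a base64-encoded JSON payload."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    return {
        "bay_id": payload["bay_id"],
        "occupied": bool(payload["occupied"]),
        "source": "parking-sensor",
    }
```

Each sensor reading triggers one short function run, which is the "small pieces of code that focus on the value of a piece of information" model Heard describes.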

For its serverless deployment, Smart Parking redesigned its application around a more modular architecture that communicates via RESTful APIs, which required a mindset shift among its DevOps teams. Smart Parking must still work with Google to integrate machine learning and AI services, such as Data Studio report customization, with BigQuery and Stackdriver reporting functions. But once the Smart Parking team got past the initial learning curve, engineers could troubleshoot Google Cloud Functions faster than traditional servers.

“The very small modularity [of the application] and the focus of Cloud Functions and APIs [mean] we can rapidly identify the API that’s not performant and the Cloud Function that may be causing an issue,” Heard said. “I don’t see my engineers sitting there scratching their heads for a day — they work it out within minutes, usually.”

Adobe goes whole-hog on serverless deployment

Adobe Systems has begun to rebuild existing applications in its Adobe Commerce portfolio on the Adobe I/O Runtime, which the company based on the Apache OpenWhisk serverless project.

“Our version [of OpenWhisk] is the Adobe I/O Runtime, which we can deploy to Azure, AWS or our own internal data centers,” said Ryan Stewart, group product manager at Adobe I/O. Stewart’s team maintains Adobe I/O Runtime for external customers and internal app development teams as part of the Adobe Cloud Platform.
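The unit of deployment in OpenWhisk, which Adobe's runtime builds on, is an action: a function named `main` that receives the invocation parameters as a dict and returns a JSON-serializable dict. A minimal placeholder action (the greeting logic is illustrative, not anything from Adobe I/O Runtime):

```python
# Minimal OpenWhisk-style Python action: the runtime invokes main() with
# the invocation parameters and expects a JSON-serializable dict back.
# The greeting logic is a placeholder, not part of Adobe I/O Runtime.
def main(params):
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}"}
```

Because an action is just a function with a dict-in, dict-out contract, the same code can run on any OpenWhisk deployment, which is what lets Adobe target Azure, AWS, or its own data centers.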

“With Runtime, we can embed things that are really only meaningful for Adobe customers, or Adobe teams, [such as] software development kits or JavaScript libraries that are specifically for calling some of our APIs or managing something like our Adobe authentication system,” he added. This makes it easier for Adobe customers to build their own extensions to I/O Runtime.

As with Smart Parking, Adobe’s internal commerce team found its serverless deployment easier to manage once it was up and running, but the commerce division had to reorganize its IT teams completely to modernize legacy applications.

“The team focus around business processes became highly iterative, which changed our [Atlassian] Jira process,” said Errol Denger, director of the Adobe Commerce program and strategic alliances at Adobe. “We had to launch an organization within an organization to support this more agile approach.”

That process began in 2017, and the Adobe Commerce platform based on Adobe I/O Runtime will be in public beta until July 2018.
