Serverless: How to market servers

  • cloud
  • architecture
  • technical
  • business

Serverless technologies are the greatest marketing achievement in cloud computing. They’re incredibly useful overall, but they’re also nearly as oversold as VPN services and likely the most front-loaded service type out there. How do you use serverless tech without getting bitten a couple of years down the line?

Servers are Eternal

A server is just a computer. The software you run on your laptop or even your phone is functionally the same as the software run on your servers, whether those servers are in the cloud or in the basement. The terminology behind the word server is just a description of what the computer is doing: it’s serving. If you take a Mac Mini, slap some software on it and put that Mac Mini somewhere a client can get things from it, then you’ve made that Mac Mini into a server. A client can be anything: it could be you testing your service, it could be North Korean hackers attempting to break in, it could be other servers making queries on behalf of their own clients. Outside of some technical considerations, a server is just another computer. Scaleway and MacInCloud are cloud providers who offer literal Mac Minis over the internet for use as servers. They’re no different from the physical Mac Minis you can find in Apple stores.

This naming abstraction of server / client is the same flavour of abstraction as server vs serverless. Servers are computers you manage; serverless services are computers you don’t. With serverless services you manage your code instead of the server, altering your code until it matches the flavour of serverless computing on offer from your provider.

Time

The key aspect of this process is time. By skipping server provisioning you theoretically save time now and in the future on provisioning, updating and managing servers - your cloud provider will do that for you. Your payment, aside from the enhanced costs, is a code tax: you have to refactor your code to work in that provider’s serverless environment. As these products change, you’ll have to adjust your code as well. The provider will decommission your run-time, force updates onto you and even remove packaged libraries from your environment, which you then need to provision yourself. Moreover, since you don’t manage the servers, you have no ability to stop any of these changes. When the provider shakes up its product lineup you’ll have to adjust your code to match, as the code is the only thing you have access to.

To plan for this you’d need future knowledge of the availability of serverless products from your available providers. That’s impossible, hence you can’t manage it; you’ll be in a reactive state to whatever changes happen.

Let’s look at a few examples.

NodeJS x Lambda

AWS Lambda is one of the pioneering serverless compute services. Initially it offered functions using the NodeJS run-time: the product provided a containerised process that included Node.js, a few preinstalled packages including the aws-sdk, and some file system and kernel access. All the customer needed to do was configure the function and provide compatible code. This was back in 2014, so the NodeJS version was the pre-release version of Node, i.e. version 0.x.

The code provided to those functions back then is highly unlikely to be compatible with any current AWS Lambda run-time. Since the product’s release, AWS has decommissioned every NodeJS run-time earlier than v14. Version 14 will be pulled on November 27th 2023 and v16 on March 11th 2024.

From a provider perspective this is reasonable: those run-times are out of date with no further updates available. I also think it’s reasonable to require this - code should be maintained. However, the onus and the activity for this come from the provider’s side, not from the customer’s. The dates above were picked by AWS; they take no account of any limitations your code may have, your requirements, your pipeline or your availability to deliver this maintenance. With serverless, this control is gone.
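One small mitigation, sketched here in Python to match the later example, is to fail fast when the run-time underneath you drifts. The version floor below is illustrative, not an AWS-published figure:

```python
import sys

# Illustrative floor: the oldest runtime this code has actually been tested on.
MIN_TESTED = (3, 8)

# Fail loudly at import time if the provider has moved the function onto a
# runtime older than anything we've verified, rather than letting subtle
# incompatibilities surface mid-invocation.
if sys.version_info[:2] < MIN_TESTED:
    raise RuntimeError(
        f"Runtime Python {sys.version_info.major}.{sys.version_info.minor} "
        f"is below the tested floor {MIN_TESTED}"
    )
```

This doesn’t stop the provider deprecating anything, but it turns silent run-time drift into an explicit error on the first invocation after a forced change.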

Python x Lambda x Requests

Another example that might make this loss of control more obvious was the scheduled removal of the requests package from the AWS Lambda run-time. This went back and forth for a few years before being cancelled, but it illustrates the boundary of control. The product as purchased would lack the requests package, so customers would have had to bundle the entirety of the requests codebase alongside their own code, or else refactor their code to remove the dependency. Either way, the action came from AWS and the obligation was to be loaded onto the customer.
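The refactoring option can be softened with an import guard: if the bundled package disappears from the run-time, fall back to a stdlib equivalent rather than failing at import time. A hedged sketch, with urllib standing in for requests (http_get is a hypothetical helper, not part of either library):

```python
try:
    import requests  # provider-bundled or vendored alongside your code

    def http_get(url):
        # requests decodes the body using the response's declared encoding
        return requests.get(url, timeout=10).text

except ImportError:
    from urllib.request import urlopen

    def http_get(url):
        # urllib is part of the standard library, so it survives any
        # provider-side package removal
        with urlopen(url, timeout=10) as resp:
            return resp.read().decode()
```

Neither branch is free: the stdlib path loses requests’ niceties such as sessions and retries, so this defers the refactor rather than avoiding it.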

I think it demonstrates some standout consideration from AWS that this change was cancelled, as releasing it would have required action from every customer using the included package. The takeaway is that serverless is just a marketing term: there are still servers, and their maintenance sits with the provider - if you’re not providing it, it’s not in your control, not even the packages.

Avoiding Pain

You can dodge some of this pain with clever architecture that allows for both standalone run-times and serverless run-times from the same codebase. The secret is in managing the entry-points.

Most serverless run-time environments require the provided code to adhere to a certain format, such as condensing operations into a named function or bundling dependencies as part of a deployment chain. Both the format and the deployment chain are dictated by the provider, hence they can change at any point.

A worthwhile route for avoiding these issues is to condense operations into functions, but also implement a standardised entry-point that works outside of the provider’s environment. Python run-times show this easily enough:

$ cat func.py
#!/usr/bin/env python3

import argparse
import json

# This is your AWS Lambda entry function
def LambdaFunction(event, context):
  print(f"Here's the event that I got:\n{json.dumps(event)}")
  # Return rather than exit(): raising SystemExit inside a Lambda handler
  # fails the invocation
  return

# This is a CLI Parsing function
def GetCliArgs():
  parser = argparse.ArgumentParser(description='This prints events!')
  parser.add_argument('params', nargs='+', help='Any number of key=value pairs to be provided as an event')
  args = parser.parse_args()
  return args

# This block runs when the script is executed directly in any regular environment
if __name__ == '__main__':
  cliArgs = GetCliArgs()
  keyValues = {}
  for f in cliArgs.params:
    s = f.split('=')
    keyValues[s[0]] = '='.join(s[1:])
  LambdaFunction(keyValues, None)


$ python3 func.py user=$USER event=cli
Here's the event that I got:
{"user": "smasherofallthings", "event": "cli"}

If you run this same function in an AWS Lambda with the function entry-point set to func.LambdaFunction you’ll get whatever event is passed to the Lambda invocation.

You’re still going to be on the hook for chasing cloud provider changes; however, you’ll be able to fall back to a regular environment as and when you need. This also has the bonus of explicitly declaring what your event needs to include, which can get lost in a serverless implementation if you’re relying on custom events from your cloud provider, such as those from API Gateway.

Obviously this won’t provide the full set of serverless services, but for batch / cron style code, this works a treat. It also allows for local testing and running of the code rather than building into the provider’s development ecosystem using additional tools or environments.
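That local testing needs nothing beyond the interpreter, because the handler is just a plain function taking a dict. A minimal sketch, inlining a simplified stand-in for LambdaFunction (the real one above prints the event; this one returns the serialised form so it’s easy to assert on):

```python
import json

# Simplified stand-in for the LambdaFunction handler above: a plain function
# taking an event dict and an unused context, returning the serialised event.
def LambdaFunction(event, context):
    return json.dumps(event)

# No Lambda runtime, deployment or mocking framework needed - it's just a
# function call with a dict, exactly as Lambda would invoke it.
result = LambdaFunction({"user": "smasherofallthings", "event": "unit-test"}, None)
assert json.loads(result) == {"user": "smasherofallthings", "event": "unit-test"}
```

Because the entry-point signature is the plain `(event, context)` pair, the same call shape works in a test file, a REPL or a CI pipeline without any provider tooling.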
