Our Decision on Whether or Not To Use Serverless

  1. The first flow is for events: a continuous stream from the wind turbines through the pipeline. Events arrive every second and should preferably pass through the pipeline with low latency.
  2. The second flow is for high-resolution data. Wind turbines generally emit more detailed data at ten-minute intervals. The data is sent to our service every ten minutes, where it needs to be transformed and saved in some sort of permanent storage. This data is primarily used for analytics, so latency isn’t as important.
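As a rough sketch, the two flows could look something like this. The handler names, field names, and stubbed sinks are hypothetical illustrations, not our actual implementation:

```python
from datetime import datetime, timezone

def forward_downstream(event: dict) -> None:
    # Hypothetical low-latency sink (e.g. a message broker); stubbed here.
    print("forwarded", event["turbine_id"])

def write_to_storage(rows: list) -> None:
    # Hypothetical permanent-storage write (e.g. blob storage); stubbed here.
    print(f"stored {len(rows)} rows")

def handle_turbine_event(event: dict) -> None:
    """First flow: roughly one event per second per turbine.
    Latency matters, so each event is enriched and forwarded
    immediately rather than batched."""
    enriched = {**event, "processed_at": datetime.now(timezone.utc).isoformat()}
    forward_downstream(enriched)

def handle_high_res_batch(rows: list) -> None:
    """Second flow: a chunk of high-resolution data every ten minutes.
    Latency is unimportant, so the whole batch is transformed and
    written to permanent storage for analytics."""
    transformed = [{**r, "power_kw": r["power_w"] / 1000} for r in rows]
    write_to_storage(transformed)

handle_turbine_event({"turbine_id": "t1", "power_w": 1500})
handle_high_res_batch([{"turbine_id": "t1", "power_w": 1500}] * 3)
```

The key difference the rest of this post turns on: the first handler runs continuously at a steady rate, while the second sees a burst of work once every ten minutes.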

No managing servers manually

One of the nice wins of serverless is that there’s no managing servers manually. However, when running the functions through KEDA, we still have to manage the clusters, the virtual machines in the clusters, and so on. This means that this advantage is essentially non-existent for us.

Easy to scale up and down

This is one of the main discussions we’ve had. It’s been pretty unclear how much we’d win in the scaling department. There are two things we need to consider here:

Scaling up and down on Kubernetes

Since we’ll be running these functions on Kubernetes, many of the scaling advantages are diminished. If we need to be able to scale up, we’ll have to overprovision the virtual machines, which means paying the cost of full scale at all times.
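To make that trade-off concrete, here’s a back-of-the-envelope comparison. Every number below is made up for illustration; they are not our actual node counts or prices:

```python
# Illustrative only: all numbers below are made up.
hours_per_month = 730
node_cost_per_hour = 0.20    # hypothetical VM price in dollars

baseline_nodes = 2           # enough for the steady load
peak_nodes = 10              # enough for the worst burst
peak_hours_per_month = 10    # how long bursts actually last

# On a cluster provisioned for the peak at all times:
overprovisioned_cost = peak_nodes * hours_per_month * node_cost_per_hour

# A truly elastic platform only pays for the burst while it runs:
elastic_cost = (baseline_nodes * hours_per_month
                + (peak_nodes - baseline_nodes) * peak_hours_per_month) * node_cost_per_hour

print(f"always-at-peak: ${overprovisioned_cost:.0f}/month")
print(f"elastic:        ${elastic_cost:.0f}/month")
```

The wider the gap between baseline and peak, the more an overprovisioned cluster costs relative to pay-per-use scaling; with a narrow gap, the difference largely disappears.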

Our scaling requirements

The next thing to consider is how aggressively we need to be able to scale up and down. We might have different scaling needs for each flow, so let’s look at them independently.

The event flow

The first flow shouldn’t have any particular burst-scaling requirements. The volume of events should be fairly steady, though it might drop during periods when a wind farm goes dark. This means we don’t have a strong need for rapidly scaling up and down. Provisioning the right amount of servers, with a margin for error, should be enough to handle this data flow.

High resolution flow

This was originally the flow where I expected serverless to make the most sense. Receiving a large chunk of data every ten minutes and wanting to process it as quickly as possible seemed like the perfect fit for rapid scaling up and down.
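One way to see why the burst turned out not to be a problem: since latency doesn’t matter for analytics, a small fixed pool of workers only needs to finish each chunk before the next one arrives. A quick sanity check, again with purely illustrative numbers (turbine count and throughput are made up):

```python
# Illustrative numbers only.
turbines = 500
rows_per_turbine_per_interval = 60       # hypothetical
rows_per_chunk = turbines * rows_per_turbine_per_interval

rows_per_second_per_worker = 200         # hypothetical steady throughput
workers = 2                              # small fixed pool, no burst scaling

seconds_to_process = rows_per_chunk / (rows_per_second_per_worker * workers)
interval_seconds = 10 * 60

print(f"chunk: {rows_per_chunk} rows, processed in {seconds_to_process:.0f}s "
      f"of a {interval_seconds}s window")

# As long as the chunk is processed well inside the ten-minute window,
# a fixed pool keeps up and rapid serverless scale-up buys us nothing here.
assert seconds_to_process < interval_seconds
```

The fixed pool spends most of the window idle, but because the flow has no latency requirement, that idleness costs us nothing except the (small) steady compute.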

Summing up scaling

The rapid burst scaling you get out of the box with serverless becomes more complicated, though still doable, when running the whole thing on KEDA. The bigger realisation, however, is that due to the data flow patterns and latency requirements of each flow, we have no need for the rapid autoscaling where serverless really shines.

  • Testing is harder. We’re sure we’d manage, but is it worth the extra time?
  • Monitoring and logging are harder, and while Azure Functions integrates nicely into Application Insights, we’re not sure we’re ready to get married to that particular monitoring service just yet.
  • Immaturity is a large one. In particular, KEDA only went GA about two months ago at the time of writing. Our appetite for risk isn’t quite big enough to run that in an application that needs to be highly available. On top of that, Azure Functions v3, which we’d like to use, only went GA around a month ago as well.



Gustav Wengel

Software Developer at SCADA Minds