Serverless Computing Tutorial


Serverless computing is one of the biggest new technologies coming into its own. Learn about how it works and various applications.

What is Serverless?

Like many trends in software, there’s no one clear view of what Serverless is. For starters, it encompasses two different but overlapping areas:

1. Serverless was first used to describe applications that significantly or fully incorporate third-party, cloud-hosted applications and services to manage server-side logic and state. These are typically "rich client" applications (think single-page web apps, or mobile apps) that use the vast ecosystem of cloud-accessible databases (e.g., Parse, Firebase), authentication services (e.g., Auth0, AWS Cognito), and so on. These types of services have previously been described as "(Mobile) Backend as a Service", and I use "BaaS" as shorthand in the rest of this article.
2. Serverless can also mean applications where server-side logic is still written by the application developer, but, unlike traditional architectures, it's run in stateless compute containers that are event-triggered, ephemeral (may only last for one invocation), and fully managed by a third party. One way to think of this is "Functions as a Service" or "FaaS". (Note: The original source for this name, a tweet by @marak, is no longer publicly available.) AWS Lambda is one of the most popular implementations of a Functions-as-a-Service platform at present, but there are many others, too.

    How Does a Typical System Look Without Serverless?

    To get a better understanding of what serverless is, let’s take a look at what you need to take care of when you have to administer infrastructure.

Let's say you're building a system that uses a queue for delayed processing. A user visits a page that contains a tracking pixel which records information about the visit. The tracking request is sent to the servers asynchronously so the user never has to wait for a response.

    Let’s take a look at what you’ll need to solve this problem:
• Message queuing software like RabbitMQ to store the messages and process them later.
    • A batch process where you’ll code the logic to handle each message.
    • A database to persist the processed message.

    For each of the components above, you’ll need to provision infrastructure, configure the server(s), install the software, deploy the batch processor code, scale the infrastructure when needed, and provide support for the servers and the application. Even if you have a cloud provider and use their IaaS or PaaS services, there are many elements for this simple architecture that you’ll need to take care of.

    How Does a Typical System Look With Serverless?

    Now let’s take a look at how the system from the example above will look with serverless. I’ll use the offering from AWS, but you can find similar services in Google Cloud Platform (GCP) or Microsoft’s Azure. In the following list you can see which services fit perfectly to solve the same problem from the above example:

    • A message broker with AWS SQS to store the messages and process them later.
    • An AWS Lambda function to handle each message from the queue.
    • A DynamoDB table to persist the processed message.

With the above services, you only need to provision the resources for each service with a few clicks or CLI commands, configure them, and deploy code to a Lambda function. You can forget about scaling the infrastructure because AWS will do it for you. You'll still need to provide support, but you'll have far less to do because you won't have to administer servers.
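To make this more concrete, here is a minimal sketch of what the queue-processing function from the example might look like in Python with boto3. The SQS trigger is assumed to be configured on the function, and the table name and partition key are placeholders for illustration:

# handler.py - minimal sketch of the queue worker from the example above.
# Assumes an SQS trigger is configured on the function and that a DynamoDB
# table named "tracking_events" (hypothetical) already exists.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("tracking_events")

def lambda_handler(event, context):
    # An SQS-triggered Lambda receives a batch of messages under "Records".
    for record in event["Records"]:
        message = json.loads(record["body"])
        # Persist the processed message; "event_id" is assumed to be the
        # table's partition key.
        table.put_item(Item={
            "event_id": record["messageId"],
            "payload": message,
        })
    return {"processed": len(event["Records"])}

There is no server to configure or patch here: the queue trigger, the function, and the table are all managed services.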

Applications of Serverless Computing

    Think of an application as a collection of functionality packaged together and deployed on a platform. The monolith has all the separate functionality in one ginormous package deployed onto an equally humongous platform. Since it’s difficult to change a monolith quickly, release cycles are prolonged. It’s also inefficient to scale since the whole application has to be replicated – even if only some of the functionality is under heavy load.

Microservice applications split the separate functional blocks out into smaller packages and deploy them on lightweight containerized platforms. Each miniature application, or microservice, can be revised independently as long as it keeps its promises to the other services using it. This is more efficient to scale because only the service under heavy load needs to be replicated. Each microservice still contains a number of functions (like create, update, and delete) and typically includes a platform or framework such as Python Flask, Node.js Express, or Java Spring Boot.

Why do you need the platform? It's just extra baggage to manage: it doesn't add functional value on its own, it only enables the real functionality to run. Why not consume that as a service as well and just run the functions? That is what serverless is: individual functions running in the Cloud, or Functions as a Service (FaaS), the ultimate componentization of an application. Now the Cloud platform entirely abstracts everything the application code needs to run, automatically handling scalability, failover, load balancing, and so on. There are still servers involved, but developers and operators do not need to be concerned with them because that responsibility is delegated to the Cloud provider, unless they choose to run their own FaaS platform.

Serverless, or FaaS, provides complete flexibility: discrete functions can be revised without redeploying the whole application, further increasing the velocity of application delivery. Discrete functions can also be scaled efficiently, without the overhead of replicating the platform and the other functions that would be packaged together in a microservice. The tradeoff is that there are many more moving parts to manage, with very few standards on how to do that management. Latency may also be an issue, especially for functions that are not used often, as there will be a delay while the function is loaded into the runtime to be executed. Cloud providers also impose resource limits on each function, making FaaS unsuitable for very intensive tasks.


    Serverless Computing Vendors

    Amazon has the most recognized Serverless Computing offering with AWS Lambda. By using other AWS services with Lambda, you can create and run an entire application without allocating a single server.

    There are additional vendors, though:

    • Google Cloud Functions
    • Azure Functions
    • IBM Cloud Functions
    • Pivotal Function Service

    All of them enable you to deploy functions that are executed on-demand and billed by the millisecond of execution time – the more the function is called, the more it costs. It’s like accessing and paying for computing power the same way as electricity.

    There are also open source Serverless projects such as Kubeless, OpenFaaS, and Fn Project.

    The Future of Serverless Computing

    With most people still trying to get their head around containerized microservice applications, Serverless Computing has yet to become mainstream. However, the never-ending quest for more speed in the application delivery lifecycle makes Serverless Computing more desirable. Before those advantages can be fully realized, though, additional tooling is needed to make it easier to manage, monitor and debug Serverless Applications.

    Benefits

So far I've mostly tried to stick to defining and explaining what Serverless architectures have come to mean. Now I'm going to discuss some of the benefits and drawbacks of designing and deploying applications this way. You should definitely not decide to use Serverless without carefully considering and weighing the pros and cons.

    Let’s start off in the land of rainbows and unicorns and look at the benefits of Serverless.

    Reduced operational cost

Serverless is, at its most simple, an outsourcing solution. It allows you to pay someone to manage servers, databases, and even application logic that you might otherwise manage yourself. Since you're using a predefined service that many other people will also be using, we see an economy-of-scale effect: you pay less for your managed database because one vendor is running thousands of very similar databases.

The reduced costs appear to you as the total of two aspects. The first is the infrastructure cost gain that comes purely from sharing infrastructure (e.g., hardware, networking) with other people. The second is the labor cost gain: you'll be able to spend less of your own time on an outsourced Serverless system than on an equivalent you develop and host yourself.

    This benefit, however, isn’t too different than what you’ll get from Infrastructure as a Service (IaaS) or Platform as a Service (PaaS). But we can extend this benefit in two key ways, one for each of Serverless BaaS and FaaS.

    BaaS: Reduced development cost

    IaaS and PaaS are based on the premise that server and operating system management can be commodified. Serverless Backend as a Service, on the other hand, is a result of entire application components being commodified.

    Authentication is a good example. Many applications code their own authentication functionality, which often includes features such as signup, login, password management, and integration with other authentication providers. On the whole this logic is very similar across most applications, and services like Auth0 have been created to allow us to integrate ready-built authentication functionality into our application without us having to develop it ourselves.
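As an illustration of how little code "integrating ready-built authentication" can involve, here is a rough sketch of validating an Auth0-issued token server-side using the PyJWT library (assuming PyJWT 2.x and its PyJWKClient helper). The tenant domain and audience values are placeholders:

# verify_token.py - rough sketch of validating an Auth0-issued JWT with PyJWT
# instead of hand-rolling signup/login/password logic. Domain and audience
# below are placeholders for illustration.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://YOUR_TENANT.auth0.com/.well-known/jwks.json"  # placeholder
API_AUDIENCE = "https://api.example.com"                          # placeholder

def verify(token: str) -> dict:
    # Fetch the public signing key that matches the token's key id.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    # Let the library check the signature, expiry, and audience claims.
    return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                      audience=API_AUDIENCE)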

    On the same thread are BaaS databases, like Firebase’s database service. Some mobile application teams have found it makes sense to have the client communicate directly with a server-side database. A BaaS database removes much of the database administration overhead, and typically provides mechanisms to perform appropriate authorization for different types of users, in the patterns expected of a Serverless app.

    Depending on your background, these ideas might make you squirm (likely for reasons that we’ll get into in the drawbacks section) but there’s no denying the number of successful companies that have been able to produce compelling products with barely any of their own server-side code. Joe Emison gave a couple of examples of this at the first Serverless Conference.

    FaaS: Scaling costs

    One of the joys of Serverless FaaS is that—as I put it earlier in this article—“horizontal scaling is completely automatic, elastic, and managed by the provider.” There are several benefits to this but on the basic infrastructural side the biggest benefit is that you only pay for the compute that you need, down to a 100ms boundary in the case of AWS Lambda. Depending on your traffic scale and shape, this can be a huge economic win for you.

    Example: occasional requests

Say you're running a server application that processes only one request every minute, each request takes 50 ms to process, and your mean CPU usage over an hour is 0.1 percent. If this application is deployed to its own dedicated host, this is wildly inefficient: a thousand other similar applications could all share that one machine.

Serverless FaaS captures this inefficiency, handing the benefit to you in reduced cost. With the example application above you'd be paying for just 100 ms of compute every minute, which is roughly 0.17 percent of the time overall.

    This has the following knock-on benefits:

• For would-be microservices with very small load requirements, it supports breaking components down by logic/domain even when the operational costs of such fine granularity would otherwise have been prohibitive.
    • Such cost benefits are a great democratizer. If companies or teams want to try out something new they have extremely small operational costs associated with “dipping their toe in the water” when they use FaaS for their compute needs. In fact, if your total workload is relatively small (but not entirely insignificant), you may not need to pay for any compute at all due to the “free tier” provided by some FaaS vendors.

    Example: Inconsistent traffic

Let's look at another example. Say your traffic profile is very spiky: your baseline traffic is 20 requests per second, but every five minutes you receive 200 requests per second (10 times the usual number) for 10 seconds. Let's also assume, for the sake of the example, that your baseline performance maxes out your preferred host server type, and that you don't want to reduce your response time during the traffic spikes. How do you solve for this?

    In a traditional environment you may need to increase your total hardware count by a factor of 10 over what it might otherwise be to handle the spikes, even though the total durations of the spikes account for less than 4 percent of total machine uptime. Auto-scaling is likely not a good option here due to how long new instances of servers will take to come up—by the time your new instances have booted the spike phase will be over.

    With Serverless FaaS however this becomes a non-issue. You literally do nothing differently than if your traffic profile was uniform, and you only pay for the extra compute capacity during the spike phases.

Obviously I've deliberately picked examples here for which Serverless FaaS gives huge cost savings, but the point is this: from a scaling viewpoint, unless you have a very steady traffic shape that consistently uses the whole capacity of your server hosts, you may save money using FaaS.

    One caveat about the above: if your traffic is uniform and would consistently make good utilization of a running server you may not see this cost benefit, and you may actually spend more by using FaaS. You should do some math and compare current provider costs with the equivalents of running full-time servers to see whether costs are acceptable.
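Here is a tiny back-of-the-envelope calculation along those lines. All prices are illustrative placeholders, not current rates, so plug in your own provider's numbers before drawing conclusions:

# cost_sketch.py - back-of-the-envelope comparison of FaaS vs an always-on
# server. All prices are illustrative placeholders; check your provider's
# current price list.
REQUESTS_PER_MONTH = 2_000_000
AVG_DURATION_SEC = 0.2          # 200 ms per invocation
MEMORY_GB = 0.5                 # 512 MB function

PRICE_PER_REQUEST = 0.20 / 1_000_000   # illustrative
PRICE_PER_GB_SECOND = 0.0000167        # illustrative
SERVER_PRICE_PER_HOUR = 0.023          # illustrative small instance

faas_cost = (REQUESTS_PER_MONTH * PRICE_PER_REQUEST
             + REQUESTS_PER_MONTH * AVG_DURATION_SEC * MEMORY_GB * PRICE_PER_GB_SECOND)
server_cost = SERVER_PRICE_PER_HOUR * 24 * 30

print(f"FaaS:   ${faas_cost:.2f}/month")   # ~ $3.74 with the numbers above
print(f"Server: ${server_cost:.2f}/month")  # ~ $16.56 with the numbers above

With these illustrative figures FaaS wins comfortably, but a function that runs constantly at high memory would tip the comparison the other way.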

    For more detail on the cost benefits of FaaS I recommend the paper “Serverless Computing: Economic and Architectural Impact” by Gojko Adzic and Robert Chatley.


    Optimization is the root of some cost savings

There is one more interesting aspect to mention about FaaS costs: any performance optimizations you make to your code will not only increase the speed of your app, but will also directly and immediately reduce operational costs, subject to the granularity of your vendor's charging scheme. For example, say an application initially takes one second to process an event. If, through code optimization, this is reduced to 200 ms, you'll (on AWS Lambda) immediately see an 80 percent saving in compute costs without making any infrastructural changes.

    Easier operational management

    This next section comes with a giant asterisk—some aspects of operations are still tough for Serverless, but for now we’re sticking with our unicorn and rainbow friends…

On the Serverless BaaS side of the fence, it's fairly obvious why operational management is simpler than with other architectures: supporting fewer components equals less work.

    On the FaaS side there are a number of aspects at play though, and I’m going to dig into a couple of them.

    Scaling benefits of FaaS beyond infrastructure costs

    While scaling is fresh in our minds from the previous section it’s worth noting that not only does the scaling functionality of FaaS reduce compute cost, it also reduces operational management because the scaling is automatic.

If your scaling process was a manual one, say, a human being explicitly adding and removing instances from an array of servers, then with FaaS you can happily forget about that and let your FaaS vendor scale your application for you.

    Even if you’ve gotten to the point of using auto-scaling in a non-FaaS architecture, that still requires setup and maintenance. This work is no longer necessary with FaaS.

    Similarly, since scaling is performed by the provider on every request/event, you no longer need to think about the question of how many concurrent requests you can handle before running out of memory or seeing too much of a performance hit—at least not within your FaaS-hosted components. Downstream databases and non-FaaS components will have to be reconsidered in light of a possibly significant increase in their load.

    Reduced packaging and deployment complexity

    Packaging and deploying a FaaS function is simple compared to deploying an entire server. All you’re doing is packaging all your code into a zip file, and uploading it. No Puppet/Chef, no start/stop shell scripts, no decisions about whether to deploy one or many containers on a machine. If you’re just getting started you don’t even need to package anything—you may be able to write your code in the vendor console itself (this, obviously, is not recommended for production code!).
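As a sketch of how small that deployment step can be, the following uses Python's zipfile module and boto3 to package and upload new code for an existing function. The function name is a placeholder:

# deploy_sketch.py - rough sketch of "zip it and upload it" with boto3.
# The function name is a placeholder and the function is assumed to exist.
import io
import zipfile
import boto3

def deploy(function_name: str = "my-tracking-function"):
    # Package the handler module into an in-memory zip archive.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.write("handler.py")
    buf.seek(0)

    # Upload the new code; no servers, AMIs, or start/stop scripts involved.
    boto3.client("lambda").update_function_code(
        FunctionName=function_name,
        ZipFile=buf.read(),
    )

if __name__ == "__main__":
    deploy()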

    This process doesn’t take long to describe, but for some teams this benefit may be absolutely huge: a fully Serverless solution requires zero system administration.

    PaaS solutions have similar deployment benefits, but as we saw earlier, when comparing PaaS with FaaS, the scaling advantages are unique to FaaS.

    Time to market and continuous experimentation

    Easier operational management is a benefit that we as engineers understand, but what does that mean to our businesses?

    The obvious reason is cost: less time spent on operations equals fewer people needed for operations, as I’ve already described. But a far more important reason in my mind is time to market. As our teams and products become increasingly geared toward lean and agile processes, we want to continually try new things and rapidly update our existing systems. While simple redeployment in the context of continuous delivery allows rapid iteration of stable projects, having a good new-idea-to-initial-deployment capability allows us to try new experiments with low friction and minimal cost.

    The new-idea-to-initial-deployment story for FaaS is often excellent, especially for simple functions triggered by a maturely defined event in the vendor’s ecosystem. For instance, say your organization is already using AWS Kinesis, a Kafka-like messaging system, for broadcasting various types of real-time events through your infrastructure. With AWS Lambda you can develop and deploy a new production event listener against that Kinesis stream in minutes—you could try several different experiments all in one day!
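For illustration, a Kinesis-triggered Lambda handler can be as small as the sketch below. The stream and its payload format are assumptions made for the example:

# kinesis_listener.py - sketch of a new event listener on an existing Kinesis
# stream. The stream and its JSON payload format are assumptions.
import base64
import json

def lambda_handler(event, context):
    for record in event["Records"]:
        # Kinesis record data arrives base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        print("received event:", payload.get("type"), payload)

Deploying a handful of variations of this listener against the same stream is exactly the kind of low-friction experiment described above.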

    While the cost benefits are the most easily expressed improvements with Serverless, it’s this reduction in lead time that makes me most excited. It can enable a product development mindset of continuous experimentation, and that is a true revolution for how we deliver software in companies.

    “Greener” computing?

    Over the last couple of decades, there’s been a massive increase in the numbers and sizes of data centers in the world. As well as the physical resources necessary to build these centers, the associated energy requirements are so large that Apple, Google, and the like talk about hosting some of their data centers near sources of renewable energy in order to reduce the fossil-fuel burning impact of such sites that would otherwise be necessary.

Idle but powered-up servers consume an inordinate amount of this energy, and they're a big part of the reason why we need so many, and bigger, data centers:

    Typical servers in business and enterprise data centers deliver between 5 and 15 percent of their maximum computing output on average over the course of the year.

    — Forbes

    That’s extraordinarily inefficient, and creates a huge environmental impact.

On one hand, it's likely that cloud infrastructure has already helped reduce this impact, since companies can "buy" more servers on demand, only when they absolutely need them, rather than provisioning all the servers they might possibly need long in advance. However, one could also argue that the ease of provisioning servers may have made the situation worse if a lot of those servers are left running without adequate capacity management.

    Whether we use a self-hosted server, IaaS, or PaaS infrastructure solution we’re still manually making capacity decisions about our applications that will often last months or years. Typically we are cautious, and rightly so, about managing capacity, and so we over-provision, leading to the inefficiencies just described. With a Serverless approach we no longer make such capacity decisions ourselves—we let the Serverless vendor provision just enough compute capacity for our needs in real time. The vendor can then make their own capacity decisions in aggregate across their customers.

    This difference should lead to far more efficient use of resources across data centers, and therefore to reductions in environmental impact compared with traditional capacity management approaches.

    Drawbacks

    So, dear reader, I hope you enjoyed your time in the land of rainbows, unicorns, and all things shiny and nice, because we’re about to get slapped around the face by the wet fish of reality.

    There’s certainly a lot to like about Serverless architectures, but they come with significant trade-offs. Some of these trade-offs are inherent to the concepts; they can’t be entirely fixed by progress, and they’re always going to need to be considered. Others are tied to current implementations; with time we can expect to see these resolved.

    Inherent drawbacks

    Vendor control

With any outsourcing strategy you are giving up control of some of your system to a third-party vendor. Such lack of control may manifest as system downtime, unexpected limits, cost changes, loss of functionality, forced API upgrades, and more. Charity Majors, whom I referenced earlier, explains this problem in much more detail in the Tradeoffs section of her article:

    [The Vendor service], if it is smart, will put strong constraints on how you are able to use it, so they are more likely to deliver on their reliability goals. When users have flexibility and options it creates chaos and unreliability. If the platform has to choose between your happiness vs thousands of other customers’ happiness, they will choose the many over the one every time — as they should.

    — Charity Majors

    Multitenancy problems

    Multitenancy refers to the situation where multiple instances of software for several different customers (or tenants) are run on the same machine, and possibly within the same hosting application. It’s a strategy to achieve the economy of scale benefits we mentioned earlier. Service vendors try their darndest to make customers feel that they each are the only ones using their system, and typically good service vendors do a great job of that. But no one’s perfect and sometimes multitenant solutions can have problems with security (one customer being able to see another’s data), robustness (an error in one customer’s software causing a failure in a different customer’s software), and performance (a high-load customer causing another to slow down).

    These problems are not unique to Serverless systems—they exist in many other service offerings that use multitenancy. AWS Lambda is now mature enough that we don’t expect to see these kind of problems with it, but you should be on the lookout for such issues with any service that is less mature, whether it’s from AWS or other vendors.

    Vendor lock-in

    It’s very likely that whatever Serverless features you’re using from one vendor will be implemented differently by another vendor. If you want to switch vendors you’ll almost certainly need to update your operational tools (deployment, monitoring, etc.), you’ll probably need to change your code (e.g., to satisfy a different FaaS interface), and you may even need to change your design or architecture if there are differences to how competing vendor implementations behave.

    Even if you manage to easily migrate one part of your ecosystem, you may be more significantly impacted by another architectural component. For instance, say you’re using AWS Lambda to respond to events on an AWS Kinesis message bus. The differences between AWS Lambda, Google Cloud Functions and Microsoft Azure Functions may be relatively small, but you’re still not going to be able to hook up the latter two vendor implementations directly to your AWS Kinesis stream. This means that moving, or porting, your code from one solution to another isn’t going to be possible without also moving other chunks of your infrastructure.

A lot of people are scared by this idea: it's not a great feeling to know that if you need to change your chosen cloud vendor tomorrow, you'll have a lot of work to do. Because of this, some people adopt a "multi-cloud" approach, developing and operating applications in a way that's agnostic of the actual cloud vendor being used. Often this is even more costly than a single-cloud approach, so while vendor lock-in is a legitimate concern, I still recommend picking a vendor that you're happy with and exploiting their capabilities as much as possible. I talk more about why that is in this article.


    Security concerns

    Embracing a Serverless approach opens you up to a large number of security questions. Here’s just a very brief smattering of things to consider—be sure to explore what else could impact you.

    • Each Serverless vendor that you use increases the number of different security implementations embraced by your ecosystem. This increases your surface area for malicious intent and ups the likelihood of a successful attack.
    • If using a BaaS database directly from your mobile platforms you are losing the protective barrier a server-side application provides in a traditional application. While this is not a dealbreaker, it does require significant care in designing and developing your application.
• As your organization embraces FaaS you may experience a Cambrian explosion of FaaS functions across your company. Each of those functions offers another vector for problems. For instance, in AWS Lambda, every Lambda function typically goes hand in hand with a configured IAM policy, and these are easy to get wrong. This is not a simple topic, nor is it one that can be ignored: IAM management needs careful consideration, at least within production AWS accounts.

    Repetition of logic across client platforms

    With a “full” BaaS architecture no custom logic is written on the server side—it’s all in the client. This may be fine for your first client platform, but as soon as you need your next platform you’re going to need to repeat the implementation of a subset of that logic—and you wouldn’t have needed this repetition in a more traditional architecture. For instance, if using a BaaS database in this kind of system, all your client apps (perhaps web, native iOS, and native Android) now need to be able to communicate with your vendor database, and will need to understand how to map from your database schema to application logic.

    Furthermore, if you want to migrate to a new database at any point, you’re going to need to replicate that coding/coordination change across all your different clients.

    Loss of server optimizations

    With a full BaaS architecture there is no opportunity to optimize your server design for client performance. The ‘Backend For Frontend’ pattern exists to abstract certain underlying aspects of your whole system within the server, partly so that the client can perform operations more quickly and use less battery power in the case of mobile applications. Such a pattern is not available for full BaaS.

    Both this and the previous drawback exist for full BaaS architectures where all custom logic is in the client and the only backend services are vendor supplied. A mitigation of both of these is to embrace FaaS, or some other kind of lightweight server-side pattern, to move certain logic to the server.

    No in-server state for Serverless FaaS

    After a couple of BaaS-specific drawbacks, let’s talk about FaaS for a moment. I said earlier:

FaaS functions have significant restrictions when it comes to local … state. … You should not assume that state from one invocation of a function will be available to another invocation of the same function.

    The reason for this assumption is that with FaaS we typically have no control over when the host containers for our functions start and stop.

    I also said earlier that the alternative to local state was to follow factor number 6 of the Twelve-Factor app, which is to embrace this very constraint:

    Twelve-factor processes are stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service, typically a database.

    — The Twelve-Factor App

    Heroku recommends this way of thinking, but you can bend the rules when running on their PaaS since you have control of when Heroku Dynos are started and stopped. With FaaS there’s no bending the rules.

    So where does your state go with FaaS if you can’t keep it in memory? The quote above refers to using a database, and in many cases a fast NoSQL database, out-of-process cache (e.g., Redis), or an external object/file store (e.g., S3) will be some of your options. But these are all a lot slower than in-memory or on-machine persistence. You’ll need to consider whether your application is a good fit for this.

    Another concern in this regard is in-memory caches. Many apps that are reading from a large data set stored externally will keep an in-memory cache of part of that data set. You may be reading from “reference data” tables in a database and using something like Ehcache. Alternatively you may be reading from an HTTP service that specifies cache headers, in which case your in-memory HTTP client can provide a local cache.

FaaS does allow some use of local cache, and this may be useful assuming your functions are used frequently enough. For instance, with AWS Lambda we typically expect a function instance to stick around for a few hours as long as it's used at least once every few minutes. That means we can use the (configurable, up to 3 GB) RAM, or the 512 MB of local "/tmp" space, that Lambda provides. For some caches this may be sufficient. Otherwise you can no longer assume an in-process cache and will need to use a low-latency external cache like Redis or Memcached instead. However, this requires extra work and may be prohibitively slow depending on your use case.
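As a sketch of that pattern, the following caches data in a module-level variable so that warm invocations reuse it, and only fetches from an external store on cold starts. The S3 bucket and key are placeholders:

# cache_sketch.py - illustration of caching in a "warm" function instance.
# Module-level state survives between invocations only while the same
# container is reused, so treat it as a best-effort cache, not storage.
import boto3

_reference_data = None  # lives as long as this container instance does

def load_reference_data():
    global _reference_data
    if _reference_data is None:
        # Bucket and key are assumed, purely for illustration.
        obj = boto3.client("s3").get_object(Bucket="my-config-bucket",
                                            Key="reference.json")
        _reference_data = obj["Body"].read()
    return _reference_data

def lambda_handler(event, context):
    data = load_reference_data()  # fast when warm, slow on cold starts
    return {"bytes_loaded": len(data)}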

    Implementation drawbacks

    The previously described drawbacks are likely always going to exist with Serverless. We’ll see improvements in mitigating solutions, but they’re always going to be there.

    The remaining drawbacks, however, come down purely to the current state of the art. With inclination and investment on the part of vendors and/or a heroic community these can all be wiped out. In fact this list has shrunk since the first version of this article.

    Configuration

    When I wrote the first version of this article AWS offered very little in the way of configuration for Lambda functions. I’m glad to say that has now been fixed, but it’s still something that’s worth checking if you use a less mature platform.

    DoS yourself

Here's an example of why caveat emptor is a key phrase whenever you're dealing with FaaS. AWS Lambda limits how many concurrent executions of your Lambda functions you can be running at a given time. Say that this limit is one thousand; that means that at any time you are allowed to be executing one thousand function instances. If you need to go above that you may start getting exceptions, queueing, and/or general slowdown.

    The problem here is that this limit is across an entire AWS account. Some organizations use the same AWS account for both production and testing. That means if someone, somewhere, in your organization performs a new type of load test and starts trying to execute one thousand concurrent Lambda functions, you’ll accidentally DoS your production applications. Oops.

    Even if you use different AWS accounts for production and development, one overloaded production lambda (e.g., processing a batch upload from a customer) could cause your separate real-time lambda-backed production API to become unresponsive.

    Amazon provides some protection here, by way of reserved concurrency. Reserved concurrency allows you to limit the concurrency of a Lambda function so that it doesn’t blow up the rest of your account, while simultaneously making sure there is always capacity available no matter what the other functions in an account are doing. However, reserved concurrency is not turned on by default for an account, and it needs careful management.
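If you use boto3, capping a function's concurrency is a one-call sketch like the following; the function name and limit are placeholders:

# concurrency_sketch.py - sketch of capping one function's concurrency so a
# runaway batch job can't starve the rest of the account. Names are placeholders.
import boto3

lambda_client = boto3.client("lambda")

# The batch function may never use more than 100 concurrent executions,
# which also guarantees that the remaining account-level concurrency stays
# available for latency-sensitive functions.
lambda_client.put_function_concurrency(
    FunctionName="batch-upload-processor",
    ReservedConcurrentExecutions=100,
)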

    Execution duration

Earlier in the article I mentioned that AWS Lambda functions are aborted if they run for longer than their configured timeout. That limit was originally five minutes; AWS has since raised the maximum to 15 minutes, but long-running tasks are still a poor fit for FaaS.

    Startup latency

    AWS has improved this area over time, but there are still significant concerns here, especially for only occasionally triggered JVM-implemented functions and/or functions that need access to VPC resources. Continued improvements are expected in this area.

    Okay, that’s enough picking on AWS Lambda specifically. I’m sure other vendors also have some pretty ugly skeletons barely in their closets.


    Testing

    Unit testing Serverless apps is fairly simple for reasons I’ve talked about earlier: any code that you write is “just code,” and for the most part there aren’t a whole bunch of custom libraries you have to use or interfaces that you have to implement.
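For example, a pytest-style unit test can simply import the handler and call it with a hand-built event. This sketch assumes the hypothetical handler module from the queue example earlier in the article:

# test_handler.py - unit test sketch showing that a FaaS handler is "just
# code": call it directly with a hand-built event, no Lambda runtime or
# emulator needed. Assumes the hypothetical handler module from the earlier
# queue sketch; the fake table keeps the test entirely local.
import json
import os

os.environ.setdefault("AWS_DEFAULT_REGION", "us-east-1")  # lets boto3 initialise at import time

import handler

def test_handler_processes_each_record(monkeypatch):
    stored = []
    # Swap the real DynamoDB table for an in-memory fake.
    fake_table = type("FakeTable", (), {
        "put_item": lambda self, Item: stored.append(Item)
    })()
    monkeypatch.setattr(handler, "table", fake_table)

    event = {"Records": [{"messageId": "1", "body": json.dumps({"page": "/home"})}]}
    result = handler.lambda_handler(event, context=None)

    assert result == {"processed": 1}
    assert stored[0]["payload"] == {"page": "/home"}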

Integration testing Serverless apps, on the other hand, is hard. In the BaaS world you're deliberately relying on externally provided systems rather than, for instance, your own database. So should your integration tests use the external systems too? If yes, then how amenable are those systems to testing scenarios? Can you easily set up and tear down state? Can your vendor give you a different billing strategy for load testing?

    If you want to stub those external systems for integration testing does the vendor provide a local stub simulation? If so, how good is the fidelity of the stub? If the vendor doesn’t supply a stub how will you implement one yourself?

    The same kinds of problems exist in FaaS land, although there’s been improvement in this area. It’s now possible to run FaaS functions locally for both Lambda and Microsoft Azure. However no local environment can fully simulate the cloud environment; relying solely on local FaaS environments is not a strategy I’d recommend. In fact, I’d go further and suggest that your canonical environment for running automated integration tests, at least as part of a deployment pipeline, should be the cloud, and that you should use the local testing environments primarily for interactive development and debugging. These local testing environments continue to improve – SAM CLI, for example, provides fast feedback for developing a Lambda-backed HTTP API application.

    And remember those cross-account execution limits I mentioned a couple of sections ago when running integration tests in the cloud? You probably want to at least isolate such tests from your production cloud accounts, and likely use even more fine-grained accounts than that.

    Part of the reason that considering integration tests is a big deal is that our units of integration with Serverless FaaS (i.e., each function) are a lot smaller than with other architectures, so we rely on integration testing a lot more than we may with other architectural styles.

    Relying on cloud-based testing environments rather than running everything locally on my laptop has been quite a shock to me. But times change, and the capabilities we get from the cloud are similar to what engineers at Google and the like have had for over a decade. Amazon now even lets you run your IDE in the cloud. I haven’t quite made that jump yet—but it’s probably coming.

    Debugging

    Debugging with FaaS is an interesting area. There’s been progress here, mostly related to running FaaS functions locally, in line with the testing updates discussed above. Microsoft, as I mentioned earlier, provides excellent debugging support for functions run locally, yet triggered by remote events. 

    Conclusion

    Every time you need to think about servers for your applications make sure to consider using a serverless offering from a public cloud provider. AWS is leading the way in serverless, but Azure and GCP are catching up quickly. Serverless is becoming more relevant and prevalent each day that goes by; you’ll find there are plenty of frameworks, conferences, and other resources that will help you to implement your solution.
