Scale a service
The Nomad Autoscaler is a tool that can scale workloads and client nodes in a Nomad cluster automatically. It supports two kinds of scaling scenarios:
- Horizontal application autoscaling is when the autoscaler controls the number of allocations (service instances) Nomad schedules.
- Horizontal cluster autoscaling is when the autoscaler controls the number of Nomad client nodes in the cluster.
Both types of autoscaling are configured with scaling policies that scale workloads according to changes in resource usage, such as CPU consumption, memory consumption, or metrics from other Application Performance Monitoring (APM) tools.
When you deploy an application as microservices, the autoscaler helps you scale each service independently. If only one service experiences additional load, then Nomad can add additional allocations for that service only. This approach uses available resources more efficiently than scaling the entire application.
In this tutorial, you deploy a version of HashiCups with a modified job definition for the frontend service. The modified job instructs the autoscaler to create additional instances during high CPU load.
Infrastructure overview
At the beginning of the tutorial, you have the Consul API gateway deployed on the public client node of your cluster.
Prerequisites
This tutorial uses the infrastructure set up in the previous tutorial of this collection, Integrate service mesh and gateway. Complete that tutorial to set up the infrastructure if you have not done so.
Review configuration files
The frontend service renders the HashiCups UI and contains a value in the page footer that shows which instance sent the response to the request. This tutorial uses the footer to show how scaling functions when the autoscaler responds to the increased load.
This version of HashiCups adds a scaling block to the frontend service, and includes running the Nomad Autoscaler as a job in Nomad.
Additional configurations for the autoscaler exist in the shared/jobs directory and include 05.autoscaler.config.sh and 05.autoscaler.nomad.hcl.
Review the autoscaler configuration
The Nomad Autoscaler is a separate piece of software that runs either as a system process, like the Consul and Nomad agents, or as a job in Nomad. It scales workloads running in Nomad based on the scaling block in the jobspec.
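A scaling block lives inside a job's group and sets the bounds the autoscaler must respect. The sketch below is illustrative, not this tutorial's exact configuration; the job name and the min and max values are assumptions.

```hcl
job "example" {
  group "frontend" {
    count = 1

    # The autoscaler reads this block to decide when to add or
    # remove allocations for this group.
    scaling {
      enabled = true
      min     = 1 # never scale below one allocation
      max     = 5 # never scale above five allocations

      policy {
        # Evaluation details (metric source, query, and scaling
        # strategy) go here.
      }
    }
  }
}
```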
The repository provides a script, 05.autoscaler.config.sh, that automates the initial configuration required for Nomad to integrate with the Autoscaler. The setup script first cleans up previous ACL configurations and then applies the ACL policy for the autoscaler.
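The exact policy the script applies is in the repository. A minimal autoscaler ACL policy generally grants read access plus the scale-job capability, along the lines of this sketch, which follows the Nomad Autoscaler documentation rather than the script's exact contents:

```hcl
# Illustrative ACL policy for the Nomad Autoscaler.
namespace "*" {
  policy       = "read"
  capabilities = ["read-job", "scale-job"]
}

# Node read access is needed when the autoscaler also
# performs cluster scaling.
node {
  policy = "read"
}
```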
Review the autoscaler jobspec
The autoscaler runs as a Docker container. Its configuration defines the Nomad cluster address as well as the Application Performance Monitoring (APM) tool used to collect metrics.
This jobspec uses the Nomad APM plugin. It is suitable for scaling based on CPU and memory usage. It is not as flexible as other APM plugins, but does not require additional installation or configuration. If you want to scale based on other metrics, consider using the Prometheus plugin or the Datadog plugin.
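An autoscaler agent configuration that points at Nomad and enables the Nomad APM plugin looks roughly like the following; the server address is a placeholder, and the plugin declarations mirror the autoscaler's documented internal plugins.

```hcl
# Illustrative Nomad Autoscaler agent configuration.
nomad {
  # Address of a Nomad server; replace with your cluster's address.
  address = "http://nomad.example.com:4646"
}

# Use Nomad itself as the metrics source (CPU and memory only).
apm "nomad-apm" {
  driver = "nomad-apm"
}

# Built-in strategy that scales toward a target metric value.
strategy "target-value" {
  driver = "target-value"
}
```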
Review the HashiCups jobspec
Open the 05.hashicups.nomad.hcl jobspec file and view the contents.
Nomad scales the frontend service when CPU usage of all tasks in the frontend group reaches 70% of the maximum CPU allocated to the group. The target value strategy plugin performs the CPU usage calculation. Scaling up adds at most one instance per event, while scaling down removes up to two instances per event. These values are part of the strategy block configuration.
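In scaling policy terms, a check that targets 70% of allocated CPU with those step limits could be sketched as follows; the check name, query, and min/max bounds are illustrative rather than the tutorial's exact values.

```hcl
scaling {
  min = 1
  max = 5

  policy {
    check "avg-cpu" {
      source = "nomad-apm"
      # Average CPU usage across the group's tasks, as a
      # percentage of the CPU allocated to the group.
      query = "avg_cpu-allocated"

      strategy "target-value" {
        target         = 70 # aim for 70% of allocated CPU
        max_scale_up   = 1  # add at most one allocation per event
        max_scale_down = 2  # remove at most two allocations per event
      }
    }
  }
}
```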
Review the load test script
The load testing script uses the hey tool to make requests to the HashiCups URL and trigger scaling. It sends several waves of traffic and adds more requests with each wave.
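A load script with that shape could be sketched as follows; the wave sizes, durations, port, and the use of the API_GW variable are assumptions, not the repository script's exact values.

```shell
#!/usr/bin/env bash
# Illustrative load waves against the HashiCups frontend.
# Requires the hey load-testing tool.
set -euo pipefail

# Each wave doubles the number of concurrent workers.
for workers in 5 10 20 40; do
  echo "Wave with ${workers} concurrent workers..."
  hey -z 30s -c "${workers}" "http://${API_GW}:8080"
  sleep 15 # let metrics settle between waves
done
```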
Deploy Nomad autoscaler
Deploy the Nomad autoscaler before you deploy the HashiCups application.
Run the autoscaler setup script and jobspec
Run the autoscaler configuration script.
Submit the autoscaler job to Nomad.
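Assuming you run the commands from the repository root, with the files in the shared/jobs directory as described above, the two steps look like this:

```shell
# Apply the ACL configuration for the autoscaler.
./shared/jobs/05.autoscaler.config.sh

# Submit the autoscaler job to Nomad.
nomad job run ./shared/jobs/05.autoscaler.nomad.hcl
```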
Deploy HashiCups
Submit the HashiCups job to Nomad.
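Assuming the jobspec is in the same shared/jobs directory, the command looks like this:

```shell
nomad job run ./shared/jobs/05.hashicups.nomad.hcl
```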
Scale the frontend service
Get the public address of the API gateway and export it as the API_GW environment variable.
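For example, substituting your gateway node's public address; the port is an assumption about the gateway listener:

```shell
export API_GW=<api-gateway-public-address>

# Verify that HashiCups responds through the gateway.
curl -s "http://${API_GW}:8080" | head
```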
Open the Nomad UI and log in with the nomad ui -authenticate command. This command opens a web browser window on your machine. Alternatively, you can open the Nomad UI with the IP in Nomad_UI and log in with Nomad_UI_token.
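The command takes no further arguments; it reads your cluster address and token from the environment:

```shell
# Opens the Nomad UI in a browser, pre-authenticated with your ACL token.
nomad ui -authenticate
```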
The hashicups job, which consists of multiple services, appears in the list of jobs.
Click the hashicups job, and then select the frontend task from the list of task groups.
The bottom of this page displays a graph of scaling events. Keep this page open so that you can reference it when scaling starts.
Run the load test script and observe the graph on the frontend task page in the Nomad UI. Observe Nomad create additional allocations when the autoscaler scales the frontend service up, and then remove the allocations as the autoscaler scales the service back down.
In the Consul UI, the number of instances of the frontend service registered in the catalog changes as the autoscaler scales up and down.
In the Consul UI, click the frontend service, and then click the Instances tab to view details about each instance.
Before you clean up your environment, you can re-run the load script and observe changes in the Nomad UI and Consul UI as they occur.
Clean up
After you complete this tutorial, you should clean up the deployment. If you want to keep experimenting with the cluster, you can clean the cluster state without destroying the underlying infrastructure.
When you are finished, we recommend you destroy the infrastructure to avoid unnecessary costs.
Open the terminal session from which you submitted the jobs and stop the deployment when you are ready to move on. The nomad job stop command can accept more than one job.
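For example, assuming the jobs are named hashicups and autoscaler; the -purge flag additionally removes the stopped jobs from the job list:

```shell
nomad job stop -purge hashicups autoscaler
```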
Clean the autoscaler configuration.
Stop the API gateway deployment.
Remove Consul intentions.
Remove Consul and Nomad configuration.
Next steps
In this tutorial, you deployed a version of HashiCups with a modified job definition for the frontend service that instructed the Nomad Autoscaler to scale up and down based on CPU load.
In this collection, you learned how to migrate a monolithic application to microservices and run them in Nomad with Consul. You deployed a cluster running Consul and Nomad, configured access to the CLI and UI components, deployed several versions of the HashiCups application to show different stages of integration with Consul and Nomad, and automatically and independently scaled one of the HashiCups services with the Nomad Autoscaler.
Check out the resources below to learn more and continue your learning and development.