Node Pools
Node pools are a way to group clients and segment infrastructure into logical units that can be targeted by jobs, giving strong control over where allocations are placed.
Without node pools, allocations for a job can be placed in any eligible client in the cluster. Affinities and constraints can help express preferences for certain nodes, but they do not easily prevent other jobs from placing allocations in a set of nodes.
A node pool can be created using the nomad node pool apply command and passing a node pool specification file.
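For example, a specification file such as the hypothetical dev-pool.nomad.hcl below defines a pool named dev:

```hcl
# dev-pool.nomad.hcl (hypothetical file name)
node_pool "dev" {
  description = "Nodes for the development environment."

  # Optional arbitrary metadata attached to the pool.
  meta {
    team = "dev"
  }
}
```

The pool is then created by running nomad node pool apply dev-pool.nomad.hcl.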
Clients can then be added to this node pool by setting the node_pool attribute in their configuration file, or using the equivalent -node-pool command line flag.
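A minimal sketch of a client agent configuration placing the node in the dev pool from the previous example:

```hcl
# Client agent configuration.
client {
  enabled = true

  # Register this node in the "dev" node pool.
  node_pool = "dev"
}
```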
To help streamline this process, nodes can create node pools on demand. If a client configuration references a node pool that does not exist yet, Nomad creates the node pool automatically on client registration.
Jobs can then reference node pools using the node_pool attribute. Similarly to the namespace attribute, the node pool must exist beforehand, otherwise the job registration results in an error. Only nodes in the given node pool are considered for placement. If none are available, the deployment is kept as pending until a client is added to the node pool.
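A minimal job sketch targeting the dev pool (the job, group, and task names and the image are illustrative):

```hcl
job "webapp" {
  # Only clients in the "dev" node pool are considered for placement.
  node_pool = "dev"

  group "web" {
    task "server" {
      driver = "docker"

      config {
        image = "nginx:1.25"
      }
    }
  }
}
```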
Multi-region Clusters
In federated multi-region clusters, node pools are automatically replicated from the authoritative region to all non-authoritative regions, and requests to create or modify node pools are forwarded from non-authoritative regions to the authoritative region.
Since the replication data only flows in one direction, clients in non-authoritative regions are not able to create node pools on demand.
A client in a non-authoritative region that references a node pool that does not exist yet is kept in the initializing status until the node pool is created and replicated to all regions.
Built-in Node Pools
In addition to user-generated node pools, Nomad automatically creates two built-in node pools that cannot be deleted or modified.
default: Node pools are an optional feature of Nomad. The node_pool attribute in both the client configuration and job files is optional. When not specified, these values are set to use the default built-in node pool.

all: In some situations, it is useful to be able to run a job across all clients in a cluster, regardless of their node pool configuration. For these scenarios the job may use the built-in all node pool, which always includes all clients registered in the cluster. Unlike other node pools, the all node pool can only be used in jobs, not in client configuration.
Nomad Enterprise
Nomad Enterprise provides additional features that make node pools more powerful and easier to manage.
Scheduler Configuration
Node pools in Nomad Enterprise are able to customize some aspects of the Nomad scheduler and override certain global configuration per node pool.
This allows experimenting with functionality such as memory oversubscription in isolation, or adjusting the scheduler algorithm between spread and binpack depending on the types of workloads being deployed in a given set of clients.
When using the built-in all node pool, the global scheduler configuration is applied.
Refer to the scheduler_config parameter in the node pool specification for more information.
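As a sketch, a node pool specification overriding the global scheduler configuration might look like the following; the pool name and parameter values are illustrative:

```hcl
node_pool "batch-heavy" {
  description = "Nodes tuned for tightly packed batch workloads."

  # Only available in Nomad Enterprise.
  scheduler_config {
    scheduler_algorithm             = "binpack"
    memory_oversubscription_enabled = true
  }
}
```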
Node Pool Governance
Node pools and namespaces share some similarities, with both providing a way to group resources in isolated logical units. Jobs are grouped into namespaces and clients into node pools.
With the Node Pool Governance feature of Nomad Enterprise, it is possible to automatically link a node pool to a namespace, so a job only needs to specify a namespace and the node pool is inferred from the namespace configuration.
This connection is made using the default parameter of the node_pool_config block in the namespace specification.
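For instance, a namespace specification along these lines (a sketch; names and descriptions are illustrative) links the dev namespace to the dev node pool:

```hcl
# Namespace specification for the "dev" namespace.
name        = "dev"
description = "Namespace for development workloads."

node_pool_config {
  # Jobs in this namespace use the "dev" node pool unless they
  # explicitly specify another one.
  default = "dev"
}
```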
Now any job in the dev namespace only places allocations on nodes in the dev node pool, and so the node_pool attribute may be omitted from the job specification.
Jobs are able to override the namespace default node pool by specifying a different node_pool value.
The namespace can control whether this behavior is allowed, or limit which node pools can and cannot be used, with the allowed and denied parameters.
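A sketch of the same node_pool_config block restricting which pools jobs in the namespace may use (pool names are illustrative):

```hcl
node_pool_config {
  default = "dev"

  # Jobs in this namespace may only use these node pools.
  allowed = ["dev", "qa"]

  # Alternatively, specific pools can be blocked instead:
  # denied = ["prod"]
}
```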
Multi-region Jobs
Multi-region jobs can specify different node pools to be used in each region by overriding the top-level node_pool job value, or the namespace default node pool, in each region block.
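A hedged sketch of a multi-region job overriding the node pool per region (the region and pool names are illustrative):

```hcl
job "multi" {
  # Default node pool, used unless a region overrides it.
  node_pool = "default"

  multiregion {
    region "east" {
      node_pool = "east-services"
    }

    region "west" {
      node_pool = "west-services"
    }
  }

  # ... groups and tasks omitted ...
}
```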
Node Pool Patterns
The sections below describe some node pool patterns that can be used to achieve specific goals.
Infrastructure and System Jobs
It is common for a Nomad cluster to have certain jobs that are focused on providing the underlying infrastructure for more business-focused applications. Some examples include reverse proxies for traffic ingress, CSI plugins, and periodic maintenance jobs.
These jobs can be isolated in their own namespace but they may have different scheduling requirements.
Reverse proxies, and only reverse proxies, may need to run in clients that are exposed to public traffic, and CSI controller plugins may require clients to have high-privileged access to cloud resources and APIs.
Other jobs, like CSI node plugins and periodic maintenance jobs, may need to run as system jobs in all clients of the cluster.
Node pools can be used to achieve the isolation required by the first set of jobs, and the built-in all node pool can be used for the jobs that must run in every client. To keep them organized, all jobs are registered in the same infra namespace.
Use positive and negative constraints to fine-tune placements when targeting the built-in all node pool.
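As a sketch (the job name, node class, and image are hypothetical), a system job in the infra namespace can target the all node pool while using a negative constraint to skip certain clients:

```hcl
job "node-maintenance" {
  namespace = "infra"
  node_pool = "all"
  type      = "system"

  # Negative constraint: skip clients whose (hypothetical) node
  # class marks them as public ingress nodes.
  constraint {
    attribute = "${node.class}"
    operator  = "!="
    value     = "ingress"
  }

  group "maintenance" {
    task "cleanup" {
      driver = "docker"

      config {
        image = "example/cleanup:1.0"
      }
    }
  }
}
```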
With Nomad Enterprise and Node Pool Governance, the infra namespace can be configured to use a specific node pool by default and to only allow the specific node pools required.
Mixed Scheduling Algorithms
The different scheduling algorithms provided by Nomad are best suited for different types of environments and workloads.
The binpack algorithm aims to maximize resource usage and pack as much workload as possible into a given set of clients. This makes it ideal for cloud environments where infrastructure is billed by the hour and can be quickly scaled in and out. By maximizing workload density, a cluster running on cloud instances can reduce the number of clients needed to run everything that is necessary.
The spread algorithm behaves in the opposite direction, making use of every available client to reduce density and the potential for noisy neighbors and resource contention. This makes it ideal for environments where clients are pre-provisioned and scale more slowly, such as on-premises deployments.
Clusters in a mixed environment can use node pools to adjust the scheduler algorithm per node type. Cloud instances may be placed in a node pool that uses the binpack algorithm while bare-metal nodes are placed in a node pool configured to use spread.
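A sketch of this arrangement (pool names are illustrative; each specification would typically live in its own file and be applied separately with nomad node pool apply):

```hcl
# Cloud instances packed tightly to minimize instance count.
node_pool "cloud" {
  scheduler_config {
    scheduler_algorithm = "binpack"
  }
}
```

```hcl
# Pre-provisioned bare-metal nodes spread to reduce contention.
node_pool "metal" {
  scheduler_config {
    scheduler_algorithm = "spread"
  }
}
```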
Another scenario where mixing algorithms may be useful is to separate workloads that are more sensitive to noisy neighbors (and thus use the spread algorithm) from those that are able to be packed more tightly (binpack).
Running Workloads on Server Agents
In general, it is not advisable to run a Nomad agent with both client and server enabled. Some of the reasons to avoid this pattern include:
Resource Usage: Nomad servers can consume a significant amount of memory and CPU in order to store state and schedule workloads. Allocations running on the same machine may reduce the amount of resources available to the servers.
Security: Nomad servers are often restricted to internal network traffic and have tight control over what they run. A client agent may expose the node to arbitrary execution of jobs.
Despite these important caveats, sometimes it may be useful to run specific operations across an entire fleet of nodes, servers included.
Node pools can help prevent unexpected placement of allocations in client agents running alongside servers.
Jobs can target server agents using the servers or all node pools.
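As a sketch (assuming a user-created node pool named servers), an agent running both roles can isolate its client in that pool so that only jobs explicitly targeting it are placed there:

```hcl
# Agent configuration running both server and client.
server {
  enabled          = true
  bootstrap_expect = 3
}

client {
  enabled = true

  # Place this client in the "servers" node pool so that jobs in
  # other pools are never scheduled here.
  node_pool = "servers"
}
```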