Configure Default Memory Requests and Limits for a Namespace
Define a default memory resource limit for a namespace, so that every new Pod in that namespace has a memory resource limit configured.
This page shows how to configure default memory requests and limits for a namespace.
A Kubernetes cluster can be divided into namespaces. If a namespace has a default memory limit and you create a Pod with a container that does not specify its own memory limit, the control plane assigns the default memory limit to that container.
Kubernetes assigns a default memory request under certain conditions that are explained later in this topic.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this task on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube, or you can use one of the online Kubernetes playgrounds.
You must have access to create namespaces in your cluster.
Each node in your cluster must have at least 2 GiB of memory.
Create a namespace
Create a namespace so that the resources you create in this exercise are isolated from the rest of your cluster.
kubectl create namespace default-mem-example
Create a LimitRange and a Pod
Here’s a manifest for an example LimitRange. The manifest specifies a default memory request and a default memory limit.
admin/resource/memory-defaults.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container
Create the LimitRange in the default-mem-example namespace:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults.yaml --namespace=default-mem-example
Now if you create a Pod in the default-mem-example namespace, and any container within that Pod does not specify its own values for memory request and memory limit, then the control plane applies default values: a memory request of 256 MiB and a memory limit of 512 MiB.
Here’s an example manifest for a Pod that has one container. The container does not specify a memory request and limit.
admin/resource/memory-defaults-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo
spec:
  containers:
  - name: default-mem-demo-ctr
    image: nginx
Create the Pod.
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod.yaml --namespace=default-mem-example
View detailed information about the Pod:
kubectl get pod default-mem-demo --output=yaml --namespace=default-mem-example
The output shows that the Pod’s container has a memory request of 256 MiB and a memory limit of 512 MiB. These are the default values specified by the LimitRange.
containers:
- image: nginx
  imagePullPolicy: Always
  name: default-mem-demo-ctr
  resources:
    limits:
      memory: 512Mi
    requests:
      memory: 256Mi
Delete your Pod:
kubectl delete pod default-mem-demo --namespace=default-mem-example
What if you specify a container’s limit, but not its request?
Here’s a manifest for a Pod that has one container. The container specifies a memory limit, but not a request:
admin/resource/memory-defaults-pod-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo-2
spec:
  containers:
  - name: default-mem-demo-2-ctr
    image: nginx
    resources:
      limits:
        memory: "1Gi"
Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-2.yaml --namespace=default-mem-example
View detailed information about the Pod:
kubectl get pod default-mem-demo-2 --output=yaml --namespace=default-mem-example
The output shows that the container’s memory request is set to match its memory limit. Notice that the container was not assigned the default memory request value of 256Mi.
resources:
  limits:
    memory: 1Gi
  requests:
    memory: 1Gi
What if you specify a container’s request, but not its limit?
Here’s a manifest for a Pod that has one container. The container specifies a memory request, but not a limit:
admin/resource/memory-defaults-pod-3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-mem-demo-3
spec:
  containers:
  - name: default-mem-demo-3-ctr
    image: nginx
    resources:
      requests:
        memory: "128Mi"
Create the Pod:
kubectl apply -f https://k8s.io/examples/admin/resource/memory-defaults-pod-3.yaml --namespace=default-mem-example
View the Pod’s specification:
kubectl get pod default-mem-demo-3 --output=yaml --namespace=default-mem-example
The output shows that the container’s memory request is set to the value specified in the container’s manifest. The container is limited to use no more than 512 MiB of memory, which matches the default memory limit for the namespace.
resources:
  limits:
    memory: 512Mi
  requests:
    memory: 128Mi
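Taken together, the three cases above follow a simple rule set. Here is a minimal Python sketch of that defaulting behavior (illustrative only — the real defaulting is performed by the LimitRanger admission plugin inside the API server; the function name and dict shapes are assumptions for this example):

```python
def apply_memory_defaults(container, default_limit="512Mi", default_request="256Mi"):
    """Illustrative sketch of how a LimitRange defaults one container's memory.

    Not the real admission-plugin code; it only mirrors the three cases
    demonstrated on this page.
    """
    original_limits = container.get("resources", {}).get("limits", {})
    limits = dict(original_limits)
    requests = dict(container.get("resources", {}).get("requests", {}))

    if "memory" not in limits:
        # No limit specified: the namespace default limit is applied.
        limits["memory"] = default_limit
    if "memory" not in requests:
        if "memory" in original_limits:
            # Limit specified but request not: the request defaults to the
            # container's own limit, not to the namespace default request.
            requests["memory"] = original_limits["memory"]
        else:
            # Neither specified: the namespace default request is applied.
            requests["memory"] = default_request
    return {"limits": limits, "requests": requests}

# The three cases from this page:
print(apply_memory_defaults({"name": "c1"}))
# neither set -> limit 512Mi, request 256Mi (namespace defaults)
print(apply_memory_defaults({"name": "c2", "resources": {"limits": {"memory": "1Gi"}}}))
# limit only -> request matches the limit (1Gi)
print(apply_memory_defaults({"name": "c3", "resources": {"requests": {"memory": "128Mi"}}}))
# request only -> request 128Mi kept, limit defaults to 512Mi
```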
Note:
A LimitRange does not check the consistency of the default values it applies. This means that a default limit value set by a LimitRange may be less than the request value specified for the container in the spec that a client submits to the API server. If that happens, the final Pod will not be schedulable. See Constraints on resource limits and requests for more details.
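This pitfall can be made concrete with a short sketch (hypothetical helper; memory quantities simplified to integer MiB): the default limit is applied without being compared against an explicit request, and the resulting Pod is only valid if the request does not exceed the limit:

```python
def defaulted_pod_is_valid(request_mib, default_limit_mib=512):
    """Illustrative check only, with quantities simplified to MiB integers."""
    # The container set a request but no limit, so the LimitRange applies the
    # namespace default limit without checking it against the request.
    limit_mib = default_limit_mib
    # A container spec is only valid if its request does not exceed its limit.
    return request_mib <= limit_mib

print(defaulted_pod_is_valid(128))  # True: request 128Mi <= defaulted limit 512Mi
print(defaulted_pod_is_valid(600))  # False: request 600Mi > defaulted limit 512Mi
```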
Motivation for default memory limits and requests
If your namespace has a memory resource quota configured, it is helpful to have a default value in place for memory limit. Here are three of the restrictions that a resource quota imposes on a namespace:
- For every Pod that runs in the namespace, the Pod and each of its containers must have a memory limit. (If you specify a memory limit for every container in a Pod, Kubernetes can infer the Pod-level memory limit by adding up the limits for its containers).
- Memory limits apply a resource reservation on the node where the Pod in question is scheduled. The total amount of memory reserved for all Pods in the namespace must not exceed a specified limit.
- The total amount of memory actually used by all Pods in the namespace must also not exceed a specified limit.
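For the first restriction, the Pod-level memory limit can be inferred by summing the per-container limits, but only when every container has one — a sketch (hypothetical helper; amounts simplified to integer MiB):

```python
def pod_memory_limit_mib(container_limits_mib):
    """Illustrative: infer a Pod-level memory limit from container limits (MiB)."""
    if any(limit is None for limit in container_limits_mib):
        # At least one container has no memory limit, so no Pod-level
        # limit can be inferred.
        return None
    # Every container has a limit: the Pod-level limit is their sum.
    return sum(container_limits_mib)

print(pod_memory_limit_mib([512, 256]))  # 768: both containers have limits
print(pod_memory_limit_mib([512, None]))  # None: one container has no limit
```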
When you add a LimitRange:
If any container in a Pod in that namespace does not specify its own memory limit, the control plane applies the default memory limit to that container, and the Pod is then allowed to run in a namespace that is restricted by a memory ResourceQuota.
Clean up
Delete your namespace:
kubectl delete namespace default-mem-example
What’s next
For cluster administrators
- Configure Minimum and Maximum Memory Constraints for a Namespace
- Configure Minimum and Maximum CPU Constraints for a Namespace
For app developers
Last modified October 30, 2024 at 5:17 PM PST: KEP 2837: Pod Level Resources Alpha (0374213f57)