Dear friends, today we will see how to manage containers in Kubernetes, step by step.
In this blog, we will explore the core principles, tools, and best practices for managing containers in Kubernetes. Whether you’re new to the ecosystem or looking to sharpen your skills, this guide will give you a solid foundation in Kubernetes container management.

Kubernetes (K8s) has become the de facto standard for container orchestration, allowing organizations to efficiently deploy, manage, and scale containerized applications. If you’re working in cloud-native environments or adopting DevOps practices, understanding how to manage containers using Kubernetes is essential.
Why Use Kubernetes to Manage Containers?
Before diving into the how, it’s helpful to understand the why. Managing a few containers manually with the Docker CLI may seem simple enough, but things quickly spiral out of control as your system grows. Kubernetes addresses this challenge by providing:
- Automated deployment and scaling
- Self-healing and failover capabilities
- Service discovery and load balancing
- Secrets and configuration management
- Rolling updates and rollbacks
These features allow developers and operators to maintain complex, distributed systems with less effort and higher reliability.
Core Concepts of Kubernetes
To manage containers effectively in Kubernetes, you must understand the building blocks that the platform uses:
1. Pod
The smallest deployable unit in Kubernetes. A pod can contain one or more containers that share the same network namespace and storage.
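For instance, a minimal single-container pod can be described with a manifest like the sketch below (the names and image tag are purely illustrative):
# minimal-pod.yaml -- a single-container pod (illustrative names)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod          # hypothetical pod name
  labels:
    app: hello
spec:
  containers:
  - name: hello            # container name within the pod
    image: nginx:1.25.3    # pin a specific tag rather than latest
    ports:
    - containerPort: 80    # port the container listens on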
2. Deployment
A controller that manages stateless applications. It ensures the desired number of pod replicas are running and manages rolling updates.
3. StatefulSet
Similar to Deployments, but designed for stateful applications. Pods have stable, persistent identities and storage.
4. DaemonSet
Ensures a copy of a pod runs on all (or some) nodes in the cluster. Useful for logging and monitoring agents.
5. Job and CronJob
For batch and scheduled jobs respectively. Jobs ensure that a pod completes successfully at least once.
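As a quick sketch, a CronJob that runs a small container every five minutes might look like this (the name, image, and schedule are just examples):
# hello-cronjob.yaml -- runs a batch container on a schedule (illustrative)
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello-cron               # hypothetical job name
spec:
  schedule: "*/5 * * * *"        # standard cron syntax: every five minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.36
            command: ["sh", "-c", "echo hello from the cluster"]
          restartPolicy: OnFailure   # Jobs require Never or OnFailure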
6. Service
An abstraction that defines a logical set of pods and a policy by which to access them. Services enable communication between microservices.
Managing Containers Step-by-Step
Here’s how to manage containers in Kubernetes effectively.
1. Deploying a Container
The first step is creating a Deployment. You can define deployments using YAML manifests or the kubectl CLI.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Apply the manifest:
kubectl apply -f nginx-deployment.yaml
This tells Kubernetes to keep three replicas of the NGINX container running.
2. Exposing a Container
To make your container accessible, expose it via a Service.
kubectl expose deployment nginx-deployment --port=80 --type=LoadBalancer
Kubernetes creates a Service object, assigns a stable IP, and (if using a cloud provider) provisions a load balancer.
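If you prefer a declarative approach, roughly the same Service can be written as a manifest; this is a sketch, and the selector must match the pod labels defined in the deployment:
# nginx-service.yaml -- declarative equivalent of the expose command (sketch)
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment      # kubectl expose names the Service after the deployment
spec:
  type: LoadBalancer          # provisions an external load balancer on cloud providers
  selector:
    app: nginx                # must match the pod labels from the deployment
  ports:
  - port: 80                  # port exposed by the Service
    targetPort: 80            # port on the container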
3. Scaling Containers
Kubernetes makes scaling easy. You can manually scale the number of replicas:
kubectl scale deployment nginx-deployment --replicas=5
Or you can set up Horizontal Pod Autoscaling based on CPU or memory usage.
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=10
This keeps your application responsive under varying loads.
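The same autoscaler can also be declared as a manifest. Here is a minimal sketch using the autoscaling/v2 API (it assumes the metrics-server add-on is installed so CPU metrics are available):
# nginx-hpa.yaml -- HorizontalPodAutoscaler targeting the deployment (sketch)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50     # scale out when average CPU passes 50%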
4. Rolling Updates and Rollbacks
Kubernetes supports zero-downtime updates by default.
To update the image:
kubectl set image deployment/nginx-deployment nginx=nginx:1.25.0
To roll back:
kubectl rollout undo deployment/nginx-deployment
You can also monitor rollout status:
kubectl rollout status deployment/nginx-deployment
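The rolling behavior itself is tunable in the deployment manifest. As a sketch, the following excerpt of a Deployment spec keeps the full replica count available during an update:
# excerpt of a Deployment spec -- tuning the default RollingUpdate strategy (sketch)
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most one extra pod above the desired count during an update
      maxUnavailable: 0     # never drop below the desired count (zero-downtime)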
5. Monitoring and Logging
You can check the status of your containers using:
kubectl get pods
kubectl describe pod <pod-name>
kubectl logs <pod-name>
For centralized monitoring and logging, integrate tools like Prometheus, Grafana, Fluentd, or ELK Stack.
6. Health Checks
Kubernetes uses liveness and readiness probes to manage container health.
Example readiness probe:
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
Readiness checks determine if a pod is ready to serve traffic, while liveness checks help restart misbehaving containers automatically.
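A liveness probe follows the same pattern. The sketch below assumes your application exposes an HTTP health endpoint at /healthz; Kubernetes restarts the container after repeated failures:
livenessProbe:
  httpGet:
    path: /healthz           # hypothetical health endpoint exposed by the application
    port: 80
  initialDelaySeconds: 15    # give the app time to start before probing
  periodSeconds: 20          # probe every 20 seconds
  failureThreshold: 3        # restart after three consecutive failures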
7. Configuring Containers
Use ConfigMaps and Secrets to manage configuration data and sensitive information without hardcoding them into images.
Create a ConfigMap:
kubectl create configmap app-config --from-literal=ENV=production
Reference it in a pod’s environment:
env:
- name: ENV
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: ENV
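Secrets work the same way for sensitive values. A minimal sketch (the secret name, key, and value here are illustrative):
Create a Secret:
kubectl create secret generic app-secret --from-literal=DB_PASSWORD=changeme
Reference it in a pod:
env:
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secret       # hypothetical Secret name
      key: DB_PASSWORD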
8. Resource Management
Limit resource usage to avoid contention.
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
Requests are the amounts the scheduler reserves for a container; limits are hard caps it cannot exceed.
9. Networking and Security
Use Network Policies to control traffic flow. RBAC (Role-Based Access Control) controls what users and services can do within the cluster.
Example RBAC Role:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
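Note that a Role only takes effect once it is bound to a user or service account through a RoleBinding. For the traffic side, here is a sketch of a NetworkPolicy that only admits ingress to the nginx pods from pods labeled app: frontend (the labels are illustrative, and enforcement requires a CNI plugin that supports NetworkPolicy):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend          # hypothetical policy name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: nginx                # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 80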
Best Practices for Managing Containers
- Use namespaces to separate environments (e.g., dev, staging, prod).
- Always use specific image tags (not latest) to avoid unpredictability.
- Implement resource limits to prevent noisy neighbors.
- Use readiness and liveness probes to ensure healthy containers.
- Keep manifests in version control and use tools like Helm or Kustomize.
- Automate deployments using CI/CD pipelines (e.g., ArgoCD, GitLab CI).
Conclusion
Managing containers in Kubernetes involves more than just spinning up pods—it requires thoughtful configuration, monitoring, and scaling strategies to run applications efficiently and reliably. By leveraging Kubernetes abstractions like Deployments, Services, and ConfigMaps, and adhering to best practices, you can build robust and scalable cloud-native applications.
Whether you’re a developer or an operations engineer, mastering container management in Kubernetes is a vital skill in today’s infrastructure landscape. As your expertise grows, so will your ability to harness the full power of Kubernetes.