# Day 5: Networking with Services
## What You'll Learn Today
- The problem of unstable Pod IPs and how Services solve it
- Four Service types (ClusterIP, NodePort, LoadBalancer, ExternalName)
- Service discovery and DNS
- Practical Service configurations
## Why Services Are Needed
In the Docker book, Docker Compose let containers reach each other by service name. Kubernetes needs the same capability, but a multi-node cluster adds challenges of its own.
```mermaid
flowchart TB
    subgraph Problem["The Problem"]
        P1["Pod A\nIP: 10.0.0.5"]
        P2["Pod B\nIP: 10.0.0.6"]
        P3["Pod C\nIP: 10.0.0.7"]
    end
    subgraph After["After Recreation"]
        P4["Pod A'\nIP: 10.0.0.12"]
        P5["Pod B'\nIP: 10.0.0.15"]
        P6["Pod C'\nIP: 10.0.0.18"]
    end
    Problem -->|"Pods get new IPs\nwhen recreated!"| After
    style Problem fill:#ef4444,color:#fff
    style After fill:#f59e0b,color:#fff
```
| Problem | Description |
|---|---|
| Unstable IPs | Pods get new IP addresses on every recreation |
| Multiple Pods | Which Pod should receive traffic when there are replicas? |
| Load balancing | Need to distribute traffic across multiple Pods |
Services provide a stable access point that solves all these problems.
## How Services Work
A Service uses label selectors to target Pods and provides a stable IP address and DNS name.
```mermaid
flowchart TB
    CLIENT["Client"] --> SVC["Service\nweb-service\nClusterIP: 10.96.0.100"]
    SVC --> P1["Pod 1\n10.0.0.5"]
    SVC --> P2["Pod 2\n10.0.0.6"]
    SVC --> P3["Pod 3\n10.0.0.7"]
    style SVC fill:#3b82f6,color:#fff
    style P1 fill:#22c55e,color:#fff
    style P2 fill:#22c55e,color:#fff
    style P3 fill:#22c55e,color:#fff
```
## Service Types

### 1. ClusterIP (Default)
Assigns a virtual IP accessible only from within the cluster.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
```
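The Service port and the container port do not have to match. A minimal sketch of a variant where they differ (the name `api-service`, the label `app: api`, and port 8080 are illustrative, not taken from the example above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-service      # hypothetical name for illustration
spec:
  type: ClusterIP
  selector:
    app: api             # assumes Pods labeled app: api
  ports:
    - port: 80           # port clients inside the cluster connect to
      targetPort: 8080   # port the container actually listens on
```

Clients still reach the Service on port 80; traffic is forwarded to port 8080 on the matching Pods.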
### 2. NodePort
Exposes the Service on a static port on each node.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # Range: 30000-32767
```
```mermaid
flowchart TB
    EXT["External Access\nhttp://NodeIP:30080"] --> NODE["Node\nPort 30080"]
    NODE --> SVC["Service\nPort 80"]
    SVC --> P1["Pod 1"]
    SVC --> P2["Pod 2"]
    style EXT fill:#8b5cf6,color:#fff
    style SVC fill:#3b82f6,color:#fff
```
### 3. LoadBalancer
Automatically provisions a cloud provider load balancer.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```
### 4. ExternalName
Creates a DNS alias to an external service.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com
```
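Inside the cluster, `external-db` now resolves as a DNS CNAME to `db.example.com`, so application Pods can use the short name. A sketch of how a container might consume it (the `DATABASE_HOST` variable is a hypothetical example, not part of the manifest above):

```yaml
# Fragment of a container spec; the app connects to "external-db:5432",
# which DNS resolves to db.example.com via the ExternalName Service.
env:
  - name: DATABASE_HOST
    value: external-db
```

If the external hostname ever changes, only the Service needs updating, not every consumer.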
## Service Type Comparison
| Type | Access From | External IP | Use Case |
|---|---|---|---|
| ClusterIP | Inside cluster only | None | Internal microservice communication |
| NodePort | External via node IP | None (uses node IP) | Development & testing |
| LoadBalancer | External via LB | Yes | Production external access |
| ExternalName | Inside cluster | - | DNS reference to external services |
## Service Discovery

### DNS Name Resolution
CoreDNS runs inside the cluster and automatically registers Services.
```
<service-name>.<namespace>.svc.cluster.local
```
```bash
# Access from the same namespace
curl http://web-service

# Access from a different namespace (fully qualified name)
curl http://web-service.default.svc.cluster.local

# Short form (namespace included, works from any namespace)
curl http://web-service.default
```
### Environment Variables
When a Pod starts, connection details for every Service that already exists in the same namespace are injected as environment variables.
```bash
# Inside the Pod
env | grep WEB_SERVICE
# WEB_SERVICE_SERVICE_HOST=10.96.0.100
# WEB_SERVICE_SERVICE_PORT=80
```
DNS-based discovery is more flexible and recommended.
## Practical Example: Web App + Database

Let's build a web application with a database, similar to what we did with Docker Compose, but in Kubernetes.

### 1. Database Deployment and Service
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB
              value: myapp
            - name: POSTGRES_USER
              value: admin
            - name: POSTGRES_PASSWORD
              value: secret123
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-service
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
```
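Putting the password directly in the manifest is fine for a tutorial, but not for real use. A minimal sketch of moving it into a Secret (the name `postgres-credentials` is an assumption, not part of the example above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials   # assumed name, not from the example above
type: Opaque
stringData:
  POSTGRES_PASSWORD: secret123

# In the Deployment, the plain-text env entry would then become:
# env:
#   - name: POSTGRES_PASSWORD
#     valueFrom:
#       secretKeyRef:
#         name: postgres-credentials
#         key: POSTGRES_PASSWORD
```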
### 2. Web App Deployment and Service
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
          env:
            - name: DATABASE_HOST
              value: postgres-service
            - name: DATABASE_PORT
              value: "5432"
---
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```
```mermaid
flowchart TB
    EXT["External Access\n:30080"] --> WEBSVC["web-service\nNodePort"]
    WEBSVC --> W1["Web Pod 1"]
    WEBSVC --> W2["Web Pod 2"]
    WEBSVC --> W3["Web Pod 3"]
    W1 --> DBSVC["postgres-service\nClusterIP"]
    W2 --> DBSVC
    W3 --> DBSVC
    DBSVC --> DB["PostgreSQL Pod"]
    style WEBSVC fill:#3b82f6,color:#fff
    style DBSVC fill:#8b5cf6,color:#fff
    style DB fill:#22c55e,color:#fff
```
## Endpoints
Services track Pod IP addresses through Endpoints objects.
```bash
# Check endpoints
kubectl get endpoints web-service
# NAME          ENDPOINTS                             AGE
# web-service   10.0.0.5:80,10.0.0.6:80,10.0.0.7:80   5m
```
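The selector-to-Endpoints link also works in reverse: a Service defined without a selector gets no Endpoints automatically, and you can supply them by hand, for example to give an in-cluster DNS name to a server running outside the cluster. A sketch under assumed values (the name `legacy-db` and the IP address are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-db          # illustrative name; no selector below
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db          # must match the Service name exactly
subsets:
  - addresses:
      - ip: 192.168.1.50   # example address outside the cluster
    ports:
      - port: 5432
```

Pods can then connect to `legacy-db:5432` just as they would to any in-cluster Service.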
## Summary
| Concept | Description |
|---|---|
| Service | Provides stable access point to Pods |
| ClusterIP | Internal only. Default Service type |
| NodePort | External access via node port. For dev/testing |
| LoadBalancer | External access via cloud LB. For production |
| Service Discovery | Access Services by DNS name |
| Endpoints | Object tracking Pod IPs for a Service |
## Key Takeaways
- Pod IPs are unstable; use Services for stable access
- ClusterIP for internal communication, NodePort or LoadBalancer for external access
- DNS names (`<service-name>.<namespace>`) enable Service-to-Service communication
## Practice Exercises

### Exercise 1: Basics
Create a Deployment (replicas: 1) and ClusterIP Service for Redis on port 6379.
### Exercise 2: Multi-Service
Create frontend (nginx, replicas: 3, NodePort) and backend (httpd, replicas: 2, ClusterIP) Deployments with Services. Verify that frontend Pods can curl the backend Service.
### Challenge
Compare communication using direct Pod IPs vs. Services. Delete and recreate Pods to observe what happens in each case.
Next up: In Day 6, you'll learn about "Storage and Data Persistence", covering how Kubernetes handles the Volume concepts from the Docker book using PersistentVolumes and PersistentVolumeClaims.