Istio Installation & Architecture: Your First Service Mesh
When your Kubernetes cluster grows beyond a handful of services, you start running into problems that Kubernetes itself was never designed to solve: encrypted service-to-service communication, fine-grained traffic routing, distributed tracing, and centralized policy enforcement. A service mesh addresses all of these by injecting a network proxy alongside each of your workloads, creating an infrastructure layer that handles the complexity of service communication without changing your application code. Organizations running dozens or hundreds of microservices find that the alternative --- implementing these capabilities in each service individually through shared libraries --- leads to inconsistent behavior, language-specific implementations, and a maintenance burden that grows linearly with the number of services.
Istio is the most widely adopted service mesh and integrates deeply with Kubernetes. This guide covers its architecture in detail, all installation methods with production-ready configurations, sidecar injection patterns, gateway setup, resource optimization, and the operational considerations you need to understand before deploying Istio in production.
What Is a Service Mesh and Why You Need One
A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It moves networking concerns out of application code and into the infrastructure, where they can be managed uniformly across all services regardless of language or framework.
The core capabilities a service mesh provides:
- Mutual TLS (mTLS) --- Automatic encryption and identity verification between services, giving you zero-trust networking without code changes
- Traffic management --- Canary deployments, traffic splitting, retries, timeouts, and circuit breaking without application changes
- Observability --- Automatic metrics, distributed tracing, and access logging for all traffic flowing through the mesh
- Policy enforcement --- Authorization policies that control which services can communicate with each other and what operations they can perform
Without a mesh, each team implements these capabilities inconsistently in their application code or shared libraries. A Java team might use Resilience4j for circuit breaking, while a Go team writes custom retry logic, and a Python team has no resilience patterns at all. With a mesh, these become infrastructure concerns handled uniformly across every service.
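As a concrete example of networking-as-infrastructure, the sketch below shows a single mesh-wide policy that enforces mTLS for every workload with no application changes. Mesh security is covered in depth later in this series; this resource is illustrative, not something to apply blindly:

apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # Mesh-wide when placed in the root namespace
spec:
  mtls:
    mode: STRICT             # Reject plaintext service-to-service traffic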
When You Need a Service Mesh
Not every cluster needs a service mesh. Consider Istio when:
| Signal | Why It Matters |
|---|---|
| More than 10 microservices | Mesh benefits compound with service count |
| Multiple teams deploying independently | Consistent policies across teams |
| Compliance requirements (PCI, HIPAA, SOC2) | mTLS and audit logging |
| Canary deployments needed | Traffic splitting without app changes |
| Debugging distributed systems is painful | Distributed tracing and service graph |
| Services in multiple languages | Language-agnostic networking features |
If you have a monolith or a small number of services with a single team, the operational overhead of a service mesh likely outweighs the benefits.
Istio Architecture
Istio has two main components: the control plane and the data plane. Understanding their interaction is essential for troubleshooting and capacity planning.
Control Plane: istiod
The control plane is consolidated into a single binary called istiod, which handles three major responsibilities:
| Component | Responsibility | Impact of Failure |
|---|---|---|
| Pilot | Converts routing rules into Envoy configuration, pushes config to all sidecars via xDS API | No config updates, but existing config continues to work |
| Citadel | Certificate authority --- issues and rotates mTLS certificates for all workloads | Certificate rotation fails, eventual mTLS failures |
| Galley | Configuration validation and processing | Invalid configs may not be caught |
All three run within the istiod process. In earlier Istio versions (pre-1.5), these were separate deployments. The consolidation into a single process reduced operational complexity and resource overhead significantly.
istiod communicates with the Envoy sidecars using the xDS APIs, where the "x" stands for the type of resource being discovered. These include:
- LDS (Listener Discovery) --- What ports to listen on
- RDS (Route Discovery) --- How to route requests
- CDS (Cluster Discovery) --- What upstream clusters exist
- EDS (Endpoint Discovery) --- Which pod IPs belong to each cluster
- SDS (Secret Discovery) --- TLS certificates and keys
When you apply an Istio configuration resource (VirtualService, DestinationRule, etc.), istiod translates it into Envoy configuration and pushes it to all relevant sidecars. This push typically takes milliseconds for small meshes and a few seconds for large meshes with thousands of sidecars.
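You can inspect the result of those pushes per xDS layer with istioctl proxy-config, whose subcommands map one-to-one onto the discovery services above (the pod name is a placeholder matching the examples later in this guide):

# Each subcommand shows what one xDS layer delivered to the sidecar
istioctl proxy-config listeners myapp-5d8f9c7b4a-abc12.default   # LDS
istioctl proxy-config routes myapp-5d8f9c7b4a-abc12.default      # RDS
istioctl proxy-config clusters myapp-5d8f9c7b4a-abc12.default    # CDS
istioctl proxy-config endpoints myapp-5d8f9c7b4a-abc12.default   # EDS
istioctl proxy-config secret myapp-5d8f9c7b4a-abc12.default      # SDS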
Data Plane: Envoy Sidecars
Every pod in the mesh gets an Envoy proxy injected as a sidecar container. This proxy intercepts all inbound and outbound network traffic for the pod using iptables rules. The sidecar:
- Terminates and originates mTLS connections
- Enforces routing rules from VirtualService and DestinationRule resources
- Collects metrics (request count, latency, error rate) and reports them to Prometheus
- Propagates tracing headers and generates trace spans
- Enforces authorization policies
- Implements circuit breaking, retries, and timeouts
Inbound traffic flow:
Client Pod --mTLS--> [ iptables ] --> [ Envoy Sidecar ] --> [ App Container ]
Outbound traffic flow:
[ App Container ] --> [ iptables ] --> [ Envoy Sidecar ] --mTLS--> Destination Pod
Traffic interception works through iptables rules injected by the istio-init init container. These rules redirect all TCP traffic to the Envoy proxy on ports 15001 (outbound) and 15006 (inbound). The application is completely unaware that a proxy exists.
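You can verify the interception setup on a running pod: the istio-init container logs the iptables rules it installed, and istioctl can show the Envoy listeners bound to the capture ports (the pod name is a placeholder):

# The init container logs the iptables redirect rules it applied
kubectl logs myapp-5d8f9c7b4a-abc12 -c istio-init | head -40
# Envoy's outbound (15001) and inbound (15006) capture listeners
istioctl proxy-config listeners myapp-5d8f9c7b4a-abc12.default --port 15001
istioctl proxy-config listeners myapp-5d8f9c7b4a-abc12.default --port 15006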
Ambient Mesh: Sidecar-Less Architecture
Ambient mode, which reached Beta in Istio 1.22, offers an alternative to sidecar injection. Instead of a per-pod proxy, ambient mesh uses:
- ztunnel --- A per-node L4 proxy, deployed as a DaemonSet, that handles mTLS and basic network policy
- waypoint proxies --- Optional L7 proxies deployed per-namespace or per-service for advanced traffic management
The ambient architecture significantly reduces resource overhead since you deploy far fewer proxy instances. However, it is still maturing and not yet recommended for production workloads that need full L7 traffic management.
| Feature | Sidecar Mesh | Ambient Mesh |
|---|---|---|
| mTLS | Yes | Yes (via ztunnel) |
| L7 traffic management | Yes | Yes (via waypoint) |
| Resource overhead | High (1 proxy per pod) | Low (1 ztunnel per node) |
| Maturity | Production-ready | Beta |
| Startup latency | Higher (sidecar init) | Lower |
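If you want to experiment with ambient mode in a non-production cluster, the sketch below shows the basic flow, assuming Istio 1.22 or later: install the ambient profile, then enroll namespaces by label instead of injecting sidecars.

# Install the ambient profile (adds ztunnel and the Istio CNI agent)
istioctl install --set profile=ambient -y
# Enroll a namespace; no sidecar injection or pod restarts required
kubectl label namespace default istio.io/dataplane-mode=ambient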
Installation Methods
Prerequisites
- Kubernetes cluster (1.27+)
- kubectl configured and working
- Cluster admin permissions (for CRDs and webhooks)
- At least 2 CPU cores and 4GB RAM available for Istio components
- For production: 4+ CPU cores and 8GB+ RAM recommended
Verify your cluster is ready:
# Check Kubernetes version
kubectl version   # the --short flag was removed in kubectl 1.28+
# Check available resources
kubectl top nodes
# Check for any existing mesh installations
kubectl get namespace istio-system 2>/dev/null && echo "istio-system exists" || echo "Clean install"
Method 1: istioctl (Recommended for Getting Started)
# Download istioctl
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.22.0 sh -
cd istio-1.22.0
export PATH=$PWD/bin:$PATH
# Verify the cluster is ready for Istio
istioctl x precheck
# Install with the default profile
istioctl install --set profile=default -y
# Verify installation
istioctl verify-install
# Check component health
kubectl get pods -n istio-system
kubectl get svc -n istio-system
For a customized installation:
# Install with custom configuration
istioctl install -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
    accessLogEncoding: JSON
    enableTracing: true
    defaultConfig:
      tracing:
        sampling: 10.0
      holdApplicationUntilProxyStarts: true
      proxyMetadata:
        ISTIO_META_DNS_CAPTURE: "true"
        ISTIO_META_DNS_AUTO_ALLOCATE: "true"
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
          limits:
            cpu: "2"
            memory: 4Gi
        hpaSpec:
          minReplicas: 2
          maxReplicas: 5
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "2"
            memory: 1Gi
        hpaSpec:
          minReplicas: 2
          maxReplicas: 10
        service:
          type: LoadBalancer
          ports:
          - port: 80
            targetPort: 8080
            name: http2
          - port: 443
            targetPort: 8443
            name: https
  values:
    global:
      proxy:
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 256Mi
EOF
Method 2: Helm (Recommended for Production)
Helm gives you the most control over the installation and integrates well with GitOps workflows:
# Add the Istio Helm repository
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
# Create the istio-system namespace
kubectl create namespace istio-system
# Step 1: Install the Istio base chart (CRDs and cluster-wide resources)
helm install istio-base istio/base \
--namespace istio-system \
--set defaultRevision=default \
--wait
# Step 2: Install istiod (control plane)
helm install istiod istio/istiod \
--namespace istio-system \
--values istiod-values.yaml \
--wait
# Step 3: Install the ingress gateway
kubectl create namespace istio-ingress
helm install istio-ingress istio/gateway \
--namespace istio-ingress \
--values gateway-values.yaml \
--wait
Production istiod-values.yaml:
# istiod-values.yaml
pilot:
  autoscaleEnabled: true
  autoscaleMin: 2
  autoscaleMax: 5
  resources:
    requests:
      cpu: 500m
      memory: 2Gi
    limits:
      cpu: "2"
      memory: 4Gi
  env:
    # Increase pilot push throttle for large meshes
    PILOT_PUSH_THROTTLE: "100"
    PILOT_DEBOUNCE_AFTER: "100ms"
    PILOT_DEBOUNCE_MAX: "1s"
    # Enable locality load balancing
    PILOT_ENABLE_LOCALITY_LOAD_BALANCING: "true"
meshConfig:
  accessLogFile: /dev/stdout
  accessLogEncoding: JSON
  enableTracing: true
  defaultConfig:
    holdApplicationUntilProxyStarts: true
    tracing:
      sampling: 1.0
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
      ISTIO_META_DNS_AUTO_ALLOCATE: "true"
global:
  proxy:
    resources:
      requests:
        cpu: 50m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 256Mi
    # Log level for sidecar proxies
    logLevel: warning
    # Lifecycle hooks for graceful shutdown
    lifecycle:
      preStop:
        exec:
          command:
          - "/bin/sh"
          - "-c"
          - "sleep 5"
    # Enable protocol detection
    protocolDetection:
      timeout: 100ms
Production gateway-values.yaml:
# gateway-values.yaml
replicaCount: 2
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
resources:
  requests:
    cpu: 500m
    memory: 512Mi
  limits:
    cpu: "2"
    memory: 1Gi
service:
  type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
  ports:
  - name: http2
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
podDisruptionBudget:
  minAvailable: 1
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: istio-ingress
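Because each chart owns its values file, later configuration changes flow through the same mechanism, which is what makes this approach GitOps-friendly; a typical update is just a helm upgrade with the edited values:

# Roll out edited values to the control plane and gateway
helm upgrade istiod istio/istiod -n istio-system --values istiod-values.yaml --wait
helm upgrade istio-ingress istio/gateway -n istio-ingress --values gateway-values.yaml --wait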
Method 3: Istio Operator
The operator watches for IstioOperator custom resources and reconciles the installation:
# Install the operator
istioctl operator init
# Apply an IstioOperator resource
kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio-control-plane
spec:
  profile: default
  components:
    pilot:
      k8s:
        resources:
          requests:
            cpu: 500m
            memory: 2Gi
        hpaSpec:
          minReplicas: 2
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        hpaSpec:
          minReplicas: 2
  meshConfig:
    accessLogFile: /dev/stdout
    enableTracing: true
    defaultConfig:
      holdApplicationUntilProxyStarts: true
EOF
Note: The Istio project is gradually deprecating the operator in favor of Helm-based installations. For new deployments, prefer Helm.
Installation Profiles
Istio ships with several pre-configured profiles suited for different use cases:
| Profile | istiod | Ingress Gateway | Egress Gateway | Use Case |
|---|---|---|---|---|
| minimal | Yes | No | No | Control plane only, bring your own gateway |
| default | Yes | Yes | No | Production starting point |
| demo | Yes | Yes | Yes | Evaluation and demos, extra features enabled |
| empty | No | No | No | Base for custom configurations |
| ambient | Yes | No | No | Ambient mesh (sidecar-less; installs ztunnel and CNI) |
For production, start with default and customize from there:
# View what a profile installs
istioctl profile dump default
# Compare two profiles
istioctl profile diff default demo
# Generate the manifest without applying (for review)
istioctl manifest generate --set profile=default > istio-manifest.yaml
Automatic Sidecar Injection
The easiest way to add pods to the mesh is namespace-level automatic injection. When enabled, Istio's mutating webhook automatically injects an Envoy sidecar into every new pod created in that namespace.
# Enable injection for a namespace
kubectl label namespace default istio-injection=enabled
# Verify the label
kubectl get namespace -L istio-injection
Output:
NAME STATUS AGE ISTIO-INJECTION
default Active 30d enabled
kube-system Active 30d
istio-system Active 1d
After labeling, restart existing deployments to inject sidecars into running pods:
# Restart all deployments in the namespace
kubectl rollout restart deployment -n default
# Verify sidecars are injected (look for 2/2 READY)
kubectl get pods -n default
# NAME READY STATUS RESTARTS AGE
# myapp-5d8f9c7b4a-abc12 2/2 Running 0 2m
Revision-Based Injection (Canary Upgrades)
For production environments, use revision labels to safely upgrade Istio by running two control plane versions simultaneously:
# Install a new Istio revision
istioctl install --revision=1-22-0 --set profile=default
# Label namespace to use the specific revision
kubectl label namespace default istio.io/rev=1-22-0 --overwrite
kubectl label namespace default istio-injection- # Remove old label
# Restart pods to pick up the new sidecar version
kubectl rollout restart deployment -n default
# After validating, remove the old control plane
istioctl uninstall --revision=1-21-0
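Revision tags make this workflow less error-prone: namespaces point at a stable tag name, and upgrades move the tag between revisions instead of relabeling every namespace. A sketch, where the tag name prod-stable and the future revision 1-23-0 are illustrative:

# Create a tag pointing at the current revision
istioctl tag set prod-stable --revision 1-22-0
# Namespaces reference the tag, not the revision
kubectl label namespace default istio.io/rev=prod-stable --overwrite
# A later upgrade just moves the tag, followed by workload restarts
istioctl tag set prod-stable --revision 1-23-0 --overwrite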
Selective Injection
You can also control injection at the pod level using annotations:
# Opt out a specific pod from injection
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      containers:
      - name: legacy-app
        image: legacy-app:1.0
Common reasons to opt out:
- Legacy applications that break with the proxy (non-standard protocols, UDP-only)
- Job pods where sidecar lifecycle causes issues (the sidecar keeps running after the job container exits)
- DaemonSet pods on nodes with limited resources
For Jobs that must run inside the mesh, keep injection enabled and handle sidecar shutdown explicitly:
apiVersion: batch/v1
kind: Job
metadata:
  name: data-migration
spec:
  template:
    metadata:
      annotations:
        # Tell Istio to use a shorter drain duration
        proxy.istio.io/config: '{"terminationDrainDuration": "5s"}'
    spec:
      containers:
      - name: migration
        image: migration:1.0
        command: ["/bin/sh", "-c"]
        args:
        - |
          # Run the migration
          /app/migrate
          # Signal the sidecar to quit
          curl -sf -XPOST http://localhost:15020/quitquitquit
      restartPolicy: Never
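On Kubernetes 1.28+ you may be able to avoid this workaround entirely: Istio can inject the proxy as a native sidecar (a restartable init container) that Kubernetes shuts down automatically once the main containers exit. This is opt-in via an istiod environment variable; treat the sketch below as an assumption to validate against your Istio and Kubernetes versions:

# Requires the Kubernetes SidecarContainers feature (on by default in 1.29+)
istioctl install --set profile=default \
  --set values.pilot.env.ENABLE_NATIVE_SIDECARS=true -y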
Manual Injection
For cases where automatic injection is not suitable:
# Inject sidecar into a deployment manifest
istioctl kube-inject -f deployment.yaml | kubectl apply -f -
# Or inject directly from a running deployment
kubectl get deployment myapp -o yaml | istioctl kube-inject -f - | kubectl apply -f -
# Inject with a specific revision
istioctl kube-inject --revision=1-22-0 -f deployment.yaml | kubectl apply -f -
Verifying the Mesh
After installation and injection, verify everything is working:
# Check istiod is running and healthy
kubectl get pods -n istio-system
# NAME READY STATUS RESTARTS AGE
# istiod-7f8c6b5b6d-abcde 1/1 Running 0 5m
# istiod-7f8c6b5b6d-fghij 1/1 Running 0 5m
# Check ingress gateway
kubectl get pods -n istio-ingress
# NAME READY STATUS RESTARTS AGE
# istio-ingress-6b5c4d7f8a-xyz12 1/1 Running 0 5m
# istio-ingress-6b5c4d7f8a-abc34 1/1 Running 0 5m
# Check proxy sync status
istioctl proxy-status
# NAME CDS LDS EDS RDS ECDS ISTIOD
# myapp-5d8f9c7b4a-abc12.default     SYNCED     SYNCED     SYNCED     SYNCED     NOT SENT     istiod-7f8c6b5b6d-abcde
# Analyze configuration for issues
istioctl analyze --all-namespaces
# Check a specific proxy's configuration
istioctl proxy-config clusters myapp-5d8f9c7b4a-abc12.default
istioctl proxy-config listeners myapp-5d8f9c7b4a-abc12.default
istioctl proxy-config routes myapp-5d8f9c7b4a-abc12.default
The istioctl proxy-status command shows all connected proxies and whether their configuration is synced with the control plane. A status of SYNCED means the proxy has the latest configuration. STALE means a push failed or is pending.
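For a single workload, istioctl experimental describe is often the fastest sanity check: it reports whether the pod is in the mesh, which VirtualService and DestinationRule apply to it, and its mTLS status (pod name is a placeholder):

istioctl experimental describe pod myapp-5d8f9c7b4a-abc12 -n default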
Deploying the Bookinfo Sample Application
Istio includes a sample application called Bookinfo that demonstrates mesh capabilities:
# Ensure the namespace has injection enabled
kubectl label namespace default istio-injection=enabled
# Deploy the application
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/bookinfo/platform/kube/bookinfo.yaml
# Wait for all pods to be ready with sidecars (2/2)
kubectl wait --for=condition=ready pod --all -n default --timeout=120s
# Verify all pods have 2/2 containers (app + sidecar)
kubectl get pods
# Deploy the gateway and virtual service
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.22/samples/bookinfo/networking/bookinfo-gateway.yaml
# Get the ingress gateway IP (with an istioctl install, the service is istio-ingressgateway in istio-system)
export INGRESS_HOST=$(kubectl -n istio-ingress get service istio-ingress \
-o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-ingress get service istio-ingress \
-o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
# Test the application
curl -s http://${INGRESS_HOST}:${INGRESS_PORT}/productpage | head -20
# Generate some traffic for observability tools
for i in $(seq 1 100); do
  curl -s -o /dev/null http://${INGRESS_HOST}:${INGRESS_PORT}/productpage
  sleep 0.1
done
Gateway and VirtualService Basics
To expose services outside the mesh, you need a Gateway and a VirtualService. These are the fundamental building blocks of Istio's traffic management.
Gateway
A Gateway configures the ingress gateway's listeners (ports, protocols, TLS):
apiVersion: networking.istio.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: default
spec:
  selector:
    istio: ingress # Selects the ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*.example.com"
    tls:
      httpsRedirect: true # Redirect all HTTP to HTTPS
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: production-tls-cert # Kubernetes TLS secret
    hosts:
    - "*.example.com"
Create the TLS secret:
kubectl create secret tls production-tls-cert \
--cert=tls.crt --key=tls.key \
-n istio-ingress
VirtualService
A VirtualService defines how traffic reaching the gateway is routed to your services:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: myapp-routing
spec:
  hosts:
  - "myapp.example.com"
  gateways:
  - production-gateway
  http:
  - match:
    - uri:
        prefix: /api
    route:
    - destination:
        host: api-service
        port:
          number: 8080
    timeout: 10s
    retries:
      attempts: 3
      perTryTimeout: 3s
  - match:
    - uri:
        prefix: /health
    route:
    - destination:
        host: health-service
        port:
          number: 8080
  - route:
    - destination:
        host: frontend-service
        port:
          number: 3000
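VirtualServices are usually paired with a DestinationRule, which configures what happens after routing: connection pooling, outlier detection (circuit breaking), and subsets for canary releases. A minimal illustrative sketch for the api-service destination above; the numbers are starting points, not recommendations:

apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: api-service
spec:
  host: api-service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 100 # Queue depth before requests are rejected
    outlierDetection:
      consecutive5xxErrors: 5       # Eject an endpoint after 5 consecutive 5xx
      interval: 30s
      baseEjectionTime: 30s

Subsets and weighted canary routing are covered in the traffic management article later in this series.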
Kubernetes Gateway API (Future Direction)
Istio is increasingly supporting the Kubernetes Gateway API as an alternative to its own Gateway and VirtualService resources:
# Kubernetes Gateway API (requires Gateway API CRDs)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: production-gateway
  namespace: default
  annotations:
    networking.istio.io/service-type: LoadBalancer
spec:
  gatewayClassName: istio
  listeners:
  - name: https
    port: 443
    protocol: HTTPS
    tls:
      mode: Terminate
      certificateRefs:
      - name: production-tls-cert
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp-route
spec:
  parentRefs:
  - name: production-gateway
  hostnames:
  - "myapp.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /api
    backendRefs:
    - name: api-service
      port: 8080
Resource Overhead Considerations
Adding a service mesh is not free. Each sidecar consumes CPU and memory, and the control plane needs resources too. Understanding this overhead is critical for capacity planning.
Per-Sidecar Resource Usage
| Resource | Default Request | Default Limit | Tuned Production |
|---|---|---|---|
| CPU | 100m | 2000m | 50m-200m |
| Memory | 128Mi | 1Gi | 64Mi-256Mi |
For 100 pods at the default requests (100m CPU, 128Mi memory), that is an additional 10 CPU cores and 12.8 GB of requested memory; with tuned values, plan for roughly 5-20 cores and 6.4-25.6 GB. Factor this into your node sizing.
Latency Overhead
The sidecar proxy adds latency to every request. Typical overhead:
| Scenario | Added Latency | Notes |
|---|---|---|
| mTLS handshake (first connection) | 1-3ms | Amortized over connection lifetime |
| Per-request proxy hop | 0.5-1ms | Each direction (source and destination) |
| Total round-trip overhead | 1-2ms | For established connections |
For most applications, 1-2ms of added latency is negligible. For ultra-low-latency workloads (high-frequency trading, real-time gaming), evaluate carefully.
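If you want to measure the overhead for your own workload rather than trust the table, a crude end-to-end check is to time requests through the ingress path before and after enabling injection; this reuses the INGRESS_HOST and INGRESS_PORT variables from the Bookinfo section:

# Print the slowest of 20 timed requests through the mesh
for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{time_total}\n' http://${INGRESS_HOST}:${INGRESS_PORT}/productpage
done | sort -n | tail -1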
Tuning Sidecar Resources
# Per-pod resource tuning via annotations
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        sidecar.istio.io/proxyCPU: "50m"
        sidecar.istio.io/proxyMemory: "64Mi"
        sidecar.istio.io/proxyCPULimit: "500m"
        sidecar.istio.io/proxyMemoryLimit: "256Mi"
        # Reduce Envoy concurrency for low-traffic services
        proxy.istio.io/config: '{"concurrency": 1}'
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
Global Resource Configuration
# In IstioOperator or Helm values
meshConfig:
  defaultConfig:
    concurrency: 2 # Number of worker threads per sidecar
    proxyMetadata:
      ISTIO_META_DNS_CAPTURE: "true"
values:
  global:
    proxy:
      resources:
        requests:
          cpu: 50m
          memory: 64Mi
        limits:
          cpu: 500m
          memory: 256Mi
      # Limit which traffic iptables redirects through the proxy
      # (config scope is handled by the Sidecar resource, below)
      includeIPRanges: "10.0.0.0/8" # Only intercept traffic to cluster IPs
Sidecar Resource (Limiting Config Scope)
For large meshes, use the Sidecar resource to limit the configuration pushed to each sidecar:
# Only configure this sidecar to know about services it actually calls
apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: myapp-sidecar
  namespace: team-a
spec:
  workloadSelector:
    labels:
      app: myapp
  egress:
  - hosts:
    - "./*" # All services in same namespace
    - "istio-system/*" # Istio system services
    - "team-b/payment-service.team-b.svc.cluster.local" # Specific cross-namespace service
This dramatically reduces memory usage and config push time in meshes with hundreds of services.
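You can measure the effect directly: dump the proxy's full configuration before and after applying the Sidecar resource and compare sizes, and watch istiod's push metrics on its monitoring port, 15014 (pod name is a placeholder):

# Size of the Envoy config currently held by this sidecar
istioctl proxy-config all myapp-5d8f9c7b4a-abc12.team-a -o json | wc -c
# Inspect istiod push and convergence metrics
kubectl -n istio-system port-forward deploy/istiod 15014:15014 &
curl -s localhost:15014/metrics | grep -E 'pilot_xds|pilot_proxy_convergence'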
Troubleshooting Common Issues
| Problem | Diagnosis | Solution |
|---|---|---|
| Pod stuck in Init | kubectl describe pod shows init container failing | Check istio-init logs, verify iptables permissions |
| 503 errors after injection | istioctl analyze shows config issues | Check DestinationRule subsets match deployment labels |
| Sidecar not injected | kubectl get pod -o yaml shows no istio-proxy | Verify namespace label and webhook configuration |
| High memory usage | istioctl proxy-config shows large config | Use Sidecar resource to limit scope |
| Slow config pushes | pilot_xds_push_time metric is high | Increase istiod resources, reduce config scope |
# Comprehensive diagnostics
istioctl analyze --all-namespaces
# Check webhook configuration
kubectl get mutatingwebhookconfigurations | grep istio
# Debug sidecar injection issues
istioctl experimental check-inject -n default
# View proxy logs for a specific pod
kubectl logs myapp-pod -c istio-proxy --tail=100
# Get a complete dump of a proxy's configuration
istioctl proxy-config all myapp-pod -o json > proxy-dump.json
Uninstalling Istio
If you need to remove Istio:
# Remove injection labels first
kubectl label namespace default istio-injection-
kubectl rollout restart deployment -n default
# Wait for pods to restart without sidecars
kubectl wait --for=condition=ready pod --all -n default --timeout=120s
# Uninstall via istioctl
istioctl uninstall --purge -y
# Or uninstall via Helm (reverse order of installation)
helm uninstall istio-ingress -n istio-ingress
helm uninstall istiod -n istio-system
helm uninstall istio-base -n istio-system
# Clean up namespaces
kubectl delete namespace istio-system istio-ingress
# Remove CRDs (optional, irreversible)
kubectl get crd -oname | grep --color=never 'istio.io' | xargs kubectl delete
Summary
Installing Istio is the easy part. Use istioctl for evaluation, Helm for production, and always start with the default profile. Enable namespace-level sidecar injection, deploy a sample application to verify the mesh, and pay attention to the resource overhead your sidecar proxies add. Use the Sidecar resource to limit configuration scope in large meshes, and consider revision-based injection for safe control plane upgrades. The holdApplicationUntilProxyStarts setting is essential for production to prevent race conditions between your application and the sidecar. With the mesh running, you are ready to explore traffic management, security, and observability, which are covered in the rest of this series.