# Blue-Green, Canary, and Rolling Deployments: Implementation in CI/CD

Choosing the right deployment strategy is critical for minimizing downtime and risk. This section walks through blue-green, canary, and rolling deployments with practical CI/CD implementation examples.
| Strategy | Downtime | Risk | Complexity | Typical use |
|---|---|---|---|---|
| Recreate | Yes | High | Low | Development, testing |
| Rolling | None | Medium | Medium | Production, the default choice |
| Blue-Green | None | Low | High | Critical systems |
| Canary | None | Minimal | High | Gradual rollout |
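Read as a decision rule, the table can be sketched as a toy shell helper (the function name and its input categories are invented for illustration):

```shell
# Toy mapping from deployment context to strategy, following the table above.
pick_strategy() {
  case "$1" in
    dev|test) echo "Recreate"   ;;  # downtime is acceptable
    standard) echo "Rolling"    ;;  # the default production choice
    critical) echo "Blue-Green" ;;  # instant switch, instant rollback
    gradual)  echo "Canary"     ;;  # progressive, metrics-driven rollout
    *)        echo "Rolling"    ;;
  esac
}

pick_strategy critical   # prints "Blue-Green"
```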
A rolling deployment gradually replaces old pods with new ones.
```
Start:  [v1] [v1] [v1] [v1] [v1]
Step 1: [v2] [v1] [v1] [v1] [v1]
Step 2: [v2] [v2] [v1] [v1] [v1]
Step 3: [v2] [v2] [v2] [v1] [v1]
Step 4: [v2] [v2] [v2] [v2] [v1]
Step 5: [v2] [v2] [v2] [v2] [v2]
```
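The step sequence above can be reproduced with a small shell simulation (pod counts are illustrative; the surge/unavailable settings mirror the manifest that follows):

```shell
# Simulate a rolling update with replicas=5, maxSurge=1, maxUnavailable=0:
# one new pod starts, and only after it is ready is one old pod stopped.
replicas=5
old=$replicas
new=0
while [ "$old" -gt 0 ]; do
  new=$((new + 1))  # surge: start 1 new pod (maxSurge=1)
  old=$((old - 1))  # new pod is ready, stop 1 old pod (maxUnavailable=0)
  echo "step $new: [v2 x $new] [v1 x $old]"
done
```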
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 pod above the desired replica count
      maxUnavailable: 0  # no pods may become unavailable during the rollout
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: myregistry/my-app:v2.0.0
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
```

Parameters:

- `maxSurge: 1` — one new pod is created before an old one is removed
- `maxUnavailable: 0` — guarantees zero downtime
- `readinessProbe` — traffic is routed only to pods that report ready

CI/CD implementation in GitHub Actions:

```yaml
name: Rolling Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Configure kubectl
        uses: azure/k8s-set-context@v3
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBE_CONFIG }}

      - name: Deploy with rolling update
        run: |
          kubectl set image deployment/my-app \
            app=myregistry/my-app:${{ github.sha }} \
            --record  # note: --record is deprecated in recent kubectl versions
          # wait for the rollout to complete
          kubectl rollout status deployment/my-app --timeout=300s
        env:
          KUBECONFIG: ${{ secrets.KUBE_CONFIG }}

      - name: Rollback on failure
        if: failure()
        run: kubectl rollout undo deployment/my-app
```

A blue-green deployment runs two identical environments: one active (blue), the other idle (green).
```
State 1: [Blue: v1]  ← traffic
         [Green: v2] ← standby

Switch:  [Blue: v1]
         [Green: v2] ← traffic

State 2: [Blue: v1]  ← standby
         [Green: v2] ← traffic
```
```yaml
# Blue deployment (current version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: blue
  template:
    metadata:
      labels:
        app: my-app
        version: blue
    spec:
      containers:
        - name: app
          image: myregistry/my-app:v1.0.0
---
# Green deployment (new version)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: green
  template:
    metadata:
      labels:
        app: my-app
        version: green
    spec:
      containers:
        - name: app
          image: myregistry/my-app:v2.0.0
---
# Service used for the traffic switch
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
    version: blue  # switch to green to activate the new version
  ports:
    - port: 80
      targetPort: 8080
```

```yaml
name: Blue-Green Deploy

on:
  push:
    branches: [main]

env:
  CURRENT_VERSION: blue
  NEW_VERSION: green

jobs:
  deploy-green:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Deploy green version
        run: |
          kubectl apply -f k8s/green-deployment.yaml
          kubectl set image deployment/my-app-green \
            app=myregistry/my-app:${{ github.sha }}
          # wait for green to become ready
          kubectl rollout status deployment/my-app-green

      - name: Run smoke tests on green
        run: |
          # assumes a my-app-green Service exists for direct access
          kubectl port-forward svc/my-app-green 8080:80 &
          sleep 5
          ./run-smoke-tests.sh http://localhost:8080

      - name: Switch traffic to green
        run: |
          kubectl patch service my-app-service \
            -p '{"spec":{"selector":{"version":"green"}}}'

      - name: Verify switch
        run: |
          VERSION=$(curl -s http://my-app-service/health | jq -r .version)
          if [ "$VERSION" != "v2.0.0" ]; then
            echo "Switch failed!"
            exit 1
          fi

      - name: Scale down blue (optional)
        run: kubectl scale deployment/my-app-blue --replicas=0

      - name: Rollback to blue
        if: failure()
        run: |
          kubectl patch service my-app-service \
            -p '{"spec":{"selector":{"version":"blue"}}}'
          echo "Rolled back to blue version"
```

A canary deployment shifts traffic to the new version gradually.
```
Stage 1: [v1: 95%] [v2: 5%]   ← 5% of traffic to v2
Stage 2: [v1: 75%] [v2: 25%]  ← 25% of traffic to v2
Stage 3: [v1: 50%] [v2: 50%]  ← 50% of traffic to v2
Stage 4: [v1: 0%]  [v2: 100%] ← full cutover
```
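The stage percentages translate directly into request counts; a quick arithmetic sketch (the per-interval request total is illustrative):

```shell
# Requests routed to each version at every canary stage,
# assuming 1000 requests per measurement interval.
total=1000
for canary_pct in 5 25 50 100; do
  canary=$(( total * canary_pct / 100 ))
  stable=$(( total - canary ))
  echo "v1: $stable requests, v2: $canary requests"
done
```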
```yaml
# DestinationRule for the two versions
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-app-dr
spec:
  host: my-app
  subsets:
    - name: stable
      labels:
        version: stable
    - name: canary
      labels:
        version: canary
---
# VirtualService for traffic splitting
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app-vs
spec:
  hosts:
    - my-app
  http:
    - route:
        - destination:
            host: my-app
            subset: stable
          weight: 95
        - destination:
            host: my-app
            subset: canary
          weight: 5
```

```yaml
name: Canary Deploy

on:
  push:
    branches: [main]

jobs:
  canary-deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Deploy canary (5%)
        run: |
          kubectl apply -f k8s/canary-deployment.yaml
          kubectl set image deployment/my-app-canary \
            app=myregistry/my-app:${{ github.sha }}
          kubectl rollout status deployment/my-app-canary
          # route 5% of traffic to the canary
          kubectl patch virtualservice my-app-vs \
            --type='json' \
            -p='[{"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 95},
                 {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 5}]'

      - name: Monitor canary metrics
        run: |
          for i in {1..6}; do
            ERROR_RATE=$(curl -s http://prometheus/api/v1/query \
              -d "query=rate(http_errors_total{version='canary'}[5m])" \
              | jq -r '.data.result[0].value[1]')
            # guard against an empty Prometheus result
            [ "$ERROR_RATE" = "null" ] && ERROR_RATE=0
            if (( $(echo "$ERROR_RATE > 0.01" | bc -l) )); then
              echo "Error rate too high: $ERROR_RATE"
              exit 1
            fi
            sleep 60
          done

      - name: Increase to 25%
        run: |
          kubectl patch virtualservice my-app-vs \
            --type='json' \
            -p='[{"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 75},
                 {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 25}]'

      - name: Full rollout (100%)
        run: |
          kubectl patch virtualservice my-app-vs \
            --type='json' \
            -p='[{"op": "replace", "path": "/spec/http/0/route/0/weight", "value": 0},
                 {"op": "replace", "path": "/spec/http/0/route/1/weight", "value": 100}]'
          # promote canary to stable
          kubectl label deployment/my-app-canary version=stable --overwrite
```

Flagger automates canary deployments:
```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: my-app
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  service:
    port: 80
    targetPort: 8080
  analysis:
    interval: 60s   # how often the checks run
    threshold: 5    # failed checks before rollback
    maxWeight: 50   # maximum canary traffic share
    stepWeight: 10  # traffic increase per step
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99   # success rate must stay above 99%
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500  # latency must stay below 500 ms
        interval: 1m
```

Behavior: Flagger shifts traffic to the canary in `stepWeight` increments of 10% up to a `maxWeight` of 50%, evaluating the metrics every 60 seconds; after 5 failed checks it rolls back automatically, otherwise it promotes the canary to primary.
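Under those analysis settings, the traffic progression can be sketched as follows (a simplification of what Flagger does, not its actual code):

```shell
# Weight progression implied by stepWeight=10 and maxWeight=50;
# Flagger waits `interval` (60s) and re-checks metrics between steps.
weight=0
step=10
max=50
while [ "$weight" -lt "$max" ]; do
  weight=$((weight + step))
  echo "canary weight: ${weight}%"
done
echo "all checks passed: canary promoted"
```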
A recreate deployment removes the old version entirely before rolling out the new one.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: Recreate  # all pods are deleted before the new ones are created
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: myregistry/my-app:v2.0.0
```

Typical usage per strategy.

Rolling:

```yaml
# Standard production deployment
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%
    maxUnavailable: 25%
```

Blue-Green:

- Critical systems with a zero-downtime requirement
- Full isolation between versions
- Fast rollback via the Service switch

Canary:

- Gradual rollout with monitoring
- A/B testing
- Early detection of problems
Complete pipeline examples:

```yaml
name: Production Rolling Deploy

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Pre-deployment health check
        run: |
          HEALTH=$(curl -sf $PROD_URL/health | jq -r .status)
          if [ "$HEALTH" != "ok" ]; then
            echo "Pre-deployment health check failed!"
            exit 1
          fi
        env:
          PROD_URL: ${{ vars.PROD_URL }}

      - name: Deploy with rolling update
        run: |
          kubectl set image deployment/app app=myregistry/app:${{ github.sha }}
          kubectl rollout status deployment/app --timeout=600s
        env:
          KUBECONFIG: ${{ secrets.KUBE_CONFIG }}

      - name: Post-deployment health check
        run: |
          for i in {1..10}; do
            HEALTH=$(curl -sf $PROD_URL/health | jq -r .status)
            if [ "$HEALTH" = "ok" ]; then
              echo "Health check passed!"
              exit 0
            fi
            sleep 10
          done
          exit 1
        env:
          PROD_URL: ${{ vars.PROD_URL }}

      - name: Automatic rollback
        if: failure()
        run: |
          kubectl rollout undo deployment/app
          echo "Automatic rollback completed"
```

```yaml
name: Blue-Green with Terraform

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3

      - name: Deploy new version
        run: |
          terraform init
          terraform apply -auto-approve \
            -var="app_version=${{ github.sha }}" \
            -var="deploy_strategy=blue-green"

      - name: Run integration tests
        run: ./tests/integration.sh

      - name: Switch traffic
        run: |
          terraform apply -auto-approve \
            -var="active_environment=green"

      - name: Cleanup old version
        run: |
          terraform apply -auto-approve \
            -var="scale_blue=0"
```

```yaml
name: Canary with Auto-Rollback

on:
  push:
    branches: [main]

jobs:
  canary:
    runs-on: ubuntu-latest
    environment: production
    steps:
      - uses: actions/checkout@v4

      - name: Deploy canary
        run: |
          helm upgrade my-app ./chart \
            --set image.tag=${{ github.sha }} \
            --set canary.enabled=true \
            --set canary.weight=10

      - name: Monitor and promote
        id: monitor
        run: |
          for weight in 10 25 50 75 100; do
            # check metrics before moving to the next traffic step
            ERROR_RATE=$(./get-error-rate.sh canary)
            LATENCY=$(./get-latency.sh canary)
            if (( $(echo "$ERROR_RATE > 0.01" | bc -l) )) || \
               (( $(echo "$LATENCY > 500" | bc -l) )); then
              echo "Canary failed at ${weight}%"
              exit 1
            fi
            # increase the traffic weight
            helm upgrade my-app ./chart \
              --set canary.weight=$weight
            sleep 120
          done

      - name: Rollback on failure
        if: failure()
        run: |
          helm rollback my-app
          kubectl delete deployment my-app-canary
```