by christophe Mias 1 year ago

K8s

The process of setting up and managing a Kubernetes (k8s) cluster involves several critical steps and commands. Initially, the cluster is installed and configured using tools like kubeadm, which handles the initial setup on the master node.

K8s administration

ConfigMaps and Secrets

* centralize, improve, and simplify configuration management

* etcd, Consul / Vault...

* store configuration files: isolate them, secure them, manipulate them, share them (between pods)

* an improvement over Docker/Dockerfile config files, which otherwise require rebuilding images



apiVersion: v1
kind: ConfigMap
metadata:
  name: personne
data:
  nom: Xavier
  passion: blogging
  clef: |
    age.key=40
    taille.key=180


samik@ImacKris-2:~/kubernetes/manifestes » kubectl apply -f monconfmap.yml
configmap/personne created

samik@ImacKris-2:~/kubernetes/manifestes » kubectl logs monpod
NOM=Xavier
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.96.0.1:443
HOSTNAME=monpod
MONNGINX_SERVICE_HOST=10.105.198.108
MONNGINX_PORT_8080_TCP_ADDR=10.105.198.108
SHLVL=1
HOME=/root
MONNGINX_SERVICE_PORT_8080_80=8080
MONNGINX_PORT_8080_TCP_PORT=8080
MONNGINX_PORT_8080_TCP_PROTO=tcp
MONNGINX_PORT=tcp://10.105.198.108:8080
MONNGINX_SERVICE_PORT=8080
MONNGINX_PORT_8080_TCP=tcp://10.105.198.108:8080
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
PASSION=blogging
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=10.96.0.1
PWD=/

volume

kind: ConfigMap
apiVersion: v1
metadata:
  name: hello
data:
  clef: |
    Bonjour
    les
    Xavkistes !!!


directory

creating two files:


samik@ImacKris-2:~/kubernetes/manifestes » ll *html
.rw-r--r-- 16 samik 13 nov 15:37 index.html
.rw-r--r-- 20 samik 13 nov 15:38 samik.html

apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: mondir
  volumes:
    - name: mondir
      configMap:
        name: mondir

kubectl create configmap mondir --from-file=index.html --from-file=samik.html

samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe configmaps mondir
Name:     mondir
Namespace:  default
Labels:    <none>
Annotations: <none>

Data
====
index.html:
----
page par defaut

samik.html:
----
 page additionnelle

Events: <none>

check

[vagrant@kmaster ~]$ kubectl get all -o wide
NAME         READY   STATUS    RESTARTS   AGE     IP                NODE                   NOMINATED NODE   READINESS GATES
pod/monpod   1/1     Running   0          5m18s   192.168.136.104   kworker2.example.com   <none>           <none>

NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          52d   <none>
service/monnginx     NodePort    10.105.198.108   <none>        8080:31818/TCP   27h   app=monnginx

NAME                                           REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/monnginx   Deployment/monnginx   <unknown>/80%   1         5         2          9d
[vagrant@kmaster ~]$ curl 192.168.136.104
Bonjour
les
Xavkistes !!!

apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: monvolumeconfig
  volumes:
    - name: monvolumeconfig
      configMap:
        name: hello
        items:
          - key: clef
            path: index.html

configMapKeyRef

* environment variables: configMapKeyRef

apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: NOM
      valueFrom:
        configMapKeyRef:
          name: personne
          key: nom
    - name: PASSION
      valueFrom:
        configMapKeyRef:
          name: personne
          key: passion


apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    envFrom:
    - configMapRef:
        name: personne

get/describe

samik@ImacKris-2:~/kubernetes/manifestes » kubectl get configmaps
NAME     DATA   AGE
langue   1      3s

samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe configmaps
Name:         langue
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
LANGUAGE:
----
Fr

Events:  <none>

create secret

samik@ImacKris-2:~/kubernetes/manifestes » kubectl create secret generic mysql-password --from-literal=MYSQL_PASSWORD=monmotdepasse
secret/mysql-password created
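
To consume the secret from a pod, a minimal sketch using secretKeyRef (the pod name and variable mapping are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: monpod-secret        # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: MYSQL_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-password   # the secret created above
          key: MYSQL_PASSWORD
```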

replace

* create a manifest file and execute it

kubectl replace -f manifeste.yml


e.g.:

samik@ImacKris-2:~/kubernetes/manifestes » kubectl create configmap maconf --from-literal=LANGUAGE=Es -o yaml --dry-run
W1113 13:19:50.764212   97661 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
data:
  LANGUAGE: Es
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: maconf



* dry-run generation piped into replace

 kubectl create configmap maconf --from-literal=LANGUAGE=Es -o yaml --dry-run | kubectl replace -f - 

Note: the pods must be restarted to pick up the change.
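
One way to force that restart, assuming the pods belong to a deployment (monnginx is the deployment used elsewhere in these notes):

```
kubectl rollout restart deployment monnginx
```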

edit

kubectl edit configmaps maconfiguration

opens the YAML in an editor:

```
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  maconf.cfg: "192.168.0.11  imacpat\n192.168.0.44  kaisenlinux\n192.168.0.23  mbp\n192.168.0.69\tabacus\n192.168.0.28\tnexus\n192.168.0.57\tipadkris\n192.168.100.102 VCS2\n192.168.100.101 VCS1\n192.168.5.10 gitlab.example.com\n192.168.0.200 monitor\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2020-11-13T12:07:19Z"
  name: maconfiguration
  namespace: default
  resourceVersion: "1409322"
  selfLink: /api/v1/namespaces/default/configmaps/maconfiguration
  uid: c07e5649-9bce-4543-80a5-932f640b3d05
```

create

samik@ImacKris-2:~/kubernetes/manifestes » kubectl create configmap langue --from-literal=LANGUAGE=Fr
configmap/langue created

samik@ImacKris-2:~/kubernetes/manifestes » kubectl create configmap langue --from-literal=LANGUAGE=Fr --from-literal=ENCODING=UTF-8
configmap/langue created

samik@ImacKris-2:~/kubernetes/manifestes » kubectl get configmaps
NAME     DATA   AGE
langue   2      7s

samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe configmaps
Name:         langue
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
ENCODING:
----
UTF-8
LANGUAGE:
----
Fr

Events:  <none>

--from-file

kubectl create configmap maconfiguration --from-file maconf.cfg



samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe configmaps maconfiguration
Name:         maconfiguration
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
maconf.cfg:
----
192.168.0.11  imacpat
192.168.0.44  kaisenlinux
192.168.0.23  mbp
192.168.0.69  abacus
192.168.0.28  nexus
192.168.0.57  ipadkris
192.168.100.102 VCS2
192.168.100.101 VCS1
192.168.5.10 gitlab.example.com
192.168.0.200 monitor

Events:  <none>

delete

samik@ImacKris-2:~/kubernetes/manifestes » kubectl delete configmap langue
configmap "langue" deleted

get/describe secrets

samik@ImacKris-2:~/kubernetes/manifestes » kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-nmzdm   kubernetes.io/service-account-token   3      52d
mysql-password        Opaque                                1      2m30s

samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe secrets mysql-password
Name:         mysql-password
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
MYSQL_PASSWORD:  13 bytes

contexts

kubectl config get-contexts
kubectl config use-context samik

creation:

kubectl config set-context samik --namespace samik --user kubernetes-admin --cluster kubernetes
kubectl get pods -n kube-system
kubectl create namespace samik
kubectl create deploy monnginx --image nginx -n samik

Labels & annotations

kubectl get pods --selector "env=prod" --show-labels
kubectl label pods monpod env=dev
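
A sketch of where labels and annotations live in a manifest (the values are illustrative; labels drive selection, annotations are free-form metadata):

```
apiVersion: v1
kind: Pod
metadata:
  name: monpod
  labels:              # matched by --selector and matchLabels
    env: prod
    app: monfront
  annotations:         # not used for selection
    kubernetes.io/change-cause: "Mise à jour version 1.16"
spec:
  containers:
  - name: nginx
    image: nginx
```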

Health checks ("healthcheck")

tcpSocket
httpGet
command
variables
modes
scenario
readiness
= replacement of defective pods
liveness
= automatic restart
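
A minimal sketch combining both probe types on an nginx pod (the thresholds are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:          # the pod only receives traffic once this succeeds
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
    livenessProbe:           # the container is restarted if this keeps failing
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 10
```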

resources

generate
YAML

regenerate a deployment & service

launches a deployment

kubectl apply -f ~/kubernetes/mondeploy.yml

kubectl get deployments.apps monnginx -o yaml > ~/kubernetes/mondeploy.yml

description

kubectl get ...

-o yaml

> fichier.yml

list
kubectl explain pod
kubectl api-versions
kubectl api-resources -o wide

Logs

logs, events, and describe
describe

kubectl describe

deploy nginx

service nginx

pods nginx

events

kubectl get events nginx...

kubectl logs nginx

Pods and services
kubectl get

all

-o wide

deploy

components
kubectl get daemonsets

-n kube-system

kubectl get componentstatuses
nodes
kubectl describe nodes kubmaster
kubectl get nodes
external
NFS

examples

several pods on one volume

debian

apiVersion: v1
kind: Pod
metadata:
  name: debian-deploy
spec:
  containers:
    - image: debian
      name: madebian
      resources: {}
      volumeMounts:
        - mountPath: /tmp/
          name: monvolume
  volumes:
    - name: monvolume
      persistentVolumeClaim:
        claimName: mynfspvc


nginx

apiVersion: v1
kind: Pod
metadata:
  name: nginx-deploy
spec:
  containers:
    - image: nginx
      name: monnginx
      resources: {}
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: monvolume
  volumes:
    - name: monvolume
      persistentVolumeClaim:
        claimName: mynfspvc


Databases

* caution: databases are a special case (Deployment vs StatefulSet)

* creating the PV

apiVersion: v1
kind: Pod
metadata:
  name: debian-deploy
spec:
  containers:
  - image: mysql:5.6
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "123456"
    ports:
    - containerPort: 3306
    volumeMounts:
    - mountPath: /var/lib/mysql
      name: monvolume
  volumes:
  - name: monvolume
    persistentVolumeClaim:
      claimName: mynfspvc


Creating the Pod


* Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-deploy
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: www
  volumes:
  - name: www
    persistentVolumeClaim:
      claimName: pvc-nfs-pv1


PVC1

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfspvc
spec:
  storageClassName: myclass
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 500Mi


PVC

Creating the Persistent Volume Claim


* PVC manifest:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mynfspvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi


PV1

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
spec:
  storageClassName: myclass
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  #persistentVolumeReclaimPolicy: Delete
  nfs:
    server: 192.168.56.1
    path: "/srv/exports"


PV

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mynfspv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.56.1
    path: "/srv/exports"


persistent volume

separation between provisioning and consumption

* Kubernetes offers provisioning: PersistentVolumes and PersistentVolumeClaims

* nesting: PV > PVC > Pods; provisioning > pod quotas > pod usage

* NFS server > PV > PVC > Pod


* depending on the provider, a reclaimPolicy is applied:

  1. Delete: deleting the PVC also deletes the PV
  2. Recycle: the volume is recycled (not deleted, but emptied)
  3. Retain: the volume is kept


PV behavior also follows their Access Modes (types: ReadWriteOnce, ReadOnlyMany, ReadWriteMany; see the note in the PV section below).

usage

Usage by Pods

kind: Pod
apiVersion: v1
metadata:
  name: monpods
spec:
  volumes:
    - name: monstorage
      persistentVolumeClaim:
        claimName: monpvc
  containers:
    - name: monnginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: monstorage


persistent volume Claim: PVC

Persistent Volume Claim

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: monpvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

kubectl get pvc


persistent volume: PV

Persistent Volume

kind: PersistentVolume
apiVersion: v1
metadata:
  name: monpv
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/pvdata"

Note:
- ReadWriteOnce: mounted by a single pod
- ReadOnlyMany: mounted read-only by several pods
- ReadWriteMany: read/write by several pods

kubectl get pv


hostPath

Volume: hostPath

caution: node-local only (the data stays on the host running the pod)

apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: monvolume
  volumes:
  - name: monvolume
    hostPath:
      path: /srv/data
      type: Directory


emptyDir

Volume: emptyDir

distribute work between the containers of a pod

apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: monnginx
    image: nginx
    ports:
    - containerPort: 80
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: monvolume
  - name: mondebian
    image: debian
    command: ["sleep", "600"]
    volumeMounts:
    - mountPath: /worktmp/
      name: monvolume
  - name: monalpine
    image: alpine
    command: ['sh', '-c', 'echo "Bonjour xavki" >/myjob/index.html' ]
    volumeMounts:
    - mountPath: /myjob/
      name: monvolume
  volumes:
  - name: monvolume
    emptyDir: {}


Volume: emptyDir in RAM

  volumes:
  - name: monvolume
    emptyDir:
      medium: Memory


replicaset

ReplicaSet: duplicate your pods


* create replicas of pods


* 2 approaches:

- attached to the pods:
  - a pod template
  - within the same file

- detached from the pods:
  - create the pods, then a ReplicaSet
  - a selector targets the matching pods

HPA

kubectl autoscale rs frontend --max=10

autoscaling

Skeleton

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: monhpa
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: monfront
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 11


Creating a deployment

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monfront
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: monfront
    spec:
      containers:
      - name: monpod
        image: httpd
        resources:
          requests:
            cpu: 10m


attaching to a pod
creating a ReplicaSet

ReplicaSet: pairing with Pods

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: front2
  labels:
    app: front
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front
      env: prod
  template:
    metadata:
      labels:
        type: front
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx


creating the pod

ReplicaSet: pairing with Pods

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  labels:
    env: prod
    type: front
spec:
  containers:
  - name: nginx
    image: nginx


kubectl describe rs

samik@ImacKris-2:~/kubernetes/manifestes » kubectl describe rs myfirstdeploy-9dc984dd8
Name:           myfirstdeploy-9dc984dd8
Namespace:      default
Selector:       app=monfront,pod-template-hash=9dc984dd8
Labels:         app=monfront
                pod-template-hash=9dc984dd8
Annotations:    deployment.kubernetes.io/desired-replicas: 2
                deployment.kubernetes.io/max-replicas: 4
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/myfirstdeploy
Replicas:       2 current / 2 desired
Pods Status:    2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:       app=monfront
                pod-template-hash=9dc984dd8
  Annotations:  kubernetes.io/change-cause: Mise à jour version 1.16
  Containers:
   podfront:
    Image:        nginx:1.16
    Port:         80/TCP
    Host Port:    0/TCP
    Readiness:    http-get http://:80/ delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  100s  replicaset-controller  Created pod: myfirstdeploy-9dc984dd8-pdjbz
  Normal  SuccessfulCreate  100s  replicaset-controller  Created pod: myfirstdeploy-9dc984dd8-z86j9

kubectl get rs

samik@ImacKris-2:~/kubernetes/manifestes » kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
myfirstdeploy-9dc984dd8   2         2         0       9s

skeleton

apiVersion: apps/v1
kind: ReplicaSet        # resource type
metadata:               # metadata specific to the ReplicaSet
spec:                   # ReplicaSet configuration
  replicas: 2           # number of replicas
  selector:             # pod selection
    matchLabels:        # select by labels
      lab1: toto        # filter
  template:             # the pod template
    metadata:           # metadata of the created pods
      labels:           # label definition
        lab1: toto      # the label that must match
    spec:               # pod spec

Pods

creation from a manifest
file monpodnginx
kubectl apply -f monpodnginx.yml
creating 2 pods

docker

kworker2

kworker1

docker ps

kubectl get pods -o wide

deletion

kubectl delete pod

anothershell

kubectl run anothershell -it --image busybox -- sh

myshell

kubectl run myshell -it --image busybox -- sh
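
If the interactive session ends, the standard way back into the running pod is kubectl exec:

```
kubectl exec -it myshell -- sh
```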

Installing the k8s cluster: kubeadm

-> Kubernetes: initialization and join <-


* initialization on the master


```
kubeadm init --apiserver-advertise-address=192.168.56.101 \
  --node-name $HOSTNAME --pod-network-cidr=10.244.0.0/16
```

Note: the hosts file must be edited when using VirtualBox and Vagrant.

* creating the kubeconfig file

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

---------------------------------------------------------------------


-> Setting up the internal network: flannel <-


* adding the pod that manages the internal network

```
sysctl net.bridge.bridge-nf-call-iptables=1

kubectl apply -f \
  https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

# if necessary
kubectl edit cm -n kube-system kube-flannel-cfg
# change network 10.244.0.0/16 to 10.10.0.0/16 for the dashboard
```



-------------------------------------------------------------------------------------------


-> Kubernetes : join <-



* check the state of the system pods:

```
kubectl get pods --all-namespaces
kubectl get nodes
```

* run the join on the node:

```
kubeadm join 192.168.56.101:6443 --token 5q8bsc.141bc9wjsc026u6w \
  --discovery-token-ca-cert-hash sha256:e0f57e3f3055bfe4330d9e93cbd8de967dde4e4a0963f324d2fe0ccf8427fcfb
```

* check the state of the system pods:

```
kubectl get pods --all-namespaces
kubectl get nodes
```


clone via git

here, a k8s-on-CentOS example:

git clone https://exxsyseng@bitbucket.org/exxsyseng/k8s_centos.git

deployment

Type

Deployment: Rolling Update - part 1


* 2 deployment types for version upgrades: rolling update and recreate

* plan for version upgrades: progressive, iterative

RollingUpdate

Example: our good old nginx

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfirstdeploy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: monfront
  template:
    metadata:
      labels:
        app: monfront
    spec:
      containers:
      - image: nginx:1.16			 # next: 1.17
        imagePullPolicy: Always
        name: podfront
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
             path: /
             port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1

Exposing it via the service

apiVersion: v1
kind: Service
metadata:
  name: myfirstdeploy
spec:
  clusterIP: 10.99.29.169
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: monfront
  type: ClusterIP

Version upgrade

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfirstdeploy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: monfront
  template:
    metadata:
      labels:
        app: monfront
    spec:
      containers:
      - image: nginx:1.17    
        imagePullPolicy: Always
        name: podfront
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
             path: /
             port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1



  strategy:
    type: RollingUpdate				# strategy type
    rollingUpdate:						# definition
      maxSurge: 2							# extra pods allowed
      maxUnavailable: 0				# pods allowed to be down

Example:

- no reduction of the pod count is allowed: maxUnavailable = 0

- we may overshoot by 2 pods: maxSurge = 2


Additionally (see the sketch after this list):

- minReadySeconds: delay before the next pod update

- progressDeadlineSeconds: maximum time for the rollout, after which it is marked failed

- revisionHistoryLimit: number of revisions kept in history
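
A minimal sketch of where these fields sit in the Deployment spec (the values are illustrative):

```
spec:
  minReadySeconds: 10            # wait 10s after a pod becomes Ready before continuing
  progressDeadlineSeconds: 600   # mark the rollout as failed after 10 minutes
  revisionHistoryLimit: 5        # keep the last 5 ReplicaSets for rollbacks
  strategy:
    type: RollingUpdate
```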



apiVersion: apps/v1
kind: Deployment
metadata:
  name: myfirstdeploy
  namespace: default
spec:
  replicas: 5
  selector:
    matchLabels:
      app: monfront
  strategy:
    type: RollingUpdate				# type
    rollingUpdate:						# définition
      maxSurge: 2							# nb pods sup autorisé
      maxUnavailable: 0				# nb de pods down autorisés
  template:
    metadata:
      labels:
        app: monfront
    spec:
      containers:
      - image: nginx:1.16
        imagePullPolicy: Always
        name: podfront
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
             path: /
             port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
          successThreshold: 1




actions

* using annotations

spec:
  template:
    metadata:
      annotations:
        kubernetes.io/change-cause: "Mise à jour version 1.16"


kubectl rollout

history

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout history deployment myfirstdeploy
deployment.apps/myfirstdeploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
5         Mise à jour version 1.16
6         Mise à jour version 1.17

after an undo:

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout history deployment myfirstdeploy
deployment.apps/myfirstdeploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
7         Mise à jour version 1.16
8         Mise à jour version 1.17

undo

samik@ImacKris-2:~ » kubectl rollout undo deployment myfirstdeploy
deployment.apps/myfirstdeploy rolled back

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout status deployments.apps myfirstdeploy
Waiting for deployment "myfirstdeploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 3 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 1 old replicas are pending termination...
deployment "myfirstdeploy" successfully rolled out

pause/resume

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout pause deployments myfirstdeploy
deployment.apps/myfirstdeploy paused

samik@ImacKris-2:~ » kubectl rollout resume deployment myfirstdeploy
deployment.apps/myfirstdeploy resumed

status

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout pause deployments myfirstdeploy
deployment.apps/myfirstdeploy paused

samik@ImacKris-2:~/kubernetes/manifestes » kubectl rollout status deployments.apps myfirstdeploy
Waiting for deployment "myfirstdeploy" rollout to finish: 2 out of 10 new replicas have been updated...

Waiting for deployment spec update to be observed...
Waiting for deployment spec update to be observed...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 2 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "myfirstdeploy" rollout to finish: 1 old replicas are pending termination...
deployment "myfirstdeploy" successfully rolled out

recreate

brutal!!! all old pods are killed before the new ones are created; see the sketch below
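
A minimal sketch of the corresponding strategy block (on the same Deployment as above); expect downtime between the two versions:

```
spec:
  strategy:
    type: Recreate    # all old pods are terminated before the new ones start
```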

Concrete case
creating a deployment

check

kubectl get pods

kubectl describe deployments.apps monnginx

kubectl create deployment --image

kubectl create deployment my-dep --image=busybox --port=5701

kubectl create deployment monnginx --image nginx

scaling

kubectl autoscale deployment monnginx --min=2 --max=10

kubectl scale deployment monnginx --replicas=2

port exposure

not recommended

kubectl port-forward nginx-5c7588df-kj2pn 8080:80

kubectl expose deployment nginx --type NodePort --port 80

nodeport

Access

port on the master

kubectl get services

master IP

kubectl create service nodeport monnginx --tcp=8080:80
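
Putting the pieces together, a sketch of reaching the service from outside (the node IP and NodePort shown are the ones seen earlier in these notes; yours will differ):

```
kubectl get services monnginx      # read the allocated NodePort, e.g. 8080:31818/TCP
curl http://192.168.56.101:31818   # any node IP + the NodePort
```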

metrics collection

by default: LimitRange
description
installation
edit components.yaml
fetch it with wget
a simple pod with limit and request (see the sketch below)
test
kubectl top nodes
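
A minimal sketch of such a pod with a request and a limit (the name and values are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: monpod-limits    # hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      requests:          # what the scheduler reserves for the pod
        cpu: 100m
        memory: 64Mi
      limits:            # hard ceiling enforced at runtime
        cpu: 250m
        memory: 128Mi
```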

K8s

How

API
CLI
kubelet

services on the workers

e.g. launching pods

kubectl

interact with the cluster

* check the state of the system pods:

```
kubectl get pods --all-namespaces
kubectl get nodes
```

configmaps

samik@ImacKris-2:~/kubernetes/presentations-kubernetes(master○) » kubectl get configmaps --all-namespaces
NAMESPACE     NAME                                 DATA   AGE
kube-public   cluster-info                         1      41d
kube-system   calico-config                        4      41d
kube-system   coredns                              1      41d
kube-system   extension-apiserver-authentication   6      41d
kube-system   kube-proxy                           2      41d
kube-system   kubeadm-config                       2      41d
kube-system   kubelet-config-1.19                  1      41d

files

ls -lh .kube
total 24
drwxr-xr-x    4 samik  staff   128B   3 nov 11:33 cache
-rw-------    1 samik  staff    11K  12 nov 10:51 config
drwxr-xr-x  184 samik  staff   5,8K   3 nov 11:24 http-cache

Examples

samik@ImacKris-2:~/kubernetes/presentations-kubernetes(master○) » kubectl get nodes
NAME                   STATUS   ROLES    AGE   VERSION
kmaster.example.com    Ready    master   41d   v1.19.2
kworker1.example.com   Ready    <none>   41d   v1.19.2
kworker2.example.com   Ready    <none>   41d   v1.19.2

samik@ImacKris-2:~/kubernetes/presentations-kubernetes(master○) » kubectl get pods
No resources found in default namespace.

samik@ImacKris-2:~/kubernetes/presentations-kubernetes(master○) » kubectl get pods --all-namespaces
NAMESPACE     NAME                                          READY   STATUS         RESTARTS   AGE
kube-system   calico-kube-controllers-56b44cd6d5-tzhxs      0/1     NodeAffinity   0          41d
kube-system   calico-node-6th7r                             0/1     NodeAffinity   0          41d
kube-system   calico-node-7jchl                             1/1     Running        0          41d
kube-system   calico-node-rdkrp                             1/1     Running        0          41d
kube-system   coredns-f9fd979d6-mxgnq                       0/1     Completed      0          41d
kube-system   coredns-f9fd979d6-rhbrv                       0/1     Completed      0          41d
kube-system   etcd-kmaster.example.com                      0/1     Running        1          41d
kube-system   kube-apiserver-kmaster.example.com            1/1     Running        1          41d
kube-system   kube-controller-manager-kmaster.example.com   0/1     Running        1          41d
kube-system   kube-proxy-25x8m                              1/1     Running        1          41d
kube-system   kube-proxy-gb2t9                              1/1     Running        0          41d
kube-system   kube-proxy-kgxw7                              1/1     Running        0          41d
kube-system   kube-scheduler-kmaster.example.com            0/1     Running        1          41d

kubeadm

cluster installation

What

a container orchestrator (like Swarm for Docker, but much more advanced)

Test examples
Docker Desktop
minikube
Orchestrator
container

containers: Docker, but not only (CoreOS...)

Why

provider-agnostic
servers
Google Cloud
AWS
vSphere
scale out (scalability)
high availability
microservices
orchestrate

Architecture

Kubernetes infrastructure
context

namespace

* namespace: a partitioned space (potentially with rights management...); a namespace lets you run several identical pods on the same cluster, and helps order and secure your deployments. Example: the kube-system namespace.
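
For instance, reusing the samik namespace from the contexts section:

```
kubectl create namespace samik
kubectl get pods -n samik
```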

pod

service

port/ip

hardware infra
hosts (ESX, Hyper-V, etc.)

VM

Container

software infra
cluster

slaves

workers

pods: the cornerstone of K8s
- a coherent set of containers
- one or more containers
- a K8s instance

containers

Master

master

nomenclature

namespace (naming space)
"sub"-cluster

aka a virtual cluster, a set of services

provides partitioning

volumes

* volumes: places of exchange between pods

non-persistent = internal to the pod
persistent = external
controller
= replicated pods

specifics

New node

deployment

template

context
pods

To be avoided because: no persistence; prefer deployments.

container

communicate via localhost (aka the pod's IP)

same volumes

same network

manifest

format

json

yaml

example

Multi-container pods

```
apiVersion: v1
kind: Pod
metadata:
  name: monpod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
  - name: mondebian
    image: debian
    command: ["sleep", "600"]
```

Note

The READY column shows the number of containers:

NAME                         READY   STATUS    RESTARTS   AGE
pod/monnginx-fdb889c86-hjlq4     1/1     Running   1          2d1h
pod/monnginx-fdb889c86-n9b8f     1/1     Running   1          2d1h
pod/monnginx2-59bf8fd596-p8c97   1/1     Running   1          46h
pod/monpod                       2/2     Running   2          29m

kubectl describe pod/monpod -n samik|grep -B 1 -i Container

```
IP:           192.168.136.81
Containers:
--
  nginx:
    Container ID:  docker://a5b289f97a68f8a0874f97a3c224023c698425bc885609459e9306824b092807
--
  mondebian:
    Container ID:  docker://ff789fe18376b29416c84412415c70360a4fd1c78df6243e1d8b879d66a10763
--
  Ready            True
  ContainersReady  True
--
  Normal  Pulled   49m                  kubelet  Successfully pulled image "nginx" in 4.634615802s
  Normal  Created  49m                  kubelet  Created container nginx
  Normal  Started  49m                  kubelet  Started container nginx
--
  Normal  Pulling  8m29s (x5 over 49m)  kubelet  Pulling image "debian"
  Normal  Created  8m25s (x5 over 48m)  kubelet  Created container mondebian
  Normal  Started  8m25s (x5 over 48m)  kubelet  Started container mondebian
```

Port exposure

Configuration: namespace, labels, annotations

type

multicontainer

monocontainer

deployment

the manifest dissected

apiVersion: apps/v1		# the API version, required
kind: Deployment		# the manifest type, here a Deployment
metadata:				# the global data
  name: myfirstdeploy
  namespace: default
spec:					# the specifics
  replicas: 5			# the number of replicated pods
  selector:				# the pod selection criteria
    matchLabels:		# the pods' labels
      app: monfront
  template:				# the template applied to our pods
    metadata:			# the pods are given the "monfront" label
      labels:
        app: monfront
    spec:				# the spec inside the pods
      containers:		# the desired containers
      - image: nginx:1.16			 # base image; next: 1.17
        imagePullPolicy: Always		# keep the image up to date
        name: podfront				# the container name
        ports:
        - containerPort: 80			# the access port
        readinessProbe:				# the health conditions the container must meet
          httpGet:
             path: /
             port: 80
          initialDelaySeconds: 5	# after 5 seconds
          periodSeconds: 5			# then every 5 seconds
          successThreshold: 1		# the container is valid after a single success


creation/deletion
scaling up/down
services
access via a port

service > port > pod