Kubernetes
Kubernetes (/ˌk(j)uːbərˈnɛtɪs, -ˈneɪtɪs, -ˈneɪtiːz, -ˈnɛtiːz/, commonly abbreviated K8s) is an open-source container orchestration system for automating software deployment, scaling, and management. Originally designed by Google, the project is now maintained by the Cloud Native Computing Foundation.
The name Kubernetes originates from Greek, meaning 'helmsman' or 'pilot'. Kubernetes is often abbreviated as K8s, counting the eight letters between the K and the s (a numeronym).
Kubernetes works with containerd and CRI-O. Its suitability for running and managing large cloud-native workloads has led to its widespread adoption in the data center. There are multiple distributions of this platform, from independent software vendors (ISVs) as well as hosted offerings from all the major public cloud vendors.
Show all current pods
kubectl get pods
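By default this lists only the current namespace. Add -A (short for --all-namespaces) to list pods across all namespaces:
kubectl get pods -A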
Show current persistent volumes
kubectl get pv
Show current persistent volume claims
kubectl get pvc
Copy a file into a container of a pod
kubectl cp start.sh pod1:/tmp/ -c container1
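Copying in the other direction works the same way, with the pod path as the source (the file name below is just an example):
kubectl cp pod1:/tmp/output.log output.log -c container1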
Execute a command within a container of a pod
kubectl exec -it pod1 -c container1 -- /tmp/start.sh
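The same form gives an interactive shell, assuming the container image ships one:
kubectl exec -it pod1 -c container1 -- /bin/sh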
Create a persistent volume for a pod to claim
Create the YAML file first:
echo "--- apiVersion: v1 kind: PersistentVolume metadata: name: persistentvolume01 spec: accessModes: - ReadWriteOnce capacity: storage: 10Gi storageClassName: manual hostPath: path: /mnt/somedir" > persistentvolume01.yaml
Create the actual volume using the yaml:
kubectl create -f persistentvolume01.yaml
Delete the persistent volume if necessary:
kubectl delete -f persistentvolume01.yaml
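A pod claims this volume through a PersistentVolumeClaim whose storageClassName, access mode and requested size match the volume above. A minimal sketch, with persistentvolumeclaim01 used only as an example name:
echo "---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: persistentvolumeclaim01
spec:
  # matches the manual storage class and 10Gi capacity of persistentvolume01
  accessModes:
    - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 10Gi" > persistentvolumeclaim01.yaml
kubectl create -f persistentvolumeclaim01.yaml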
Enter the MongoDB database shell 'mongosh':
kubectl exec -it database-sw-mongo-0 -- mongosh -u $(kubectl get secret database-sw-mongo-admin -o jsonpath='{.data.user}' | base64 -d) -p $(kubectl get secret database-sw-mongo-admin -o jsonpath='{.data.password}' | base64 -d) --authenticationDatabase admin --tls --tlsAllowInvalidHostnames --tlsAllowInvalidCertificates database
Set the PasswordHash field of a MongoDB user to "somehash":
db.AspNetUsers.updateOne( {"Name": "username"}, { $set: {"PasswordHash": "somehash"} } )
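To verify the change, query the same document (collection and field names as above):
db.AspNetUsers.find( {"Name": "username"}, {"Name": 1, "PasswordHash": 1} )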
Show all current attachments in the selected database:
db.getCollection("Records").aggregate([
  /* match all records with an attachment field value */
  { $match: { "Values": { $elemMatch: { "v._v": { $elemMatch: { _t: "Attachment" } } } } } },
  /* project to reduce size */
  { $project: { "Values": 1 } },
  /* unwind into individual fields */
  { $unwind: "$Values" },
  /* match attachment fields */
  { $match: { "Values.v._v": { $elemMatch: { _t: "Attachment" } } } },
  /* unwind into individual attachments */
  { $unwind: "$Values.v._v" }
]);
Install Kubernetes to Ubuntu
The following commands install MicroK8s on Ubuntu:
sudo snap install microk8s --classic
Add your user to the microk8s admin group and fix permissions:
sudo usermod -a -G microk8s $USER
sudo chown -f -R $USER ~/.kube
Log out and log back in to that user for this to take effect.
Check the status of the service:
microk8s status --wait-ready
Enable services:
microk8s enable dashboard dns ingress metallb
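Enabling metallb prompts for an IP address range for it to hand out; the range can also be passed inline (the range below is only an example and must be free in your network):
microk8s enable metallb:10.0.0.100-10.0.0.120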
Use the following to check for available services to enable:
microk8s enable --help
Start using microk8s:
microk8s kubectl get all --all-namespaces
Access the dashboard:
microk8s dashboard-proxy
Clustering
To create a cluster out of two or more already-running MicroK8s instances, use the microk8s add-node command. As of MicroK8s 1.19, clustering of three or more nodes will automatically enable high availability. The MicroK8s instance on which the command is run will host the Kubernetes control plane:
microk8s add-node
The add-node command prints a microk8s join command which should be executed on the MicroK8s instance(s) that you wish to join to the cluster (NOT THE NODE YOU RAN add-node FROM). For example:
microk8s join ip-172-31-20-243:25000/DDOkUupkmaBezNnMheTBqFYHLWINGDbf
Joining a node to the cluster should only take a few seconds. Afterwards you should be able to see the node has joined:
microk8s kubectl get no
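To take a node out of the cluster again, run the following on the departing node:
microk8s leave
Then remove it on the control plane node (the node name below is just an example):
microk8s remove-node node2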
Use NFS for Persistent Volumes
Provision NFS mounts as Kubernetes Persistent Volumes on MicroK8s.
NFS server
Either use an existing NFS server or install one. The following shows how to install it on Ubuntu:
apt install nfs-kernel-server
The directory /srv/nfs will be used as the shared folder. Create it and set its ownership and permissions:
mkdir -p /srv/nfs
chown nobody:nogroup /srv/nfs
chmod 0777 /srv/nfs
Edit /etc/exports. The following entry allows all IP addresses in the 10.0.0.0/24 subnet:
/srv/nfs 10.0.0.0/24(rw,sync,no_subtree_check)
Restart the NFS server:
systemctl restart nfs-kernel-server
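To confirm the directory is exported, showmount (from the NFS client utilities, nfs-common) can be run on the server:
showmount -e localhost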
Install the CSI driver for NFS
Enable the Helm3 addon (if not already enabled) and add the repository for the NFS CSI driver:
microk8s enable helm3
microk8s helm3 repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
microk8s helm3 repo update
This will install the Helm chart under the kube-system namespace:
microk8s helm3 install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --set kubeletDir=/var/snap/microk8s/common/var/lib/kubelet
After deploying the Helm chart, wait for the CSI controller and node pods to come up using the following kubectl command:
microk8s kubectl wait pod --selector app.kubernetes.io/name=csi-driver-nfs --for condition=ready --namespace kube-system
If successful, you will see "condition met". List the available CSI drivers in the Kubernetes cluster:
microk8s kubectl get csidrivers
Create a StorageClass for NFS
This creates a Kubernetes StorageClass that uses the nfs.csi.k8s.io CSI driver. Create the following file sc-nfs.yaml and change 10.0.0.42 to the address of the NFS server:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 10.0.0.42
  share: /srv/nfs
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
Apply it on the MicroK8s cluster:
microk8s kubectl apply -f - < sc-nfs.yaml
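Optionally confirm that the new storage class is registered:
microk8s kubectl get storageclass nfs-csi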
The final step is to create a new 5Gi PersistentVolumeClaim using the nfs-csi storage class. This is as simple as specifying storageClassName: nfs-csi in the PVC definition in the file pvc-nfs.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  storageClassName: nfs-csi
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 5Gi
Then create the PVC with:
microk8s kubectl apply -f - < pvc-nfs.yaml
Check the PVC configuration:
microk8s kubectl describe pvc my-pvc
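To consume the claim, reference it from a pod spec. A minimal sketch saved as, for example, pod-nfs.yaml (the pod name and nginx image are only illustrations):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test-pod
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        # mount the NFS-backed claim into the container
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc
Apply it the same way and the NFS share will be mounted into the container:
microk8s kubectl apply -f - < pod-nfs.yaml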