Kubernetes Intro

MobaXterm is your ultimate toolbox for remote computing.
https://mobaxterm.mobatek.net/

Google Kubernetes Engine (GKE): How to Stop and Start a GKE Cluster

YAML Formatter:
https://codebeautify.org/yaml-editor-online

Kubernetes

1. Go to the Google Cloud Console
2. Create a Billing Account
3. Navigate: Left Menu -> Solutions -> All Products -> Kubernetes
4. Create Project -> Select Project
5. Enable the Kubernetes Engine API

7. Create Cluster
8. By default it will create an Autopilot cluster, but we need to click "Switch to Standard Cluster"
9. Click "USE A SETUP GUIDE"
10. Click "My first cluster"
Create your first cluster
Cluster name: my-first-cluster-1
Cluster zone: us-central1-c
Version: Rapid release channel
Machine type: g1-small instead of e2-medium
Boot disk size: 32GB instead of 100GB boot disk size
Autoscaling: Disabled
Cloud Operations for GKE: Disabled
11. Click "Create Now"; it will take a few minutes. Progress is shown at the top until the cluster is created.
12. Notifications: Create Kubernetes Engine cluster "my-first-cluster-1"
13. Click the ... (three dots) in the right column; it shows "Connect". Click it and the command-line access command is displayed:
gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project upbeat-voice-428207-d3
14. A pop-up will appear; click "Authorize".

15. gcloud auth login --no-launch-browser

Enter the following verification code in gcloud CLI on the machine you want to log into. This is a credential similar to your password and should not be shared with others.

xxxxx

16. From the Cloud Shell terminal we can switch to the editor. If we face a problem while switching to the editor, follow these steps:
1. Clear cookies
2. gcloud auth login --no-launch-browser
3. Try step 16 again
4. Execute "gcloud init"
5. Pick configuration to use (1/2)
6. Choose the account you would like to use to perform operations for this configuration
7. Select the email ID
8. Or switch to safe mode (append &cloudshellsafemode=true to the Cloud Shell URL)
9. In case Cloud Shell does not point to your cluster, check the steps below:
1. Check safe mode
2. Check the current context:
kubectl config use-context gke_upbeat-voice-428207-d3_us-central1-c_my-first-cluster-1
17. When you create a new cluster, you need to update your kubeconfig file to point to the new cluster.
1. Authenticate to the new cluster:
Use gcloud to get the credentials for the new cluster, which will update your kubeconfig file with the new cluster's information.

gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID
gcloud container clusters get-credentials my-first-cluster-1 --zone us-central1-c --project upbeat-voice-428207-d3

Replace CLUSTER_NAME, ZONE, and PROJECT_ID with your new cluster’s name, the zone where the cluster is located, and your project ID respectively.

2. Verify the context:

kubectl config current-context

//gke_upbeat-voice-428207-d3_us-central1-c_my-first-cluster-1

kubectl config use-context gke_upbeat-voice-428207-d3_us-central1-c_my-first-cluster-1
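
If you are unsure which contexts exist in your kubeconfig before switching, they can be listed first (standard kubectl commands):

kubectl config get-contexts # lists all contexts; the current one is marked with *
kubectl config view --minify # shows only the configuration of the active context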

# Get the project list:
gcloud projects list
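
To make one of the listed projects the default for later gcloud commands (so --project can be omitted), the project can be set in the active configuration; use your own project ID here:

gcloud config set project upbeat-voice-428207-d3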


# On Cloud Shell
1. kubectl get nodes // shows the created node details
2. kubectl get pods // lists any pods that have been created
Kubectl
1. Imperative Approach
kubectl
Pod
Deployment
Service
2. Declarative Approach
YAML (manifest information) & kubectl
Pod
ReplicaSet (can only be created in declarative mode)
Deployment
Service

# kubectl

command: get, create, apply, describe, exec, logs, delete
type: pods, nodes, rs, deployment, service
options: -o wide, -o yaml, --show-labels

Ex: kubectl get pod webserver -o wide, kubectl describe pod webserver

# kubectl is used for creating, viewing and deleting objects

# 1. create pod
kubectl run mywebserver --image=nginx
kubectl exec -it mywebserver -- bash
root@mywebserver:/# ls
kubectl delete pod mywebserver
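
As a bridge between the imperative and declarative approaches, kubectl can also show (or save) the manifest that an imperative command would create, using the standard --dry-run flag:

# Generate the YAML for the pod without actually creating it
kubectl run mywebserver --image=nginx --dry-run=client -o yaml > mywebserver.yml
cat mywebserver.yml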

# Running from editor in cloud shell
1. Open the editor
2. Create a YAML file

“single-container-pod.yml”

apiVersion: v1
kind: Pod
metadata:
  name: first-pod
  labels:
    app: myweb
    tier: dev
spec:
  containers:
  - name: my-first-container
    image: nginx
# ls

# Create a pod :
kubectl create -f single-container-pod.yml
“pod/first-pod created”

# To get details of the pod
kubectl describe pod first-pod

1. The scheduler assigns the pod to a default node
2. The image is pulled
3. The container is created and started

Normal Scheduled 5m23s default-scheduler Successfully assigned default/first-pod to gke-my-first-cluster-1-default-pool-ef715727-337m
Normal Pulling 5m22s kubelet Pulling image “nginx”
Normal Pulled 5m22s kubelet Successfully pulled image “nginx” in 124ms (124ms including waiting). Image size: 70984068 bytes.
Normal Created 5m22s kubelet Created container my-first-container
Normal Started 5m22s kubelet Started container my-first-container

# To know which node the pod is running on:
kubectl get pods -o wide

# To interact with pod
kubectl exec -it first-pod -- /bin/sh

# To exit from the pod: Ctrl+D

# If you modify the YAML file, you cannot "create" the same pod again; use the apply command instead

# After editing the YAML and applying it, we can verify the change with describe

# We can run a command from the YAML file

- This pod's container executes the command and then dies immediately, but we can still verify that it ran
"command.yml": we can run a shell command via "command"; while creating a pod we can define both the command and its arguments

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]

// Status: CrashLoopBackOff (the container exits as soon as printenv finishes, so Kubernetes keeps restarting it)

# We can check the logs
kubectl logs command-demo

command-demo
tcp://34.118.224.1:443
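
A minimal variation (my own sketch, not part of the original lab) avoids the restart loop by keeping the container alive after printing the variables:

apiVersion: v1
kind: Pod
metadata:
  name: command-demo-sleep
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    # print the variables once, then keep the container running so the pod stays up
    command: ["/bin/sh", "-c", "printenv HOSTNAME KUBERNETES_PORT; sleep 3600"]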

MULTI CONTAINER POD
===================
- We should not create two containers in a pod for the same purpose
- But for different purposes (e.g. app + helper) we can run two containers in one pod

“multi-container-pod.yml”

apiVersion: v1
kind: Pod
metadata:
  name: multi-pod
spec:
  containers:
  - name: c00
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do echo Test Lab1; sleep 10; done"]
  - name: c01
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do echo Test Lab2; sleep 10; done"]

# Debug with Interactive Pod:
Launch an interactive pod to manually check the commands.

kubectl run debug-pod --rm -i --tty --image=ubuntu:latest -- /bin/bash

# Check Container Logs for Errors:
kubectl logs multi-pod -c c00
kubectl logs multi-pod -c c01

# Ensure Correct Image and Commands:
Ensure that the image used (ubuntu:latest) includes bash. Alternatively, use another image like debian that includes bash.

# We can also check with the describe command

ENVIRONMENT
===========
“environments.yml”
apiVersion: v1
kind: Pod
metadata:
  name: environments1
spec:
  containers:
  - name: c00
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do echo Test Lab1; sleep 10; done"]
    env:
    - name: Google_Cloud
      value: Google

kubectl exec -it environments1 -- /bin/bash

root@environments1:/# ps -ef (or) ps ef
root@environments1:/# env

# Describe the pod to see the details
# An IP address is assigned only to the pod, not to each container. In Docker, by contrast, each container has its own IP address
# If we create two containers in a pod, each can have its own environment variable values
# If we exec into the pod without specifying a container, by default it goes to the first container in the list
kubectl exec -it environments1 -- /bin/bash

# If we want to interact with a specific container (see the sketch after this section for a multi-container manifest)
kubectl exec -it multi-pod-env -c con1 -- /bin/bash (-c con1 is the container name)
root@environments1:/# env
exit

// In the same way we can check container 2
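
The multi-pod-env manifest used above is not shown in these notes; a minimal sketch of how it could look (the file name, container names con1/con2 and the APP_ENV variable are assumptions for illustration):

"multi-pod-env.yml"

apiVersion: v1
kind: Pod
metadata:
  name: multi-pod-env
spec:
  containers:
  - name: con1
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do echo container 1; sleep 10; done"]
    env:
    - name: APP_ENV # assumed variable, each container gets its own value
      value: value-for-con1
  - name: con2
    image: ubuntu:latest
    command: ["/bin/bash", "-c", "while true; do echo container 2; sleep 10; done"]
    env:
    - name: APP_ENV
      value: value-for-con2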

PORT ACCESS
===========
How to expose a port on a container.
If we have two containers (nginx, curl) in a pod, they share the pod's network (localhost), so one container can access the other.

“pod-port.yml”
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  labels:
    app: myweb
    tier: dev
spec:
  containers:
  - name: my-first-container
    image: nginx
    ports:
    - containerPort: 80
  - name: curl
    image: appropriate/curl # Using a different image that includes curl
    command: ["/bin/sh", "-c", "while true; do curl http://localhost:80/; sleep 10; done"]

# logs from first container :
kubectl logs webserver

// We can see the GET HTTP requests in the nginx access log

# logs from curl container :
kubectl logs -c curl pod/webserver

// We can see nginx home page

LABELS, SELECTORS And ANNOTATIONS
=================================

# Inside k8s objects (Pod, ReplicaSet, Deployment), labels and selectors play an important role

# We create different pods for different purposes (staging, dev, prod) even though the application is the same. Instead of giving each one a different name, we can use labels and filter on them easily.
Ex: if we want to scale only the prod pods, we can filter on that label

labels:
  environment: prod / dev / staging
  app: nginx

selector:
  matchLabels:
    environment: prod / dev / staging

# While creating the pod we define labels. Later, when scaling (which environment? prod, dev, ...), we select the pods with matchLabels

Two types of selectors:
1. Equality-based (old method)
2. Set-based

1. Equality-based:
——————
Operators: =, ==, !=

Examples :
environment = production
tier != frontend

Command line :
$ kubectl get pods -l environment=production (-l means label selector)

In manifest:
selector:
  environment: production
  tier: frontend

Supported objects: Services, ReplicationController

2. Set-based:
————-
Operators : in, notin, exists

Examples:
environment in (production, qa)
tier notin (frontend, backend)

Command line :
$ kubectl get pods -l 'environment in (production, qa)' (-l means label selector)

In Manifest :
...selector:
  matchExpressions:
  - {key: environment, operator: In, values: [prod, qa]}
  - {key: tier, operator: NotIn, values: [frontend, backend]}

Supported objects: Job, Deployment, ReplicaSet and DaemonSet

Examples :
==========
“pod-label-1.yml”

apiVersion: v1
kind: Pod
metadata:
  name: webapp1
  labels:
    environment: prod
    app: nginx
spec:
  containers:
  - name: webapp-cont
    image: nginx
    ports:
    - containerPort: 80

“pod-label-2.yml”

apiVersion: v1
kind: Pod
metadata:
  name: webapp2 # a unique pod name, so all three pods can exist at the same time
  labels:
    environment: dev
    app: nginx
spec:
  containers:
  - name: webapp-cont
    image: nginx
    ports:
    - containerPort: 80

“pod-label-3.yml”

apiVersion: v1
kind: Pod
metadata:
  name: webapp3
  labels:
    environment: staging
    app: nginx
spec:
  containers:
  - name: webapp-cont
    image: nginx
    ports:
    - containerPort: 80

# cloud shell
kubectl apply -f pod-label-1.yml
kubectl apply -f pod-label-2.yml
kubectl apply -f pod-label-3.yml

kubectl get pods

# to see labels
kubectl get pods --show-labels

# filter based on labels using equality-based selection
kubectl get pods -l 'environment=prod' --show-labels // prod
kubectl get pods -l 'environment!=staging' --show-labels // prod, dev
kubectl get pods -l 'environment!=dev' --show-labels // prod, staging

# set-based selection
kubectl get pods -l 'environment in (prod,dev)' --show-labels // prod, dev
kubectl get pods -l 'environment notin (prod,dev)' --show-labels // staging
kubectl get pods -l 'environment notin (staging)' --show-labels // prod, dev
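
Labels can also be added or changed on a running pod directly from the command line with kubectl label (standard syntax); for example, to move the staging pod above to prod:

kubectl label pod webapp3 environment=prod --overwrite # --overwrite is needed because the label already exists
kubectl get pods --show-labels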

ANNOTATIONS
===========
"metadata": {
  "annotations": {
    "key1": "value1",
    "key2": "value2"
  }
}

metadata:
  name: lco-annotations-demo
  annotations:
    imageregistry: "https://hub.docker.com/"
    timestamp: "1234566"
    JIRA-issue: "https://xyz/issue/abc-123"
    node-version: "13.1.0"
    previous-configuration: "{some json containing the previously deployed configuration of the object}"
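
Annotations can likewise be set from the command line; a small example against one of the pods above (the key and value are just for illustration):

kubectl annotate pod webapp1 imageregistry="https://hub.docker.com/"
kubectl describe pod webapp1 # the new entry appears under Annotations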

REPLICA-SET (For High Availability and Load Balancing)
===========
# “nginx-rs.yml”

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  selector: # selection process to filter pods
    matchLabels:
      app: nginx-app
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx-app
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

# kubectl apply -f nginx-rs.yml
# kubectl get rs
# kubectl describe rs nginx-rs

# If we want to scale UP:
kubectl scale rs nginx-rs --replicas=5
kubectl get pods
// it will run 5 pods

# If we want to scale DOWN:
kubectl scale rs nginx-rs --replicas=3
// it will run only 3 pods
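
A ReplicaSet also self-heals: if one replica is deleted, the controller immediately creates a replacement to keep the desired count. A quick check (the pod-name suffix is a placeholder):

kubectl get pods # note one of the nginx-rs-xxxxx pod names
kubectl delete pod nginx-rs-xxxxx # delete one replica
kubectl get pods # a new pod is created automatically to keep 3 replicas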

# To delete the ReplicaSet:
kubectl delete -f nginx-rs.yml (we are giving file name here)
kubectl get pods

kubectl get pods
kubectl get rs
kubectl get rc
kubectl get deploy

DEPLOYMENT (Version updates with no downtime - while one pod goes down, another comes up)
=========================================
# “mydeploy.yml”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployments # deployment name (referenced by the commands below)
spec:
  replicas: 2
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod # pod
      labels:
        name: deployment # label-based selection
    spec:
      containers:
      - name: c00
        image: ubuntu
        command: ["/bin/bash", "-c", "while true; do echo version1; sleep 5; done"]

kubectl apply -f mydeploy.yml
kubectl get deploy
kubectl get rs
kubectl get pods
kubectl describe deploy mydeployments
# To check the pod logs (use the actual pod name):
kubectl logs -f mydeployments-56788

# Make some changes in the YAML file (e.g. image: centos, change the echo to version2), then:
# kubectl apply -f mydeploy.yml
kubectl get pods

// We can see the pod statuses: one Terminating, one ContainerCreating, then Running. The same applies to the other pods.

# The old ReplicaSet is scaled down to 0 (and kept for rollback) while a new one is created
kubectl get rs

# Check the pod details
kubectl get pods
kubectl logs -f mydeployments-56788
// version2 is printed

# After making changes, we can check details inside the pod
kubectl exec mydeployments-56788 -- cat /etc/os-release (note the actual pod name)

# Whenever we change the pod template in the YAML, a new ReplicaSet is created and the old one is scaled down
# Confirm version 3 by checking the pod logs
kubectl get pods
kubectl logs -f <pod-name>

ROLLBACK
——–
kubectl get pods
kubectl logs -f <pod-name> // version 3

Check: kubectl get rs (the 3rd version's ReplicaSet is listed along with the previous ones)
# kubectl rollout undo deployment mydeployments

// there will be 3 rs
// ReplicaSet no. 2 will be running, the others are scaled down
// check the DESIRED, CURRENT, READY statuses
// check the pods, because the old ones are deleted
# kubectl get pods
kubectl logs -f <pod-name> // version 2 will be printed

# If we run undo again, it will not go to version 1; it will go back to version 3
kubectl rollout undo deployment mydeployments

# kubectl get pods
kubectl logs -f <pod-name> // version 3 will be printed

# if we want to go to version 1
kubectl rollout history deployment mydeployments
// Revisions: 1, 4, 5

# to go to version 1
kubectl rollout undo deployment mydeployments –to-revision=1

# kubectl get rs
# kubectl get pods
kubectl logs -f <pod-name> // version 1 will be printed
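
While any of these rollouts or rollbacks are in progress, they can be followed with the standard rollout status command:

kubectl rollout status deployment mydeployments # waits until the new ReplicaSet's pods are ready
kubectl rollout history deployment mydeployments # list the revisions again to confirm where we are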

SERVICE Object
==============

Types of Service
—————
1. ClusterIP : Pod-to-Pod [Cloud LB: No] [Within the cluster]
2. NodePort : External client-to-Pod (no load balancing between nodes) [Cloud LB: No] [Outside communication]
3. LoadBalancer : External client-to-Pod (load balancing between nodes) [Cloud LB: Yes] [Outside communication]

# Here the node has an IP address and the pod has an IP address, but there is no container-level IP address (unlike Docker, where each container gets its own IP)
1. Create the deployment and service YAML files
Check:
ls - do the files exist?
kubectl get pods
kubectl get rs
kubectl get svc - by default it returns the default kubernetes ClusterIP service

# “nginx-deploy.yml”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app # pod label should match the service selector label
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

# “nginx-svc-np.yml”

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: NodePort
  ports:
  - nodePort: 31111 # NodePort (optional) (NodePort range: 30000-32767)
    port: 80 # Service port; the service port and targetPort need not be the same
    targetPort: 80 # Container port; this and the containerPort should be the same

# create deployment & services

kubectl apply -f nginx-deploy.yml
kubectl get pods

kubectl apply -f nginx-svc-np.yml
kubectl get svc

// my-service NodePort 34.118.225.140 80:31111/TCP 9s
- Here 80 is the service port and 31111 is the NodePort

To get the node IP details use -o wide:
kubectl get nodes -o wide

gke-my-first-cluster-1-default-pool-41f75d87-j089 Ready 79m v1.30.1-gke.1329000 10.128.0.9 35.225.140.182 Container-Optimized OS from Google 6.1.90+ containerd://1.7.15

// We try from Chrome: "35.225.140.182:31111". If it is not working, we need to create a firewall rule in Google Cloud:

# Create Firewall Rule
gcloud compute firewall-rules create fw-rule-gke-node-port --allow tcp:31111

# Generic form - replace NODE_PORT with your service's node port
gcloud compute firewall-rules create fw-rule-gke-node-port --allow tcp:NODE_PORT

# List Firewall Rules
gcloud compute firewall-rules list

# After creating the firewall rule, check in Chrome again; it will work and we are able to access our pod
http://35.225.140.182:31111/

DELETE Objects
==============
# Deleting a Deployment

kubectl delete deployment nginx-deployment
kubectl get deployments / deploy

# Deleting a Service

kubectl delete service my-service
kubectl get services / svc

# check End Point
kubectl get ep

LOAD BALANCER
=============

Inside the service YAML, we need to set the type to LoadBalancer:
1. # "nginx-deploy-lb.yml"

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app # pod label should match the service selector label
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80

2.# “nginx-svc-lb.yml”

apiVersion: v1
kind: Service
metadata:
  name: my-service
  labels:
    app: nginx-app
spec:
  selector:
    app: nginx-app
  type: LoadBalancer
  ports:
  - nodePort: 31000 # NodePort (optional) (NodePort range: 30000-32767)
    port: 80 # Service port; the service port and targetPort need not be the same
    targetPort: 80 # Container port; this and the containerPort should be the same

kubectl apply -f nginx-deploy-lb.yml
kubectl get pods

kubectl apply -f nginx-svc-lb.yml
kubectl get svc
kubectl describe svc my-service
kubectl get ep (endpoints)

// Now you can see that a LoadBalancer external IP has been generated
// my-service LoadBalancer 34.118.230.247 130.211.221.25 80:31000/TCP 49s
// http://130.211.221.25/ (this LoadBalancer IP works directly; on each refresh the request may go to a different pod, i.e. it distributes requests)
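
Right after applying the service, the EXTERNAL-IP column may show <pending> while GCP provisions the load balancer; it can be watched with kubectl's standard -w (watch) flag:

kubectl get svc my-service -w # wait until EXTERNAL-IP changes from <pending> to a real address (Ctrl+C to stop watching)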

# When describing the service we can see:
- Annotations,
- Selector,
- Type,
- NodePort (if not specified in the service, a random port is assigned),
- IP (this is what we would map to a domain name like "abc.com"),
- Endpoints (three endpoints are mapped, one per pod; this happens because the deployment template/pod labels and the service selector match, so all the pods are associated with the service)

# LoadBalancer is an extension of NodePort
- NodePort does not distribute traffic between nodes
- LoadBalancer does distribute traffic

CLUSTERIP (A service for communication between pods/services inside the cluster)
=========
# NodePort, LoadBalancer, or Ingress (reverse proxy, HTTPS, ... more advanced) are the service types used for the frontend
1. # “backend-deployment.yml”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: hello
      tier: backend
      track: stable
  replicas: 3
  template:
    metadata:
      labels:
        app: hello # these hello/backend labels will be matched by the backend service
        tier: backend
        track: stable
    spec:
      containers:
      - name: hello
        image: "gcr.io/google-samples/hello-go-gke:1.0"
        ports:
        - name: http
          containerPort: 80

2. # “backend-service.yml”

apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
  - protocol: TCP
    port: 80
    targetPort: http # targetPort can be a number, or a named port defined in the deployment (as here)
# No type is given here, so it defaults to ClusterIP

3. # “frontend-deployment.yml”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: hello
      tier: frontend
      track: stable
  replicas: 1 # only one replica here while the backend has 3; the frontend could also have any number
  template:
    metadata:
      labels:
        app: hello
        tier: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "gcr.io/google-samples/hello-frontend:1.0"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx", "-s", "quit"]

4. # “frontend-service.yml”

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: hello
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer # type is LoadBalancer because the frontend receives external traffic

# kubectl apply -f backend-deployment.yml
# kubectl get pods // 3 pods

# kubectl apply -f backend-service.yml
# kubectl get svc // the hello ClusterIP service is created

# kubectl apply -f frontend-deployment.yml // 1 replica
# kubectl get pods // 4 pods now (3 backend + 1 frontend)

# kubectl apply -f frontend-service.yml
# kubectl get svc
// the frontend LoadBalancer service is created
// the hello ClusterIP service is still there

kubectl describe svc hello
// We can see Type: ClusterIP and Endpoints: ... (the 3 pod IPs)
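
A quick way to verify the ClusterIP service from inside the cluster is a throwaway pod; this is my own sanity check (busybox and wget are assumptions, not part of the original lab). The service name "hello" resolves through cluster DNS:

kubectl run curl-test --rm -i --tty --image=busybox -- sh
/ # wget -qO- http://hello/ # "hello" resolves to the ClusterIP service and load-balances across the backend pods
/ # exit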

STORAGE VOLUMES
================
Type:
1. Volumes – Same lifetime as pods [tightly coupled] [Ephemeral – temporary]
2. Persistent Volumes – Beyond pods lifetime [Durable] [Decoupled – create another pod and attach it]

1. # “pod-with-emptydir.yml”

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployments
spec:
  replicas: 1 # it is just a storage demo, so no need for more replicas
  selector:
    matchLabels:
      name: deployment
  template:
    metadata:
      name: testpod
      labels:
        name: deployment
    spec:
      containers:
      - name: c00
        image: centos
        command: ["/bin/bash", "-c", "while true; do echo Storage Volume; sleep 10; done"]
        volumeMounts:
        - mountPath: /data # mount point inside the container
          name: data-volume # name of the volume (must match the volume below)
      volumes:
      - name: data-volume
        emptyDir: {} # an empty directory created for the pod

# kubectl apply -f pod-with-emptydir.yml
# kubectl get pods
# kubectl describe pod mydeployments-xxx
//we can see Volume, Mountpath
# interact with pod
kubectl exec -it mydeployments-xxx -- /bin/bash
/# ls => the data directory is there
/# cd data
/data# echo "Welcome Google" > mydata.txt
/data# ls => mydata.txt
/# exit // exit from the pod

# Now restart the pod with a rollout restart; the old pod is deleted and a new pod is created
# Then interact with the new pod and check whether the data still exists
# kubectl get deploy // get deployment
# kubectl get pods
# kubectl rollout restart deployment mydeployments
# kubectl get pods
// The old pod will be "Terminating", the new one will be created and Running
# kubectl exec -it mydeployments-xxx -- /bin/bash
/# ls => the data directory still exists
/# cd data
/data# ls => empty (the files written earlier are gone)

// The files exist only for the lifetime of the pod; after that they are deleted

Multipod With EmptyDir
———————-
Create 3 containers with 3 mount paths (/mounted-data-1, 2, 3) but the same volume name "data-volume"; the volume is shared by all containers

1. # “multi-pod-with-emptydir.yml”

apiVersion: v1
kind: Pod
metadata:
  name: shared-emptydir-volume
spec:
  containers:
  - image: ubuntu
    name: container-1
    command:
    - /bin/bash
    - -ec
    - sleep 3600
    volumeMounts:
    - mountPath: /mounted-data-1
      name: data-volume
  - image: ubuntu
    name: container-2
    command:
    - /bin/bash
    - -ec
    - sleep 3600
    volumeMounts:
    - mountPath: /mounted-data-2
      name: data-volume
  - image: ubuntu
    name: container-3
    command:
    - /bin/bash
    - -ec
    - sleep 3600
    volumeMounts:
    - mountPath: /mounted-data-3
      name: data-volume
  volumes:
  - name: data-volume
    emptyDir: {}

# kubectl apply -f multi-pod-with-emptydir.yml
# kubectl get pods
# Now interact with a specific container in the pod
kubectl exec -it shared-emptydir-volume -c container-1 -- /bin/bash
// now we are in first container
/# ls => mounted-data-1
/# cd mounted-data-1
/# ls
/# echo "Data from container-1" > data-1.txt
/# ls
/# exit
Now go to container-2:
kubectl exec -it shared-emptydir-volume -c container-2 -- /bin/bash
// now we are in second container
/# ls => mounted-data-2
/# cd mounted-data-2
/# ls => data-1.txt
// Now we can see “data-1.txt”, all container share same volume
/mounted-data-2# echo "Data from Container-2" > data-2.txt
/# ls => data-1.txt, data-2.txt
/# exit
kubectl exec -it shared-emptydir-volume -c container-3 -- /bin/bash
// now we are in third container
/# ls => mounted-data-3
/# cd mounted-data-3
/# ls => data-1.txt, data-2.txt

# All 3 containers share the same volume; the mount paths differ but the volume is the same. A file created from any container is available in the shared volume.
# All of these (3 containers, 1 volume) exist inside the pod; if we delete the pod, everything is deleted.
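
The second storage type listed above, Persistent Volumes, outlives the pod. A minimal sketch of how it could look on GKE (the file name, claim name and 1Gi size are my assumptions; the cluster's default StorageClass is used):

"pvc-demo.yml" (assumed file name)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc # assumed claim name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
spec:
  containers:
  - name: c00
    image: centos
    command: ["/bin/bash", "-c", "while true; do echo Persistent Volume; sleep 10; done"]
    volumeMounts:
    - mountPath: /data
      name: data-volume
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: data-pvc # the volume now points to the claim instead of emptyDir

# kubectl apply -f pvc-demo.yml
# Files written under /data should survive pod deletion, because they live on the disk backing the claim, not in the pod.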
