Kubernetes Interview Questions and Answers
Last updated on 27th Sep 2020, Blog, Interview Question
Prepare better for your Application developer interview with the top Kubernetes interview questions curated by our experts. These Kubernetes Interview Questions & Answers will help convert your Application developer/DevOps engineer interview into a top job offer. The following list of interview questions on Kubernetes covers the conceptual questions for freshers and experts and helps you answer different questions like the difference between config map and secret, ways to monitor that a Pod is always running, ways to test a manifest without actually executing it. Get well prepared with these interview questions and answers for Kubernetes.
1.What is Kubernetes?
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services. It has a large and fast-growing ecosystem, and Kubernetes services, tools, and support are widely available.
2.Describe the history of Kubernetes?
The word Kubernetes originates from Greek, meaning pilot or helmsman. It was originally designed by Google in 2014 and became the building block for running Google's production workloads at scale. It is now maintained by the Cloud Native Computing Foundation.
3.What is a container?
It always helps to know what is being deployed in your pod, because what’s a deployment without knowing what you’re deploying in it? A container is a standard unit of software that packages up code and all its dependencies. Two optional secondary answers I have received and am OK with include: a) a slimmed-down image of an OS and b) an application running in a limited OS environment. Bonus points if you can name orchestration software that uses containers other than Docker, like your favorite public cloud’s container service.
4.Why is Kubernetes useful? Explain by walking back in time through deployment eras.
Kubernetes grew out of three deployment eras. They are:
- Traditional Deployment
- Virtualized Deployment
- Container Deployment
These three eras show why Kubernetes became necessary.
Traditional Deployment: In this era, organizations ran applications directly on physical servers. This caused resource allocation issues, which could only be addressed by running each application on a different server.
Virtualized Deployment: Virtualization was introduced to allow many virtual machines to run on a single server's CPU.
Container Deployment: Containers have relaxed isolation properties, allowing an operating system to be shared among applications.
5.Why do we need Kubernetes and what it can do?
Kubernetes is a container orchestration platform that provides a good way to bundle and run your applications. We need it to effectively manage, in a production environment, the containers that run those applications. It also provides a framework to run distributed systems resiliently.
6.What are the features of Kubernetes?
The features of Kubernetes are as follows:
- Storage orchestration
- Automated rollbacks and rollouts
- Configuration management
- Automatic bin packing
- Load balancing and service discovery
7.List out the components of Kubernetes?
There are two main groups of components that deliver a functioning Kubernetes cluster. They are:
- Master (control plane) components
- Node components
8.How does Kubernetes relate to Docker?
Kubernetes is a container orchestration platform that can use Docker as its container runtime. It is more comprehensive than Docker Swarm and is designed to coordinate clusters of nodes at scale in a well-defined manner. Docker, on the other hand, is the platform and tool for building and running Docker containers.
9.Define Kube Scheduler?
The kube-scheduler is an important component of the master node. It watches for newly created pods that have no node assigned and selects a node for them to run on.
10.What are the benefits of Kubernetes?
The benefits of Kubernetes are as follows:
- It provides easy service organizations with pods.
- It works on any of the OS as it is an open-source modular tool.
- It has a huge community among container orchestration tools.
11.What does a cloud controller manager do?
A Cloud Controller Manager (CCM) is a daemon that allows for embedding cloud-specific control loops. It abstracts the cloud-specific vendor code from the core Kubernetes code. It also helps manage communication with underlying cloud services. Its design is based on the plugin mechanism, meaning that cloud vendors integrate their code with the CCM using plugins.
12.Mention the namespaces that initially the Kubernetes starts with?
Initially the Kubernetes starts with three namespaces, and they are:
- kube-public: This is created automatically, is readable by all users, and is mostly reserved for cluster usage.
- default: This is for objects that have no other namespace.
- kube-system: This is for objects created by the Kubernetes system.
Example of the initial namespaces in Kubernetes is given below:
- kubectl get namespace
13.What are Kubernetes pods?
Pods are defined as the group of the containers that are set up on the same host. Applications within the pod also have access to shared volumes.
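A minimal Pod manifest makes this concrete. This is only a sketch; the pod name and image below are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # hypothetical pod name
spec:
  containers:
  - name: nginx
    image: nginx:1.21      # any container image works here
    ports:
    - containerPort: 80
```

Saved as pod.yaml, it could be applied with kubectl apply -f pod.yaml.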
14.What is a Kubelet?
The kubelet is the node agent that runs on each node. It works in terms of a PodSpec, which is a JSON or YAML object that describes a pod. The kubelet takes a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy.
15.What is the command Kubectl and its syntax?
It is defined as a CLI (command-line interface) for running commands against Kubernetes clusters.
The syntax for Kubectl is:
- kubectl [command] [TYPE] [NAME] [flags]
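As an illustration of that syntax (the resource names below are hypothetical, and the commands assume a running cluster):

```shell
kubectl get pods my-pod -o wide       # command=get, TYPE=pods, NAME=my-pod, flags=-o wide
kubectl describe service my-service   # command=describe, TYPE=service, NAME=my-service
kubectl delete deployment my-app      # command=delete, TYPE=deployment, NAME=my-app
```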
16.What do you understand by the term Kube-proxy?
Kube-proxy is a network proxy that runs on each and every node and reflects the services as defined in the Kubernetes API. It can also perform simple TCP/UDP stream forwarding across a set of backends.
The syntax to configure Proxy is:
- kube-proxy [flags]
17.Describe in brief the working of the master node in Kubernetes?
The Kubernetes master is designed to control the nodes, and the nodes host the containers. Pods are made up of a group of containers based on the requirements and configuration. Every container we use is present inside a pod, so once the setup for a pod is made, it can be deployed using the CLI (command-line interface). Scheduling of the pods is done based on the node and the relevant requirements. The connection between the nodes and the master components in Kubernetes is made through the kube-apiserver.
18.What is the function of Kube-apiserver?
This API server of Kubernetes is mainly used to configure and validate API objects, which include replication controllers, services, pods, and many more. The kube-apiserver serves the REST operations and provides the frontend to the cluster's shared state through which all the other components interact.
The representation for Kube-apiserver is provided as follows:
- kube-apiserver [flags]
19.What is the role of a Kube-scheduler?
It is defined as a workload-specific, policy-rich, topology-aware function that majorly impacts availability, capacity, and performance. The duty of the scheduler is to take individual and collective resource requirements, data locality, hardware/software/policy constraints, inter-workload interference, and more into account. Workload-specific requirements are exposed through the API as necessary.
The representation for the Kube-scheduler is:
- kube-scheduler [flags]
20.Describe a few words about Kubernetes Controller Manager?
Kube-controller-manager is a daemon that embeds the crucial core control loops shipped with Kubernetes. In robotics and automation applications, a control loop is a non-terminating loop that regulates the state of a particular system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver. Examples of controllers that ship with Kubernetes today are the namespace controller, the replication controller, and many more.
The representation for the Kube-controller-manager is given as:
- kube-controller-manager [flags]
21.What do you mean by the term etcd?
Kubernetes uses etcd to store all of its cluster data. The reason is that Kubernetes is a distributed system, so it needs a distributed data store. Etcd is a distributed, reliable key-value store for the most critical data of a distributed system.
22.Define the term Minikube in Kubernetes?
Minikube is a tool that makes it easy to learn and run Kubernetes locally. It runs a single-node Kubernetes cluster inside a virtual machine.
23.What is Kubernetes load balancing?
The process of load balancing lets you show or display the services. There are two types of load balancing in kubernetes, and they are:
- Internal load balancing
- External load balancing
Internal load balancing: This balancing is used to balance the load automatically and allocate the pods with the required configuration.
External load balancing: This directs the traffic from external loads to the backend pods.
24.List out the components that interact with the node interface of Kubernetes?
The main component that interacts with the node interface of Kubernetes is:
- Node Controller
25.Name the process which runs on Kubernetes Master Node?
The process that runs on Kubernetes Master Node is called the Kube-apiserver process.
26.What are Kubernetes Minions?
Nodes in Kubernetes were previously called minions; a node is a worker machine in Kubernetes. Every node in the cluster runs the services needed to host pods.
27.What is a heapster?
Heapster was a metrics collection and performance monitoring system for Kubernetes. It allowed the collection of metrics for workloads, pods, and more. (Heapster has since been deprecated in favor of the metrics-server.)
28.What are minions in the Kubernetes cluster?
- They are components of the master node.
- They are the work-horse / worker nodes of the cluster. [Answer]
- They are monitoring engines used widely in Kubernetes.
- They are Docker container services.
29.What is the future scope for Kubernetes?
Kubernetes will become one of the most used operating systems (OS) for the cloud in the future. The future of Kubernetes lies more in virtual machines (VMs) than in containers.
30.How can you ensure the security of your environment while using Kubernetes?
You can follow and implement the following security measures while using Kubernetes:
- Restrict ETCD access
- Limit direct access to nodes
- Define resource quotas
- Everything should be logged on the production environment
- Use images from the authorized repository
- Create strict rules and policies for resources
- Conduct continuous security and vulnerability scanning
- Apply security updates regularly
31.What is orchestration in software?
Application orchestration in the software process means that we can integrate two or more applications. We will be able to automate arrangement, coordination, and management of computer software. The goal of any orchestration process is to streamline and optimize frequent repeatable processes.
32.What is a pod in Kubernetes?
We can think of a Kubernetes pod as a group of containers that run on the same host. So, if we regularly deploy single containers, then our container and our pod are one and the same.
33.What are the components of Kubernetes Master machine? Explain
The following are the key components of Kubernetes Master machine:
- ETCD: ETCD is used to store the configuration data of every node present in the cluster. It is a shared key-value store that can hold a large number of key values shared among several nodes in the cluster. Because of the sensitivity of this data, the Kubernetes API Server is the only component allowed to access ETCD directly.
- API Server: Kubernetes itself is an API server that controls and manages the operations in a cluster through API Server. This server provides an interface to access various system libraries and tools to communicate with it.
- Process Planner (Scheduler): The scheduler is a major component of the Kubernetes master machine. It shares out the workload: it is responsible for monitoring how much of the workload is distributed and used across the cluster nodes, and it places new workloads after checking which nodes have resources available to receive them.
- Control Manager: This component is responsible for administering the current state of the cluster. It is a daemon process that runs continuously in a non-terminating loop, collecting data and sending it to the API server. It handles and controls the various controllers.
34.Explain the node components of Kubernetes.
The following are the major components of a server node to exchange information with Kubernetes.
- Docker: Every node contains Docker to run the containers smoothly and effectively. Docker is the basic component of every node in a cluster.
- Proxy service of Kubernetes: The proxy service is responsible for establishing communication with the host; every node communicates with the host through the proxy. It helps transmit requests to the right containers, is responsible for load balancing, and also takes part in managing the pods on the node, data volumes, the creation of new containers, secrets, etc.
- Service of Kubelet: The kubelet service lets every node exchange information with the control plane and vice versa. The kubelet is responsible for reading the node configuration details and writing the values present in the ETCD store. This service also administers port forwarding, network protocols, etc.
35.What is the role of a load balancer?
A load balancer provides a standard way to distribute network traffic among different backend services, thus maximizing scalability. Depending on the working environment, there can be two types of load balancer – Internal or External.
The Internal Load Balancer can automatically balance the load and allocate the required configuration to the pods. On the other hand, the External Load Balancer guides the external load traffic to the backend pods. In Kubernetes, the two load balancing methods operate through the kube-proxy feature.
36.State the functions of Kubernetes namespace.
The primary functions of Kubernetes namespace are stated below:
- Namespaces assist pod-to-pod communication within the same namespace.
- They are considered as virtual clusters which will be present on the same cluster.
- Namespaces are used to deliver logical segregation of team and their corresponding environments.
37.How do you create a Namespace?
To create a namespace, the following command should be written:
- kubectl create -f namespace.yml
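The namespace.yml referenced above could look like this minimal sketch; the namespace name "dev" is a placeholder:

```yaml
# namespace.yml -- a minimal Namespace manifest
apiVersion: v1
kind: Namespace
metadata:
  name: dev    # any name of your choosing
```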
38.Write commands to control the Namespace.
To control the namespace, we have to create a namespace initially:
- kubectl create -f namespace.yml
Then, we have to check the available namespaces from the list:
- kubectl get namespace
To get a specific namespace we require, use the following command:
- kubectl get namespace <xyz>
To describe the services offered by the namespace, use the command:
- kubectl describe namespace <xyz>
If you want to delete a namespace from the list, use the following command:
- kubectl delete namespace <xyz>
Note: xyz is given for example. You can give any name in the namespace region.
39.Explain how you will set up Kubernetes.
A virtual data center is the basic setup needed before installing Kubernetes. A virtual data center is essentially a set of machines that can interact with one another over a network. If the user does not have any existing cloud infrastructure, he can set up a virtual data center on PROFITBRICKS. Once this setup is complete, the user has to set up and configure the master and the nodes. For instance, we can consider the setup on Linux Ubuntu; the same setup can be followed on other Linux machines.
Installing Docker is the basic prerequisite for running Kubernetes, so we shall install Docker first. The following steps should be followed to install Docker. The user has to provide login credentials and log in as the root user.
Install the apt package and update it if necessary. If update is needed, use the commands:
- sudo apt-get update
- sudo apt-get install apt-transport-https ca-certificates
40.Once the update is installed, add a new GPG key using the command:
- sudo apt-key adv
This key will be extracted from the Docker list. Further, update the image of the API package using the command:
- sudo apt-get update
- Install Docker Engine. Check whether the kernel version you are using is the right one.
- After installing Docker Engine, install etcd. Now, install Kubernetes on the machines.
41.What do you understand by container resource monitoring?
From the user's perspective, it is vital to understand resource utilization at different abstraction layers and levels, such as containers, pods, services, and the entire cluster. Each level can be monitored using various monitoring tools.
42.What do you know about Pods in Kubernetes?
Pods contain a group of containers that are installed and run on the same host. Containers live inside pods, so configuring the pods as per the specifications is important. Scheduling of pods is carried out according to the requirements of the nodes in a cluster.
43.What are the types of Kubernetes pods? How do you create them?
Kubernetes contain two kinds of Pods. They are:
Single Container Pod: The user gives a kubectl run command in which the image available in the Docker registry is defined, which creates a single container pod.
The following command is used to create a single container pod:
- kubectl run <abcd> --image=<xyz1234>
where:
- abcd is the name of the pod
- xyz1234 is the image name on the registry
Multi Container pods: To create multi container pods, we need to create a yaml file including the details of the containers. User has to define the complete specifications of the containers such as its name, image, port details, image pull policy, database name, etc.,
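Such a multi-container yaml file could look like the following sketch; the pod name, container names, and images are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod   # hypothetical pod name
spec:
  containers:
  - name: web                 # first container
    image: nginx:1.21
    ports:
    - containerPort: 80
  - name: log-sidecar         # second container, sharing the pod's network and volumes
    image: busybox:1.35
    command: ["sh", "-c", "tail -f /dev/null"]
```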
44.What is the use of the API server in Kubernetes?
The API server is responsible for providing a frontend to the shared state of the cluster. Through this interface, the master and the nodes communicate with one another. The primary function of the API server is to validate and configure API objects, which include pods, associated services, controllers, etc.
45.What do you mean by Kubernetes images?
There is no specific Kubernetes image format as of date; Docker images are what Kubernetes actually supports. Docker images are the primary building blocks of the container infrastructure. Every container present inside a pod has a Docker image running in it.
46.Tell me about the functions of Kubernetes Jobs.
The important function of Kubernetes Jobs is to create one or more pods and to track how well they run. Jobs monitor the pods they create and ensure that the specified number of pods terminate successfully. A job is complete when the specified number of pods run and finish successfully.
47.What do you know about Labels in Kubernetes?
Labels are pairs of keys and values attached to pods, associated services, and replication controllers. Generally, labels are added to an object during creation, but they can also be modified at run time.
48.What do you know about Selectors and what are the types of selectors in Kubernetes API?
Since multiple objects can carry the same labels, selectors are used in Kubernetes. Label selectors are used to choose a specific set of objects. To date, the Kubernetes API allows two kinds of label selectors:
- Set-based selectors: This kind of selector permits filtering keys according to a set of values.
- Equality-based selectors: This kind of selector permits filtering by key and by value; a matching object should satisfy all the specified labels.
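Both kinds appear in the selector block of a ReplicaSet or Deployment spec. A fragment as a sketch, with placeholder label keys and values:

```yaml
# Fragment of a ReplicaSet/Deployment spec:
selector:
  # equality-based: the key must equal the given value
  matchLabels:
    app: my-app
  # set-based: the key's value must be in (or not in) a given set
  matchExpressions:
  - key: environment
    operator: In
    values: [dev, qa]
```

When both are given, an object must satisfy all conditions to match.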
49.What do you know about Minions? Explain.
A minion is nothing but a node in the Kubernetes cluster: a worker machine. A minion can be a virtual machine, a physical machine, or a cloud instance. Every node in a cluster must meet the configuration requirements to run pods. Two prime services, the kubelet and the proxy service, along with Docker, are needed to establish communication with the nodes that run the Docker containers in the pods created on the node. Minions are not actually created by Kubernetes; they are created by a cluster manager on virtual or physical machines, or by a cloud service provider.
50.What do you mean by Node Controller?
The node controller is one of the group of services running in the Kubernetes master. It is responsible for observing the activities of the nodes in a cluster, which it does based on the metadata name assigned to each node. The node controller checks the validity of a node. If the node is valid, freshly created pods can be assigned to it; if the node is invalid, the node controller waits until the node becomes valid before assigning pods to it.
51.Tell me about Google container Engine.
Google Container Engine is a Kubernetes-based engine, built on the open-source Kubernetes project, that supports clusters running within Google's public cloud services. This engine serves as a platform for Docker containers and clusters.
52.What do you mean by Ingress network?
An Ingress network provides a set of rules that govern how traffic enters the Kubernetes cluster. It allows inbound connections, which can be configured according to the required specifications so as to offer services through externally reachable URLs, to load-balance traffic, or to provide name-based virtual hosting. Therefore, an Ingress network can be defined as an API object that controls and administers external access to the services present in a cluster, typically over HTTP.
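A minimal Ingress manifest illustrates these rules; the host, service name, and ports are placeholders, and the apiVersion assumes a reasonably recent cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  rules:
  - host: app.example.com      # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # external traffic is routed to this Service
            port:
              number: 80
```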
53.What do you know about Kubernetes Service?
A Kubernetes Service is defined as a logical set of pods. A Service has a DNS name and an IP address through which the pods behind it can be accessed. A Kubernetes Service is very useful for regulating and administering load balancing as per specific requirements, and it also lets pods be scaled easily.
54.What are the types of Kubernetes services?
The following are the types of Kubernetes services:
- NodePort: NodePort exposes the service on a static port on each deployed node. With the assistance of the ClusterIP, NodePort routing is established automatically. Users can access this service from outside the cluster through NodeIP:nodePort.
- ClusterIP: Cluster IP is responsible to fetch the information present in a Kubernetes cluster. It also aids in limiting the service within a cluster.
- LoadBalancer: Load balancing is an important service available in Kubernetes to automatically balance the load in case of traffic. The NodePort and ClusterIP services described above are created automatically, and the external load balancer routes through them.
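A NodePort Service manifest as a sketch; the names, labels, and port numbers below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service         # hypothetical name
spec:
  type: NodePort           # also valid: ClusterIP (the default) or LoadBalancer
  selector:
    app: my-app            # traffic is forwarded to pods carrying this label
  ports:
  - port: 80               # ClusterIP port inside the cluster
    targetPort: 8080       # container port on the pods
    nodePort: 30080        # static port opened on every node (range 30000-32767)
```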
55.What are the functions of the Replication controller?
The following are the main functions of the replication controller:
- It is responsible for controlling and administering the lifecycle of the pod.
- It is responsible for monitoring and verifying whether the allowed number of pod replicas are running.
- It helps the user check the running status of the pods.
- The replication controller lets the user alter a particular pod; the user can drag its position to the top or to the bottom.
56.What do you know about the Replica set?
A replica set is considered a substitute for the replication controller. The prime function of a replica set is to ensure that the specified number of pod replicas is running. There are two types of label selectors supported by the Kubernetes API: equality-based selectors and set-based selectors. The primary difference between the replication controller and the replica set is that the replication controller supports only equality-based selectors, whereas the replica set allows both types of selectors.
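A ReplicaSet manifest as a sketch; the name, label, replica count, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs             # hypothetical name
spec:
  replicas: 3              # desired number of pod replicas
  selector:
    matchLabels:
      app: web             # must match the labels in the pod template below
  template:                # pod template used to create the replicas
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.21
```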
57.How do you update, delete and rollback in a Deployment strategy?
- Update: Through this feature, the user can update the existing deployment during runtime, before its completion. With an update, the ongoing deployment ends and a fresh deployment is created.
- Delete: Through this feature, the user can cancel or pause the ongoing deployment by deleting it before its completion. Creating the same deployment again resumes it.
- Rollback: Users can restore a database or program to a previously defined state; this process is called a rollback. Through this feature, the user can roll back the ongoing deployment.
58.What do you mean by “Recreate” and “Rolling Update” in Deployment strategy?
With the aid of deployment strategies, users can replace an existing replication controller with a new one. Recreate kills all the running (existing) replication controllers and creates new ones. Recreate gives the user a faster deployment, but it increases downtime if the new pods have not yet replaced the old, terminated pods.
A rolling update also replaces the existing replication controller with a newer one, but the deployment is gradual, so there is effectively no downtime: at any time, some old pods and some new pods are readily available to serve requests.
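The strategy is chosen in the Deployment spec. A fragment as a sketch; the surge and unavailability figures below are illustrative:

```yaml
# Fragment of a Deployment spec:
spec:
  strategy:
    type: RollingUpdate       # or: Recreate
    rollingUpdate:
      maxUnavailable: 1       # at most one old pod down at a time
      maxSurge: 1             # at most one extra new pod during the update
```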
59.Write a command to create and fetch the deployment:
- To create: kubectl create -f Deployment.yaml --record
- To fetch: kubectl get deployments
60.Write a command to check the status of deployment and to update a deployment.
- To check the status: kubectl rollout status deployment/Deployment
- To update a deployment: kubectl set image deployment/Deployment tomcat=tomcat:6.0
61.What do you mean by volumes? What are the differences between Docker volumes and Kubernetes Volumes?
Volumes can be considered as directories through which the containers in a pod can be accessed. The differences between Kubernetes volumes and Docker volumes are:
| Kubernetes Volumes | Docker Volumes |
| --- | --- |
| Volumes are not limited to any particular container | Volumes are limited to a particular container in a pod |
| Support all or any of the containers deployed in a pod of Kubernetes | Do not support all the containers deployed in Docker |
| Support many types of storage on the pod, and multiple types at the same time | No such support in Docker |
62.List the Kubernetes volume you are aware of.
The following are some of the Kubernetes volume which are widely used:
- NFS: A Network File System (NFS) volume lets an existing NFS share be mounted on your pod. Even if you remove the pod from the node, the NFS volume is not erased; the volume is only unmounted.
- Flocker: Flocker is available open source and is used to control and administer data volumes. It is a manager for data volume for a clustered container. Through Flocker volume, users can create a Flocker dataset and mount the same to the pod. If there is no such dataset available in Flocker, the user has to create the same through Flocker API.
- EmptyDIR: Once a pod is assigned to a node, EmptyDIR is created. This volume stays active till the pod is alive and running on that particular node. EmptyDIR volume does not contain anything in the initial state and is empty; the user can read or write files from this volume. The data present in the volume gets erased once the pod is removed from that particular node.
- AWS Elastic Block Store: This volume mounts Amazon Web Services Elastic Block Store on to your pod. Though you remove the pod from the node, data in the volume remains.
- GCE Persistent Disk: This volume mounts Google Compute Engine Persistent Disk on to your pod. Similar to AWS Elastic Block store, the data in the volume remains even after removing the pod from the node.
- Host path: Host path mounts a directory or file from the file system of the host on to your pod.
- RBD: Rados Block Device volume lets a Rados Block device to be mounted on to your pod. Similar to AWS Elastic Block store and GCE Persistent Disk Volumes, even after removing the pod from the node, the data in the volume remains.
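The emptyDir type is the simplest to demonstrate. A sketch of a pod using one; the names and mount path are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo        # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.35
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cache
      mountPath: /cache      # where the volume appears inside the container
  volumes:
  - name: cache
    emptyDir: {}             # created empty when the pod lands on a node
```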
63.What do you mean by Persistent Volume?
A Persistent Volume (PV) is a unit of network storage in the cluster, provisioned and controlled by the administrator. Its lifecycle is independent of any individual pod that uses it.
64.What do you mean by Persistent Volume Claim?
A Persistent Volume Claim (PVC) is a request for storage, and the claimed storage is then made available to pods in Kubernetes. Users are not expected to know the details of the underlying provisioning. The claim has to be created in the same namespace as the pod that uses it.
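A PVC manifest as a sketch; the claim name and requested size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim             # must live in the same namespace as the pod using it
spec:
  accessModes:
  - ReadWriteOnce            # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi           # requested capacity
```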
65.Define Secrets in Kubernetes.
As the name implies, secrets hold sensitive information, in this context the login credentials of a user. Secrets are Kubernetes objects that store sensitive information, namely usernames and passwords, in an encoded (base64) form.
66.How do you create secrets in Kubernetes?
Secrets can be created in various ways in Kubernetes. Some of them are:
- Through text (txt) files
- Through YAML files
To create secrets from these files, the user defines the username and password and creates the secret using the kubectl command. The secret file has to be saved in the corresponding file format.
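A Secret manifest as a sketch; the secret name is a placeholder, and the values shown are the base64 encodings of "admin" and "password":

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # hypothetical name
type: Opaque
data:
  username: YWRtaW4=         # base64 of "admin"
  password: cGFzc3dvcmQ=     # base64 of "password"
```

The same secret could also be created directly with kubectl create secret generic, which handles the encoding for you.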
67.Explain the Network Policy in Kubernetes.
A network policy contains a set of rules governing communication between pods: it defines how the pods in the same namespace communicate with one another and with other network endpoints. The user has to enable network policy support in the API server when configuring the cluster. Through the resources available in the network policy, pods are selected using labels, and rules are set to permit or restrict traffic to particular pods.
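A NetworkPolicy manifest as a sketch; the policy name, labels, and port are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-db      # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: db                # the policy applies to pods with this label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web           # only pods labelled app=web may connect
    ports:
    - protocol: TCP
      port: 5432
```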
68.What will happen while adding a new API to Kubernetes?
If you add a fresh API to Kubernetes, the same will provide extra features to Kubernetes. So, adding a new API will improve the functioning ability of Kubernetes. But, this will increase the cost and maintenance of the entire system. So, there is a need to maintain the cost and complexity of the system. This can be achieved by defining some sets for the new API.
69.How do you make changes in the API?
Changes in the API server has to be done by the team members of Kubernetes. They are responsible to add a new API without affecting the functions in the existing system.
70.What are the API versions available? Explain.
Kubernetes supports several versions of the API in order to support multiple structures. Versioning is available at the Alpha, Beta, and Stable levels, and each level follows a different standard.
Alpha-level versions have names containing alpha. This level is prone to errors, and support for a feature may be dropped at any time, so it is suitable only for short-lived tests.
Beta-level versions have names containing beta. The code at this level is firmer because it has been well tested, and users can look for support at any time in case of errors; still, this level is not recommended for commercial applications. Stable-level versions appear in many subsequent releases, and users should get a recent version. Generally, the version name is vX, where 'v' refers to the version and 'X' is an integer.
71.Explain Kubectl command.
kubectl is the command-line interface for communicating with a Kubernetes cluster. It is used to control and administer the pods and other objects present in the cluster. To communicate with a cluster, the user has to install and configure kubectl locally. kubectl commands are used to create, inspect, update, and delete Kubernetes objects.
72.What are the kubectl commands you are aware of?
- kubectl apply
- kubectl annotate
- kubectl attach
- kubectl api-versions
- kubectl autoscale
- kubectl config
- kubectl cluster-info
- kubectl cluster-info dump
- kubectl config set-cluster
- kubectl config get-clusters
- kubectl config set-credentials
73.Using the create command along with kubectl, what are the things possible?
Users can create several things using the create command with kubectl. They are:
- Creating namespace
- Creating deployment
- Creating secrets
- Creating secret generic
- Creating secret docker registry
- Creating quota
- Creating service account
- Creating node port
- Creating load balancer
- Creating Cluster IP
74.What is kubectl drain?
The kubectl drain command is used to safely empty a specific node before maintenance. Once this command is given, the node is cordoned (marked unschedulable) so that no new pods are assigned to it, and its existing pods are evicted to other nodes. After maintenance is complete, the node can be made schedulable again with kubectl uncordon.
75.How do you create an application in Kubernetes?
Creating an application in Kubernetes starts with packaging it as a container image, typically with Docker, since Kubernetes runs containerized workloads. Users can do either of the following two things: download an existing image, or build one from a Dockerfile. Since many Docker images are openly shared, an existing image can be pulled from Docker Hub and stored in a local Docker registry.

To create a new application from scratch, the user first writes a Dockerfile and builds an image from it. Once the image is built and fully tested, it can be run as a container and deployed to the cluster.
76.What do you mean by application deployment in Kubernetes?
Deployment is the process of rolling out container images to the pods present in the Kubernetes cluster. Application deployment automatically sets up the application cluster, including the pods, replication controller or replica set, and the service. The cluster is organized so as to ensure proper communication between the pods, and a load balancer can be set up to divide traffic between them. Pods exchange information with one another through Kubernetes objects such as services.
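A minimal sketch of such a deployment, pairing a Deployment with a Service that load-balances across its pods (the name `web` and the `nginx` image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired number of pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # image pulled from a registry
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                 # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 80
```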
77.Define Autoscaling in Kubernetes.
One of the important features of Kubernetes is autoscaling. Autoscaling can be defined as scaling the nodes according to the demand for service response. Through this feature, the cluster increases the number of nodes as the demand for service response grows and decreases the number of nodes as the requirement falls. Node-level autoscaling is provided by the Cluster Autoscaler and is supported on major managed offerings such as Google Kubernetes Engine, Amazon EKS, and Azure AKS.
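Autoscaling also exists at the pod level via the HorizontalPodAutoscaler; a minimal sketch targeting a Deployment named `web` (the name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                # the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```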
78.How will you do monitoring in Kubernetes?
To manage larger clusters, monitoring is needed; it is yet another important capability in Kubernetes, and several tools exist for it. Monitoring through Prometheus is famous and widely used. This tool not only monitors but also comes with an alerting system (Alertmanager). It is available as open source and was originally developed at SoundCloud. Prometheus handles multi-dimensional data more flexibly than many other methods, and it needs some additional components to do full monitoring, for example:
- Prometheus Node Exporter (host-level metrics)
- prom-ranch-exporter (Rancher health metrics)
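A minimal `prometheus.yml` fragment that scrapes a Node Exporter (the target address is illustrative; Node Exporter's default port is 9100):

```yaml
# prometheus.yml fragment
scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']   # illustrative target address
```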
79.What is Kubernetes Log?
Kubernetes container logs are much like Docker container logs, but Kubernetes allows users to view the logs of deployed, i.e., running, pods. Through the following fields, we can retrieve even more specific information:
- Container name of Kubernetes
- Pod name of Kubernetes
- Namespace of Kubernetes
- Kubernetes UID and
- Docker image name
80.What do you know about Sematext Docker Agent?
Sematext Docker Agent is popular among developers. It is a log collection agent that also gathers metrics and events. It runs as a small container on each Docker host and gathers metrics, events, and logs for all the containers and cluster nodes. If core services are deployed in Docker containers, it observes every container, including the containers running Kubernetes core services.
81.Kubernetes cluster data is stored in which of the following?
- etcd[Ans]
- A local file on the master node
- The kubelet's cache
- None of the above
82.Which of them is a Kubernetes Controller?
- Rolling Updates
- ReplicaSet
- Deployment
- Both ReplicaSet and Deployment[Ans]
83.Which of the following are core Kubernetes objects?
- Pod
- Service
- Volume
- All of the above[Ans]
84.The Kubernetes Network proxy runs on which node?
- Master Node
- Worker Node
- All the nodes[Ans]
- None of the above
85.What are the responsibilities of a node controller?
- To assign a CIDR block to the nodes
- To maintain the list of nodes
- To monitor the health of the nodes
- All of the above[Ans]
86.What are the responsibilities of a Replication Controller?
- Update or delete multiple pods with a single command
- Helps to achieve the desired state
- Creates a new pod, if the existing pod crashes
- All of the above[Ans]
87.How to define a service without a selector?
- Specify the external name[Ans]
- Specify an endpoint with IP Address and port
- Just by specifying the IP address
- Specifying the label and api-version
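Both approaches can be sketched in YAML (names, the DNS name, and the IP are illustrative):

```yaml
# Option 1: an ExternalName Service resolving to an external DNS name
apiVersion: v1
kind: Service
metadata:
  name: external-db          # illustrative name
spec:
  type: ExternalName
  externalName: db.example.com   # DNS name the Service resolves to
---
# Option 2: a selector-less Service plus a manually managed Endpoints object
apiVersion: v1
kind: Service
metadata:
  name: legacy-db
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: legacy-db            # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10       # example backend IP
    ports:
      - port: 5432
```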
88.What did the 1.8 version of Kubernetes introduce?
- Taints and Tolerations[Ans]
- Cluster level Logging
- Federated Clusters
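A taint is applied to a node (for example, `kubectl taint nodes node1 key=value:NoSchedule`), and only pods with a matching toleration may be scheduled there. A minimal sketch of such a pod (the name, image, and key/value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod         # illustrative name
spec:
  tolerations:
    - key: "key"             # must match the node's taint
      operator: "Equal"
      value: "value"
      effect: "NoSchedule"
  containers:
    - name: app
      image: nginx:1.25      # illustrative image
```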
89.The handler invoked by Kubelet to check if a container’s IP address is open or not is?
- ExecAction
- HTTPGet Action
- TCPSocket Action[Ans]
- None of the above
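A TCPSocket probe succeeds if the kubelet can open a TCP connection to the given port. A minimal readiness-probe sketch (the pod name, image, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo           # illustrative name
spec:
  containers:
    - name: redis
      image: redis:7         # illustrative image
      readinessProbe:
        tcpSocket:
          port: 6379         # kubelet attempts a TCP connection here
        initialDelaySeconds: 5
        periodSeconds: 10
```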
90.How can a company ensure optimal distribution of resources?
Kubernetes helps in the efficient distribution of resources by letting each application declare the resources it needs and scheduling containers onto nodes accordingly, so that each application is allocated only the resources it actually uses.
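In practice this is done with resource requests and limits on each container; a minimal sketch (the name, image, and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo        # illustrative name
spec:
  containers:
    - name: app
      image: nginx:1.25      # illustrative image
      resources:
        requests:
          cpu: "250m"        # the scheduler reserves this much
          memory: "128Mi"
        limits:
          cpu: "500m"        # the container is throttled above this
          memory: "256Mi"    # exceeding this can get the container killed
```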
91.How does application deployment on hosts differ from deployment on containers?
When you deploy an application directly on hosts, many libraries are installed on the operating system, and all the applications share the libraries present on that operating system. The architecture of deploying applications in containers, however, is a bit different.
In the containerized architecture, the kernel is the only thing common between the applications. Other applications cannot encroach upon the libraries and binaries needed by one application. So, they exist in isolation from the rest of the system. For example, if a particular application requires Python, then only that application will get access to it.
92.What do you understand by container orchestration? Why do you need it?
Suppose that there are 4-5 microservices for an application. Now, these microservices would be in individual containers. So, container orchestration would be required to allow the services to communicate with one another and work together to fulfill the server’s needs. The process is just like a musical orchestra where different instruments are played in harmony to make up a composition.
93.What is the use of kube-controller-manager?
It is the Kubernetes Controller Manager. The kube-controller-manager is a daemon that embeds the core control loops which regulate the system state, and it is a non-terminating loop.
94.What is the role of clusters in Kubernetes?
Kubernetes allows you to enforce the required state management by feeding cluster services of a specific configuration. Then, these cluster services run that configuration in the infrastructure. The following steps are involved in the process:
- The deployment file contains all the configurations to be fed into the cluster services.
- The deployment file is fed into the API.
- Now, the cluster services schedule the pods in the environment
- Cluster services also ensure that the right number of pods are running
So, the Kubernetes cluster is essentially made up of the API, the worker nodes, and the Kubelet process of the nodes.
95.What is Kubectl used for?
Kubectl is a tool for controlling Kubernetes clusters; in fact, "ctl" stands for control. It is a command-line interface that allows you to pass commands to the cluster and manage Kubernetes components.
96.Define Google Container Engine.
Google Container Engine (GKE, now Google Kubernetes Engine) is a management platform that supports Docker containers and clusters running within Google's public cloud services. It is a managed service built on the open-source Kubernetes engine.
97.Explain the usage of nodes in Kubernetes.
A node provides the necessary services to run pods. Also known as minions, nodes can run on a physical or virtual machine depending upon the cluster. In Kubernetes, a node is the main worker machine, and master components manage each node in the system.
Now that you are up-to-date on the basics let’s look at a few more Kubernetes interview questions and answers to gain clarity.
98.What are the two main components of the Kubernetes architecture?
The master node and the worker node make up the Kubernetes architecture. Both components have multiple in-built services within them. For example, the master component has the kube-controller-manager, kube-scheduler, etcd, and kube-apiserver. The worker node has services like container runtime, kubelet, and kube-proxy running on each node.