How does cert-manager integrate with Kubernetes resources to provide TLS certificates for an application?
It manages Certificate resources and Secrets that can be used by Ingress objects for TLS.
It replaces default Kubernetes API certificates with those from external authorities.
It updates kube-proxy configuration to ensure encrypted traffic between Services.
It injects TLS certificates directly into Pods when the workloads are deployed.
cert-manager is a widely adopted Kubernetes add-on that automates the management and lifecycle of TLS certificates in cloud native environments. Its primary function is to issue, renew, and manage certificates by integrating directly with Kubernetes-native resources, rather than modifying core cluster components or injecting certificates manually into workloads.
Option A correctly describes how cert-manager operates. cert-manager introduces Custom Resource Definitions (CRDs) such as Certificate, Issuer, and ClusterIssuer. These resources define how certificates should be requested and from which certificate authority they should be obtained, such as Let’s Encrypt or a private PKI. Once a certificate is successfully issued, cert-manager stores it in a Kubernetes Secret. These Secrets can then be referenced by Ingress resources, Gateway API resources, or directly by applications to enable TLS.
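The flow in Option A can be sketched declaratively; the issuer name, hostname, and Secret name below are illustrative placeholders, not values from the question:

```yaml
# Certificate: cert-manager issues a cert and stores it in a Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
  namespace: default
spec:
  secretName: example-tls        # cert-manager writes the issued cert/key here
  dnsNames:
    - example.com
  issuerRef:
    name: letsencrypt-prod       # an Issuer or ClusterIssuer defined separately
    kind: ClusterIssuer
---
# Ingress: references that Secret to terminate TLS.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
    - hosts: [example.com]
      secretName: example-tls    # the Secret maintained by cert-manager
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-svc
                port:
                  number: 80
```

Once the Certificate becomes Ready, cert-manager keeps the Secret renewed, and the Ingress controller picks up the rotated key material without redeploying the application.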
Option B is incorrect because cert-manager does not replace or interfere with Kubernetes API server certificates. The Kubernetes control plane manages its own internal certificates independently, and cert-manager is focused on application-level TLS, not control plane security.
Option C is incorrect because cert-manager does not interact with kube-proxy or manage service-to-service encryption. Traffic encryption between Services is typically handled by service meshes or application-level TLS configurations, not cert-manager.
Option D is incorrect because cert-manager does not inject certificates directly into Pods at deployment time. Instead, Pods consume certificates indirectly by mounting the Secrets created and maintained by cert-manager. This design aligns with Kubernetes best practices by keeping certificate management decoupled from application deployment logic.
According to Kubernetes and cert-manager documentation, cert-manager’s strength lies in its native integration with Kubernetes APIs and declarative workflows. By managing Certificate resources and automatically maintaining Secrets for use by Ingress or Gateway resources, cert-manager simplifies TLS management, reduces operational overhead, and improves security across cloud native application delivery pipelines. This makes option A the accurate and fully verified answer.
Which is the correct kubectl command to display logs in real time?
=========
kubectl logs -p test-container-1
kubectl logs -c test-container-1
kubectl logs -l test-container-1
kubectl logs -f test-container-1
To stream logs in real time with kubectl, you use the follow option -f, so D is correct. In Kubernetes, kubectl logs retrieves logs from containers in a Pod. By default, it returns the current log output and exits. When you add -f, kubectl keeps the connection open and continuously prints new log lines as they are produced, similar to tail -f on Linux. This is especially useful for debugging live behavior, watching startup sequences, or monitoring an application during a rollout.
The other flags serve different purposes. -p (as seen in option A) requests logs from the previous instance of a container (useful after a restart/crash), not real-time streaming. -c (option B) selects a specific container within a multi-container Pod; it doesn’t stream by itself (though it can be combined with -f). -l (option C) is used with kubectl logs to select Pods by label, but again it is not the streaming flag; streaming requires -f.
In real troubleshooting, you commonly combine flags, e.g. kubectl logs -f pod-name -c container-name for streaming logs from a specific container, or kubectl logs -f -l app=myapp to stream from Pods matching a label selector (depending on kubectl behavior/version). But the key answer to “display logs in real time” is the follow flag: -f.
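As a quick reference, the flag behaviors discussed above look like this in practice (the Pod name test-container-1 comes from the options; the sidecar container name and app=myapp label are hypothetical):

```shell
kubectl logs test-container-1                 # one-shot dump of current logs
kubectl logs -f test-container-1              # follow: stream new lines in real time
kubectl logs -p test-container-1              # logs from the previous (crashed) instance
kubectl logs -f test-container-1 -c sidecar   # follow one container in a multi-container Pod
kubectl logs -f -l app=myapp                  # follow Pods selected by label
```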
Therefore, the correct selection is D.
=========
Which of the following is a feature Kubernetes provides by default as a container orchestration tool?
A portable operating system.
File system redundancy.
A container image registry.
Automated rollouts and rollbacks.
Kubernetes provides automated rollouts and rollbacks for workloads by default (via controllers like Deployments), so D is correct. In Kubernetes, application delivery is controller-driven: you declare the desired state (new image, new config), and controllers reconcile the cluster toward that state. Deployments implement rolling updates, gradually replacing old Pods with new ones while respecting availability constraints. Kubernetes tracks rollout history and supports rollback to previous ReplicaSets when an update fails or is deemed unhealthy.
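A minimal Deployment sketch shows where these rollout controls live; the image, labels, and probe endpoint are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod below the desired count during the update
      maxSurge: 1         # at most one extra Pod above the desired count
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.1   # changing this triggers a rolling update
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
```

Changing spec.template (for example the image tag) triggers a new rollout, and kubectl rollout undo returns to the previous ReplicaSet.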
This is a core orchestration capability: it reduces manual intervention and makes change safer. Rollouts use readiness checks and update strategies to avoid taking the service down, and kubectl rollout status/history/undo supports day-to-day release operations.
The other options are not “default Kubernetes orchestration features”:
Kubernetes is not a portable operating system (A). It’s a platform for orchestrating containers on top of an OS.
Kubernetes does not provide filesystem redundancy by itself (B). Storage redundancy is handled by underlying storage systems and CSI drivers (e.g., replicated block storage, distributed filesystems).
Kubernetes does not include a built-in container image registry (C). You use external registries (Docker Hub, ECR, GCR, Harbor, etc.). Kubernetes pulls images but does not host them as a core feature.
So the correct “provided by default” orchestration feature in this list is the ability to safely manage application updates via automated rollouts and rollbacks.
=========
What is the common standard for Service Meshes?
Service Mesh Specification (SMS)
Service Mesh Technology (SMT)
Service Mesh Interface (SMI)
Service Mesh Function (SMF)
A widely referenced interoperability standard in the service mesh ecosystem is the Service Mesh Interface (SMI), so C is correct. SMI was created to provide a common set of APIs for basic service mesh capabilities—helping users avoid being locked into a single mesh implementation for core features. While service meshes differ in architecture and implementation (e.g., Istio, Linkerd, Consul), SMI aims to standardize how common behaviors are expressed.
In cloud native architecture, service meshes address cross-cutting concerns for service-to-service communication: traffic policies, observability, and security (mTLS, identity). Rather than baking these concerns into every application, a mesh typically introduces data-plane proxies and a control plane to manage policy and configuration. SMI sits above those implementations as a common API model.
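As a sketch of how SMI expresses a common behavior, a TrafficSplit resource weights traffic between backend versions (the service names below are hypothetical, and mesh support for specific SMI API versions varies):

```yaml
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: checkout-split
spec:
  service: checkout            # the "root" service that clients address
  backends:
    - service: checkout-v1
      weight: 90               # 90% of traffic to the current version
    - service: checkout-v2
      weight: 10               # 10% canary traffic to the new version
```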
The other options are not commonly used industry standards. You may see other efforts and emerging APIs, but among the listed choices, SMI is the recognized standard name that appears in cloud native discussions and tooling integrations.
Also note a practical nuance: even with SMI, not every mesh implements every SMI spec fully, and many users still adopt mesh-specific CRDs and APIs for advanced features. (The SMI project has since been archived within the CNCF, with standardization efforts shifting toward the Kubernetes Gateway API.) But for this question’s framing—“common standard”—Service Mesh Interface is the correct answer.
Which command provides information about the field replicas within the spec resource of a deployment object?
kubectl get deployment.spec.replicas
kubectl explain deployment.spec.replicas
kubectl describe deployment.spec.replicas
kubectl explain deployment --spec.replicas
The correct command to get field-level schema information about spec.replicas in a Deployment is kubectl explain deployment.spec.replicas, so B is correct. kubectl explain is designed to retrieve documentation for resource fields directly from Kubernetes API discovery and OpenAPI schemas. When you use kubectl explain deployment.spec.replicas, kubectl shows what the field means, its type, and any relevant notes—exactly what “provides information about the field” implies.
This differs from kubectl get and kubectl describe. kubectl get is for retrieving actual objects or listing resources; it does not accept dot-paths like deployment.spec.replicas as a resource argument. You can use JSONPath or custom-columns output with kubectl get to read the live value of a field, but that returns data from an object, not documentation about the field. Similarly, kubectl describe summarizes an existing object's state and events rather than its schema.
Option D is not valid syntax: kubectl explain deployment --spec.replicas is not how kubectl explain accepts nested field references. The correct pattern is positional dot notation, kubectl explain <resource>.<field>.<subfield>, as in kubectl explain deployment.spec.replicas.
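The contrast can be summarized with two commands (the Deployment name web is a placeholder):

```shell
# Field documentation (schema) for the replicas field:
kubectl explain deployment.spec.replicas

# By contrast, reading the live value of the field from an existing object:
kubectl get deployment web -o jsonpath='{.spec.replicas}'
```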
Understanding spec.replicas matters operationally: it defines the desired number of Pod replicas for a Deployment. The Deployment controller ensures that the corresponding ReplicaSet maintains that count, supporting self-healing if Pods fail. While autoscalers can adjust replicas automatically, the field remains the primary declarative knob. The question is specifically about finding information (schema docs) for that field, which is why kubectl explain deployment.spec.replicas is the verified correct answer.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
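For illustration, an Argo CD Application resource captures this reconciliation contract (the repository URL, paths, and names are placeholders; Flux models the same idea with GitRepository and Kustomization resources):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/repo.git   # Git is the source of truth
    targetRevision: main
    path: deploy/overlays/prod
  destination:
    server: https://kubernetes.default.svc
    namespace: myapp
  syncPolicy:
    automated:
      prune: true      # delete resources that were removed from Git
      selfHeal: true   # revert out-of-band changes (drift) in the cluster
```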
Option B (“GitOps Toolkit”) is related—Flux v2 is built from the GitOps Toolkit, a set of reusable controllers and APIs—but the question asks for a “tool” that keeps clusters in sync, and the recognized tools in this list are Flux and Argo CD. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
=========
What is a DaemonSet?
It’s a type of workload that ensures a specific set of nodes run a copy of a Pod.
It’s a type of workload responsible for maintaining a stable set of replica Pods running in any node.
It’s a type of workload that needs to be run periodically on a given schedule.
It’s a type of workload that provides guarantees about ordering, uniqueness, and identity of a set of Pods.
A DaemonSet ensures that a copy of a Pod runs on each node (or a selected subset of nodes), which matches option A and makes it correct. DaemonSets are ideal for node-level agents that should exist everywhere, such as log shippers, monitoring agents, CNI components, storage daemons, and security scanners.
DaemonSets differ from Deployments/ReplicaSets because their goal is not “N replicas anywhere,” but “one replica per node” (subject to node selection). When nodes are added to the cluster, the DaemonSet controller automatically schedules the DaemonSet Pod onto the new nodes. When nodes are removed, the Pods associated with those nodes are cleaned up. You can restrict placement using node selectors, affinity rules, or tolerations so that only certain nodes run the DaemonSet (for example, only Linux nodes, only GPU nodes, or only nodes with a dedicated label).
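A DaemonSet sketch for a node-level log agent shows the placement controls mentioned above (the image and labels are placeholders):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: log-agent
  template:
    metadata:
      labels:
        name: log-agent
    spec:
      nodeSelector:
        kubernetes.io/os: linux        # restrict placement to Linux nodes
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule           # also run on tainted control-plane nodes
      containers:
        - name: agent
          image: example.com/log-agent:1.0
```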
Option B sounds like a ReplicaSet/Deployment behavior (stable set of replicas), not a DaemonSet. Option C describes CronJobs (scheduled, recurring run-to-completion workloads). Option D describes StatefulSets, which provide stable identity, ordering, and uniqueness guarantees for stateful replicas.
Operationally, DaemonSets matter because they often run critical cluster services. During maintenance and upgrades, DaemonSet update strategy determines how those node agents roll out across the fleet. Since DaemonSets can tolerate taints (like master/control-plane node taints), they can also be used to ensure essential agents run across all nodes, including special pools. Thus, the correct definition is A.
=========
Which of the following options includes valid API versions?
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Kubernetes API versions follow a consistent naming pattern that indicates stability level and versioning. The valid forms include stable versions like v1, and pre-release versions such as v1alpha1, v1beta1, etc. Option C contains only valid Kubernetes API version strings—v1alpha1, v2beta3, v2—so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
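In manifests, these version strings appear as the version segment of apiVersion. The fragments below are illustrative (metadata is omitted, and the example.com group is a hypothetical CRD):

```yaml
apiVersion: v1                      # core group, stable
kind: ConfigMap
---
apiVersion: apps/v1                 # named group "apps", stable version v1
kind: Deployment
---
apiVersion: example.com/v1alpha1    # hypothetical CRD group, alpha iteration 1
kind: Widget
```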
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
Therefore, among the choices, C is the only option comprised of valid Kubernetes-style API version strings.
=========
What is the core functionality of GitOps tools like Argo CD and Flux?
They track production changes made by a human in a Git repository and generate a human-readable audit trail.
They replace human operations with an agent that tracks Git commands.
They automatically create pull requests when dependencies are outdated.
They continuously compare the desired state in Git with the actual production state and notify or act upon differences.
The defining capability of GitOps controllers such as Argo CD and Flux is continuous reconciliation: they compare the desired state stored in Git to the actual state in the cluster and then alert and/or correct drift, making D correct. In GitOps, Git becomes the single source of truth for declarative configuration (Kubernetes manifests, Helm charts, Kustomize overlays). The controller watches Git for changes and applies them, and it also watches the cluster for divergence.
This is more than “auditing human changes” (option A). GitOps does provide auditability because changes are made via commits and pull requests, but the core functionality is the reconciliation loop that keeps cluster state aligned with Git, including optional automated sync/remediation. Option B is not accurate because GitOps is not about tracking user Git commands; it’s about reconciling desired state definitions. Option C (automatically creating pull requests for outdated dependencies) is a useful feature in some tooling ecosystems, but it is not the central defining behavior of GitOps controllers.
In Kubernetes delivery terms, this approach improves reliability: rollouts become repeatable, configuration drift is detected, and recovery is simpler (reapply known-good state from Git). It also supports separation of duties: platform teams can control policies and base layers, while app teams propose changes via PRs.
So the verified statement is: GitOps tools continuously reconcile Git desired state with cluster actual state—exactly option D.
=========
What is the main role of the Kubernetes DNS within a cluster?
Acts as a DNS server for virtual machines that are running outside the cluster.
Provides a DNS as a Service, allowing users to create zones and registries for domains that they own.
Allows Pods running in dual stack to convert IPv6 calls into IPv4 calls.
Provides consistent DNS names for Pods and Services for workloads that need to communicate with each other.
Kubernetes DNS (commonly implemented by CoreDNS) provides service discovery inside the cluster by assigning stable, consistent DNS names to Services and (optionally) Pods, which makes D correct. In a Kubernetes environment, Pods are ephemeral—IP addresses can change when Pods restart or move between nodes. DNS-based discovery allows applications to communicate using stable names rather than hardcoded IPs.
For Services, Kubernetes creates DNS records like service-name.namespace.svc.cluster.local, which resolve to the Service’s virtual IP (ClusterIP) or, for headless Services, to the set of Pod endpoints. This supports both load-balanced communication (standard Service) and per-Pod addressing (headless Service, commonly used with StatefulSets). Kubernetes DNS is therefore a core building block that enables microservices to locate each other reliably.
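For example, a simple ClusterIP Service (names are placeholders) becomes resolvable by a predictable DNS name:

```yaml
# Cluster DNS resolves "backend.default.svc.cluster.local" (or simply
# "backend" from within the default namespace) to this Service's ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
```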
Option A is not Kubernetes DNS’s purpose; it serves cluster workloads rather than external VMs. Option B describes a managed DNS hosting product (creating zones/registries), which is outside the scope of cluster DNS. Option C describes protocol translation, which is not the role of DNS. Dual-stack support relates to IP families and networking configuration, not DNS translating IPv6 to IPv4.
In day-to-day Kubernetes operations, DNS reliability impacts everything: if DNS is unhealthy, Pods may fail to resolve Services, causing cascading outages. That’s why CoreDNS is typically deployed as a highly available add-on in kube-system, and why DNS caching and scaling are important for large clusters.
So the correct statement is D: Kubernetes DNS provides consistent DNS names so workloads can communicate reliably.
=========
Which of the following is the name of a container orchestration software?
OpenStack
Docker
Apache Mesos
CRI-O
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform/tooling ecosystem (image build, runtime, local orchestration via Docker Compose/Swarm historically), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
Which of the following sentences is true about namespaces in Kubernetes?
You can create a namespace within another namespace in Kubernetes.
You can create two resources of the same kind and name in a namespace.
The default namespace exists when a new cluster is created.
All the objects in the cluster are namespaced by default.
The true statement is C: the default namespace exists when a new cluster is created. Namespaces are a Kubernetes mechanism for partitioning cluster resources into logical groups. When you set up a cluster, Kubernetes creates some initial namespaces (including default, and commonly kube-system, kube-public, and kube-node-lease). The default namespace is where resources go if you don’t specify a namespace explicitly.
Option A is false because namespaces are not hierarchical; Kubernetes does not support “namespaces inside namespaces.” Option B is false because within a given namespace, resource names must be unique per resource kind. You can’t have two Deployments with the same name in the same namespace. You can have a Deployment named web in one namespace and another Deployment named web in a different namespace—namespaces provide that scope boundary. Option D is false because not all objects are namespaced. Many resources are cluster-scoped (for example, Nodes, PersistentVolumes, ClusterRoles, ClusterRoleBindings, and StorageClasses). Namespaces apply only to namespaced resources.
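These rules can be observed with a few commands; the team-a and team-b namespaces are hypothetical and assumed to exist:

```shell
# Initial namespaces present after cluster creation:
kubectl get namespaces    # includes default, kube-system, kube-public, kube-node-lease

# The same Deployment name can exist in two different namespaces:
kubectl create deployment web --image=nginx -n team-a
kubectl create deployment web --image=nginx -n team-b

# But a second Deployment named "web" in team-a fails (AlreadyExists):
kubectl create deployment web --image=nginx -n team-a
```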
Operationally, namespaces support multi-tenancy and environment separation (dev/test/prod), RBAC scoping, resource quotas, and policy boundaries. For example, you can grant a team access only to their namespace and enforce quotas that prevent them from consuming excessive CPU/memory. Namespaces also make organization and cleanup easier: deleting a namespace removes most namespaced resources inside it (subject to finalizers).
So, the verified correct statement is C: the default namespace exists upon cluster creation.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> blocks and reports progress until the rollout succeeds, fails, or exceeds its timeout, which makes it useful both interactively and in CI/CD pipelines that need to gate further steps on a successful deployment.
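A typical sequence, with a hypothetical Deployment and container both named web:

```shell
# Trigger an update, then watch it complete:
kubectl set image deployment/web web=example.com/web:1.2
kubectl rollout status deployment/web        # blocks until success, failure, or timeout

# Related rollout verbs:
kubectl rollout history deployment/web
kubectl rollout undo deployment/web          # roll back to the previous revision
```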
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
The kube-proxy
The node controller
The kubectl
The kube-apiserver
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The Node Controller (a controller that runs as part of the control plane, within the controller-manager) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The Node Controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it marks the node condition Ready as Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
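The resulting Node status looks roughly like this excerpt (values are illustrative):

```yaml
status:
  conditions:
    - type: Ready
      status: "Unknown"              # node controller can't confirm node health
      reason: NodeStatusUnknown
      message: Kubelet stopped posting node status.
```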
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
Which of these components is part of the Kubernetes Control Plane?
CoreDNS
cloud-controller-manager
kube-proxy
kubelet
The Kubernetes control plane is the set of components responsible for making cluster-wide decisions (like scheduling) and detecting and responding to cluster events (like starting new Pods when they fail). In upstream Kubernetes architecture, the canonical control plane components include kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and—when running on a cloud provider—the cloud-controller-manager. That makes option B the correct answer: cloud-controller-manager is explicitly a control plane component that integrates Kubernetes with the underlying cloud.
The cloud-controller-manager runs controllers that talk to cloud APIs for infrastructure concerns such as node lifecycle, routes, and load balancers. For example, when you create a Service of type LoadBalancer, a controller in this component is responsible for provisioning a cloud load balancer and updating the Service status. This is clearly control-plane behavior: reconciling desired state into real infrastructure state.
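For example, a Service of type LoadBalancer (names are placeholders) is the trigger for that reconciliation; on a cloud provider, a controller in the cloud-controller-manager provisions the external load balancer and writes its address back into the Service status:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer       # reconciled into a cloud load balancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```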
Why the others are not control plane components (in the classic classification): kubelet is a node component (agent) responsible for running and managing Pods on a specific node. kube-proxy is also a node component that implements Service networking rules on nodes. CoreDNS is usually deployed as a cluster add-on for DNS-based service discovery; it’s critical, but it’s not a control plane component in the strict architectural list.
So, while many clusters run CoreDNS in kube-system, the Kubernetes component that is definitively “part of the control plane” among these choices is cloud-controller-manager (B).
=========
What are the characteristics for building every cloud-native application?
Resiliency, Operability, Observability, Availability
Resiliency, Containerd, Observability, Agility
Kubernetes, Operability, Observability, Availability
Resiliency, Agility, Operability, Observability
Cloud-native applications are typically designed to thrive in dynamic, distributed environments where infrastructure is elastic and failures are expected. The best set of characteristics listed is Resiliency, Agility, Operability, Observability, making D correct.
Resiliency means the application and its supporting platform can tolerate failures and continue providing service. In Kubernetes terms, resiliency is supported through self-healing controllers, replica management, health probes, and safe rollout mechanisms, but the application must also be designed to handle transient failures, retries, and graceful degradation.
Agility reflects the ability to deliver changes quickly and safely. Cloud-native systems emphasize automation, CI/CD, declarative configuration, and small, frequent releases—often enabled by Kubernetes primitives like Deployments and rollout strategies. Agility is about reducing the friction to ship improvements while maintaining reliability.
Operability is how manageable the system is in production: clear configuration, predictable deployments, safe scaling, and automation-friendly operations. Kubernetes encourages operability through consistent APIs, controllers, and standardized patterns for configuration and lifecycle.
Observability means you can understand what’s happening inside the system using telemetry—metrics, logs, and traces—so you can troubleshoot issues, measure SLOs, and improve performance. Kubernetes provides many integration points for observability, but cloud-native apps must also emit meaningful signals.
Options B and C include items that are not “characteristics” (containerd is a runtime; Kubernetes is a platform). Option A includes “availability,” which is important, but the canonical cloud-native framing in this question emphasizes the four qualities in D as the foundational build characteristics.
=========
Ceph is a highly scalable distributed storage solution for block storage, object storage, and shared filesystems with years of production deployments. Which open-source cloud native storage orchestrator automates deployment and management of Ceph to provide self-managing, self-scaling, and self-healing storage services?
CubeFS
OpenEBS
Rook
MinIO
Rook is the open-source, cloud-native storage orchestrator specifically designed to automate the deployment, configuration, and lifecycle management of Ceph within Kubernetes environments. Its primary goal is to transform complex, traditionally manual storage systems like Ceph into Kubernetes-native services that are easy to operate and highly resilient.
Ceph itself is a mature and powerful distributed storage platform that supports block storage (RBD), object storage (RGW), and shared filesystems (CephFS). However, operating Ceph directly requires deep expertise, careful configuration, and continuous operational management. Rook addresses this challenge by running Ceph as a set of Kubernetes-managed components and exposing storage capabilities through Kubernetes Custom Resource Definitions (CRDs). This allows administrators to declaratively define storage clusters, pools, filesystems, and object stores using familiar Kubernetes patterns.
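A minimal CephCluster sketch illustrates this declarative model (the image tag and settings below are illustrative, not recommendations):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # Ceph release to deploy (illustrative tag)
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                       # three monitors for quorum
  storage:
    useAllNodes: true
    useAllDevices: true            # let Rook consume available raw devices
```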
Rook continuously monitors the health of the Ceph cluster and takes automated actions to maintain the desired state. If a Ceph daemon fails or a node becomes unavailable, Rook works with Kubernetes scheduling and Ceph’s internal replication mechanisms to ensure data durability and service continuity. This enables self-healing behavior. Scaling storage capacity is also simplified—adding nodes or disks allows Rook and Ceph to automatically rebalance data, providing self-scaling capabilities without manual intervention.
The other options are incorrect for this use case. CubeFS is a distributed filesystem but is not a Ceph orchestrator. OpenEBS focuses on container-attached storage and local or replicated volumes rather than managing Ceph itself. MinIO is an object storage server compatible with S3 APIs, but it does not orchestrate Ceph or provide block and filesystem services.
Therefore, the correct and verified answer is Option C: Rook, which is the officially recognized Kubernetes-native orchestrator for Ceph, delivering automated, resilient, and scalable storage management aligned with cloud-native principles.
=========
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
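Both phases can be seen in an ordinary Pod spec. In this sketch, the resource requests and the required node-affinity rule feed the filtering phase, while the preferred rule contributes to scoring (label keys and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sched-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: 250m        # filtering: node must have this much allocatable CPU
        memory: 256Mi    # filtering: and this much allocatable memory
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:   # filtering rule: hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values: ["ssd"]
      preferredDuringSchedulingIgnoredDuringExecution:  # scoring rule: soft preference
      - weight: 80
        preference:
          matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values: ["zone-a"]
```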
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
=========
A request for 500 mebibytes of ephemeral storage must be specified in a YAML file. How should this be written?
500Mi
500mi
500m
0.5M
In Kubernetes, resource quantities must be expressed using specific, well-defined units. When requesting ephemeral storage, Kubernetes follows the same quantity format rules as other resources such as memory. These rules distinguish between binary units (base-2) and decimal units (base-10), and the correct unit must be used to avoid configuration errors or unintended resource allocation.
The term mebibyte (MiB) is a binary unit equal to 2²⁰ bytes (1,048,576 bytes). Kubernetes represents mebibytes using the suffix Mi with a capital “M” and lowercase “i”. Therefore, a request for 500 mebibytes of ephemeral storage must be written as 500Mi, making option A the correct answer.
Option B, 500mi, is incorrect because Kubernetes resource units are case-sensitive; the lowercase mi is not a valid suffix and will be rejected by the API server. Option C, 500m, is also incorrect: m is the decimal “milli” suffix (one thousandth), familiar from CPU requests such as 250m, so 500m would denote half a byte rather than 500 mebibytes. Option D, 0.5M, is incorrect because M represents a decimal megabyte (10⁶ bytes), not a mebibyte, and the convention for memory-backed resources like ephemeral storage is to use binary (power-of-two) units.
Ephemeral storage requests are typically defined under the container’s resources.requests.ephemeral-storage field. Correctly specifying the unit ensures that the scheduler can accurately account for node storage capacity and enforce eviction thresholds when necessary.
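As a concrete sketch, the request described above would appear in a Pod manifest like this (the limit is optional and shown only for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        ephemeral-storage: 500Mi   # 500 mebibytes, binary unit with capital M, lowercase i
      limits:
        ephemeral-storage: 1Gi     # optional cap; exceeding it can trigger eviction
```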
In summary, Kubernetes requires precise, case-sensitive units for resource specifications. Since the question explicitly asks for 500 mebibytes, the only valid and correct representation is 500Mi, which aligns exactly with Kubernetes resource quantity conventions.
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
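The proxy mode is selected through kube-proxy's own configuration object. A minimal fragment choosing IPVS might look like this sketch (consult the kube-proxy configuration reference for the full schema; most fields are omitted here):

```yaml
# Minimal kube-proxy configuration fragment (sketch).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # alternatives include "iptables", the traditional Linux default
```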
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
=========
What is ephemeral storage?
Storage space that need not persist across restarts.
Storage that may grow dynamically.
Storage used by multiple consumers (e.g., multiple Pods).
Storage that is always provisioned locally.
The correct answer is A: ephemeral storage is non-persistent storage whose data does not need to survive Pod restarts or rescheduling. In Kubernetes, ephemeral storage typically refers to storage tied to the Pod’s lifetime—such as the container writable layer, emptyDir volumes, and other temporary storage types. When a Pod is deleted or moved to a different node, that data is generally lost.
This is different from persistent storage, which is backed by PersistentVolumes and PersistentVolumeClaims and is designed to outlive individual Pod instances. Ephemeral storage is commonly used for caches, scratch space, temporary files, and intermediate build artifacts—data that can be recreated and is not the authoritative system of record.
Option B is incorrect because “may grow dynamically” describes an allocation behavior, not the defining characteristic of ephemeral storage. Option C is incorrect because sharing among multiple consumers is a question of access semantics (ReadWriteMany access modes, shared volumes), not of ephemerality. Option D is incorrect because ephemeral storage is not “always provisioned locally” in a strict sense; while many ephemeral forms are local to the node, the definition is about lifecycle and persistence guarantees, not physical locality.
Operationally, ephemeral storage is an important scheduling and reliability consideration. Pods can request/limit ephemeral storage similarly to CPU/memory, and nodes can evict Pods under disk pressure. Mismanaged ephemeral storage (logs written to the container filesystem, runaway temp files) can cause node disk exhaustion and cascading failures. Best practices include shipping logs off-node, using emptyDir intentionally with size limits where supported, and using persistent volumes for state that must survive restarts.
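The “emptyDir with a size limit” practice mentioned above looks like this in a Pod spec (mount path and sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: scratch
      mountPath: /tmp/cache   # temporary working area for the container
  volumes:
  - name: scratch
    emptyDir:
      sizeLimit: 1Gi          # Pod-lifetime scratch space with a size cap
```

The emptyDir volume exists for the Pod's lifetime and is deleted with the Pod, which is exactly the ephemeral lifecycle the question describes.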
So, ephemeral storage is best defined as storage that does not need to persist across restarts/rescheduling, matching option A.
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
=========
How is application data maintained in containers?
Store data into data folders.
Store data in separate folders.
Store data into sidecar containers.
Store data into volumes.
Container filesystems are ephemeral: the writable layer is tied to the container lifecycle and can be lost when containers are recreated. Therefore, maintaining application data correctly means storing it in volumes, making D the correct answer. In Kubernetes, volumes provide durable or shareable storage that is mounted into containers at specific paths. Depending on the volume type, the data can persist across container restarts and even Pod rescheduling.
Kubernetes supports many volume patterns. For transient scratch data you might use emptyDir (ephemeral for the Pod’s lifetime). For durable state, you typically use PersistentVolumes consumed by PersistentVolumeClaims (PVCs), backed by storage systems via CSI drivers (cloud disks, SAN/NAS, distributed storage). This decouples the application container image from its state and enables rolling updates, rescheduling, and scaling without losing data.
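The PVC pattern described above can be sketched as follows; names, sizes, and the mount path are illustrative, and the actual backing storage depends on the cluster's StorageClass and CSI driver:

```yaml
# A claim for durable storage...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
# ...consumed by a Pod via a volume mount.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /var/lib/app   # application state lives on the volume, not the writable layer
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data
```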
Options A and B (“folders”) are incomplete because folders inside the container filesystem do not guarantee persistence. A folder is only as durable as the underlying storage; without a mounted volume, it lives in the container’s writable layer and will disappear when the container is replaced. Option C is incorrect because “sidecar containers” are not a data durability mechanism; sidecars can help ship logs or sync data, but persistent data should still be stored on volumes (or external services like managed databases).
From an application delivery standpoint, the principle is: containers should be immutable and disposable, and state should be externalized. Volumes (and external managed services) make this possible. In Kubernetes, this is a foundational pattern enabling safe rollouts, self-healing, and portability: the platform can kill and recreate Pods freely because data is maintained independently via volumes.
Therefore, the verified correct choice is D: Store data into volumes.
=========
Which mechanism allows extending the Kubernetes API?
ConfigMap
CustomResourceDefinition
MutatingAdmissionWebhook mechanism
Kustomize
The correct answer is B: CustomResourceDefinition (CRD). Kubernetes is designed to be extensible. A CRD lets you define your own resource types (custom API objects) that behave like native Kubernetes resources: they can be created with YAML, stored in etcd, retrieved via the API server, and managed using kubectl. For example, operators commonly define CRDs such as Databases, RedisClusters, or Certificates to model higher-level application concepts.
A CRD extends the API by adding a new kind under a group/version (e.g., example.com/v1). You typically pair CRDs with a controller (often called an operator) that watches these custom objects and reconciles real-world resources (Deployments, StatefulSets, cloud resources) to match the desired state specified in the CRD instances. This is the same control-loop pattern used for built-in controllers—just applied to your custom domain.
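A minimal CRD sketch for the hypothetical Database example above might look like this (the group example.com and the spec fields engine/replicas are illustrative, not from any real operator):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: databases.example.com     # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Database
    plural: databases
    singular: database
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:            # validation schema for custom objects
        type: object
        properties:
          spec:
            type: object
            properties:
              engine:
                type: string
              replicas:
                type: integer
```

Once applied, `kubectl get databases` works like any built-in resource, and a controller can watch Database objects and reconcile them.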
Why the other options aren’t correct: ConfigMaps store configuration data but do not add new API types. A MutatingAdmissionWebhook can modify or validate requests for existing resources, but it doesn’t define new API kinds; it enforces policy or injects defaults. Kustomize is a manifest customization tool (patch/overlay) and doesn’t extend the Kubernetes API surface.
CRDs are foundational to much of the Kubernetes ecosystem: cert-manager, Argo, Istio, and many operators rely heavily on CRDs. They also support schema validation via OpenAPI v3 schemas, which improves safety and tooling (better error messages, IDE hints). Therefore, the mechanism for extending the Kubernetes API is CustomResourceDefinition, option B.
=========
Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at “elevated” scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
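The desired-state model described above is visible in a basic Deployment: the administrator declares the target, and controllers continuously converge toward it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3          # desired state: keep three Pods running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
```

If a Pod crashes or a node fails, the controllers recreate replicas elsewhere without manual intervention, which is the lifecycle management the question is about.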
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required “to learn transformation.” Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them” (for example, round-robin-like distribution depending on dataplane, session affinity options, and how ports map via targetPort).
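Both parts of the definition, the logical set of Pods and the access policy, appear directly in a Service manifest (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # the "logical set of Pods": all Pods labeled app=web
  ports:
  - port: 80          # stable Service port clients connect to
    targetPort: 8080  # container port on the backing Pods
```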
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
=========
Which group of container runtimes provides additional sandboxed isolation and elevated security?
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
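In Kubernetes, selecting such a runtime is done with a RuntimeClass. A sketch, assuming the node's container runtime has been configured with a handler named runsc:

```yaml
# RuntimeClass pointing at a sandboxed handler (must be configured in containerd/CRI-O).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
# A Pod opting into the sandboxed runtime.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
```

A Kata-based setup follows the same pattern with a kata handler instead of runsc.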
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
=========
Which tools enable Kubernetes HorizontalPodAutoscalers to use custom, application-generated metrics to trigger scaling events?
Prometheus and the prometheus-adapter.
Graylog and graylog-autoscaler metrics.
Graylog and the kubernetes-adapter.
Grafana and Prometheus.
To scale on custom, application-generated metrics, the Horizontal Pod Autoscaler (HPA) needs those metrics exposed through the Kubernetes custom metrics (or external metrics) API. A common and Kubernetes-documented approach is Prometheus + prometheus-adapter, making A correct. Prometheus scrapes application metrics (for example, request rate, queue depth, in-flight requests) from /metrics endpoints. The prometheus-adapter then translates selected Prometheus time series into the Kubernetes Custom Metrics API so the HPA controller can fetch them and make scaling decisions.
Why not the other options: Grafana is a visualization tool; it does not provide the metrics-API translation layer the HPA requires, so “Grafana and Prometheus” is incomplete. Graylog is primarily a log management system and is not a standard way to feed custom metrics into the HPA through the Kubernetes metrics APIs. The “kubernetes-adapter” named in option C is not a standard ecosystem component; the recognized adapter for Prometheus-backed custom metrics is prometheus-adapter.
This matters operationally because HPA is not limited to CPU/memory. CPU and memory use resource metrics (often from metrics-server), but modern autoscaling often needs application signals: message queue length, requests per second, latency, or business metrics. With Prometheus and prometheus-adapter, you can define HPA rules such as “scale to maintain queue depth under X” or “scale based on requests per second per pod.” This can produce better scaling behavior than CPU-based scaling alone, especially for I/O-bound services or workloads with uneven CPU profiles.
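An HPA rule of the “requests per second per pod” kind might be sketched like this; the metric name http_requests_per_second is an assumed example that would have to be exposed by the application and mapped through prometheus-adapter:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods                             # custom per-Pod metric, not CPU/memory
    pods:
      metric:
        name: http_requests_per_second     # illustrative name served via prometheus-adapter
      target:
        type: AverageValue
        averageValue: "100"                # scale to keep ~100 req/s per Pod
```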
So the correct tooling combination in the provided choices is Prometheus and the prometheus-adapter, option A.
=========
A Pod named my-app must be created to run a simple nginx container. Which kubectl command should be used?
kubectl create nginx --name=my-app
kubectl run my-app --image=nginx
kubectl create my-app --image=nginx
kubectl run nginx --name=my-app
In Kubernetes, the simplest and most direct way to create a Pod that runs a single container is to use the kubectl run command with the appropriate image specification. The command kubectl run my-app --image=nginx explicitly instructs Kubernetes to create a Pod named my-app using the nginx container image, which makes option B the correct answer.
The kubectl run command is designed to quickly create and run a Pod from the command line. In older kubectl releases it could also generate higher-level workload resources such as Deployments, but that generator behavior has been removed, and modern kubectl run creates a standalone Pod. This is ideal for simple use cases like testing, demonstrations, or learning scenarios where only a single container is required.
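The command is roughly equivalent to applying a manifest like the following sketch (kubectl run also attaches a run: my-app label by default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    run: my-app        # label kubectl run adds automatically
spec:
  containers:
  - name: my-app
    image: nginx
```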
Option A is incorrect because kubectl create nginx --name=my-app is not valid syntax; the create subcommand expects a resource type (such as pod, deployment, or service) or a manifest file. Option C is also incorrect because kubectl create my-app --image=nginx omits the resource type and is therefore not a valid kubectl create command. Option D is incorrect because kubectl run does not accept a --name flag; the resource name is supplied as the positional argument, so this command would fail (and, with the flag removed, would create a Pod named nginx rather than my-app).
Using kubectl run with explicit naming and image flags is consistent with Kubernetes command-line conventions and is widely documented as the correct approach for creating simple Pods. The resulting Pod can be verified using commands such as kubectl get pods and kubectl describe pod my-app.
In summary, Option B is the correct and verified answer because it uses valid kubectl syntax to create a Pod named my-app running the nginx container image in a straightforward and predictable way.
=========
What Linux namespace is shared by default by containers running within a Kubernetes Pod?
Host Network
Network
Process ID
Process Name
By default, containers in the same Kubernetes Pod share the network namespace, which means they share the same IP address and port space. Therefore, the correct answer is B (Network).
This shared network namespace is a key part of the Pod abstraction. Because all containers in a Pod share networking, they can communicate with each other over localhost and coordinate tightly, which is the basis for patterns like sidecars (service mesh proxies, log shippers, config reloaders). It also means containers must coordinate port usage: if two containers try to bind the same port on 0.0.0.0, they’ll conflict because they share the same port namespace.
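The localhost communication described above can be sketched with a two-container Pod; the images and command are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net
spec:
  containers:
  - name: web
    image: nginx                   # listens on port 80
  - name: sidecar
    image: curlimages/curl
    # Reaches the nginx container over localhost because both containers
    # share the Pod's network namespace (same IP, same port space).
    command: ["sh", "-c", "sleep 5; curl -s http://localhost:80; sleep 3600"]
```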
Option A (“Host Network”) is different: hostNetwork: true is an optional Pod setting that puts the Pod into the node’s network namespace, not the Pod’s shared namespace. It is not the default and is generally used sparingly due to security and port-collision risks. Option C (“Process ID”) is not shared by default in Kubernetes; PID namespace sharing requires explicitly enabling process namespace sharing (e.g., shareProcessNamespace: true). Option D (“Process Name”) is not a Linux namespace concept.
The Pod model also commonly implies shared storage volumes (if defined) and shared IPC namespace in some configurations, but the universally shared-by-default namespace across containers in the same Pod is the network namespace. This default behavior is why Kubernetes documentation explains a Pod as a “logical host” for one or more containers: the containers are co-located and share certain namespaces as if they ran on the same host.
So, the correct, verified answer is B: containers in the same Pod share the Network namespace by default.
=========
What feature must a CNI support to control specific traffic flows for workloads running in Kubernetes?
Border Gateway Protocol
IP Address Management
Pod Security Policy
Network Policies
To control which workloads can communicate with which other workloads in Kubernetes, you use NetworkPolicy resources—but enforcement depends on the cluster’s networking implementation. Therefore, for traffic-flow control, the CNI/plugin must support Network Policies, making D correct.
Kubernetes defines the NetworkPolicy API as a declarative way to specify allowed ingress and egress traffic based on selectors (Pod labels, namespaces, IP blocks) and ports/protocols. However, Kubernetes itself does not enforce NetworkPolicy rules; enforcement is provided by the network plugin (or associated dataplane components). If your CNI does not implement NetworkPolicy, the objects may exist in the API but have no effect—Pods will communicate freely by default.
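A typical ingress policy, assuming a policy-capable CNI, might look like this sketch (labels, namespace, and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: prod
spec:
  podSelector:
    matchLabels:
      app: api            # policy applies to the api Pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend   # only frontend Pods may connect...
    ports:
    - protocol: TCP
      port: 8080          # ...and only on this port
```

Without a CNI that enforces NetworkPolicy, this object would be accepted by the API server but have no effect on traffic.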
Option B (IP Address Management) is often part of CNI responsibilities, but IPAM is about assigning addresses, not enforcing L3/L4 security policy. Option A (BGP) is used by some CNIs to advertise routes (for example, in certain Calico deployments), but BGP is not the general requirement for policy enforcement. Option C (Pod Security Policy) is a deprecated/removed Kubernetes admission feature related to Pod security settings, not network flow control.
From a Kubernetes security standpoint, NetworkPolicies are a key tool for implementing least privilege at the network layer—limiting lateral movement, reducing blast radius, and segmenting environments. But they only work when the chosen CNI supports them. Thus, the correct answer is D: Network Policies.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap profiles, even if the API server’s historical default flag value is AlwaysAllow.
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
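With RBAC enabled, the explicit grants mentioned above take the form of Role and RoleBinding objects; this sketch gives a hypothetical user read-only access to Pods in one namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
- apiGroups: [""]                  # core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: User
  name: alice                      # illustrative subject
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```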
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
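With RBAC enabled, permissions are granted through Role and RoleBinding objects. The sketch below (hypothetical names, manifests modeled as plain Python dicts mirroring the YAML you would apply) shows the shape of a namespaced grant and a toy check of the "bound subject + permitted rule" logic — it is an illustration of the concept, not the real authorizer:

```python
# Minimal RBAC sketch: a Role grants verbs on resources in one namespace,
# and a RoleBinding attaches that Role to a subject (here, a user).
role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "dev"},
    # Each rule grants verbs on resources within the Role's namespace.
    "rules": [
        {"apiGroups": [""], "resources": ["pods"], "verbs": ["get", "list", "watch"]},
    ],
}

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "read-pods", "namespace": "dev"},
    "subjects": [{"kind": "User", "name": "jane", "apiGroup": "rbac.authorization.k8s.io"}],
    "roleRef": {"kind": "Role", "name": "pod-reader", "apiGroup": "rbac.authorization.k8s.io"},
}

def allowed(binding, role, user, verb, resource):
    """Toy authorization check: is `user` bound to a role permitting verb+resource?"""
    bound = any(s["name"] == user for s in binding["subjects"])
    permitted = any(
        resource in r["resources"] and verb in r["verbs"] for r in role["rules"]
    )
    return bound and permitted

print(allowed(role_binding, role, "jane", "get", "pods"))     # True
print(allowed(role_binding, role, "jane", "delete", "pods"))  # False: verb not granted
```

Note how the deny is the default: anything not explicitly granted by a rule is refused, which is the opposite of AlwaysAllow.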
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors—Pod memory requests, node taints, and Pod affinity—so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
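The filtering phase can be sketched as a toy feasibility check (this is an illustration, not the real kube-scheduler; node names, sizes, and the simplified taint model are assumptions). A node survives filtering only if it has enough allocatable memory for the Pod's *request* and the Pod tolerates every taint on it:

```python
# Toy sketch of scheduler filtering: memory requests and taints/tolerations.
def feasible(node, pod):
    """True if the node can host the Pod under this simplified model."""
    if node["allocatable_mem_mi"] - node["requested_mem_mi"] < pod["request_mem_mi"]:
        return False  # not enough free allocatable memory for the request
    for taint in node["taints"]:
        if taint not in pod["tolerations"]:
            return False  # taint repels the Pod without a matching toleration
    return True

nodes = [
    {"name": "node-a", "allocatable_mem_mi": 4096, "requested_mem_mi": 3800, "taints": []},
    {"name": "node-b", "allocatable_mem_mi": 8192, "requested_mem_mi": 1024, "taints": ["gpu"]},
    {"name": "node-c", "allocatable_mem_mi": 8192, "requested_mem_mi": 1024, "taints": []},
]
pod = {"request_mem_mi": 512, "tolerations": []}

candidates = [n["name"] for n in nodes if feasible(n, pod)]
print(candidates)  # ['node-c'] -- node-a lacks memory, node-b's taint repels the Pod
```

In the real scheduler, the surviving candidates would then be scored (e.g., by affinity preferences and resource balance) before one node is chosen for binding.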
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
=========
What is the API that exposes resource metrics from the metrics-server?
custom.k8s.io
resources.k8s.io
metrics.k8s.io
cadvisor.k8s.io
The correct answer is C: metrics.k8s.io. Kubernetes’ metrics-server is the standard component that provides resource metrics (primarily CPU and memory) for nodes and pods. It aggregates this information (sourced from kubelet/cAdvisor) and serves it through the Kubernetes aggregated API under the group metrics.k8s.io. This is what enables commands like kubectl top nodes and kubectl top pods, and it is also a key data source for autoscaling with the Horizontal Pod Autoscaler (HPA) when scaling on CPU/memory utilization.
Why the other options are wrong:
custom.k8s.io is not the standard API group for metrics-server resource metrics. Custom metrics are typically served through the custom metrics API (commonly custom.metrics.k8s.io) via adapters (e.g., Prometheus Adapter), not metrics-server.
resources.k8s.io is not the metrics-server API group.
cadvisor.k8s.io is not exposed as a Kubernetes aggregated metrics API. cAdvisor is a component integrated into kubelet that provides container stats, but metrics-server is the component that exposes the aggregated Kubernetes metrics API, and the canonical group is metrics.k8s.io.
Operationally, it’s important to understand the boundary: metrics-server provides basic resource metrics suitable for core autoscaling and “top” views, but it is not a full observability system (it does not store long-term metrics history like Prometheus). For richer metrics (SLOs, application metrics, long-term trending), teams typically deploy Prometheus or a managed monitoring backend. Still, when the question asks specifically which API exposes metrics-server data, the answer is definitively metrics.k8s.io.
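The shape of a metrics.k8s.io response can be sketched as follows. The payload below is a hand-written sample (not captured from a live cluster) mimicking what `kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes` returns; metrics-server reports CPU quantities in nanocores (`n` suffix) and memory in `Ki`:

```python
# Parse a sample NodeMetricsList from the metrics.k8s.io aggregated API.
import json

sample = json.loads("""
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {"metadata": {"name": "node-1"},
     "usage": {"cpu": "250000000n", "memory": "1048576Ki"}}
  ]
}
""")

def cpu_millicores(quantity):
    """Convert a nanocore quantity like '250000000n' to millicores (1 core = 1e9 n)."""
    assert quantity.endswith("n")
    return int(quantity[:-1]) // 1_000_000

for item in sample["items"]:
    name = item["metadata"]["name"]
    print(name, cpu_millicores(item["usage"]["cpu"]), "m")  # node-1 250 m
```

This is exactly the data `kubectl top` renders and the HPA consumes for CPU/memory-based scaling.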
=========
A Pod has been created, but when checked with kubectl get pods, the READY column shows 0/1. What Kubernetes feature causes this behavior?
Node Selector
Readiness Probes
DNS Policy
Security Contexts
The READY column in the output of kubectl get pods indicates how many containers in a Pod are currently considered ready to serve traffic, compared to the total number of containers defined in that Pod. A value of 0/1 means that the Pod has one container, but that container is not yet marked as ready. The Kubernetes feature responsible for determining this readiness state is the readiness probe.
Readiness probes are used by Kubernetes to decide when a container is ready to accept traffic. These probes can be configured to perform HTTP requests, execute commands, or check TCP sockets inside the container. If a readiness probe is defined and it fails, Kubernetes marks the container as not ready, even if the container is running successfully. As a result, the READY column will show 0/1, and the Pod will be excluded from Service load balancing until the probe succeeds.
Option A (Node Selector) is incorrect because node selectors influence where a Pod is scheduled, not whether its containers are considered ready after startup. Option C (DNS Policy) affects how DNS resolution works inside a Pod and has no direct impact on readiness reporting. Option D (Security Contexts) is incorrect because security contexts define security-related settings such as user IDs, capabilities, or privilege levels; they do not control the READY status shown by kubectl.
Readiness probes are particularly important for applications that take time to initialize, load configuration, or warm up caches. By using readiness probes, Kubernetes ensures that traffic is only sent to containers that are fully prepared to handle requests. This improves reliability and prevents failed or premature connections.
According to Kubernetes documentation, a container without a readiness probe is considered ready by default once it is running. However, when a readiness probe is defined, its result directly controls the READY state. Therefore, the presence and behavior of readiness probes is the verified and correct reason why a Pod may show 0/1 in the READY column, making option B the correct answer.
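The behavior described above can be sketched as a container spec with an HTTP readiness probe plus the default rule: no probe means "ready once running," while a defined probe's result controls READY. Field names follow the Pod spec; the image name and `/healthz/ready` endpoint are illustrative assumptions:

```python
# A container spec with an HTTP readiness probe, as a plain dict.
container = {
    "name": "web",
    "image": "example.com/web:1.0",  # hypothetical image
    "readinessProbe": {
        "httpGet": {"path": "/healthz/ready", "port": 8080},
        "initialDelaySeconds": 5,
        "periodSeconds": 10,
    },
}

def is_ready(running, has_probe, probe_passing):
    """READY state: running, and either no probe is defined or the probe passes."""
    return running and (not has_probe or probe_passing)

print(is_ready(running=True, has_probe=True, probe_passing=False))   # False -> READY 0/1
print(is_ready(running=True, has_probe=False, probe_passing=False))  # True  -> ready by default
```

The first case is exactly the 0/1 scenario in the question: the container runs, but a failing probe keeps it out of Service endpoints.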
=========
Can a Kubernetes Service expose multiple ports?
No, you can only expose one port per each Service.
Yes, but you must specify an unambiguous name for each port.
Yes, the only requirement is to use different port numbers.
No, because the only port you can expose is port number 443.
Yes, a Kubernetes Service can expose multiple ports, and when it does, each port should have a unique, unambiguous name, making B correct. In the Service spec, the ports field is an array, allowing you to define multiple port mappings (e.g., 80 for HTTP and 443 for HTTPS, or grpc and metrics). Each entry can include port (Service port), targetPort (backend Pod port), and protocol.
The naming requirement is not merely stylistic: the Kubernetes API requires that when a Service defines more than one port, every port has a name, so that references are unambiguous. For example, an Ingress backend or some proxies/controllers can reference Service ports by name, and names help humans and automation reliably select the correct port.
Option A is incorrect because multi-port Services are common and fully supported. Option C is insufficient: while different port numbers are necessary, naming is the correct distinguishing rule emphasized by Kubernetes patterns and required by some integrations. Option D is incorrect and nonsensical—Services can expose many ports and are not restricted to 443.
Operationally, exposing multiple ports through one Service is useful when a single backend workload provides multiple interfaces (e.g., application traffic and a metrics endpoint). You can keep stable discovery under one DNS name while still differentiating ports. The backend Pods must still listen on the target ports, and selectors determine which Pods are endpoints. The key correctness point for this question is: multi-port Services are allowed, and each port should be uniquely named to avoid confusion and integration issues.
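A multi-port Service manifest and the naming rule can be sketched like this (the app labels and port choices are illustrative; the validation function mimics, in simplified form, the check the API server performs):

```python
# A Service exposing an HTTP port and a metrics port, each uniquely named.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {
        "selector": {"app": "web"},
        "ports": [
            {"name": "http", "port": 80, "targetPort": 8080, "protocol": "TCP"},
            {"name": "metrics", "port": 9090, "targetPort": 9090, "protocol": "TCP"},
        ],
    },
}

def validate_ports(svc):
    """Simplified version of the rule: multi-port Services need unique port names."""
    ports = svc["spec"]["ports"]
    if len(ports) > 1 and not all(p.get("name") for p in ports):
        raise ValueError("multi-port Services must name every port")
    names = [p["name"] for p in ports if "name" in p]
    if len(names) != len(set(names)):
        raise ValueError("port names must be unique within the Service")
    return True

print(validate_ports(service))  # True
```

Dropping the `name` field from either entry would make the manifest invalid, which is why option B's wording about unambiguous names is the key detail.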
=========
What is CloudEvents?
It is a specification for describing event data in common formats for Kubernetes network traffic management and cloud providers.
It is a specification for describing event data in common formats in all cloud providers including major cloud providers.
It is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
It is a Kubernetes specification for describing events data in common formats for iCloud services, iOS platforms and iMac.
CloudEvents is an open specification for describing event data in a common way to enable interoperability across services, platforms, and systems, so C is correct. In cloud-native architectures, many components communicate asynchronously via events (message brokers, event buses, webhooks). Without a standard envelope, each producer and consumer invents its own event structure, making integration brittle. CloudEvents addresses this by standardizing core metadata fields—like event id, source, type, specversion, and time—and defining how event payloads are carried.
This helps systems interoperate regardless of transport. CloudEvents can be serialized as JSON or other encodings and carried over HTTP, messaging systems, or other protocols. By using a shared spec, you can route, filter, validate, and transform events more consistently.
Option A is too narrow and incorrectly ties CloudEvents to Kubernetes traffic management; CloudEvents is broader than Kubernetes. Option B is closer but still framed incorrectly—CloudEvents is not merely “for all cloud providers,” it is an interoperability spec across services and platforms, including but not limited to cloud provider event systems. Option D is clearly incorrect.
In Kubernetes ecosystems, CloudEvents is relevant to event-driven systems and serverless platforms (e.g., Knative Eventing and other eventing frameworks) because it provides a consistent event contract across producers and consumers. That consistency reduces coupling, supports better tooling (schema validation, tracing correlation), and makes event-driven architectures easier to operate at scale.
So, the correct definition is C: a specification for common event formats to enable interoperability across systems.
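A minimal CloudEvents v1.0 envelope in JSON "structured" form looks like the sketch below. The four required context attributes are id, source, type, and specversion; time and datacontenttype are optional, and the source/type values here are made up for illustration:

```python
# Build a CloudEvents v1.0 envelope as JSON.
import json
import uuid
from datetime import datetime, timezone

event = {
    "specversion": "1.0",
    "id": str(uuid.uuid4()),              # unique per source
    "source": "/orders/service",          # hypothetical producer URI
    "type": "com.example.order.created",  # hypothetical event type
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "data": {"orderId": 1234, "amount": 42.5},
}

REQUIRED = {"specversion", "id", "source", "type"}
assert REQUIRED <= event.keys()  # a consumer can rely on these being present
print(json.dumps(event, indent=2)[:60], "...")
```

Because every producer emits the same envelope, consumers can route or filter on `type` and `source` without knowing each producer's internal payload format.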
=========
In a Kubernetes cluster, what is the primary role of the Kubernetes scheduler?
To manage the lifecycle of the Pods by restarting them when they fail.
To monitor the health of the nodes and Pods in the cluster.
To handle network traffic between services within the cluster.
To distribute Pods across nodes based on resource availability and constraints.
The Kubernetes scheduler is a core control plane component responsible for deciding where Pods should run within a cluster. Its primary role is to assign newly created Pods that do not yet have a node assigned to an appropriate node based on a variety of factors such as resource availability, scheduling constraints, and policies.
When a Pod is created, it enters a Pending state until the scheduler selects a suitable node. The scheduler evaluates all available nodes and filters out those that do not meet the Pod’s requirements. These requirements may include CPU and memory requests, node selectors, node affinity rules, taints and tolerations, topology spread constraints, and other scheduling policies. After filtering, the scheduler scores the remaining nodes to determine the best placement for the Pod and then binds the Pod to the selected node.
Option A is incorrect because restarting failed Pods is handled by other components such as the kubelet and higher-level controllers like Deployments, ReplicaSets, or StatefulSets—not the scheduler. Option B is incorrect because monitoring node and Pod health is primarily the responsibility of the kubelet and the Kubernetes controller manager, which reacts to node failures and ensures desired state. Option C is incorrect because handling network traffic is managed by Services, kube-proxy, and the cluster’s networking implementation, not the scheduler.
Option D correctly describes the scheduler’s purpose. By distributing Pods across nodes based on resource availability and constraints, the scheduler helps ensure efficient resource utilization, high availability, and workload isolation. This intelligent placement is essential for maintaining cluster stability and performance, especially in large-scale or multi-tenant environments.
According to Kubernetes documentation, the scheduler’s responsibility is strictly focused on Pod placement decisions. Once a Pod is scheduled, the scheduler’s job is complete for that Pod, making option D the accurate and fully verified answer.

=========
What is a Dockerfile?
A bash script that is used to automatically build a docker image.
A config file that defines which image registry a container should be pushed to.
A text file that contains all the commands a user could call on the command line to assemble an image.
An image layer created by a running container stored on the host.
A Dockerfile is a text file that contains a sequence of instructions used to build a container image, so C is correct. These instructions include choosing a base image (FROM), copying files (COPY/ADD), installing dependencies (RUN), setting environment variables (ENV), defining working directories (WORKDIR), exposing ports (EXPOSE), and specifying the default startup command (CMD/ENTRYPOINT). When you run docker build (or compatible tools like BuildKit), the builder executes these instructions to produce an image composed of immutable layers.
In cloud-native application delivery, Dockerfiles (more generally, OCI image build definitions) are a key step in the supply chain. The resulting image artifact is what Kubernetes runs in Pods. Best practices include using minimal base images, pinning versions, avoiding embedding secrets, and using multi-stage builds to keep runtime images small. These practices improve security and performance, and make delivery pipelines more reliable.
Option A is incorrect because a Dockerfile is not a bash script, even though it can run shell commands through RUN. Option B is incorrect because registry destinations are handled by tooling and tagging/push commands (or CI pipeline configuration), not by the Dockerfile itself. Option D is incorrect because an image layer created by a running container is more closely related to container filesystem changes and commits; a Dockerfile is the build recipe, not a runtime-generated layer.
Although the question uses “Dockerfile,” the concept maps well to OCI-based container image creation generally: you define a reproducible build recipe that produces an immutable image artifact. That artifact is then versioned, scanned, signed, stored in a registry, and deployed to Kubernetes through manifests/Helm/GitOps. Therefore, C is the correct and verified definition.
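The build recipe can be sketched as a small multi-stage Dockerfile (the Go application, image names, and paths are hypothetical); the Python wrapper just demonstrates that the file is a plain sequence of instructions a builder executes top to bottom:

```python
# A sample multi-stage Dockerfile embedded as a string.
DOCKERFILE = """\
# build stage: full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# runtime stage: minimal base image, only the compiled binary
FROM gcr.io/distroless/static
COPY --from=build /out/app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
"""

# Each non-comment, non-empty line begins with an instruction keyword.
instructions = [
    line.split()[0]
    for line in DOCKERFILE.splitlines()
    if line.strip() and not line.lstrip().startswith("#")
]
print(instructions.count("FROM"))  # 2 -> two stages in the multi-stage build
```

The second `FROM` starts a fresh, minimal stage and copies only the build output forward, which is the multi-stage pattern that keeps runtime images small.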
=========
During a team meeting, a developer mentions the significance of open collaboration in the cloud native ecosystem. Which statement accurately reflects principles of collaborative development and community stewardship?
Open source projects succeed when contributors focus on code quality without the overhead of community engagement.
Maintainers of open source projects act independently to make technical decisions without requiring input from contributors.
Community stewardship emphasizes guiding project growth but does not necessarily include sustainability considerations.
Community events and working groups foster collaboration by bringing people together to share knowledge and build connections.
Open collaboration and community stewardship are foundational principles of the cloud native ecosystem, particularly within projects governed by organizations such as the Cloud Native Computing Foundation (CNCF). These principles emphasize that successful open source projects are not driven solely by code quality, but by healthy, inclusive, and sustainable communities.
Option D accurately reflects these principles. Community events, special interest groups, and working groups play a vital role in fostering collaboration. They provide structured and informal spaces where contributors, maintainers, and users can exchange ideas, share operational experiences, mentor new participants, and collectively guide the direction of projects. This collaborative approach helps ensure that projects evolve in ways that meet real-world needs and benefit from diverse perspectives.
Option A is incorrect because community engagement is not an “overhead” but a critical success factor. Kubernetes and other cloud native projects explicitly recognize that documentation, communication, governance, and contributor onboarding are just as important as writing high-quality code. Without active community participation, projects often struggle with adoption, contributor burnout, and long-term viability.
Option B is incorrect because modern open source governance values transparency and shared decision-making. While maintainers have responsibilities such as reviewing changes and ensuring project stability, they are expected to solicit feedback, encourage discussion, and incorporate contributor input through open processes. This approach builds trust and accountability within the community.
Option C is also incorrect because sustainability is a core aspect of community stewardship. Stewardship includes ensuring that projects can be maintained over time, preventing maintainer burnout, encouraging new contributors, and establishing governance models that support long-term health.
According to cloud native and Kubernetes documentation, strong communities enable innovation, resilience, and scalability—both technically and socially. By bringing people together through events and working groups, community stewardship reinforces collaboration and shared ownership, making option D the correct and fully verified answer.
=========
What does SBOM stand for?
System Bill of Materials
Software Bill Operations Management
Security Baseline for Open Source Management
Software Bill of Materials
SBOM stands for Software Bill of Materials, a critical concept in modern cloud native application delivery and software supply chain security. An SBOM is a formal, structured inventory that lists all components included in a software artifact, such as libraries, frameworks, dependencies, and their versions. This includes both direct and transitive dependencies that are bundled into applications, containers, or container images.
In cloud native environments, applications are often built using numerous open source components and third-party libraries. While this accelerates development, it also increases the risk of hidden vulnerabilities. An SBOM provides transparency into what software is actually running in production, enabling organizations to quickly identify whether they are affected by newly disclosed vulnerabilities or license compliance issues.
Option A is incorrect because SBOM is specific to software, not systems or hardware materials. Option B is incorrect because it describes a management process rather than a standardized inventory of software components. Option C is incorrect because SBOM is not a security baseline or policy framework; instead, it is a factual record of software contents that supports security and compliance efforts.
SBOMs are especially important in containerized and Kubernetes-based workflows. Container images often bundle many dependencies into a single artifact, making it difficult to assess risk without a detailed inventory. By generating and distributing SBOMs alongside container images, teams can integrate vulnerability scanning, compliance checks, and risk assessment earlier in the delivery pipeline. This practice aligns with the principles of DevSecOps and shift-left security.
Kubernetes and cloud native security guidance emphasize SBOMs as a foundational element of software supply chain security. They support faster incident response, improved trust between software producers and consumers, and stronger governance across the lifecycle of applications. As a result, Software Bill of Materials is the correct and fully verified expansion of SBOM, making option D the accurate answer.
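The kind of query an SBOM enables can be sketched with a minimal, CycloneDX-style inventory (component names, versions, and the advisory are made up for illustration): given a newly disclosed vulnerability, you check whether any shipped component matches the affected versions.

```python
# A minimal CycloneDX-style SBOM as a plain dict.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"name": "openssl", "version": "3.0.7"},
        {"name": "zlib", "version": "1.2.13"},
        {"name": "left-pad", "version": "1.3.0"},  # transitive deps appear too
    ],
}

def affected(sbom, advisory_name, bad_versions):
    """Return components matching a (hypothetical) vulnerability advisory."""
    return [
        c for c in sbom["components"]
        if c["name"] == advisory_name and c["version"] in bad_versions
    ]

hits = affected(sbom, "openssl", {"3.0.6", "3.0.7"})
print([c["version"] for c in hits])  # ['3.0.7'] -> this artifact is affected
```

Without the SBOM, answering "are we running an affected version?" would require rebuilding or manually inspecting every image.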
=========
What is the name of the Kubernetes resource used to expose an application?
Port
Service
DNS
Deployment
To expose an application running on Pods so that other components can reliably reach it, Kubernetes uses a Service, making B the correct answer. Pods are ephemeral: they can be recreated, rescheduled, and scaled, which means Pod IPs change. A Service provides a stable endpoint (virtual IP and DNS name) and load-balances traffic across the set of Pods selected by its label selector.
Services come in multiple forms. The default is ClusterIP, which exposes the application inside the cluster. NodePort exposes the Service on a static port on each node, and LoadBalancer (in supported clouds) provisions an external load balancer that routes traffic to the Service. ExternalName maps a Service name to an external DNS name. But across these variants, the abstraction is consistent: a Service defines how to access a logical group of Pods.
Option A (Port) is not a Kubernetes resource type; ports are fields within resources. Option C (DNS) is a supporting mechanism (CoreDNS creates DNS entries for Services), but DNS is not the resource you create to expose the app. Option D (Deployment) manages Pod replicas and rollouts, but it does not directly provide stable networking access; you typically pair a Deployment with a Service to expose it.
This is a core cloud-native pattern: controllers manage compute, Services manage stable connectivity, and higher-level gateways like Ingress provide L7 routing for HTTP/HTTPS. So, the Kubernetes resource used to expose an application is Service (B).
=========
Which of the following is a definition of Hybrid Cloud?
A combination of services running in public and private data centers, only including data centers from the same cloud provider.
A cloud native architecture that uses services running in public clouds, excluding data centers in different availability zones.
A cloud native architecture that uses services running in different public and private clouds, including on-premises data centers.
A combination of services running in public and private data centers, excluding serverless functions.
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations. Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience. Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
=========
Let’s assume that an organization needs to process large amounts of data in bursts, on a cloud-based Kubernetes cluster. For instance: each Monday morning, they need to run a batch of 1000 compute jobs of 1 hour each, and these jobs must be completed by Monday night. What’s going to be the most cost-effective method?
Run a group of nodes with the exact required size to complete the batch on time, and use a combination of taints, tolerations, and nodeSelectors to reserve these nodes to the batch jobs.
Leverage the Kubernetes Cluster Autoscaler to automatically start and stop nodes as they’re needed.
Commit to a specific level of spending to get discounted prices (with e.g. “reserved instances” or similar mechanisms).
Use PriorityClasses so that the weekly batch job gets priority over other workloads running on the cluster, and can be completed on time.
Burst workloads are a classic elasticity problem: you need large capacity for a short window, then very little capacity the rest of the week. The most cost-effective approach in a cloud-based Kubernetes environment is to scale infrastructure dynamically, matching node count to current demand. That’s exactly what Cluster Autoscaler is designed for: it adds nodes when Pods cannot be scheduled due to insufficient resources and removes nodes when they become underutilized and can be drained safely. Therefore B is correct.
Option A can work operationally, but it commonly results in paying for a reserved “standing army” of nodes that sit idle most of the week—wasteful for bursty patterns unless the nodes are repurposed for other workloads. Taints/tolerations and nodeSelectors are placement tools; they don’t reduce cost by themselves and may increase waste if they isolate nodes. Option D (PriorityClasses) affects which Pods get scheduled first given available capacity, but it does not create capacity. If the cluster doesn’t have enough nodes, high priority Pods will still remain Pending. Option C (reserved instances or committed-use discounts) can reduce unit price, but it assumes relatively predictable baseline usage. For true bursts, you usually want a smaller baseline plus autoscaling, and optionally combine it with discounted capacity types if your cloud supports them.
In Kubernetes terms, the control loop is: batch Jobs create Pods → scheduler tries to place Pods → if many Pods are Pending due to insufficient CPU/memory, Cluster Autoscaler observes this and increases the node group size → new nodes join and kube-scheduler places Pods → after jobs finish and nodes become empty, Cluster Autoscaler drains and removes nodes. This matches cloud-native principles: elasticity, pay-for-what-you-use, and automation. It minimizes idle capacity while still meeting the completion deadline.
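A back-of-the-envelope cost comparison makes the point concrete. The node size (one job per node at a time) and the $0.10/node-hour price are assumptions for illustration only:

```python
# Cost sketch: 1000 one-hour jobs once a week, autoscaled vs. a static fleet.
PRICE_PER_NODE_HOUR = 0.10  # assumed price
JOBS, JOB_HOURS = 1000, 1
WEEK_HOURS = 7 * 24

# Autoscaled: nodes exist only while jobs run -> pay for ~1000 node-hours.
autoscaled_cost = JOBS * JOB_HOURS * PRICE_PER_NODE_HOUR

# Static fleet sized to finish inside a 12-hour Monday window, but billed
# for the whole week whether idle or not.
window_hours = 12
static_nodes = -(-JOBS * JOB_HOURS // window_hours)  # ceil(1000/12) = 84 nodes
static_cost = static_nodes * WEEK_HOURS * PRICE_PER_NODE_HOUR

print(f"autoscaled: ${autoscaled_cost:.2f}/week")  # $100.00
print(f"static:     ${static_cost:.2f}/week")      # $1411.20
```

Even with generous assumptions, the always-on fleet costs an order of magnitude more, which is why elasticity (option B) beats static sizing (option A) for bursty batch work.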
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
What does “Continuous Integration” mean?
The continuous integration and testing of code changes from multiple sources manually.
The continuous integration and testing of code changes from multiple sources via automation.
The continuous integration of changes from one environment to another.
The continuous integration of new tools to support developers in a project.
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous” part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
=========
What are the 3 pillars of Observability?
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
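As a small illustration of the metrics pillar feeding alerting, a Prometheus rule like the sketch below fires on a sustained error-rate spike; the metric name and labels follow common convention but are assumptions about the instrumented application:

```yaml
# Illustrative Prometheus alerting rule; http_requests_total and its
# status label are conventional names, not guaranteed to exist in your app.
groups:
  - name: availability
    rules:
      - alert: HighErrorRate
        # Ratio of 5xx responses to all requests over the last 5 minutes.
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "Error rate above 5% for 10 minutes"
```

An alert like this is typically the jumping-off point into logs and traces for the same time window and request IDs.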
Therefore, the verified correct answer is A.
=========
What is an ephemeral container?
A specialized container that runs as root for infosec applications.
A specialized container that runs temporarily in an existing Pod.
A specialized container that extends and enhances the main container in a Pod.
A specialized container that runs before the app container in a Pod.
B is correct: an ephemeral container is a temporary container you can add to an existing Pod for troubleshooting and debugging without restarting the Pod. This capability is especially useful when a running container image is minimal (distroless) and lacks debugging tools like sh, curl, or ps. Instead of rebuilding the workload image or disrupting the Pod, you attach an ephemeral container that includes the tools you need, then inspect processes, networking, filesystem mounts, and runtime behavior.
Ephemeral containers are not part of the original Pod spec the same way normal containers are. They are added via a dedicated subresource and are generally not restarted automatically like regular containers. They are meant for interactive investigation, not for ongoing workload functionality.
Why the other options are incorrect:
D describes init containers, which run before app containers start and are used for setup tasks.
C resembles the “sidecar” concept (a supporting container that runs alongside the main container), but sidecars are normal containers defined in the Pod spec, not ephemeral containers.
A is not a definition; ephemeral containers are not “root by design” (they can run with various security contexts depending on policy), and they aren’t limited to infosec use cases.
In Kubernetes operations, ephemeral containers complement kubectl exec and logs. If the target container is crash-looping or lacks a shell, exec may not help; adding an ephemeral container provides a safe and Kubernetes-native debugging path. So, the accurate definition is B.
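For illustration, running `kubectl debug -it my-pod --image=busybox:1.36 --target=app` (all names hypothetical) asks the API server, via the ephemeralcontainers subresource, to append an entry like this to the Pod — you cannot set this field through a normal create/apply:

```yaml
# What the Pod object carries after kubectl debug attaches a debugger.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  ephemeralContainers:
    - name: debugger
      image: busybox:1.36
      stdin: true
      tty: true
      # Share the target container's process namespace so its
      # processes are visible from the debug shell.
      targetContainerName: app
```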
=========
Which component of the node is responsible for running workloads?
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. Kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible for running workloads” is the container runtime, option D.
=========
What is the practice of bringing financial accountability to the variable spend model of cloud resources?
FaaS
DevOps
CloudCost
FinOps
The practice of bringing financial accountability to cloud spending—where costs are variable and usage-based—is called FinOps, so D is correct. FinOps (Financial Operations) is an operating model and culture that helps organizations manage cloud costs by connecting engineering, finance, and business teams. Because cloud resources can be provisioned quickly and billed dynamically, traditional budgeting approaches often fail to keep pace. FinOps addresses this by introducing shared visibility, governance, and optimization processes that enable teams to make cost-aware decisions while still moving fast.
In Kubernetes and cloud-native architectures, variable spend shows up in many ways: autoscaling node pools, over-provisioned resource requests, idle clusters, persistent volumes, load balancers, egress traffic, managed services, and observability tooling. FinOps practices encourage tagging/labeling for cost attribution, defining cost KPIs, enforcing budget guardrails, and continuously optimizing usage (right-sizing resources, scaling policies, turning off unused environments, and selecting cost-effective architectures).
Why the other options are incorrect: FaaS (Function as a Service) is a compute model (serverless), not a financial accountability practice. DevOps is a cultural and technical practice focused on collaboration and delivery speed, not specifically cloud cost accountability (though it can complement FinOps). CloudCost is not a widely recognized standard term in the way FinOps is.
In practice, FinOps for Kubernetes often involves improving resource efficiency: aligning requests/limits with real usage, using HPA/VPA appropriately, selecting instance types that match workload profiles, managing cluster autoscaler settings, and allocating shared platform costs to teams via labels/namespaces. It also includes forecasting and anomaly detection, because cloud-native spend can spike quickly due to misconfigurations (e.g., runaway autoscaling or excessive log ingestion).
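A sketch of what this looks like in a manifest — hypothetical team and cost-center labels for attribution, plus right-sized requests and limits:

```yaml
# Cost-attribution labels are organizational conventions, not Kubernetes
# standards; names and values here are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-api
  labels:
    team: payments        # attribute cost per team
    cost-center: cc-1234  # hypothetical chargeback code
spec:
  replicas: 2
  selector:
    matchLabels:
      app: billing-api
  template:
    metadata:
      labels:
        app: billing-api
        team: payments
    spec:
      containers:
        - name: api
          image: registry.example.com/billing-api:1.0.0
          resources:
            requests:      # keep close to observed usage to avoid
              cpu: 250m    # paying for idle headroom
              memory: 256Mi
            limits:
              memory: 512Mi
```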
So, the correct term for financial accountability in cloud variable spend is FinOps (D).
=========
Which of the following is a primary use case of Istio in a Kubernetes cluster?
To manage and control the versions of container runtimes used on nodes between services.
To provide secure built-in database management features for application workloads.
To provision and manage persistent storage volumes for stateful applications.
To provide service mesh capabilities such as traffic management, observability, and security between services.
Istio is a widely adopted service mesh for Kubernetes that focuses on managing service-to-service communication in distributed, microservices-based architectures. Its primary use case is to provide advanced traffic management, observability, and security capabilities between services, making option D the correct answer.
In a Kubernetes cluster, applications often consist of many independent services that communicate over the network. Managing this communication using application code alone becomes complex and error-prone as systems scale. Istio addresses this challenge by inserting a transparent data plane—typically based on Envoy proxies—alongside application workloads. These proxies intercept all inbound and outbound traffic, enabling consistent policy enforcement without requiring code changes.
Istio’s traffic management features include fine-grained routing, retries, timeouts, circuit breaking, fault injection, and canary or blue–green deployments. These capabilities allow operators to control how traffic flows between services, test new versions safely, and improve overall system resilience. For observability, Istio provides detailed telemetry such as metrics, logs, and distributed traces, giving deep insight into service performance and behavior. On the security front, Istio enables mutual TLS (mTLS) for service-to-service communication, strong identity, and access policies to secure traffic within the cluster.
Option A is incorrect because container runtime management is handled at the node and cluster level by Kubernetes and the underlying operating system, not by Istio. Option B is incorrect because Istio does not provide database management functionality. Option C is incorrect because persistent storage provisioning is handled by Kubernetes storage APIs and CSI drivers, not by service meshes.
By abstracting networking concerns away from application code, Istio helps teams operate complex microservices environments more safely and efficiently. Therefore, the correct and verified answer is Option D, which accurately reflects Istio’s core purpose and documented use cases in Kubernetes ecosystems.
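A brief sketch of the traffic-management side — a weighted canary split. Service and subset names are illustrative, and a matching DestinationRule defining the v1/v2 subsets is assumed to exist:

```yaml
# Illustrative Istio canary routing; 90% of traffic stays on the
# stable subset while 10% is sent to the new version.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```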
=========
Which statement about the Kubernetes network model is correct?
Pods can only communicate with Pods exposed via a Service.
Pods can communicate with all Pods without NAT.
The Pod IP is only visible inside a Pod.
The Service IP is used for the communication between Services.
Kubernetes’ networking model assumes that every Pod has its own IP address and that Pods can communicate with other Pods across nodes without requiring network address translation (NAT). That makes B correct. This is one of Kubernetes’ core design assumptions and is typically implemented via CNI plugins that provide flat, routable Pod networking (or equivalent behavior using encapsulation/routing).
This model matters because scheduling is dynamic. The scheduler can place Pods anywhere in the cluster, and applications should not need to know whether a peer is on the same node or a different node. With the Kubernetes network model, Pod-to-Pod communication works uniformly: a Pod can reach any other Pod IP directly, and nodes can reach Pods as well. Services and DNS add stable naming and load balancing, but direct Pod connectivity is part of the baseline model.
Option A is incorrect because Pods can communicate directly using Pod IPs even without Services (subject to NetworkPolicies and routing). Services are abstractions for stable access and load balancing; they are not the only way Pods can communicate. Option C is incorrect because Pod IPs are not limited to visibility “inside a Pod”; they are routable within the cluster network. Option D is misleading: Services are often used by Pods (clients) to reach a set of Pods (backends). “Service IP used for communication between Services” is not the fundamental model; Services are virtual IPs for reaching workloads, and “Service-to-Service communication” usually means one workload calling another via the target Service name.
A useful way to remember the official model: (1) all Pods can communicate with all other Pods (no NAT), (2) all nodes can communicate with all Pods (no NAT), (3) Pod IPs are unique cluster-wide. This enables consistent microservice connectivity and supports higher-level traffic management layers like Ingress and service meshes.
=========
What are the advantages of adopting a GitOps approach for your deployments?
Reduce failed deployments, operational costs, and fragile release processes.
Reduce failed deployments, configuration drift, and fragile release processes.
Reduce failed deployments, operational costs, and learn git.
Reduce failed deployments, configuration drift and improve your reputation.
The correct answer is B: GitOps helps reduce failed deployments, reduce configuration drift, and reduce fragile release processes. GitOps is an operating model where Git is the source of truth for declarative configuration (Kubernetes manifests, Helm releases, Kustomize overlays). A GitOps controller (like Flux or Argo CD) continuously reconciles the cluster’s actual state to match what’s declared in Git. This creates a stable, repeatable deployment pipeline and minimizes “snowflake” environments.
Reducing failed deployments: changes go through pull requests, code review, automated checks, and controlled merges. Deployments become predictable because the controller applies known-good, versioned configuration rather than ad-hoc manual commands. Rollbacks are also simpler—reverting a Git commit returns the cluster to the prior desired state.
Reducing configuration drift: without GitOps, clusters often drift because humans apply hotfixes directly in production or because different environments diverge over time. With GitOps, the controller detects drift and either alerts or automatically corrects it, restoring alignment with Git.
Reducing fragile release processes: releases become standardized and auditable. Git history provides an immutable record of who changed what and when. Promotion between environments becomes systematic (merge/branch/tag), and the same declarative artifacts are used consistently.
The other options include items that are either not the primary GitOps promise (like “learn git”) or subjective (“improve your reputation”). Operational cost reduction can happen indirectly through fewer incidents and more automation, but the most canonical and direct GitOps advantages in Kubernetes delivery are reliability and drift control—captured precisely in B.
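A minimal sketch of this model using an Argo CD Application (repository URL, path, and namespaces are placeholders):

```yaml
# Git is the source of truth; the controller reconciles the cluster to it.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-config.git
    targetRevision: main
    path: apps/my-app
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert out-of-band (drifted) changes
```

The `selfHeal` setting is what directly addresses configuration drift, and `prune` keeps the cluster from accumulating orphaned resources.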
=========
What is the order of 4C’s in Cloud Native Security, starting with the layer that a user has the most control over?
Cloud -> Container -> Cluster -> Code
Container -> Cluster -> Code -> Cloud
Cluster -> Container -> Code -> Cloud
Code -> Container -> Cluster -> Cloud
The Cloud Native Security “4C’s” model is commonly presented as Code, Container, Cluster, Cloud, ordered from the layer you control most directly to the one you control least—therefore D is correct. The idea is defense-in-depth across layers, recognizing that responsibilities are shared between developers, platform teams, and cloud providers.
Code is where users have the most direct control: application logic, dependencies, secure coding practices, secrets handling patterns, and testing. This includes validating inputs, avoiding vulnerabilities, and scanning dependencies. Next is the Container layer: building secure images, minimizing image size/attack surface, using non-root users, setting file permissions, and scanning images for known CVEs. Container security is about ensuring the artifact you run is trustworthy and hardened.
Then comes the Cluster layer: Kubernetes configuration and runtime controls, including RBAC, admission policies (OPA/Gatekeeper), Pod Security standards, network policies, runtime security, audit logging, and node hardening practices. Cluster controls determine what can run and how workloads interact. Finally, the Cloud layer includes the infrastructure and provider controls—IAM, VPC/networking, KMS, managed control plane protections, and physical security—which users influence through configuration but do not fully own.
The model’s value is prioritization: start with what you control most (code), then harden the container artifact, then enforce cluster policy and runtime protections, and finally ensure cloud controls are configured properly. This layered approach aligns well with Kubernetes security guidance and modern shared-responsibility models.
=========
Manual reclamation policy of a PV resource is known as:
claimRef
Delete
Retain
Recycle
The correct answer is C: Retain. In Kubernetes persistent storage, a PersistentVolume (PV) has a persistentVolumeReclaimPolicy that determines what happens to the underlying storage asset after its PersistentVolumeClaim (PVC) is deleted. The reclaim policy options historically include Delete and Retain (and Recycle, which is deprecated/removed in many modern contexts). “Manual reclamation” refers to the administrator having to manually clean up and/or rebind the storage after the claim is released—this behavior corresponds to Retain.
With Retain, when the PVC is deleted, the PV moves to a “Released” state, but the actual storage resource (cloud disk, NFS path, etc.) is not deleted automatically. Kubernetes will not automatically make that PV available for a new claim until an administrator takes action—typically cleaning the data, removing the old claim reference, and/or creating a new PV/PVC binding flow. This is important for data safety: you don’t want to automatically delete sensitive or valuable data just because a claim was removed.
By contrast, Delete means Kubernetes (via the storage provisioner/CSI driver) will delete the underlying storage asset when the claim is deleted—useful for dynamic provisioning and disposable environments. Recycle used to scrub the volume contents and make it available again, but it’s not the recommended modern approach and has been phased out in favor of dynamic provisioning and explicit workflows.
So, the policy that implies manual intervention and manual cleanup/reuse is Retain, which is option C.
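A sketch of a PV declaring manual reclamation (the NFS server and export path are placeholders):

```yaml
# The storage asset survives claim deletion; an admin must clean up
# and rebind it manually.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs.example.com
    path: /exports/data
# For an existing PV, the policy can be switched with:
#   kubectl patch pv data-pv \
#     -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```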
=========
Which two elements are shared between containers in the same pod?
Network resources and liveness probes.
Storage and container image registry.
Storage and network resources.
Network resources and Dockerfiles.
The correct answer is C: Storage and network resources. In Kubernetes, a Pod is the smallest schedulable unit and acts like a “logical host” for its containers. Containers inside the same Pod share a number of namespaces and resources, most notably:
Network: all containers in a Pod share the same network namespace, which means they share a single Pod IP address and the same port space. They can talk to each other via localhost and coordinate tightly without exposing separate network endpoints.
Storage: containers in a Pod can share data through Pod volumes. Volumes (like emptyDir, ConfigMap/Secret volumes, or PVC-backed volumes) are defined at the Pod level and can be mounted into multiple containers within the Pod. This enables common patterns like a sidecar shipping logs that the main container writes to a shared volume, or an init/sidecar container producing configuration or certificates for the main container.
Why other options are wrong: liveness probes (A) are defined per container (or per Pod template) but are not a “shared” resource between containers. A container image registry (B) is an external system and not a shared in-Pod element. Dockerfiles (D) are build-time artifacts, irrelevant at runtime, and not shared resources.
This question is a classic test of Pod fundamentals: multi-container Pods work precisely because they share networking and volumes. This is also why the sidecar pattern is feasible—sidecars can intercept traffic on localhost, export metrics, or ship logs while sharing the same lifecycle boundary and scheduling placement.
Therefore, the verified correct choice is C.
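A minimal sketch showing both shared elements in one Pod (names and commands are illustrative):

```yaml
# Two containers share the Pod's network namespace (same IP, localhost)
# and an emptyDir volume mounted into both.
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log
    - name: log-reader
      image: busybox:1.36
      # Reads what the app container writes; it could also reach any
      # port the app listens on via localhost.
      command: ["sh", "-c", "tail -f /var/log/app.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log
```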
=========
What is a Kubernetes Service Endpoint?
It is the API endpoint of our Kubernetes cluster.
It is a name of special Pod in kube-system namespace.
It is an IP address that we can access from the Internet.
It is an object that gets IP addresses of individual Pods assigned to it.
A Kubernetes Service routes traffic to a dynamic set of backends (usually Pods). The set of backend IPs and ports is represented by endpoint-tracking resources. Historically this was the Endpoints object; today Kubernetes commonly uses EndpointSlice for scalability, but the concept remains the same: endpoints represent the concrete network destinations behind a Service. That’s why D is correct: a Service endpoint is an object that contains the IP addresses (and ports) of the individual Pods (or other backends) associated with that Service.
When a Service has a selector, Kubernetes automatically maintains endpoints by watching which Pods match the selector and are Ready, then publishing those Pod IPs into Endpoints/EndpointSlices. Consumers don’t usually use endpoints directly; instead they call the Service DNS name, and kube-proxy (or an alternate dataplane) forwards traffic to one of the endpoints. Still, endpoints are critical because they are what make Service routing accurate and up to date during scaling events, rolling updates, and failures.
Option A confuses this with the Kubernetes API server endpoint (the cluster API URL). Option B is incorrect; there’s no special “Service Endpoint Pod.” Option C describes an external/public IP concept, which may exist for LoadBalancer Services, but “Service endpoint” in Kubernetes vocabulary is about the backend destinations, not the public entrypoint.
Operationally, endpoints are useful for debugging: if a Service isn’t routing traffic, checking Endpoints/EndpointSlices shows whether the Service actually has backends and whether readiness is excluding Pods. This ties directly into Kubernetes service discovery and load balancing: the Service is the stable front door; endpoints are the actual backends.
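A sketch of a selector-based Service; Kubernetes maintains the matching endpoint objects automatically:

```yaml
# Pods labeled app=web that are Ready become this Service's endpoints.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
# To see which Pod IPs currently back the Service:
#   kubectl get endpointslices -l kubernetes.io/service-name=web
```

If that query returns no addresses, the Service has no backends — usually a selector mismatch or failing readiness probes.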
=========
What service account does a Pod use in a given namespace when the service account is not specified?
admin
sysadmin
root
default
D (default) is correct. In Kubernetes, if you create a Pod (or a controller creates Pods) without specifying spec.serviceAccountName, Kubernetes assigns the Pod the default ServiceAccount in that namespace. The ServiceAccount determines what identity the Pod uses when accessing the Kubernetes API (for example, via the in-cluster token mounted into the Pod, when token automounting is enabled).
Every namespace typically has a default ServiceAccount created automatically. The permissions associated with that ServiceAccount are determined by RBAC bindings. In many clusters, the default ServiceAccount has minimal permissions (or none) as a security best practice, because leaving it overly privileged would allow any Pod to access sensitive cluster APIs.
Why the other options are wrong: Kubernetes does not automatically choose “admin,” “sysadmin,” or “root” service accounts. Those are not standard implicit identities, and automatically granting admin privileges would be insecure. Instead, Kubernetes follows a predictable, least-privilege-friendly default: use the namespace’s default ServiceAccount unless you explicitly request a different one.
Operationally, this matters for security and troubleshooting. If an application in a Pod is failing with “forbidden” errors when calling the API, it often means it’s using the default ServiceAccount without the necessary RBAC permissions. The correct fix is usually to create a dedicated ServiceAccount and bind only the required roles, then set serviceAccountName in the Pod template. Conversely, if you’re hardening a cluster, you often disable automounting of service account tokens for Pods that don’t need API access.
Therefore, the verified correct answer is D: default.
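A sketch of the recommended pattern — a dedicated ServiceAccount with only the permissions the workload needs (names and the namespace are hypothetical):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-pods
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-reader-read-pods
subjects:
  - kind: ServiceAccount
    name: app-reader
    namespace: default   # the ServiceAccount's namespace
roleRef:
  kind: Role
  name: read-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  serviceAccountName: app-reader  # explicit, instead of falling back to "default"
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
```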
=========
What’s the difference between a security profile and a security context?
Security Contexts configure Clusters and Namespaces at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Contexts configure Pods and Containers at runtime. Security profiles are control plane mechanisms to enforce specific settings in the Security Context.
Security Profiles configure Pods and Containers at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
Security Profiles configure Clusters and Namespaces at runtime. Security Contexts are control plane mechanisms to enforce specific settings in the Security Profile.
The correct answer is B. In Kubernetes, a securityContext is part of the Pod and container specification that configures runtime security settings for that workload—things like runAsUser, runAsNonRoot, Linux capabilities, readOnlyRootFilesystem, allowPrivilegeEscalation, SELinux options, seccomp profile selection, and filesystem group (fsGroup). These settings directly affect how the Pod’s containers run on the node.
A security profile, in contrast, is a higher-level policy/standard enforced by the cluster control plane (typically via admission control) to ensure workloads meet required security constraints. In modern Kubernetes, this concept aligns with mechanisms like Pod Security Standards (Privileged, Baseline, Restricted) enforced through Pod Security Admission. The “profile” defines what is allowed or forbidden (for example, disallow privileged containers, disallow hostPath mounts, require non-root, restrict capabilities). The control plane enforces these constraints by validating or rejecting Pod specs that do not comply—ensuring consistent security posture across namespaces and teams.
Option A and D are incorrect because security contexts do not “configure clusters and namespaces at runtime”; security contexts apply to Pods/containers. Option C reverses the relationship: security profiles don’t configure Pods at runtime; they constrain what security context settings (and other fields) are acceptable.
Practically, you can think of it as:
SecurityContext = workload-level configuration knobs (declared in manifests, applied at runtime).
SecurityProfile/Standards = cluster-level guardrails that determine which knobs/settings are permitted.
This separation supports least privilege: developers declare needed runtime settings, and cluster governance ensures those settings stay within approved boundaries. Therefore, B is the verified answer.
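A sketch contrasting the two levels (image name is a placeholder): the Pod declares its runtime knobs, while a namespace label asks Pod Security Admission to enforce the Restricted profile against them:

```yaml
# Workload-level securityContext: configuration knobs for this Pod.
apiVersion: v1
kind: Pod
metadata:
  name: hardened
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.0.0
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
---
# Cluster-level guardrail: Pods in this namespace that fall outside the
# Restricted profile are rejected at admission time.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    pod-security.kubernetes.io/enforce: restricted
```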
=========
What are the two steps performed by the kube-scheduler to select a node to schedule a pod?
Grouping and placing
Filtering and selecting
Filtering and scoring
Scoring and creating
The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod’s requirements are removed from consideration.
Second, scoring ranks the remaining feasible nodes using priority functions to choose the “best” placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
Option B (“filtering and selecting”) is close but misses the explicit scoring step that is central to scheduler design. The scheduler does “select” a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D are not how scheduler internals are described.
Operationally, understanding filtering vs scoring helps troubleshoot scheduling failures. If a Pod can’t be scheduled, it failed in filtering—kubectl describe pod often shows “0/… nodes are available” reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, it’s often about scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).
So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.
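The distinction maps directly onto node affinity: required terms participate in filtering, preferred terms in scoring. A sketch (label keys and values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  affinity:
    nodeAffinity:
      # Filtering: nodes without this label are excluded outright.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: disktype
                operator: In
                values: ["ssd"]
      # Scoring: among feasible nodes, prefer this zone (weight 1-100).
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 80
          preference:
            matchExpressions:
              - key: topology.kubernetes.io/zone
                operator: In
                values: ["eu-west-1a"]
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
```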
=========
Which statement best describes the role of kubelet on a Kubernetes worker node?
kubelet manages the container runtime and ensures that all Pods scheduled to the node are running as expected.
kubelet configures networking rules on each node to handle traffic routing for Services in the cluster.
kubelet monitors cluster-wide resource usage and assigns Pods to the most suitable nodes for execution.
kubelet acts as the primary API component that stores and manages cluster state information.
The kubelet is the primary node-level agent in Kubernetes and is responsible for ensuring that workloads assigned to a worker node are executed correctly. Its core function is to manage container execution on the node and ensure that all Pods scheduled to that node are running as expected, which makes option A the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over responsibility for running the Pod. It continuously watches the API server for Pod specifications that target its node and then interacts with the container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). The kubelet starts, stops, and restarts containers to match the desired state defined in the Pod specification.
In addition to lifecycle management, the kubelet performs ongoing health monitoring. It executes liveness, readiness, and startup probes, reports Pod and node status back to the API server, and enforces resource limits defined in the Pod specification. If a container crashes or becomes unhealthy, the kubelet initiates recovery actions such as restarting the container.
Option B is incorrect because configuring Service traffic routing is the responsibility of kube-proxy and the cluster’s networking layer, not the kubelet. Option C is incorrect because cluster-wide resource monitoring and Pod placement decisions are handled by the kube-scheduler. Option D is incorrect because cluster state is managed by the API server and stored in etcd, not by the kubelet.
In summary, the kubelet acts as the executor and supervisor of Pods on each worker node. It bridges the Kubernetes control plane and the actual runtime environment, ensuring that containers are running, healthy, and aligned with the declared configuration. Therefore, Option A is the correct and verified answer.
=========
What best describes vertical scaling of an application deployment?
Adding/removing applications to meet demand.
Adding/removing node instances to the cluster to meet demand.
Adding/removing resources to applications to meet demand.
Adding/removing application instances of the same application to meet demand.
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
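A minimal sketch of what vertical scaling touches in a manifest (the Deployment name, image, and figures are illustrative):

```yaml
# Vertical scaling in practice: raising or lowering these values changes
# the capacity of each Pod, not the number of Pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api                  # hypothetical name
spec:
  replicas: 2                # unchanged by vertical scaling
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: example/api:1.4.2   # hypothetical image
          resources:
            requests:
              cpu: "250m"            # what the scheduler reserves
              memory: "256Mi"
            limits:
              cpu: "500m"            # hard cap per container
              memory: "512Mi"
```

Changing `replicas` instead would be horizontal scaling.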
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes), which is cluster/node autoscaling (the Cluster Autoscaler in cloud environments). Option A does not describe a standard scaling pattern.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound/cpu-bound behavior.
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
What is an important consideration when choosing a base image for a container in a Kubernetes deployment?
It should be minimal and purpose-built for the application to reduce attack surface and improve performance.
It should always be the latest version to ensure access to the newest features.
It should be the largest available image to ensure all dependencies are included.
It can be any existing image from the public repository without consideration of its contents.
Choosing an appropriate base image is a critical decision in building containerized applications for Kubernetes, as it directly impacts security, performance, reliability, and operational efficiency. A key best practice is to select a minimal, purpose-built base image, making option A the correct answer.
Minimal base images—such as distroless images or slim variants of common distributions—contain only the essential components required to run the application. By excluding unnecessary packages, shells, and utilities, these images significantly reduce the attack surface. Fewer components mean fewer potential vulnerabilities, which is especially important in Kubernetes environments where containers are often deployed at scale and exposed to dynamic network traffic.
Smaller images also improve performance and efficiency. They reduce image size, leading to faster image pulls, quicker Pod startup times, and lower network and storage overhead. This is particularly beneficial in large clusters or during frequent deployments, scaling events, or rolling updates. Kubernetes’ design emphasizes fast, repeatable deployments, and lightweight images align well with these goals.
Option B is incorrect because always using the latest image version can introduce instability or unexpected breaking changes. Kubernetes best practices recommend using explicitly versioned and tested images to ensure predictable behavior and reproducibility. Option C is incorrect because large images increase the attack surface, slow down deployments, and often include unnecessary dependencies that are never used by the application. Option D is incorrect because blindly using public images without inspecting their contents or provenance introduces serious security and compliance risks.
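The versioning point can be sketched in a container spec (registry, image name, and tag are illustrative):

```yaml
# Pin an explicit, tested tag (or, stronger, an immutable digest)
# instead of relying on a mutable :latest tag.
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.7.3          # pinned, tested version
      # image: registry.example.com/app@sha256:...   # or pin by digest
```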
Kubernetes documentation and cloud-native security guidance consistently emphasize the principle of least privilege and minimalism in container images. A well-chosen base image supports secure defaults, faster operations, and easier maintenance, all of which are essential for running reliable workloads in production Kubernetes environments.
Therefore, the correct and verified answer is Option A.
=========
What Kubernetes control plane component exposes the programmatic interface used to create, manage and interact with the Kubernetes objects?
kube-controller-manager
kube-proxy
kube-apiserver
etcd
The kube-apiserver is the front door of the Kubernetes control plane and exposes the programmatic interface used to create, read, update, delete, and watch Kubernetes objects—so C is correct. Every interaction with cluster state ultimately goes through the Kubernetes API. Tools like kubectl, client libraries, GitOps controllers, operators, and core control plane components (scheduler and controllers) all communicate with the API server to submit desired state and to observe current state.
The API server is responsible for handling authentication (who are you?), authorization (what are you allowed to do?), and admission control (should this request be allowed and possibly mutated/validated?). After a request passes these gates, the API server persists the object’s desired state to etcd (the backing datastore) and returns a response. The API server also provides a watch mechanism so controllers can react to changes efficiently, enabling Kubernetes’ reconciliation model.
It’s important to distinguish this from the other options. etcd stores cluster data but does not expose the cluster’s primary user-facing API; it’s an internal datastore. kube-controller-manager runs control loops (controllers) that continuously reconcile resources (like Deployments, Nodes, Jobs) but it consumes the API rather than exposing it. kube-proxy is a node-level component implementing Service networking rules and is unrelated to the control-plane API endpoint.
Because Kubernetes is “API-driven,” the kube-apiserver is central: if it is unavailable, you cannot create workloads, update configurations, or even reliably observe cluster state. This is why high availability architectures prioritize multiple API server instances behind a load balancer, and why securing the API server (RBAC, TLS, audit) is a primary operational concern.
=========
What does “continuous” mean in the context of CI/CD?
Frequent releases, manual processes, repeatable, fast processing
Periodic releases, manual processes, repeatable, automated processing
Frequent releases, automated processes, repeatable, fast processing
Periodic releases, automated processes, repeatable, automated processing
The correct answer is C: in CI/CD, “continuous” implies frequent releases, automation, repeatability, and fast feedback/processing. The intent is to reduce batch size and latency between code change and validation/deployment. Instead of integrating or releasing in large, risky chunks, teams integrate changes continually and rely on automation to validate and deliver them safely.
“Continuous” does not mean “periodic” (which eliminates B and D). It also does not mean “manual processes” (which eliminates A and B). Automation is core: build, test, security checks, and deployment steps are consistently executed by pipeline systems, producing reliable outcomes and auditability.
In practice, CI means every merge triggers automated builds and tests so the main branch stays in a healthy state. CD means those validated artifacts are promoted through environments with minimal manual steps, often including progressive delivery controls (canary, blue/green), automated rollbacks on health signal failures, and policy checks. Kubernetes works well with CI/CD because it is declarative and supports rollout primitives: Deployments, readiness probes, and rollback revision history enable safer continuous delivery when paired with pipeline automation.
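The automation described above can be sketched as a minimal CI pipeline (GitHub Actions-style syntax; the job name and build/test commands are hypothetical):

```yaml
# Illustrative CI pipeline: every push triggers an automated,
# repeatable build-and-test run rather than a periodic, manual one.
name: ci
on: [push]                   # continuous: runs on every change
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build      # hypothetical build command
      - run: make test       # automated tests gate the change
```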
Repeatability is a major part of “continuous.” The same pipeline should run the same way every time, producing consistent artifacts and deployments. This reduces “works on my machine” issues and shortens incident resolution because changes are traceable and reproducible. Fast processing and frequent releases also mean smaller diffs, easier debugging, and quicker customer value delivery.
So, the combination that accurately reflects “continuous” in CI/CD is frequent + automated + repeatable + fast, which is option C.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
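A restricted-style security context combining these practices might look as follows (Pod name, UID, and image are illustrative):

```yaml
# Hypothetical Pod applying the non-root and no-escalation habits.
apiVersion: v1
kind: Pod
metadata:
  name: secure-app           # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true       # refuse to start as UID 0
    runAsUser: 10001         # explicit non-root UID
  containers:
    - name: app
      image: example/app:1.0.0            # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false   # the habit the question tests
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]      # drop all Linux capabilities
```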
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
In CNCF, who develops specifications for industry standards around container formats and runtimes?
Open Container Initiative (OCI)
Linux Foundation Certification Group (LFCG)
Container Network Interface (CNI)
Container Runtime Interface (CRI)
The organization responsible for defining widely adopted standards around container formats and runtime specifications is the Open Container Initiative (OCI), so A is correct. OCI defines the image specification (how container images are structured and stored) and the runtime specification (how to run a container), enabling interoperability across tooling and vendors. This is foundational to the cloud-native ecosystem because it allows different build tools, registries, runtimes, and orchestration platforms to work together reliably.
Within Kubernetes and CNCF-adjacent ecosystems, OCI standards are the reason an image built by one tool can be pushed to a registry and pulled/run by many different runtimes. For example, a Kubernetes node running containerd or CRI-O can run OCI-compliant images consistently. OCI standardization reduces fragmentation and vendor lock-in, which is a core motivation in open source cloud-native architecture.
The other options are not correct for this question. CNI (Container Network Interface) is a standard for configuring container networking, not container image formats and runtimes. CRI (Container Runtime Interface) is a Kubernetes-specific interface between kubelet and container runtimes—it enables pluggable runtimes for Kubernetes, but it is not the industry standard body for container format/runtime specifications. “LFCG” is not the recognized standards body here.
In short: OCI defines the “language” for container images and runtime behavior, which is why the same image can be executed across environments. Kubernetes relies on those standards indirectly through runtimes and tooling, but the specification work is owned by OCI. Therefore, the verified correct answer is A.
=========
TESTED 11 Mar 2026

