Why do administrators need a container orchestration tool?
To manage the lifecycle of an elevated number of containers.
To assess the security risks of the container images used in production.
To learn how to transform monolithic applications into microservices.
Container orchestration tools such as Kubernetes are the future.
The correct answer is A. Container orchestration exists because running containers at scale is hard: you need to schedule workloads onto machines, keep them healthy, scale them up and down, roll out updates safely, and recover from failures automatically. Administrators (and platform teams) use orchestration tools like Kubernetes to manage the lifecycle of many containers across many nodes—handling placement, restart, rescheduling, networking/service discovery, and desired-state reconciliation.
At small scale, you can run containers manually or with basic scripts. But at “elevated” scale (many services, many replicas, many nodes), manual management becomes unreliable and brittle. Orchestration provides primitives and controllers that continuously converge actual state toward desired state: if a container crashes, it is restarted; if a node dies, replacement Pods are scheduled; if traffic increases, replicas can be increased via autoscaling; if configuration changes, rolling updates can be coordinated with readiness checks.
Option B (security risk assessment) is important, but it’s not why orchestration tools exist. Image scanning and supply-chain security are typically handled by CI/CD tooling and registries, not by orchestration as the primary purpose. Option C is a separate architectural modernization effort; orchestration can support microservices, but it isn’t required “to learn transformation.” Option D is an opinion statement rather than a functional need.
So the core administrator need is lifecycle management at scale: ensuring workloads run reliably, predictably, and efficiently across a fleet. That is exactly what option A states.
=========
Which of the following options includes valid API versions?
alpha1v1, beta3v3, v2
alpha1, beta3, v2
v1alpha1, v2beta3, v2
v1alpha1, v2beta3, 2.0
Kubernetes API versions follow a consistent naming pattern that indicates stability level and versioning. The valid forms include stable versions like v1, and pre-release versions such as v1alpha1, v1beta1, etc. Option C contains valid-looking Kubernetes version strings—v1alpha1, v2beta3, v2—so C is correct.
In Kubernetes, the “v” prefix is part of the standard for API versions. A stable API uses v1, v2, etc. Pre-release APIs include a stability marker: alpha (earliest, most changeable) and beta (more stable but still may change). The numeric suffix (e.g., alpha1, beta3) indicates iteration within that stability stage.
Option A is invalid because strings like alpha1v1 and beta3v3 do not match Kubernetes conventions (the v comes first, and alpha/beta are qualifiers after the version: v1alpha1). Option B is invalid because alpha1 and beta3 are missing the leading version prefix; Kubernetes API versions are not just “alpha1.” Option D includes 2.0, which looks like semantic versioning but is not the Kubernetes API version format. Kubernetes uses v2, not 2.0, for API versions.
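A quick illustration of how these strings appear in practice: the apiVersion field of a manifest carries the (group and) version. The group/version pairings below are examples only; kubectl api-versions lists what your cluster actually serves.

```yaml
# Stable, core-group API: just "v1"
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
---
# Pre-release API: the version string carries a stability qualifier (v1beta3 here).
# The group/version shown is an example; availability varies by cluster version.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta3
kind: FlowSchema
metadata:
  name: example-flowschema
```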
Understanding this matters because API versions signal compatibility guarantees. Stable APIs are supported for a defined deprecation window, while alpha/beta APIs may change in incompatible ways and can be removed more easily. When authoring manifests, selecting the correct apiVersion ensures the API server accepts your resource and that controllers interpret fields correctly.
Therefore, among the choices, C is the only option composed of valid Kubernetes-style API version strings.
=========
What is a sidecar container?
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
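A minimal sketch of the pattern (image names, paths, and commands are illustrative): both containers mount the same emptyDir volume, so the sidecar can read what the app writes.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs                    # shared by both containers
    emptyDir: {}
  containers:
  - name: app                     # main application container
    image: busybox:1.36           # placeholder for a real app image
    command: ["sh", "-c", "while true; do date >> /var/log/app/access.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-forwarder           # sidecar: tails and ships the app's logs
    image: busybox:1.36           # placeholder for a real log shipper
    command: ["sh", "-c", "tail -F /var/log/app/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
```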
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
=========
What is the default value for authorization-mode in Kubernetes API server?
--authorization-mode=RBAC
--authorization-mode=AlwaysAllow
--authorization-mode=AlwaysDeny
--authorization-mode=ABAC
The Kubernetes API server supports multiple authorization modes that determine whether an authenticated request is allowed to perform an action (verb) on a resource. Historically, the API server’s default authorization mode was AlwaysAllow, meaning that once a request was authenticated, it would be authorized without further checks. That is why the correct answer here is B.
However, it’s crucial to distinguish “default flag value” from “recommended configuration.” In production clusters, running with AlwaysAllow is insecure because it effectively removes authorization controls—any authenticated user (or component credential) could do anything the API permits. Modern Kubernetes best practices strongly recommend enabling RBAC (Role-Based Access Control), often alongside Node and Webhook authorization, so that permissions are granted explicitly using Roles/ClusterRoles and RoleBindings/ClusterRoleBindings. Many managed Kubernetes distributions and kubeadm-based setups commonly enable RBAC by default as part of cluster bootstrap, even if the API server’s historical default flag value is AlwaysAllow.
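For instance, kubeadm-based clusters commonly pass the flag through the API server section of the cluster configuration; a hedged sketch using kubeadm’s v1beta3 config format (verify the layout against your kubeadm version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    authorization-mode: "Node,RBAC"   # explicit, secure setting instead of AlwaysAllow
```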
So, the exam-style interpretation of this question is about the API server flag default, not what most real clusters should run. With RBAC enabled, authorization becomes granular: you can control who can read Secrets, who can create Deployments, who can exec into Pods, and so on, scoped to namespaces or cluster-wide. ABAC (Attribute-Based Access Control) exists but is generally discouraged compared to RBAC because it relies on policy files and is less ergonomic and less commonly used. AlwaysDeny is useful for hard lockdown testing but not for normal clusters.
In short: AlwaysAllow is the API server’s default mode (answer B), but RBAC is the secure, recommended choice you should expect to see enabled in almost any serious Kubernetes environment.
=========
Which group of container runtimes provides additional sandboxed isolation and elevated security?
rune, cgroups
docker, containerd
runsc, kata
crun, cri-o
The runtimes most associated with sandboxed isolation are gVisor’s runsc and Kata Containers, making C correct. Standard container runtimes (like containerd with runc) rely primarily on Linux namespaces and cgroups for isolation. That isolation is strong for many use cases, but it shares the host kernel, which can be a concern for multi-tenant or high-risk workloads.
gVisor (runsc) provides a user-space kernel-like layer that intercepts and mediates system calls, reducing the container’s direct interaction with the host kernel. Kata Containers takes a different approach: it runs containers inside lightweight virtual machines, providing hardware-virtualization boundaries (or VM-like isolation) while still integrating into container workflows. Both are used to increase isolation compared to traditional containers, and both can be integrated with Kubernetes through compatible CRI/runtime configurations.
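In Kubernetes, sandboxed runtimes are usually selected via a RuntimeClass; a minimal sketch (the handler name must match a handler configured in the node’s containerd/CRI-O, so treat "runsc" here as an example):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc                # handler configured in the node's container runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor    # run this Pod's containers under the sandboxed runtime
  containers:
  - name: app
    image: nginx:1.25         # illustrative image
```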
The other options are incorrect for the question’s intent. “rune, cgroups” is not a meaningful pairing here (cgroups is a Linux resource mechanism, not a runtime). “docker, containerd” are commonly used container platforms/runtimes but are not specifically the “sandboxed isolation” category (containerd typically uses runc for standard isolation). “crun, cri-o” represents a low-level OCI runtime (crun) and a CRI implementation (CRI-O), again not specifically a sandboxed-isolation grouping.
So, when the question asks for the group that provides additional sandboxing and elevated security, the correct, well-established answer is runsc + Kata.
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
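As a hedged sketch of the idea (GitHub Actions-style syntax; the registry, image name, and scanner choice are placeholders, not prescriptions), a pipeline might chain build, scan, push, and deploy on every change:

```yaml
name: build-scan-deploy
on:
  push:
    branches: [main]
jobs:
  deliver:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Build image
      run: docker build -t registry.example.com/myapp:${{ github.sha }} .
    - name: Scan image                 # repeatable security gate on every change
      run: trivy image registry.example.com/myapp:${{ github.sha }}
    - name: Push image
      run: docker push registry.example.com/myapp:${{ github.sha }}
    - name: Deploy                     # assumes cluster credentials are available to the runner
      run: kubectl set image deployment/myapp app=registry.example.com/myapp:${{ github.sha }}
```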
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker images) refers to artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
What edge and service proxy tool is designed to be integrated with cloud native applications?
CoreDNS
CNI
gRPC
Envoy
The correct answer is D: Envoy. Envoy is a high-performance edge and service proxy designed for cloud-native environments. It is commonly used as the data plane in service meshes and modern API gateways because it provides consistent traffic management, observability, and security features across microservices without requiring every application to implement those capabilities directly.
Envoy operates at Layer 7 (application-aware) and supports protocols like HTTP/1.1, HTTP/2, gRPC, and more. It can handle routing, load balancing, retries, timeouts, circuit breaking, rate limiting, TLS termination, and mutual TLS (mTLS). Envoy also emits rich telemetry (metrics, access logs, tracing) that integrates well with cloud-native observability stacks.
Why the other options are incorrect:
CoreDNS (A) provides DNS-based service discovery within Kubernetes; it is not an edge/service proxy.
CNI (B) is a specification and plugin ecosystem for container networking (Pod networking), not a proxy.
gRPC (C) is an RPC protocol/framework used by applications; it’s not a proxy tool. (Envoy can proxy gRPC traffic, but gRPC itself isn’t the proxy.)
In Kubernetes architectures, Envoy often appears in two places: (1) at the edge as part of an ingress/gateway layer, and (2) sidecar proxies alongside Pods in a service mesh (like Istio) to standardize service-to-service communication controls and telemetry. This is why it is described as “designed to be integrated with cloud native applications”: it’s purpose-built for dynamic service discovery, resilient routing, and operational visibility in distributed systems.
So the verified correct choice is D (Envoy).
=========
Which of the following is a valid PromQL query?
SELECT * from http_requests_total WHERE job=apiserver
http_requests_total WHERE (job="apiserver")
SELECT * from http_requests_total
http_requests_total(job="apiserver")
Prometheus Query Language (PromQL) uses a selector-based syntax, not SQL. A valid query typically starts with a metric name and optionally includes label matchers in curly braces. Of the options given, D: http_requests_total(job="apiserver") is the only one that follows PromQL’s metric-plus-label-matcher model (strict PromQL uses curly braces rather than parentheses, as shown below), so D is correct.
Conceptually, what this query means is “select time series for the metric http_requests_total where the job label equals apiserver.” In standard PromQL formatting you most often see this as: http_requests_total{job="apiserver"}. Many training questions abbreviate braces and focus on the idea of filtering by labels; the key is that PromQL uses label matchers rather than SQL WHERE clauses.
Options A and C are invalid because they use SQL (SELECT * FROM ...) which is not PromQL. Option B is also invalid because PromQL does not use the keyword WHERE. PromQL filtering is done by applying label matchers directly to the metric selector.
In Kubernetes observability, PromQL is central to building dashboards and alerts from cluster metrics. For example, you might compute rates from counters: rate(http_requests_total{job="apiserver"}[5m]), aggregate by labels: sum by (code) (...), or alert on error ratios. Understanding the selector and label-matcher model is foundational because Prometheus metrics are multi-dimensional—labels define the slices you can filter and aggregate on.
So, within the provided options, D is the only one that follows PromQL’s metric+label-filter style and therefore is the verified correct answer.
=========
How can you monitor the progress for an updated Deployment/DaemonSets/StatefulSets?
kubectl rollout watch
kubectl rollout progress
kubectl rollout state
kubectl rollout status
To monitor rollout progress for Kubernetes workload updates (most commonly Deployments, and also StatefulSets and DaemonSets where applicable), the standard kubectl command is kubectl rollout status, which makes D correct.
Kubernetes manages updates declaratively through controllers. For a Deployment, an update typically creates a new ReplicaSet and gradually shifts replicas from the old to the new according to the strategy (e.g., RollingUpdate with maxUnavailable and maxSurge). For StatefulSets, updates may be ordered and respect stable identities, and for DaemonSets, an update replaces node-level Pods according to update strategy. In all cases, you often want a single command that tells you whether the controller has completed the update and whether the new replicas are available. kubectl rollout status queries the resource status and prints a progress view until completion or timeout.
The other commands listed are not the canonical kubectl subcommands. kubectl rollout watch, kubectl rollout progress, and kubectl rollout state are not standard rollout verbs in kubectl. The supported rollout verbs typically include status, history, undo, pause, and resume (depending on kubectl version and resource type).
Operationally, kubectl rollout status deployment/<name> blocks until the rollout completes or exceeds its progress deadline, and it exits non-zero on failure, which makes it a convenient gate in deployment scripts and CI/CD pipelines.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
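In a Pod manifest this habit looks like the following minimal sketch (image and UID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: least-privilege-app
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0   # placeholder image
    securityContext:
      allowPrivilegeEscalation: false       # the recommended default discussed here
      runAsNonRoot: true                    # refuse to start as UID 0
      runAsUser: 10001                      # explicit non-root UID (placeholder)
      capabilities:
        drop: ["ALL"]                       # drop all Linux capabilities
```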
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: disallow privilege escalation.
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) “dangling” objects that are no longer referenced properly—often described as objects without owner references (or where owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created “owned” by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes’ garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
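For instance, a Pod created by a ReplicaSet carries an ownerReference like this excerpt (names and UID are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-7d4b9c-abcde
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: web-7d4b9c
    uid: 00000000-0000-0000-0000-000000000000   # placeholder UID
    controller: true                            # this owner manages the Pod
    blockOwnerDeletion: true
```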
The other options are incorrect because they refer to cleanup tasks outside Kubernetes GC’s scope. Kubelet logs (B/C) are node-level files and log rotation is handled by node/runtime configuration, not the Kubernetes garbage collector. Unused container images (C) are managed by the container runtime’s image GC and kubelet disk pressure management, not the Kubernetes API GC. Nodes deleted by a cloud controller (B) aren’t “garbage collected” in the same sense; node lifecycle is handled by controllers and cloud integrations, but not as a generic GC cleanup category like ownerRef-based object deletion.
So, when the question asks specifically about “resources cleaned by Kubernetes garbage collection,” it’s pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects—exactly what option D states.
=========
In a cloud native environment, who is usually responsible for maintaining the workloads running across the different platforms?
The cloud provider.
The Site Reliability Engineering (SRE) team.
The team of developers.
The Support Engineering team (SE).
B (the Site Reliability Engineering team) is correct. In cloud-native organizations, SREs are commonly responsible for the reliability, availability, and operational health of workloads across platforms (multiple clusters, regions, clouds, and supporting services). While responsibilities vary by company, the classic SRE charter is to apply software engineering to operations: build automation, standardize runbooks, manage incident response, define SLOs/SLIs, and continuously improve system reliability.
Maintaining workloads “across different platforms” implies cross-cutting operational ownership: deployments need to behave consistently, rollouts must be safe, monitoring and alerting must be uniform, and incident practices must work across environments. SRE teams typically own or heavily influence the observability stack (metrics/logs/traces), operational readiness, capacity planning, and reliability guardrails (error budgets, progressive delivery, automated rollback triggers). They also collaborate closely with platform engineering and application teams, but SRE is often the group that ensures production workloads meet reliability targets.
Why other options are less correct:
The cloud provider (A) maintains the underlying cloud services, but not your application workloads’ correctness, SLOs, or operational processes.
Developers (C) do maintain application code and may own on-call in some models, but the question asks “usually” in cloud-native environments; SRE is the widely recognized function for workload reliability across platforms.
Support Engineering (D) typically focuses on customer support and troubleshooting from a user perspective, not maintaining platform workload reliability at scale.
So, the best and verified answer is B: SRE teams commonly maintain and ensure reliability of workloads across cloud-native platforms.
=========
Which control plane component is responsible for updating the node Ready condition if a node becomes unreachable?
The kube-proxy
The node controller
The kubectl
The kube-apiserver
The correct answer is B: the node controller. In Kubernetes, node health is monitored and reflected through Node conditions such as Ready. The node controller (part of the controller-manager in the control plane) is responsible for monitoring node heartbeats and updating node status when a node becomes unreachable or unhealthy.
Nodes periodically report status (including kubelet heartbeats) to the API server. The node controller watches these updates. If it detects that a node has stopped reporting within expected time windows, it sets the node’s Ready condition to Unknown (or otherwise updates conditions) to indicate the control plane can’t confirm node health. This status change then influences higher-level behaviors such as Pod eviction and rescheduling: after grace periods and eviction timeouts, Pods on an unhealthy node may be evicted so the workload can be recreated on healthy nodes (assuming a controller manages replicas).
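The effect is visible in the Node object’s status; an illustrative excerpt of the condition the node controller updates (timestamps are placeholders):

```yaml
status:
  conditions:
  - type: Ready
    status: "Unknown"                          # set when heartbeats stop arriving
    reason: NodeStatusUnknown
    message: Kubelet stopped posting node status.
    lastHeartbeatTime: "2024-01-01T00:00:00Z"  # placeholder timestamp
    lastTransitionTime: "2024-01-01T00:05:00Z" # placeholder timestamp
```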
Option A (kube-proxy) is a node component for Service traffic routing and does not manage node health conditions. Option C (kubectl) is a CLI client; it does not participate in control plane health monitoring. Option D (kube-apiserver) stores and serves Node status, but it doesn’t decide when a node is unreachable; it persists what controllers and kubelets report. The “decision logic” for updating the Ready condition in response to missing heartbeats is the Node Controller’s job.
So, the component that updates the Node Ready condition when a node becomes unreachable is the node controller, which is option B.
=========
Which of the following is a good habit for cloud native cost efficiency?
Follow an automated approach to cost optimization, including visibility and forecasting.
Follow manual processes for cost analysis, including visibility and forecasting.
Use only one cloud provider to simplify the cost analysis.
Keep your legacy workloads unchanged, to avoid cloud costs.
The correct answer is A. In cloud-native environments, costs are highly dynamic: autoscaling changes compute footprint, ephemeral environments come and go, and usage-based billing applies to storage, network egress, load balancers, and observability tooling. Because of this variability, automation is the most sustainable way to achieve cost efficiency. Automated visibility (dashboards, chargeback/showback), anomaly detection, and forecasting help teams understand where spend is coming from and how it changes over time. Automated optimization actions can include right-sizing requests/limits, enforcing TTLs on preview environments, scaling down idle clusters, and cleaning unused resources.
Manual processes (B) don’t scale as complexity grows. By the time someone reviews a spreadsheet or dashboard weekly, cost spikes may have already occurred. Automation enables fast feedback loops and guardrails, which is essential for preventing runaway spend caused by misconfiguration (e.g., excessive log ingestion, unbounded autoscaling, oversized node pools).
Option C is not a cost-efficiency “habit.” Single-provider strategies may simplify some billing views, but they can also reduce leverage and may not be feasible for resilience/compliance; it’s a business choice, not a best practice for cloud-native cost management. Option D is counterproductive: keeping legacy workloads unchanged often wastes money because cloud efficiency typically requires adapting workloads—right-sizing, adopting autoscaling, and using managed services appropriately.
In Kubernetes specifically, cost efficiency is tightly linked to resource management: accurate CPU/memory requests, limits where appropriate, cluster autoscaler tuning, and avoiding overprovisioning. Observability also matters because you can’t optimize what you can’t measure. Therefore, the best habit is an automated cost optimization approach with strong visibility and forecasting—A.
=========
How does dynamic storage provisioning work?
A user requests dynamically provisioned storage by including an existing StorageClass in their PersistentVolumeClaim.
An administrator creates a StorageClass and includes it in their Pod YAML definition file without creating a PersistentVolumeClaim.
A Pod requests dynamically provisioned storage by including a StorageClass and the Pod name in their PersistentVolumeClaim.
An administrator creates a PersistentVolume and includes the name of the PersistentVolume in their Pod YAML definition file.
Dynamic provisioning is the Kubernetes mechanism where storage is created on demand when a user creates a PersistentVolumeClaim (PVC) that references a StorageClass, so A is correct. In this model, the user does not need to pre-create a PersistentVolume (PV). Instead, the StorageClass points to a provisioner (typically a CSI driver) that knows how to create a volume in the underlying storage system (cloud disk, SAN, NAS, etc.). When the PVC is created with a storageClassName referencing that class, the provisioner creates the backing volume, Kubernetes creates a corresponding PV, and the claim is bound to it automatically.
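A minimal sketch of the two objects involved (the provisioner shown is just an example; use the CSI driver present in your environment):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: ebs.csi.aws.com            # example CSI driver
volumeBindingMode: WaitForFirstConsumer # provision only once the Pod is scheduled
reclaimPolicy: Delete                   # delete the backing volume when the PVC goes away
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: fast                # referencing the class triggers provisioning
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
```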
This is why option B is incorrect: you do not put a StorageClass “in the Pod YAML” to request provisioning. Pods reference PVCs, not StorageClasses directly. Option C is incorrect because the PVC does not need the Pod name; binding is done via the PVC itself. Option D describes static provisioning: an admin pre-creates PVs and users claim them by creating PVCs that match the PV (capacity, access modes, selectors). Static provisioning can work, but it is not dynamic provisioning.
Under the hood, the StorageClass can define parameters like volume type, replication, encryption, and binding behavior (e.g., volumeBindingMode: WaitForFirstConsumer to delay provisioning until the Pod is scheduled, ensuring the volume is created in the correct zone). Reclaim policies (Delete/Retain) define what happens to the underlying volume after the PVC is deleted.
In cloud-native operations, dynamic provisioning is preferred because it improves developer self-service, reduces manual admin work, and makes scaling stateful workloads easier and faster. The essence is: PVC + StorageClass → automatic PV creation and binding.
=========
What native runtime is Open Container Initiative (OCI) compliant?
runC
runV
kata-containers
gvisor
The Open Container Initiative (OCI) publishes open specifications for container images and container runtimes so that tools across the ecosystem remain interoperable. When a runtime is “OCI-compliant,” it means it implements the OCI Runtime Specification (how to run a container from a filesystem bundle and configuration) and/or works cleanly with OCI image formats through the usual layers (image → unpack → runtime). runC is the best-known, widely used reference implementation of the OCI runtime specification and is the low-level runtime underneath many higher-level systems.
In Kubernetes, you typically interact with a higher-level container runtime (such as containerd or CRI-O) through the Container Runtime Interface (CRI). That higher-level runtime then uses a low-level OCI runtime to actually create Linux namespaces/cgroups, set up the container process, and start it. In many default installations, containerd delegates to runC for this low-level “create/start” work.
The other options are related but differ in what they are: Kata Containers uses lightweight VMs to provide stronger isolation while still presenting a container-like workflow; gVisor provides a user-space kernel for sandboxing containers; these can be used with Kubernetes via compatible integrations, but the canonical “native OCI runtime” answer in most curricula is runC. Finally, runV is not a common modern Kubernetes runtime choice in typical OCI discussions. So the most correct, standards-based answer here is A (runC) because it directly implements the OCI runtime spec and is commonly used as the default low-level runtime behind CRI implementations.
=========
Which of the following is a correct definition of a Helm chart?
A Helm chart is a collection of YAML files bundled in a tar.gz file and can be applied without decompressing it.
A Helm chart is a collection of JSON files and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is a collection of YAML files that can be applied on Kubernetes by using the kubectl tool.
A Helm chart is similar to a package and contains all the resource definitions to run an application on Kubernetes.
A Helm chart is best described as a package for Kubernetes applications, containing the resource definitions (as templates) and metadata needed to install and manage an application—so D is correct. Helm is a package manager for Kubernetes; the chart is the packaging format. Charts include a Chart.yaml (metadata), a values.yaml (default configuration values), and a templates/ directory containing Kubernetes manifests written as templates. When you install a chart, Helm renders those templates into concrete Kubernetes YAML manifests by substituting values, then applies them to the cluster.
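As a concrete anchor, the chart’s package metadata lives in Chart.yaml; a minimal sketch (all values illustrative):

```yaml
# Chart.yaml — package metadata for the chart
apiVersion: v2            # chart API version used by Helm 3
name: myapp
version: 1.2.3            # version of the chart (the package)
appVersion: "2.0.1"       # version of the application being packaged
description: Packages the Kubernetes resources needed to run myapp
# values.yaml supplies defaults, and templates/ holds the templated manifests
```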
Option A is misleading/incomplete. While charts are often distributed as a compressed tarball (.tgz), the defining feature is not “YAML bundled in tar.gz” but the packaging and templating model that supports install/upgrade/rollback. Option B is incorrect because Helm charts are not “collections of JSON files” by definition; Kubernetes resources can be expressed as YAML or JSON, but Helm charts overwhelmingly use templated YAML. Option C is incorrect because charts are not simply YAML applied by kubectl; Helm manages releases, tracks installed resources, and supports upgrades and rollbacks. Helm uses Kubernetes APIs under the hood, but the value of Helm is the lifecycle and packaging system, not “kubectl apply.”
In cloud-native application delivery, Helm helps standardize deployments across environments (dev/stage/prod) by externalizing configuration through values. It reduces copy/paste and supports reuse via dependencies and subcharts. Helm also supports versioning of application packages, allowing teams to upgrade predictably and roll back if needed—critical for production change management.
So, the correct and verified definition is D: a Helm chart is like a package containing the resource definitions needed to run an application on Kubernetes.
=========
Which component of the node is responsible to run workloads?
The kubelet.
The kube-proxy.
The kube-apiserver.
The container runtime.
The verified correct answer is D (the container runtime). On a Kubernetes node, the container runtime (such as containerd or CRI-O) is the component that actually executes containers—it creates container processes, manages their lifecycle, pulls images, and interacts with the underlying OS primitives (namespaces, cgroups) through an OCI runtime like runc. In that direct sense, the runtime is what “runs workloads.”
It’s important to distinguish responsibilities. The kubelet (A) is the node agent that orchestrates what should run on the node: it watches the API server for Pods assigned to the node and then asks the runtime to start/stop containers accordingly. The kubelet is essential for node management, but it does not itself execute containers; it delegates execution to the runtime via CRI. kube-proxy (B) handles Service traffic routing rules (or is replaced by other dataplanes) and does not run containers. kube-apiserver (C) is a control plane component that stores and serves cluster state; it is not a node workload runner.
So, in the execution chain: scheduler assigns Pod → kubelet sees Pod assigned → kubelet calls runtime via CRI → runtime launches containers. When troubleshooting “containers won’t start,” you often inspect kubelet logs and runtime logs because the runtime is the component that can fail image pulls, sandbox creation, or container start operations.
Therefore, the best answer to “which node component is responsible to run workloads” is the container runtime, option D.
=========
What component enables end users, different parts of the Kubernetes cluster, and external components to communicate with one another?
kubectl
AWS Management Console
Kubernetes API
Google Cloud SDK
The Kubernetes API is the central interface that enables communication between users, controllers, nodes, and external integrations, so C is correct. Kubernetes is fundamentally an API-driven system: all cluster state is represented as API objects, and all operations—create, update, delete, watch—flow through the API server.
End users typically interact with the Kubernetes API using tools like kubectl, client libraries, or dashboards. But those tools are clients; the shared communication “hub” is the API itself. Inside the cluster, core control plane components (controllers, scheduler) continuously watch the API for desired state and write status updates back. Worker nodes (via the kubelet) also communicate with the API server to receive Pod specs, report node health, and update Pod statuses. External systems—cloud provider integrations, CI/CD pipelines, GitOps controllers, monitoring and policy engines—also integrate primarily through the Kubernetes API.
Option A (kubectl) is a CLI that talks to the Kubernetes API; it is not the underlying component that all parts use to communicate. Options B and D are cloud-provider tools and are not universal to Kubernetes clusters. Kubernetes runs across many environments, and the consistent interoperability layer is the Kubernetes API.
This API-centric architecture is what enables Kubernetes’ declarative model: you submit desired state to the API, and controllers reconcile actual state to match. It also enables extensibility: CRDs and admission webhooks expand what the API can represent and enforce. Therefore, the correct answer is C: Kubernetes API.
=========
What framework does Kubernetes use to authenticate users with JSON Web Tokens?
OpenID Connect
OpenID Container
OpenID Cluster
OpenID CNCF
Kubernetes commonly authenticates users using OpenID Connect (OIDC) when JSON Web Tokens (JWTs) are involved, so A is correct. OIDC is an identity layer on top of OAuth 2.0 that standardizes how clients obtain identity information and how JWTs are issued and validated.
In Kubernetes, authentication happens at the API server. When OIDC is configured, the API server validates incoming bearer tokens (JWTs) by checking the token signature and claims against the configured OIDC issuer and client settings. Kubernetes can use OIDC claims (such as sub, email, groups) to map the authenticated identity to Kubernetes RBAC subjects. This is how enterprises integrate clusters with identity providers such as Okta, Dex, Azure AD, or other OIDC-compliant IdPs.
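Concretely, OIDC is wired up through kube-apiserver flags; a hedged sketch using kubeadm’s config format (issuer URL, client ID, and claim names are placeholders that must match your IdP):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: "https://idp.example.com"  # placeholder OIDC issuer
    oidc-client-id: "kubernetes"                # placeholder client ID
    oidc-username-claim: "email"                # map usernames from the email claim
    oidc-groups-claim: "groups"                 # map RBAC groups from the groups claim
```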
Options B, C, and D are fabricated phrases and not real frameworks. Kubernetes documentation explicitly references OIDC as a supported method for token-based user authentication (alongside client certificates, bearer tokens, static token files, and webhook authentication). The key point is that Kubernetes does not “invent” JWT auth; it integrates with standard identity providers through OIDC so clusters can participate in centralized SSO and group-based authorization.
Operationally, OIDC authentication is typically paired with:
RBAC for authorization (“what you can do”)
Audit logging for traceability
Short-lived tokens and rotation practices for security
Group claim mapping to simplify permission management
So, the verified framework Kubernetes uses with JWTs for user authentication is OpenID Connect.
=========
What are the 3 pillars of Observability?
Metrics, Logs, and Traces
Metrics, Logs, and Spans
Metrics, Data, and Traces
Resources, Logs, and Tracing
The correct answer is A: Metrics, Logs, and Traces. These are widely recognized as the “three pillars” because together they provide complementary views into system behavior:
Metrics are numeric time series collected over time (CPU usage, request rate, error rate, latency percentiles). They are best for dashboards, alerting, and capacity planning because they are structured and aggregatable. In Kubernetes, metrics underpin autoscaling and operational visibility (node/pod resource usage, cluster health signals).
Logs are discrete event records (often text) emitted by applications and infrastructure components. Logs provide detailed context for debugging: error messages, stack traces, warnings, and business events. In Kubernetes, logs are commonly collected from container stdout/stderr and aggregated centrally for search and correlation.
Traces capture the end-to-end journey of a request through a distributed system, breaking it into spans. Tracing is crucial in microservices because a single user request may cross many services; traces show where latency accumulates and which dependency fails. Tracing also enables root cause analysis when metrics indicate degradation but don’t pinpoint the culprit.
Why the other options are wrong: a span is a component within tracing, not a top-level pillar; “data” is too generic; and “resources” are not an observability signal category. The pillars are defined by signal type and how they’re used operationally.
In cloud-native practice, these pillars are often unified via correlation IDs and shared context: metrics alerts link to logs and traces for the same timeframe/request. Tooling like Prometheus (metrics), log aggregators (e.g., Loki/Elastic), and tracing systems (Jaeger/Tempo/OpenTelemetry) work together to provide a complete observability story.
Therefore, the verified correct answer is A.
=========
A Kubernetes _____ is an abstraction that defines a logical set of Pods and a policy by which to access them.
Selector
Controller
Service
Job
A Kubernetes Service is the abstraction that defines a logical set of Pods and the policy for accessing them, so C is correct. Pods are ephemeral: their IPs change as they are recreated, rescheduled, or scaled. A Service solves this by providing a stable endpoint (DNS name and virtual IP) and routing rules that send traffic to the current healthy Pods backing the Service.
A Service typically uses a label selector to identify which Pods belong to it. Kubernetes then maintains endpoint data (Endpoints/EndpointSlice) for those Pods and uses the cluster dataplane (kube-proxy or eBPF-based implementations) to forward traffic from the Service IP/port to one of the backend Pod IPs. This is what the question means by “logical set of Pods” and “policy by which to access them” (for example, round-robin-like distribution depending on the dataplane, session affinity options, and how ports map via targetPort).
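A minimal sketch (labels and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # the "logical set of Pods": every Ready Pod labeled app=web
  ports:
  - port: 80            # stable Service port clients connect to
    targetPort: 8080    # container port on the backing Pods
```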
Option A (Selector) is only the query mechanism used by Services and controllers; it is not itself the access abstraction. Option B (Controller) is too generic; controllers reconcile desired state but do not provide stable network access policies. Option D (Job) manages run-to-completion tasks and is unrelated to network access abstraction.
Services can be exposed in different ways: ClusterIP (internal), NodePort, LoadBalancer, and ExternalName. Regardless of type, the core Service concept remains: stable access to a dynamic set of Pods. This is foundational to Kubernetes networking and microservice communication, and it is why Service discovery via DNS works effectively across rolling updates and scaling events.
Thus, the correct answer is Service (C).
=========
Which item is a Kubernetes node component?
kube-scheduler
kubectl
kube-proxy
etcd
A Kubernetes node component is a component that runs on worker nodes to support Pods and node-level networking/operations. Among the options, kube-proxy is a node component, so C is correct.
kube-proxy runs on each node and implements parts of the Kubernetes Service networking model. It watches the API server for Service and endpoint updates and then programs node networking rules (iptables/IPVS, or equivalent) so traffic sent to a Service IP/port is forwarded to one of the backend Pod endpoints. This is essential for stable virtual IPs and load distribution across Pods.
Why the other options are not node components:
kube-scheduler is a control plane component; it assigns Pods to nodes but does not run on every node as part of node functionality.
kubectl is a client CLI tool used by humans/automation; it is not a cluster component.
etcd is the control plane datastore; it stores cluster state and is not a per-node workload component.
Operationally, kube-proxy can be replaced by some modern CNI/eBPF dataplanes, but in classic Kubernetes architecture it remains the canonical node-level component for Service rule programming. Understanding which components are node vs control plane is key for troubleshooting: node issues involve kubelet/runtime/kube-proxy/CNI; control plane issues involve API server/scheduler/controller-manager/etcd.
So, the verified node component in this list is kube-proxy (C).
=========
What sentence is true about CronJobs in Kubernetes?
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. The Job controller then creates one or more Pods to run the task to completion.
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
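A minimal sketch (schedule, image, and arguments are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"              # standard cron syntax: 02:00 every day
  concurrencyPolicy: Forbid          # skip a run if the previous one is still going
  successfulJobsHistoryLimit: 3      # keep only a few completed Jobs around
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: registry.example.com/backup:1.0   # placeholder image
            args: ["--target", "s3://backups"]       # placeholder arguments
```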
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
=========
What is CloudEvents?
It is a specification for describing event data in common formats for Kubernetes network traffic management and cloud providers.
It is a specification for describing event data in common formats in all cloud providers including major cloud providers.
It is a specification for describing event data in common formats to provide interoperability across services, platforms and systems.
It is a Kubernetes specification for describing events data in common formats for iCloud services, iOS platforms and iMac.
CloudEvents is an open specification for describing event data in a common way to enable interoperability across services, platforms, and systems, so C is correct. In cloud-native architectures, many components communicate asynchronously via events (message brokers, event buses, webhooks). Without a standard envelope, each producer and consumer invents its own event structure, making integration brittle. CloudEvents addresses this by standardizing core metadata fields—like event id, source, type, specversion, and time—and defining how event payloads are carried.
This helps systems interoperate regardless of transport. CloudEvents can be serialized as JSON or other encodings and carried over HTTP, messaging systems, or other protocols. By using a shared spec, you can route, filter, validate, and transform events more consistently.
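For illustration, the standardized envelope looks like this (rendered here as YAML for readability; CloudEvents are most commonly serialized as JSON, and all values below are placeholders):

```yaml
specversion: "1.0"                 # CloudEvents spec version
type: com.example.order.created    # event type, reverse-DNS style
source: /orders/service            # identifies the producer
id: "order-12345"                  # unique per source
time: "2024-01-01T00:00:00Z"       # placeholder timestamp
datacontenttype: application/json
data:
  orderId: 12345
  amount: 42.5
```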
Option A is too narrow and incorrectly ties CloudEvents to Kubernetes traffic management; CloudEvents is broader than Kubernetes. Option B is closer but still framed incorrectly—CloudEvents is not merely “for all cloud providers”; it is an interoperability spec across services and platforms, including but not limited to cloud provider event systems. Option D is clearly incorrect.
In Kubernetes ecosystems, CloudEvents is relevant to event-driven systems and serverless platforms (e.g., Knative Eventing and other eventing frameworks) because it provides a consistent event contract across producers and consumers. That consistency reduces coupling, supports better tooling (schema validation, tracing correlation), and makes event-driven architectures easier to operate at scale.
So, the correct definition is C: a specification for common event formats to enable interoperability across systems.
=========
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
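A minimal sketch of the pattern (the IP is a documentation placeholder); note that the Endpoints object must share the Service’s name:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db     # no selector: Kubernetes will not populate endpoints
spec:
  ports:
  - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db     # must match the Service name exactly
subsets:
- addresses:
  - ip: 192.0.2.10      # placeholder IP of the external backend
  ports:
  - port: 5432
```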
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can’t auto-populate endpoints → you must provide them.
=========
The Kubernetes project work is carried primarily by SIGs. What does SIG stand for?
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Pod
ContainerSet
ReplicaSet
Deployment
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
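A minimal PodSpec sketch showing what the scheduler consumes (image, resources, and the placement constraint are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: app
    image: registry.example.com/web:1.0   # placeholder image
    resources:
      requests:
        cpu: "250m"       # informs the scheduler's placement decision
        memory: "256Mi"
  nodeSelector:
    disktype: ssd         # illustrative placement constraint
```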
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages asetof Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
=========
What Kubernetes control plane component exposes the programmatic interface used to create, manage and interact with the Kubernetes objects?
kube-controller-manager
kube-proxy
kube-apiserver
etcd
The kube-apiserver is the front door of the Kubernetes control plane and exposes the programmatic interface used to create, read, update, delete, and watch Kubernetes objects—so C is correct. Every interaction with cluster state ultimately goes through the Kubernetes API. Tools like kubectl, client libraries, GitOps controllers, operators, and core control plane components (scheduler and controllers) all communicate with the API server to submit desired state and to observe current state.
The API server is responsible for handling authentication (who are you?), authorization (what are you allowed to do?), and admission control (should this request be allowed and possibly mutated/validated?). After a request passes these gates, the API server persists the object’s desired state to etcd (the backing datastore) and returns a response. The API server also provides a watch mechanism so controllers can react to changes efficiently, enabling Kubernetes’ reconciliation model.
It’s important to distinguish this from the other options. etcd stores cluster data but does not expose the cluster’s primary user-facing API; it’s an internal datastore. kube-controller-manager runs control loops (controllers) that continuously reconcile resources (like Deployments, Nodes, Jobs) but it consumes the API rather than exposing it. kube-proxy is a node-level component implementing Service networking rules and is unrelated to the control-plane API endpoint.
Because Kubernetes is “API-driven,” the kube-apiserver is central: if it is unavailable, you cannot create workloads, update configurations, or even reliably observe cluster state. This is why high availability architectures prioritize multiple API server instances behind a load balancer, and why securing the API server (RBAC, TLS, audit) is a primary operational concern.
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum period after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
Options of 6 or 9 months would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may be true for some ecosystems, but the Kubernetes stated minimum in this exam-style framing is 12 months. The key operational takeaway is: don’t ignore deprecation notices; they’re your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
=========
What is a Kubernetes service with no cluster IP address called?
Headless Service
Nodeless Service
IPLess Service
Specless Service
A Kubernetes Service normally provides a stable virtual IP (ClusterIP) and a DNS name that load-balances traffic across matching Pods. A headless Service is a special type of Service where Kubernetes does not allocate a ClusterIP. Instead, the Service’s DNS returns individual Pod IPs (or other endpoint records), allowing clients to connect directly to specific backends rather than through a single virtual IP. That is why the correct answer is A (Headless Service).
Headless Services are created by setting spec.clusterIP: None. When you do this, kube-proxy does not program load-balancing rules for a virtual IP because there isn’t one. Instead, service discovery is handled via DNS records that point to the actual endpoints. This behavior is especially important for stateful or identity-sensitive systems where clients must talk to a particular replica (for example, databases, leader/follower clusters, or StatefulSet members).
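A minimal sketch of such a Service (name, selector, and port are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None        # no virtual IP is allocated; DNS returns the Pod IPs directly
  selector:
    app: db
  ports:
  - port: 5432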
This is also why headless Services pair naturally with StatefulSets. StatefulSets provide stable network identities (pod-0, pod-1, etc.) and stable DNS names. The headless Service provides the DNS domain that resolves each Pod’s stable hostname to its IP, enabling peer discovery and consistent addressing even as Pods move between nodes.
The other options are distractors: “Nodeless,” “IPLess,” and “Specless” are not Kubernetes Service types. In the core API, the Service “types” are things like ClusterIP, NodePort, LoadBalancer, and ExternalName; “headless” is a behavioral mode achieved through the ClusterIP field.
In short: a headless Service removes the virtual IP abstraction and exposes endpoint-level discovery. It’s a deliberate design choice when load-balancing is not desired or when the application itself handles routing, membership, or sharding.
=========
Which of the following is the name of a container orchestration software?
OpenStack
Docker
Apache Mesos
CRI-O
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform/tooling ecosystem (image build, runtime, local orchestration via Docker Compose/Swarm historically), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
What methods can you use to scale a Deployment?
With kubectl edit deployment exclusively.
With kubectl scale-up deployment exclusively.
With kubectl scale deployment and kubectl edit deployment.
With kubectl scale deployment exclusively.
A Deployment’s replica count is controlled by spec.replicas. You can scale a Deployment by changing that field, either by directly editing the object or by using kubectl’s scaling helper. Therefore C is correct: you can scale using kubectl scale and also via kubectl edit. In their complete forms (the name and count are placeholders):
kubectl scale deployment <name> --replicas=<count>
kubectl edit deployment <name>
Option B is invalid because kubectl scale-up deployment is not a standard kubectl command. Option A is incorrect because kubectl edit is not the only method; scaling is commonly done with kubectl scale. Option D is also incorrect because while kubectl scale is a primary method, kubectl edit is also a valid method to change replicas.
In production, you often scale with autoscalers (HPA/VPA), but the question is asking about kubectl methods. The key Kubernetes concept is that scaling is achieved by updating desired state (spec.replicas), and controllers reconcile Pods to match.
=========
In a serverless computing architecture:
Users of the cloud provider are charged based on the number of requests to a function.
Serverless functions are incompatible with containerized functions.
Users should make a reservation to the cloud provider based on an estimation of usage.
Containers serving requests are running in the background in idle status.
Serverless architectures typically bill based on actual consumption, often measured as number of requests and execution duration (and sometimes memory/CPU allocated), so A is correct. The defining trait is that you don’t provision or manage servers directly; the platform scales execution up and down automatically, including down to zero for many models, and charges you for what you use.
Option B is incorrect: many serverless platforms can run container-based workloads (and some are explicitly “serverless containers”). The idea is the operational abstraction and billing model, not incompatibility with containers. Option C is incorrect because “making a reservation based on estimation” describes reserved capacity purchasing, which is the opposite of the typical serverless pay-per-use model. Option D is misleading: serverless systems aim to avoid charging for idle compute; while platforms may keep some warm capacity for latency reasons, the customer-facing model is not “containers running idle in the background.”
In cloud-native architecture, serverless is often chosen for spiky, event-driven workloads where you want minimal ops overhead and cost efficiency at low utilization. It pairs naturally with eventing systems (queues, pub/sub) and can be integrated with Kubernetes ecosystems via event-driven autoscaling frameworks or managed serverless offerings.
So the correct statement is A: charging is commonly based on requests (and usage), which captures the cost and operational model that differentiates serverless from always-on infrastructure.
=========
Which of the following would fall under the responsibilities of an SRE?
Developing a new application feature.
Creating a monitoring baseline for an application.
Submitting a budget for running an application in a cloud.
Writing policy on how to submit a code change.
Site Reliability Engineering (SRE) focuses on reliability, availability, performance, and operational excellence using engineering approaches. Among the options, creating a monitoring baseline for an application is a classic SRE responsibility, so B is correct. A monitoring baseline typically includes defining key service-level signals (latency, traffic, errors, saturation), establishing dashboards, setting sensible alert thresholds, and ensuring telemetry is complete enough to support incident response and capacity planning.
In Kubernetes environments, SRE work often involves ensuring that workloads expose health endpoints for probes, that resource requests/limits are set to allow stable scheduling and autoscaling, and that observability pipelines (metrics, logs, traces) are consistent. Building a monitoring baseline also ties into SLO/SLI practices: SREs define what “good” looks like, measure it continuously, and create alerts that notify teams when the system deviates from those expectations.
Option A is primarily an application developer task—SREs may contribute to reliability features, but core product feature development is usually owned by engineering teams. Option C is more aligned with finance, FinOps, or management responsibilities, though SRE data can inform costs. Option D is closer to governance, platform policy, or developer experience/process ownership; SREs might influence processes, but “policy on how to submit code change” is not the defining SRE duty compared to monitoring and reliability engineering.
Therefore, the best verified choice is B, because establishing monitoring baselines is central to operating reliable services on Kubernetes.
=========
What does vertical scaling an application deployment describe best?
Adding/removing applications to meet demand.
Adding/removing node instances to the cluster to meet demand.
Adding/removing resources to applications to meet demand.
Adding/removing application instances of the same application to meet demand.
Vertical scaling means changing the resources allocated to a single instance of an application (more or less CPU/memory), which is why C is correct. In Kubernetes terms, this corresponds to adjusting container resource requests and limits (for CPU and memory). Increasing resources can help a workload handle more load per Pod by giving it more compute or memory headroom; decreasing can reduce cost and improve cluster packing efficiency.
This differs from horizontal scaling, which changes the number of instances (replicas). Option D describes horizontal scaling: adding/removing replicas of the same workload, typically managed by a Deployment and often automated via the Horizontal Pod Autoscaler (HPA). Option B describes scaling the infrastructure layer (nodes), which is cluster/node autoscaling (Cluster Autoscaler in cloud environments). Option A is not a standard scaling definition.
In practice, vertical scaling in Kubernetes can be manual (edit the Deployment resource requests/limits) or automated using the Vertical Pod Autoscaler (VPA), which can recommend or apply new requests based on observed usage. A key nuance is that changing requests/limits often requires Pod restarts to take effect, so vertical scaling is less “instant” than HPA and can disrupt workloads if not planned. That’s why many production teams prefer horizontal scaling for traffic-driven workloads and use vertical scaling to right-size baseline resources or address memory-bound/CPU-bound behavior.
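A sketch of the per-container fields involved (image and values are illustrative):
spec:
  containers:
  - name: app
    image: example/app:1.0    # hypothetical image
    resources:
      requests:
        cpu: "500m"           # what the scheduler reserves for this container
        memory: "256Mi"
      limits:
        cpu: "1"              # hard ceiling enforced at runtime
        memory: "512Mi"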
From a cloud-native architecture standpoint, understanding vertical vs horizontal scaling helps you design for elasticity: use vertical scaling to tune per-instance capacity; use horizontal scaling for resilience and throughput; and combine with node autoscaling to ensure the cluster has sufficient capacity. The definition the question is testing is simple: vertical scaling = change resources per application instance, which is option C.
=========
What happens with a regular Pod running in Kubernetes when a node fails?
A new Pod with the same UID is scheduled to another node after a while.
A new, near-identical Pod but with different UID is scheduled to another node.
By default, a Pod can only be scheduled to the same node when the node fails.
A new Pod is scheduled on a different node only if it is configured explicitly.
B is correct: when a node fails, Kubernetes does not “move” the same Pod instance; instead, a new Pod object (new UID) is created to replace it, assuming the Pod is managed by a controller (Deployment/ReplicaSet, StatefulSet, etc.). A Pod is an API object with a unique identifier (UID) and is tightly associated with the node it’s scheduled to via spec.nodeName. If the node becomes unreachable, that original Pod cannot be restarted elsewhere because it was bound to that node.
Kubernetes’ high availability comes from controllers maintaining desired state. For example, a Deployment desires N replicas. If a node fails and the replicas on that node are lost, the controller will create replacement Pods, and the scheduler will place them onto healthy nodes. These replacement Pods will be “near-identical” in spec (same template), but they are still new instances with new UIDs and typically new IPs.
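You can observe this directly: the replacement Pod reports a different metadata.uid (the Pod name here is hypothetical):
kubectl get pod web-7d4b9c-abcde -o jsonpath='{.metadata.uid}'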
Why the other options are wrong:
A is incorrect because the UID does not remain the same; Kubernetes creates a new Pod object rather than reusing the old identity.
C is incorrect; Pods are not restricted to the same node after failure. The whole point of orchestration is to reschedule elsewhere.
D is incorrect; rescheduling does not require special explicit configuration for typical controller-managed workloads. The controller behavior is standard. (If it’s a bare Pod without a controller, it will not be recreated automatically.)
This also ties to the difference between “regular Pod” vs controller-managed workloads: a standalone Pod is not self-healing by itself, while a Deployment/ReplicaSet provides that resilience. In typical production design, you run workloads under controllers specifically so node failure triggers replacement and restores replica count.
Therefore, the correct outcome is B.
=========
Which key-value store is used to persist Kubernetes cluster data?
etcd
ZooKeeper
ControlPlaneStore
Redis
Kubernetes stores its cluster state (API objects) in etcd, making A correct. etcd is a distributed, strongly consistent key-value store that serves as the source of truth for the Kubernetes control plane. When you create or update objects such as Pods, Deployments, ConfigMaps, Secrets, or Nodes, the kube-apiserver validates the request and then persists the desired state into etcd. Controllers and the scheduler watch the API for changes (which ultimately reflect etcd state) and reconcile the cluster to match that desired state.
etcd’s consistency guarantees are crucial. Kubernetes relies on accurate, up-to-date state to make scheduling decisions, enforce RBAC/admission policies, coordinate leader elections, and ensure controllers behave correctly. etcd uses the Raft consensus algorithm to replicate data among members and requires quorum for writes, enabling fault tolerance when deployed in HA configurations (commonly three or five members).
The other options are incorrect in Kubernetes’ standard architecture. ZooKeeper is a distributed coordination system used by some other platforms, but Kubernetes does not use it as its primary datastore. Redis is an in-memory data store used for caching or messaging, not as Kubernetes’ authoritative state store. “ControlPlaneStore” is not a standard Kubernetes component.
Operationally, etcd health is one of the most important determinants of cluster reliability. Slow disk I/O or unstable networking can degrade etcd performance and cause API latency spikes. Backup and restore procedures for etcd are critical disaster-recovery practices, and securing etcd (TLS, access restrictions) is essential because it may contain sensitive data (e.g., Secrets—often base64-encoded, and optionally encrypted at rest depending on configuration).
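For example, a snapshot backup is commonly taken with etcdctl; the endpoint and certificate paths below are typical kubeadm defaults and should be treated as assumptions for your environment:
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key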
Therefore, the verified Kubernetes datastore is etcd, option A.
=========
What is the main purpose of the Open Container Initiative (OCI)?
Accelerating the adoption of containers and Kubernetes in the industry.
Creating open industry standards around container formats and runtimes.
Creating industry standards around container formats and runtimes for private purposes.
Improving the security of standards around container formats and runtimes.
B is correct: the OCI’s main purpose is to create open, vendor-neutral industry standards for container image formats and container runtimes. Standardization is critical in container orchestration because portability is a core promise: you should be able to build an image once and run it across different environments and runtimes without rewriting packaging or execution logic.
OCI defines (at a high level) two foundational specs:
Image specification: how container images are packaged (layers, metadata, manifests).
Runtime specification: how to run a container (filesystem setup, namespaces/cgroups behavior, lifecycle).
These standards enable interoperability across tooling. For example, higher-level runtimes (like containerd or CRI-O) rely on OCI-compliant components (often runc or equivalents) to execute containers consistently.
Why the other options are not the best answer:
A (accelerating adoption) might be an indirect outcome, but it’s not the OCI’s core charter.
C is contradictory (“industry standards” but “for private purposes”); OCI is explicitly about open standards.
D (improving security) can be helped by standardization and best practices, but OCI is not primarily a security standards body; its central function is format and runtime interoperability.
In Kubernetes specifically, OCI is part of the “plumbing” that makes runtimes replaceable. Kubernetes talks to runtimes via CRI; runtimes execute containers via OCI. This layering helps Kubernetes remain runtime-agnostic while still benefiting from consistent container behavior everywhere.
Therefore, the correct choice is B: OCI creates open standards around container formats and runtimes.
=========
CI/CD stands for:
Continuous Information / Continuous Development
Continuous Integration / Continuous Development
Cloud Integration / Cloud Development
Continuous Integration / Continuous Deployment
CI/CD is a foundational practice for delivering software rapidly and reliably, and it maps strongly to cloud native delivery workflows commonly used with Kubernetes. CI stands for Continuous Integration: developers merge code changes frequently into a shared repository, and automated systems build and test those changes to detect issues early. CD is commonly used to mean Continuous Delivery or Continuous Deployment depending on how far automation goes. In many certification contexts and simplified definitions like this question, CD is interpreted as Continuous Deployment, meaning every change that passes the automated pipeline is automatically released to production. That matches option D.
In a Kubernetes context, CI typically produces artifacts such as container images (built from Dockerfiles or similar build definitions), runs unit/integration tests, scans dependencies, and pushes images to a registry. CD then promotes those images into environments by updating Kubernetes manifests (Deployments, Helm charts, Kustomize overlays, etc.). Progressive delivery patterns (rolling updates, canary, blue/green) often use Kubernetes-native controllers and Service routing to reduce risk.
Why the other options are incorrect: “Continuous Development” isn’t the standard “D” term; it’s ambiguous and not the established acronym expansion. “Cloud Integration/Cloud Development” is unrelated. Continuous Delivery (in the stricter sense) means changes are always in a deployable state and releases may still require a manual approval step, while Continuous Deployment removes that final manual gate. But because the option set explicitly includes “Continuous Deployment,” and that is one of the accepted canonical expansions for CD, D is the correct selection here.
Practically, CI/CD complements Kubernetes’ declarative model: pipelines update desired state (Git or manifests), and Kubernetes reconciles it. This combination enables frequent releases, repeatability, reduced human error, and faster recovery through automated rollbacks and controlled rollout strategies.
=========
What is the Kubernetes object used for running a recurring workload?
Job
Batch
DaemonSet
CronJob
A recurring workload in Kubernetes is implemented with a CronJob, so the correct choice is D. A CronJob is a controller that creates Jobs on a schedule defined in standard cron format (minute, hour, day of month, month, day of week). This makes CronJobs ideal for periodic tasks like backups, report generation, log rotation, and cleanup tasks.
A Job (option A) is run-to-completion but is typically a one-time execution; it ensures that a specified number of Pods successfully terminate. You can use a Job repeatedly, but something else must create it each time; CronJob is that built-in scheduler. Option B (“Batch”) is not a standard workload resource type (batch is an API group, not the object name used here). Option C (DaemonSet) ensures one Pod runs on every node (or selected nodes), which is not “recurring,” it’s “always present per node.”
CronJobs include operational controls that matter in real clusters. For example, concurrencyPolicy controls what happens if a scheduled run overlaps with a previous run (Allow, Forbid, Replace). startingDeadlineSeconds can handle missed schedules (e.g., if the controller was down). History limits (successfulJobsHistoryLimit, failedJobsHistoryLimit) help manage cleanup and troubleshooting. Each scheduled execution results in a Job with its own Pods, which can be inspected with kubectl get jobs and kubectl logs.
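These controls sit directly on the CronJob spec; a fragment for illustration (values are arbitrary):
spec:
  schedule: "*/15 * * * *"
  concurrencyPolicy: Forbid        # skip a run if the previous one is still active
  startingDeadlineSeconds: 120     # tolerate up to two minutes of missed-schedule delay
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1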
So the correct Kubernetes object for a recurring workload is CronJob (D): it provides native scheduling and creates Jobs automatically according to the defined cadence.
=========
What is the name of the lightweight Kubernetes distribution built for IoT and edge computing?
OpenShift
k3s
RKE
k1s
Edge and IoT environments often have constraints that differ from traditional datacenters: limited CPU/RAM, intermittent connectivity, smaller footprints, and a desire for simpler operations. k3s is a well-known lightweight Kubernetes distribution designed specifically to run in these environments, making B the correct answer.
What makes k3s “lightweight” is that it packages Kubernetes components in a simplified way and reduces operational overhead. It typically uses a single binary distribution and can run with an embedded datastore option for smaller installations (while also supporting external datastores for HA use cases). It streamlines dependencies and is aimed at faster installation and reduced resource consumption, which is ideal for edge nodes, IoT gateways, small servers, labs, and development environments.
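For reference, the project’s documented quick-start install is a single command (as always, review a script before piping it to a shell):
curl -sfL https://get.k3s.io | sh -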
By contrast, OpenShift is a Kubernetes distribution focused on enterprise platform capabilities, with additional security defaults, integrated developer tooling, and a larger operational footprint. It is excellent for many enterprises but not “built for IoT and edge” as the defining characteristic. RKE (Rancher Kubernetes Engine) is a Kubernetes installer/engine used to deploy Kubernetes, but it’s not specifically the lightweight edge-focused distribution in the way k3s is. “k1s” is not a standard, widely recognized Kubernetes distribution name in this context.
From a cloud native architecture perspective, edge Kubernetes distributions extend the same declarative and API-driven model to places where you want consistent operations across cloud, datacenter, and edge. You can apply GitOps patterns, standard manifests, and Kubernetes-native controllers across heterogeneous footprints. k3s provides that familiar Kubernetes experience while optimizing for constrained environments, which is why it has become a common choice for edge/IoT Kubernetes deployments.
=========
Which are the core features provided by a service mesh?
Authentication and authorization
Distributing and replicating data
Security vulnerability scanning
Configuration management
A is the correct answer because a service mesh primarily focuses on securing and managing service-to-service communication, and a core part of that is authentication and authorization. In microservices architectures, internal (“east-west”) traffic can become a complex web of calls. A service mesh introduces a dedicated communication layer (commonly implemented with sidecar proxies or node proxies plus a control plane) to apply consistent security and traffic policies across services.
Authentication in a mesh typically means service identity: each workload gets an identity (often via certificates), enabling mutual TLS (mTLS) so services can verify each other and encrypt traffic in transit. Authorization then builds on identity to enforce “who can talk to whom” via policies (for example: service A can call service B only on certain paths or methods). These capabilities are central because they reduce the need for every development team to implement and maintain custom security libraries correctly.
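As a sketch of what such a policy can look like, here is a hypothetical authorization rule using Istio (one popular mesh); names and namespace are assumptions. It allows only the frontend’s service account to call backend workloads with GET requests:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-frontend-get
  namespace: prod                  # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: backend                 # applies to backend workloads
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/frontend"]   # mTLS-derived identity
    to:
    - operation:
        methods: ["GET"]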
Why the other answers are incorrect:
B (data distribution/replication) is a storage/database concern, not a mesh function.
C (vulnerability scanning) is typically part of CI/CD and supply-chain security tooling, not service-to-service runtime traffic management.
D (configuration management) is broader (GitOps, IaC, Helm/Kustomize); a mesh does have configuration, but “configuration management” is not the defining core feature tested here.
Service meshes also commonly provide traffic management (timeouts, retries, circuit breaking, canary routing) and telemetry (metrics/traces), but among the listed options, authentication and authorization best matches “core features.” It captures the mesh’s role in standardizing secure communications in a distributed system.
So, the verified correct answer is A.
=========
What default level of protection is applied to the data in Secrets in the Kubernetes API?
The values use AES symmetric encryption
The values are stored in plain text
The values are encoded with SHA256 hashes
The values are base64 encoded
Kubernetes Secrets are designed to store sensitive data such as tokens, passwords, or certificates and make them available to Pods in controlled ways (as environment variables or mounted files). However, the default protection applied to Secret values in the Kubernetes API is base64 encoding, not encryption. That is why D is correct. Base64 is an encoding scheme that converts binary data into ASCII text; it is reversible and does not provide confidentiality.
By default, Secret objects are stored in the cluster’s backing datastore (commonly etcd) as base64-encoded strings inside the Secret manifest. Unless the cluster is configured for encryption at rest, those values are effectively stored unencrypted in etcd and may be visible to anyone who can read etcd directly or who has API permissions to read Secrets. This distinction is critical for security: base64 can prevent accidental issues with special characters in YAML/JSON, but it does not protect against attackers.
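The reversibility is easy to demonstrate (the value is hypothetical):
echo -n 's3cr3t' | base64        # prints czNjcjN0
echo 'czNjcjN0' | base64 -d      # prints s3cr3t
A Secret manifest simply carries that encoded string:
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials           # hypothetical name
type: Opaque
data:
  password: czNjcjN0             # base64 of 's3cr3t', encoded but not encrypted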
Option A is only correct if encryption at rest is explicitly configured on the API server using an EncryptionConfiguration (for example, AES-CBC or AES-GCM providers). Many managed Kubernetes offerings enable encryption at rest for etcd as an option or by default, but that is a deployment choice, not the universal Kubernetes default. Option C is incorrect because hashing is used for verification, not for secret retrieval; you typically need to recover the original value, so hashing isn’t suitable for Secrets. Option B (“plain text”) is misleading: the stored representation is base64-encoded, but because base64 is reversible, the security outcome is close to plain text unless encryption at rest and strict RBAC are in place.
The correct operational stance is: treat Kubernetes Secrets as sensitive; lock down access with RBAC, enable encryption at rest, avoid broad Secret read permissions, and consider external secret managers when appropriate. But strictly for the question’s wording (default level of protection), base64 encoding is the right answer.
=========
What is the purpose of the kube-proxy?
The kube-proxy balances network requests to Pods.
The kube-proxy maintains network rules on nodes.
The kube-proxy ensures the cluster connectivity with the internet.
The kube-proxy maintains the DNS rules of the cluster.
The correct answer is B: kube-proxy maintains network rules on nodes. kube-proxy is a node component that implements part of the Kubernetes Service abstraction. It watches the Kubernetes API for Service and EndpointSlice/Endpoints changes, and then programs the node’s dataplane rules (commonly iptables or IPVS, depending on configuration) so that traffic sent to a Service virtual IP and port is correctly forwarded to one of the backing Pod endpoints.
This is how Kubernetes provides stable Service addresses even though Pod IPs are ephemeral. When Pods scale up/down or are replaced during a rollout, endpoints change; kube-proxy updates the node rules accordingly. From the perspective of a client, the Service name and ClusterIP remain stable, while the actual backend endpoints are load-distributed.
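On a node running kube-proxy in iptables mode, you can inspect the top-level chain it maintains (requires node access; output trimmed):
sudo iptables -t nat -L KUBE-SERVICES | head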
Option A is a tempting phrasing but incomplete: load distribution is an outcome of the forwarding rules, but kube-proxy’s primary role is maintaining the network forwarding rules that make Services work. Option C is incorrect because internet connectivity depends on cluster networking, routing, NAT, and often CNI configuration, not kube-proxy’s job description. Option D is incorrect because DNS is typically handled by CoreDNS; kube-proxy does not “maintain DNS rules.”
Operationally, kube-proxy failures often manifest as Service connectivity issues: Pod-to-Service traffic fails, ClusterIP routing breaks, NodePort behavior becomes inconsistent, or endpoints aren’t updated correctly. Modern Kubernetes environments sometimes replace kube-proxy with eBPF-based dataplanes, but in the classic architecture the correct statement remains: kube-proxy runs on each node and maintains the rules needed for Service traffic steering.
=========
Which component in Kubernetes is responsible for watching newly created Pods with no assigned node and selecting a node for them to run on?
etcd
kube-controller-manager
kube-proxy
kube-scheduler
The correct answer is D: kube-scheduler. The kube-scheduler is the control plane component responsible for assigning Pods to nodes. It watches for newly created Pods that do not have a spec.nodeName set (i.e., unscheduled Pods). For each such Pod, it evaluates the available nodes against scheduling constraints and chooses the best node, then performs a “bind” operation by setting the Pod’s spec.nodeName.
Scheduling decisions consider many factors: resource requests vs node allocatable capacity, taints/tolerations, node selectors and affinity/anti-affinity, topology spread constraints, and other policy inputs. The scheduler typically runs a two-phase process: filtering (find feasible nodes) and scoring (rank feasible nodes) before selecting one.
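A Pod spec fragment showing typical inputs the scheduler filters on (labels, taint values, and image are illustrative):
spec:
  nodeSelector:
    disktype: ssd                # only nodes carrying this label are feasible
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "batch"
    effect: "NoSchedule"         # allows scheduling onto matching tainted nodes
  containers:
  - name: app
    image: example/app:1.0       # hypothetical image
    resources:
      requests:
        cpu: "250m"              # the node must have this much allocatable CPU free
        memory: "128Mi"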
Option A (etcd) is the datastore that persists cluster state; it does not make scheduling decisions. Option B (kube-controller-manager) runs controllers (Deployment, Node, Job controllers, etc.) but not scheduling. Option C (kube-proxy) is a node component for Service networking; it doesn’t place Pods.
Understanding this separation is key for troubleshooting. If Pods are stuck Pending with “no nodes available,” the scheduler’s feasibility checks are failing (insufficient CPU/memory, taints not tolerated, affinity mismatch). If Pods schedule but land unexpectedly, it’s often due to scoring preferences or missing constraints. In all cases, the component that performs the node selection is the kube-scheduler.
Therefore, the verified correct answer is D.
=========
In which framework do developers no longer have to deal with capacity, deployments, scaling, fault tolerance, and the OS?
DockerSwarm
Kubernetes
Mesos
Serverless
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they also require managing the underlying capacity and OS-level aspects. They are not “no longer have to deal with capacity and OS” frameworks.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
=========
Which Kubernetes resource uses the immutable: true boolean field?
Deployment
Pod
ConfigMap
ReplicaSet
The immutable: true field is supported by ConfigMap (and also by Secrets, though Secret is not in the options), so C is correct. When a ConfigMap is marked immutable, its data can no longer be changed after creation. This is useful for protecting configuration from accidental modification and for improving cluster performance by reducing watch/update churn on frequently referenced configuration objects.
In Kubernetes, ConfigMaps store non-sensitive configuration as key-value pairs. They can be consumed by Pods as environment variables, command-line arguments, or mounted files in volumes. Without immutability, ConfigMap updates can trigger complex runtime behaviors: for example, file-mounted ConfigMap updates can eventually reflect in the volume (with some delay), but environment variables do not update automatically in running Pods. This can cause confusion and configuration drift between expected and actual behavior. Marking a ConfigMap immutable makes the configuration stable and encourages explicit rollout strategies (create a new ConfigMap with a new name and update the Pod template), which is generally more reliable for production delivery.
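A minimal example (name and keys are hypothetical; note that immutable sits at the top level, beside data):
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-v2        # versioned name: roll out changes by creating a new object
immutable: true              # data/binaryData can no longer be modified after creation
data:
  LOG_LEVEL: info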
Why the other options are wrong: Deployments, Pods, and ReplicaSets do not use an immutable: true field as a standard top-level toggle in their API schema for the purpose described. These objects can be updated through the normal API mechanisms, and their updates are part of typical lifecycle operations (rolling updates, scaling, etc.). The immutability concept exists in Kubernetes, but the specific immutable boolean in this context is a recognized field for ConfigMap (and Secret) objects.
Operationally, immutable ConfigMaps help enforce safer practices: instead of editing live configuration in place, teams adopt versioned configuration artifacts and controlled rollouts via Deployments. This fits cloud-native principles of repeatability and reducing accidental production changes.
=========
Imagine there is a requirement to run a database backup every day. Which Kubernetes resource could be used to achieve that?
kube-scheduler
CronJob
Task
Job
To run a workload on a repeating schedule (like “every day”), Kubernetes provides CronJob, making B correct. A CronJob creates Jobs according to a cron-formatted schedule, and then each Job creates one or more Pods that run to completion. This is the Kubernetes-native replacement for traditional cron scheduling, but implemented as a declarative resource managed by controllers in the cluster.
For a daily database backup, you’d define a CronJob with a schedule (e.g., "0 2 * * *" for 2:00 AM daily), and specify the Pod template that performs the backup (invokes backup scripts/tools, writes output to durable storage, uploads to object storage, etc.). Kubernetes will then create a Job at each scheduled time. CronJobs also support operational controls like concurrencyPolicy (Allow/Forbid/Replace) to decide what happens if a previous backup is still running, startingDeadlineSeconds to handle missed schedules, and history limits to retain recent successful/failed Job records for debugging.
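A sketch of such a CronJob (image, command, and names are hypothetical):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: db-backup
spec:
  schedule: "0 2 * * *"            # 02:00 every day
  concurrencyPolicy: Forbid        # never overlap with a still-running backup
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example/db-backup:1.0
            command: ["/bin/sh", "-c", "backup.sh && upload.sh"]   # hypothetical scripts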
Option D (Job) is close but not sufficient for “every day.” A Job runs a workload until completion once; you would need an external scheduler to create a Job every day. Option A (kube-scheduler) is a control plane component responsible for placing Pods onto nodes and does not schedule recurring tasks. Option C (“Task”) is not a standard Kubernetes workload resource.
This question is fundamentally about mapping a recurring operational requirement (backup cadence) to Kubernetes primitives. The correct design is: CronJob triggers Job creation on a schedule; the Job runs Pods to completion. Therefore, the correct answer is B.
=========
Kubernetes ___ protect you against voluntary interruptions (such as deleting Pods, draining nodes) to run applications in a highly available manner.
Pod Topology Spread Constraints
Pod Disruption Budgets
Taints and Tolerations
Resource Limits and Requests
The correct answer is B: Pod Disruption Budgets (PDBs). A PDB is a policy object that limits how many Pods of an application can be voluntarily disrupted at the same time. “Voluntary disruptions” include actions such as draining a node for maintenance (kubectl drain), cluster upgrades, or an administrator deleting Pods. The core purpose is to preserve availability by ensuring that a minimum number (or percentage) of replicas remain running and ready while those planned disruptions occur.
A PDB is typically defined with either minAvailable (e.g., “at least 3 Pods must remain available”) or maxUnavailable (e.g., “no more than 1 Pod can be unavailable”). Kubernetes uses this budget when performing eviction operations. If evicting a Pod would violate the PDB, the eviction is blocked (or delayed), which forces maintenance workflows to proceed more safely—either by draining more slowly, scaling up first, or scheduling maintenance in stages.
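A minimal sketch (name and selector are hypothetical):
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 3            # evictions are refused if fewer than 3 Pods would remain
  selector:
    matchLabels:
      app: web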
Why the other options are not correct: topology spread constraints (A) influence scheduling distribution across failure domains but don’t directly protect against voluntary disruptions. Taints and tolerations (C) control where Pods can schedule, not how many can be disrupted. Resource requests/limits (D) control CPU/memory allocation and do not guard availability during drains or deletions.
PDBs also work best when paired with Deployments/StatefulSets that maintain replicas and with readiness probes that accurately represent whether a Pod can serve traffic. PDBs do not prevent involuntary disruptions (node crashes), but they materially reduce risk during planned operations, which is exactly what the question is targeting.
=========
Which of these components is part of the Kubernetes Control Plane?
CoreDNS
cloud-controller-manager
kube-proxy
kubelet
The Kubernetes control plane is the set of components responsible for making cluster-wide decisions (like scheduling) and detecting and responding to cluster events (like starting new Pods when they fail). In upstream Kubernetes architecture, the canonical control plane components include kube-apiserver, etcd, kube-scheduler, and kube-controller-manager, and, when running on a cloud provider, the cloud-controller-manager. That makes option B the correct answer: cloud-controller-manager is explicitly a control plane component that integrates Kubernetes with the underlying cloud.
The cloud-controller-manager runs controllers that talk to cloud APIs for infrastructure concerns such as node lifecycle, routes, and load balancers. For example, when you create a Service of type LoadBalancer, a controller in this component is responsible for provisioning a cloud load balancer and updating the Service status. This is clearly control-plane behavior: reconciling desired state into real infrastructure state.
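For instance, creating a Service like the following on a cloud provider prompts a controller in the cloud-controller-manager to provision an external load balancer and write its address back into the Service status (names and ports are hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080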
Why the others are not control plane components (in the classic classification): kubelet is a node component (agent) responsible for running and managing Pods on a specific node. kube-proxy is also a node component that implements Service networking rules on nodes. CoreDNS is usually deployed as a cluster add-on for DNS-based service discovery; it’s critical, but it’s not a control plane component in the strict architectural list.
So, while many clusters run CoreDNS in kube-system, the Kubernetes component that is definitively “part of the control plane” among these choices is cloud-controller-manager (B).
=========
Which of the following best describes horizontally scaling an application deployment?
The act of adding/removing node instances to the cluster to meet demand.
The act of adding/removing applications to meet demand.
The act of adding/removing application instances of the same application to meet demand.
The act of adding/removing resources to application instances to meet demand.
Horizontal scaling means changing how many instances of an application are running, not changing how big each instance is. Therefore, the best description is C: adding/removing application instances of the same application to meet demand. In Kubernetes, “instances” typically correspond to Pod replicas managed by a controller like a Deployment. When you scale horizontally, you increase or decrease the replica count, which increases or decreases total throughput and resilience by distributing load across more Pods.
Option A is about cluster/node scaling (adding or removing nodes), which is infrastructure scaling typically handled by a cluster autoscaler in cloud environments. Node scaling can enable more Pods to be scheduled, but it’s not the definition of horizontal application scaling itself. Option D describes vertical scaling: adding/removing CPU or memory resources to a given instance (Pod/container) by changing requests/limits or using VPA. Option B is vague and not the standard definition.
Horizontal scaling is a core cloud-native pattern because it improves availability and elasticity. If one Pod fails, other replicas continue serving traffic. In Kubernetes, scaling can be manual (kubectl scale deployment ... --replicas=N) or automatic using the Horizontal Pod Autoscaler (HPA). HPA adjusts replicas based on observed metrics like CPU utilization, memory, or custom/external metrics (for example, request rate or queue length). This creates responsive systems that can handle variable traffic.
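A sketch of both approaches (the Deployment name and thresholds are illustrative). Manually:
kubectl scale deployment web --replicas=5
And automatically, via an HPA manifest:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU exceeds 70%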
From an architecture perspective, designing for horizontal scaling often means ensuring your application is stateless (or manages state externally), uses idempotent request handling, and supports multiple concurrent instances. Stateful workloads can also scale horizontally, but usually with additional constraints (StatefulSets, sharding, quorum membership, stable identity).
So the verified definition and correct choice is C.
=========
Which one of the following is an open source runtime security tool?
lxd
containerd
falco
gVisor
The correct answer is C: Falco. Falco is a widely used open-source runtime security tool (originally created by Sysdig and now a CNCF project) designed to detect suspicious behavior at runtime by monitoring system calls and other kernel-level signals. In Kubernetes environments, Falco helps identify threats such as unexpected shell access in containers, privilege escalation attempts, access to sensitive files, anomalous network tooling, crypto-mining patterns, and other behaviors that indicate compromise or policy violations.
The other options are not primarily “runtime security tools” in the detection/alerting sense:
containerd is a container runtime responsible for executing containers; it’s not a security detection tool.
lxd is a system container and VM manager; again, not a runtime threat detection tool.
gVisor is a sandboxed container runtime that improves isolation by interposing a user-space kernel; it’s a security mechanism, but the question asks for a runtime security tool (monitoring/detection). Falco fits that definition best.
In cloud-native security practice, Falco typically runs as a DaemonSet so it can observe activity on every node. It uses rules to define what “bad” looks like and can emit alerts to SIEM systems, logging backends, or incident response workflows. This complements preventative controls like RBAC, Pod Security Admission, seccomp, and least privilege configurations. Preventative controls reduce risk; Falco provides visibility and detection when something slips through.
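A simplified sketch of what a Falco rule looks like (the condition is illustrative; real deployments rely on the maintained default rule set):
- rule: Shell spawned in container
  desc: Detect an interactive shell starting inside a container
  condition: evt.type = execve and container.id != host and proc.name in (bash, sh)
  output: "Shell in container (user=%user.name container=%container.name command=%proc.cmdline)"
  priority: WARNING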
Therefore, among the provided choices, the verified runtime security tool is Falco (C).
=========
Which of the following options include only mandatory fields to create a Kubernetes object using a YAML file?
apiVersion, template, kind, status
apiVersion, metadata, status, spec
apiVersion, template, kind, spec
apiVersion, metadata, kind, spec
D is correct: the mandatory top-level fields for creating a Kubernetes object manifest are apiVersion, kind, metadata, and (for most objects you create) spec. These fields establish what the object is and what you want Kubernetes to do with it.
apiVersion tells Kubernetes which API group/version schema to use (e.g., apps/v1, v1). This determines valid fields and behavior.
kind identifies the resource type (e.g., Pod, Deployment, Service).
metadata contains identifying information like name, namespace, and labels/annotations used for organization, selection, and automation.
spec describes the desired state. Controllers and the kubelet reconcile actual state to match spec.
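Putting the four together, a minimal manifest looks like this (name and image are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
  - name: hello
    image: example/hello:1.0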
Why other choices are wrong:
status is not a mandatory input field. It’s generally written by Kubernetes controllers and reflects observed state (conditions, readiness, assigned node, etc.). Users typically do not set status when creating objects.
template is not a universal top-level field. It exists inside some resources (notably Deployment.spec.template), but it’s not a required top-level field across Kubernetes objects.
It’s true that some resources can be created without a spec (or with minimal fields), but in the exam-style framing (“mandatory fields… using a YAML file”), the canonical expected set is exactly the four in D. This aligns with how Kubernetes documentation and examples present manifests: identify the API schema and kind, give object metadata, and declare desired state.
Therefore, apiVersion + metadata + kind + spec is the only option that includes only the mandatory fields, making D the verified correct answer.
=========
Which Kubernetes-native deployment strategy supports zero-downtime updates of a workload?
Canary
Recreate
BlueGreen
RollingUpdate
D (RollingUpdate) is correct. In Kubernetes, the Deployment resource’s default update strategy is RollingUpdate, which replaces Pods gradually rather than all at once. This supports zero-downtime updates when the workload is properly configured (sufficient replicas, correct readiness probes, and appropriate maxUnavailable / maxSurge settings). As new Pods come up and become Ready, old Pods are terminated in a controlled way, keeping the service available throughout the rollout.
RollingUpdate’s “zero downtime” is achieved by maintaining capacity while transitioning between versions. For example, with multiple replicas, Kubernetes can create new Pods, wait for readiness, then scale down old Pods, ensuring traffic continues to flow to healthy instances. Readiness probes are critical: they prevent traffic from being routed to a Pod until it’s actually ready to serve.
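A Deployment fragment showing the relevant knobs (values and image are illustrative):
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod above the desired count during rollout
      maxUnavailable: 0    # never dip below desired capacity
  template:
    spec:
      containers:
      - name: app
        image: example/app:2.0        # hypothetical new version
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080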
Why other options are not the Kubernetes-native “strategy” answer here:
Recreate (B) explicitly stops old Pods before starting new ones, causing downtime for most services.
Canary (A) and BlueGreen (C) are real deployment patterns, but in “Kubernetes-native deployment strategy” terms, the built-in Deployment strategies are RollingUpdate and Recreate. Canary/BlueGreen typically require additional tooling/controllers (service mesh, ingress controller features, or progressive delivery operators) to manage traffic shifting between versions.
So, for a Kubernetes-native strategy that supports zero-downtime updates, the correct and verified choice is RollingUpdate (D).
=========
In CNCF, who develops specifications for industry standards around container formats and runtimes?
Open Container Initiative (OCI)
Linux Foundation Certification Group (LFCG)
Container Network Interface (CNI)
Container Runtime Interface (CRI)
The organization responsible for defining widely adopted standards around container formats and runtime specifications is the Open Container Initiative (OCI), so A is correct. OCI defines the image specification (how container images are structured and stored) and the runtime specification (how to run a container), enabling interoperability across tooling and vendors. This is foundational to the cloud-native ecosystem because it allows different build tools, registries, runtimes, and orchestration platforms to work together reliably.
Within Kubernetes and CNCF-adjacent ecosystems, OCI standards are the reason an image built by one tool can be pushed to a registry and pulled/run by many different runtimes. For example, a Kubernetes node running containerd or CRI-O can run OCI-compliant images consistently. OCI standardization reduces fragmentation and vendor lock-in, which is a core motivation in open source cloud-native architecture.
The other options are not correct for this question. CNI (Container Network Interface) is a standard for configuring container networking, not container image formats and runtimes. CRI (Container Runtime Interface) is a Kubernetes-specific interface between kubelet and container runtimes; it enables pluggable runtimes for Kubernetes, but it is not the industry standard body for container format/runtime specifications. “LFCG” is not the recognized standards body here.
In short: OCI defines the “language” for container images and runtime behavior, which is why the same image can be executed across environments. Kubernetes relies on those standards indirectly through runtimes and tooling, but the specification work is owned by OCI. Therefore, the verified correct answer is A.
=========