We at Crack4sure are committed to giving students preparing for the Linux Foundation KCNA exam the most current and reliable questions. To help people study, we've made some of our Kubernetes and Cloud Native Associate exam materials available for free to everyone. You can take the free KCNA practice test as many times as you want; each practice question comes with its answer and a full explanation.
Which field in a Pod or Deployment manifest ensures that Pods are scheduled only on nodes with specific labels?
resources:
  disktype: ssd
labels:
  disktype: ssd
nodeSelector:
  disktype: ssd
annotations:
  disktype: ssd
In Kubernetes, Pod scheduling is handled by the Kubernetes scheduler, which is responsible for assigning Pods to suitable nodes based on a set of constraints and policies. One of the simplest and most commonly used mechanisms to control where Pods are scheduled is the nodeSelector field. The nodeSelector field allows you to constrain a Pod so that it is only eligible to run on nodes that have specific labels.
Node labels are key–value pairs attached to nodes by cluster administrators or automation tools. These labels typically describe node characteristics such as hardware type, disk type, geographic zone, or environment. For example, a node might be labeled with disktype=ssd to indicate that it has SSD-backed storage. When a Pod specification includes a nodeSelector with the same key–value pair, the scheduler will only consider nodes that match this label when placing the Pod.
Option A (resources) is incorrect because resource specifications are used to define CPU and memory requests and limits for containers, not to influence node selection based on labels. Option B (labels) is also incorrect because Pod labels are metadata used for identification, grouping, and selection by other Kubernetes objects such as Services and Deployments; they do not affect scheduling decisions. Option D (annotations) is incorrect because annotations are intended for storing non-identifying metadata and are not interpreted by the scheduler for placement decisions.
The nodeSelector field is evaluated during scheduling, and if no nodes match the specified labels, the Pod will remain in a Pending state. While nodeSelector is simple and effective, it is considered a basic scheduling mechanism. For more advanced scheduling requirements—such as expressing preferences, using set-based matching, or combining multiple conditions—Kubernetes also provides node affinity and anti-affinity. However, nodeSelector remains a foundational and widely used feature for enforcing strict node placement based on labels, making option C the correct and verified answer according to Kubernetes documentation.
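A minimal Pod manifest showing nodeSelector in context (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
    - name: web
      image: nginx:1.25
  nodeSelector:
    disktype: ssd   # Pod is only eligible for nodes labeled disktype=ssd
```

For this Pod to schedule, at least one node must carry the matching label, which an administrator can apply with `kubectl label nodes <node-name> disktype=ssd`.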
What does “Continuous Integration” mean?
The continuous integration and testing of code changes from multiple sources manually.
The continuous integration and testing of code changes from multiple sources via automation.
The continuous integration of changes from one environment to another.
The continuous integration of new tools to support developers in a project.
The correct answer is B: Continuous Integration (CI) is the practice of frequently integrating code changes from multiple contributors and validating them through automated builds and tests. The “continuous” part is about doing this often (ideally many times per day) and consistently, so integration problems are detected early instead of piling up until a painful merge or release window.
Automation is essential. CI typically includes steps like compiling/building artifacts, running unit and integration tests, executing linters, checking formatting, scanning dependencies for vulnerabilities, and producing build reports. This automation creates fast feedback loops that help developers catch regressions quickly and maintain a releasable main branch.
Option A is incorrect because manual integration/testing does not scale and undermines the reliability and speed that CI is meant to provide. Option C confuses CI with deployment promotion across environments (which is more aligned with Continuous Delivery/Deployment). Option D is unrelated: adding tools can support CI, but it isn’t the definition.
In cloud-native application delivery, CI is tightly coupled with containerization and Kubernetes: CI pipelines often build container images from source, run tests, scan images, sign artifacts, and push to registries. Those validated artifacts then flow into CD processes that deploy to Kubernetes using manifests, Helm, or GitOps controllers. Without CI, Kubernetes rollouts become riskier because you lack consistent validation of what you’re deploying.
So, CI is best defined as automated integration and testing of code changes from multiple sources, which matches option B.
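As a sketch of what "automated integration and testing" looks like in practice, here is a hypothetical GitHub Actions workflow; the `make` targets are placeholders for whatever build, test, and lint commands a project actually uses:

```yaml
# Hypothetical CI workflow: runs on every push and pull request
name: ci
on: [push, pull_request]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build   # compile / assemble artifacts
      - run: make test    # unit and integration tests
      - run: make lint    # style and static analysis checks
```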
=========
Which of the following cloud native proxies is used for ingress/egress in a service mesh and can also serve as an application gateway?
Frontend proxy
Kube-proxy
Envoy proxy
Reverse proxy
Envoy Proxy is a high-performance, cloud-native proxy widely used for ingress and egress traffic management in service mesh architectures, and it can also function as an application gateway. It is the foundational data-plane component for popular service meshes such as Istio, Consul, and AWS App Mesh, making option C the correct answer.
In a service mesh, Envoy is typically deployed as a sidecar proxy alongside each application Pod. This allows Envoy to transparently intercept and manage all inbound and outbound traffic for the service. Through this model, Envoy enables advanced traffic management features such as load balancing, retries, timeouts, circuit breaking, mutual TLS, and fine-grained observability without requiring application code changes.
Envoy is also commonly used at the mesh boundary to handle ingress and egress traffic. When deployed as an ingress gateway, Envoy acts as the entry point for external traffic into the mesh, performing TLS termination, routing, authentication, and policy enforcement. As an egress gateway, it controls outbound traffic from the mesh to external services, enabling security controls and traffic visibility. These capabilities allow Envoy to serve effectively as an application gateway, not just an internal proxy.
Option A, “Frontend proxy,” is a generic term and not a specific cloud-native component. Option B, kube-proxy, is responsible for implementing Kubernetes Service networking rules at the node level and does not provide service mesh features or gateway functionality. Option D, “Reverse proxy,” is a general architectural pattern rather than a specific cloud-native proxy implementation.
Envoy’s extensibility, performance, and deep integration with Kubernetes and service mesh control planes make it the industry-standard proxy for modern cloud-native networking. Its ability to function both as a sidecar proxy and as a centralized ingress or egress gateway clearly establishes Envoy proxy as the correct and verified answer.
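As an illustration of Envoy at the mesh boundary, here is a sketch of an Istio Gateway resource (Istio's ingress gateway is backed by Envoy); the gateway name, hostname, and TLS secret are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gateway
spec:
  selector:
    istio: ingressgateway      # targets the Envoy-based ingress gateway Pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE           # Envoy terminates TLS at the edge
        credentialName: example-cert
      hosts:
        - "example.com"
```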
Which of the following is the name of a container orchestration software?
OpenStack
Docker
Apache Mesos
CRI-O
C (Apache Mesos) is correct because Mesos is a cluster manager/orchestrator that can schedule and manage workloads (including containerized workloads) across a pool of machines. Historically, Mesos (often paired with frameworks like Marathon) was used to orchestrate services and batch jobs at scale, similar in spirit to Kubernetes’ scheduling and cluster management role.
Why the other answers are not correct as “container orchestration software” in this context:
OpenStack (A) is primarily an IaaS cloud platform for provisioning compute, networking, and storage (VM-focused). It’s not a container orchestrator, though it can host Kubernetes or containers.
Docker (B) is a container platform/tooling ecosystem (image build, runtime, local orchestration via Docker Compose/Swarm historically), but “Docker” itself is not the best match for “container orchestration software” in the multi-node cluster orchestration sense that the question implies.
CRI-O (D) is a container runtime implementing Kubernetes’ CRI; it runs containers on a node but does not orchestrate placement, scaling, or service lifecycle across a cluster.
Container orchestration typically means capabilities like scheduling, scaling, service discovery integration, health management, and rolling updates across multiple hosts. Mesos fits that definition: it provides resource management and scheduling over a cluster and can run container workloads via supported containerizers. Kubernetes ultimately became the dominant orchestrator for many use cases, but Mesos is clearly recognized as orchestration software in this category.
So, among these choices, the verified orchestration platform is Apache Mesos (C).
=========
What is the minimum number of etcd members that are required for a highly available Kubernetes cluster?
Two etcd members.
Five etcd members.
Six etcd members.
Three etcd members.
D (three etcd members) is correct. etcd is a distributed key-value store that uses the Raft consensus algorithm. High availability in consensus systems depends on maintaining a quorum (majority) of members to continue serving writes reliably. With 3 members, the cluster can tolerate 1 failure and still have 2/3 available—enough for quorum.
Two members is a common trap: with 2, a single failure leaves 1/2, which is not a majority, so the cluster cannot safely make progress. That means 2-member etcd is not HA; it is fragile and can be taken down by one node loss, network partition, or maintenance event. Five members can tolerate 2 failures and is a valid HA configuration, but it is not the minimum. Six is even-sized and generally discouraged for consensus because it doesn’t improve failure tolerance compared to five (quorum still requires 4), while increasing coordination overhead.
In Kubernetes, etcd reliability directly affects the API server and the entire control plane because etcd stores cluster state: object specs, status, controller state, and more. If etcd loses quorum, the API server will be unable to persist or reliably read/write state, leading to cluster management outages. That’s why the minimum HA baseline is three etcd members, often across distinct failure domains (nodes/AZs), with strong disk performance and consistent low-latency networking.
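The quorum arithmetic behind these member counts can be checked with a few lines of Python (a toy illustration, not etcd code):

```python
def quorum(members: int) -> int:
    """Raft quorum: a strict majority of the member count."""
    return members // 2 + 1

def tolerated_failures(members: int) -> int:
    """How many members can fail while quorum remains reachable."""
    return members - quorum(members)

for n in (1, 2, 3, 5, 6):
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

This reproduces the reasoning above: 2 members tolerate 0 failures, 3 tolerate 1, and 6 tolerate no more than 5 do (2 in both cases).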
So, the smallest etcd topology that provides true fault tolerance is 3 members, which corresponds to option D.
=========
What is a best practice to minimize the container image size?
Use a Dockerfile.
Use multistage builds.
Build images with different tags.
Add a build.sh script.
A proven best practice for minimizing container image size is to use multi-stage builds, so B is correct. Multi-stage builds allow you to separate the “build environment” from the “runtime environment.” In the first stage, you can use a full-featured base image (with compilers, package managers, and build tools) to compile your application or assemble artifacts. In the final stage, you copy only the resulting binaries or necessary runtime assets into a much smaller base image (for example, a distroless image or a slim OS image). This dramatically reduces the final image size because it excludes compilers, caches, and build dependencies that are not needed at runtime.
In cloud-native application delivery, smaller images matter for several reasons. They pull faster, which speeds up deployments, rollouts, and scaling events (Pods become Ready sooner). They also reduce attack surface by removing unnecessary packages, which helps security posture and scanning results. Smaller images tend to be simpler and more reproducible, improving reliability across environments.
Option A is not a size-minimization practice: using a Dockerfile is simply the standard way to define how to build an image; it doesn’t inherently reduce size. Option C (different tags) changes image identification but not size. Option D (a build script) may help automation, but it doesn’t guarantee smaller images; the image contents are determined by what ends up in the layers.
Multi-stage builds are commonly paired with other best practices: choosing minimal base images, cleaning package caches, avoiding copying unnecessary files (use .dockerignore), and reducing layer churn. But among the options, the clearest and most directly correct technique is multi-stage builds.
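A minimal multi-stage Dockerfile sketch, assuming a Go application (the image names and paths are illustrative):

```dockerfile
# Stage 1: build with the full Go toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the static binary into a minimal runtime image
FROM gcr.io/distroless/static
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image contains only the compiled binary, not the Go toolchain, source code, or module cache from the build stage.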
Therefore, the verified answer is B.
=========
What sentence is true about CronJobs in Kubernetes?
A CronJob creates one or multiple Jobs on a repeating schedule.
A CronJob creates one container on a repeating schedule.
CronJobs are useful on Linux but are obsolete in Kubernetes.
The CronJob schedule format is different in Kubernetes and Linux.
The true statement is A: a Kubernetes CronJob creates Jobs on a repeating schedule. CronJob is a controller designed for time-based execution. You define a schedule using standard cron syntax (minute, hour, day-of-month, month, day-of-week), and when the schedule triggers, the CronJob controller creates a Job object. Then the Job controller creates one or more Pods to run the task to completion.
Option B is incorrect because CronJobs do not “create one container”; they create Jobs, and Jobs create Pods (which may contain one or multiple containers). Option C is wrong because CronJobs are a core Kubernetes workload primitive for recurring tasks and remain widely used for periodic work like backups, batch processing, and cleanup. Option D is wrong because Kubernetes CronJobs intentionally use cron-like scheduling expressions; the format aligns with the cron concept (with Kubernetes-specific controller behavior around missed runs, concurrency, and history).
CronJobs also provide operational controls you don’t get from plain Linux cron on a node:
concurrencyPolicy (Allow/Forbid/Replace) to manage overlapping runs
startingDeadlineSeconds to control how missed schedules are handled
history limits for successful/failed Jobs to avoid clutter
integration with Kubernetes RBAC, Secrets, ConfigMaps, and volumes for consistent runtime configuration
consistent execution environment via container images, not ad-hoc node scripts
Because the CronJob creates Jobs as first-class API objects, you get observability (events/status), predictable retries, and lifecycle management. That’s why the accurate statement is A.
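A sketch of a CronJob manifest tying these controls together (the name, schedule, and image are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"        # standard cron syntax: 02:00 every day
  concurrencyPolicy: Forbid    # skip a run if the previous Job is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: backup-tool:latest   # placeholder image
```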
=========
Which is the correct kubectl command to display logs in real time?
kubectl logs -p test-container-1
kubectl logs -c test-container-1
kubectl logs -l test-container-1
kubectl logs -f test-container-1
To stream logs in real time with kubectl, you use the follow option -f, so D is correct. In Kubernetes, kubectl logs retrieves logs from containers in a Pod. By default, it returns the current log output and exits. When you add -f, kubectl keeps the connection open and continuously prints new log lines as they are produced, similar to tail -f on Linux. This is especially useful for debugging live behavior, watching startup sequences, or monitoring an application during a rollout.
The other flags serve different purposes. -p (as seen in option A) requests logs from the previous instance of a container (useful after a restart/crash), not real-time streaming. -c (option B) selects a specific container within a multi-container Pod; it doesn’t stream by itself (though it can be combined with -f). -l (option C) is used with kubectl logs to select Pods by label, but again it is not the streaming flag; streaming requires -f.
In real troubleshooting, you commonly combine flags, e.g. kubectl logs -f pod-name -c container-name for streaming logs from a specific container, or kubectl logs -f -l app=myapp to stream from Pods matching a label selector (depending on kubectl behavior/version). But the key answer to “display logs in real time” is the follow flag: -f.
Therefore, the correct selection is D.
What is the primary mechanism to identify grouped objects in a Kubernetes cluster?
Custom Resources
Labels
Label Selector
Pod
Kubernetes groups and organizes objects primarily using labels, so B is correct. Labels are key-value pairs attached to objects (Pods, Deployments, Services, Nodes, etc.) and are intended to be used for identifying, selecting, and grouping resources in a flexible, user-defined way.
Labels enable many core Kubernetes behaviors. For example, a Service selects the Pods that should receive traffic by matching a label selector against Pod labels. A Deployment’s ReplicaSet similarly uses label selectors to determine which Pods belong to the replica set. Operators and platform tooling also rely on labels to group resources by application, environment, team, or cost center. This is why labeling is considered foundational Kubernetes hygiene: consistent labels make automation, troubleshooting, and governance easier.
A “label selector” (option C) is how you query/group objects based on labels, but the underlying primary mechanism is still the labels themselves. Without labels applied to objects, selectors have nothing to match. Custom Resources (option A) extend the API with new kinds, but they are not the primary grouping mechanism across the cluster. “Pod” (option D) is a workload unit, not a grouping mechanism.
Practically, Kubernetes recommends common label keys like app.kubernetes.io/name, app.kubernetes.io/instance, and app.kubernetes.io/part-of to standardize grouping. Those conventions improve interoperability with dashboards, GitOps tooling, and policy engines.
So, when the question asks for the primary mechanism used to identify grouped objects in Kubernetes, the most accurate answer is Labels (B)—they are the universal metadata primitive used to group and select resources.
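A short example of labels doing this work: a Service whose selector matches Pods carrying a recommended label key (the names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app.kubernetes.io/name: myapp   # routes to Pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
```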
=========
What is the resource type used to package sets of containers for scheduling in a cluster?
Pod
ContainerSet
ReplicaSet
Deployment
The Kubernetes resource used to package one or more containers into a schedulable unit is the Pod, so A is correct. Kubernetes schedules Pods onto nodes; it does not schedule individual containers. A Pod represents a single “instance” of an application component and includes one or more containers that share key runtime properties, including the same network namespace (same IP and port space) and the ability to share volumes.
Pods enable common patterns beyond “one container per Pod.” For example, a Pod may include a main application container plus a sidecar container for logging, proxying, or configuration reload. Because these containers share localhost networking and volume mounts, they can coordinate efficiently without requiring external service calls. Kubernetes manages the Pod lifecycle as a unit: the containers in a Pod are started according to container lifecycle rules and are co-located on the same node.
Option B (ContainerSet) is not a standard Kubernetes workload resource. Option C (ReplicaSet) manages a set of Pod replicas, ensuring a desired count is running, but it is not the packaging unit itself. Option D (Deployment) is a higher-level controller that manages ReplicaSets and provides rollout/rollback behavior, again operating on Pods rather than being the container-packaging unit.
From the scheduling perspective, the PodSpec defines container images, commands, resources, volumes, security context, and placement constraints. The scheduler evaluates these constraints and assigns the Pod to a node. This “Pod as the atomic scheduling unit” is fundamental to Kubernetes architecture and explains why Kubernetes-native concepts (Services, selectors, readiness, autoscaling) all revolve around Pods.
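A sketch of the sidecar pattern described above: two containers in one Pod sharing an emptyDir volume (all names, images, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: logs
          mountPath: /var/log/app   # app writes logs here
    - name: log-shipper             # sidecar shares the Pod's network and volumes
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs          # same volume, different mount path
  volumes:
    - name: logs
      emptyDir: {}
```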
=========
What are the two steps performed by the kube-scheduler to select a node to schedule a pod?
Grouping and placing
Filtering and selecting
Filtering and scoring
Scoring and creating
The kube-scheduler selects a node in two main phases: filtering and scoring, so C is correct. First, filtering identifies which nodes are feasible for the Pod by applying hard constraints. These include resource availability (CPU/memory requests), node taints/tolerations, node selectors and required affinities, topology constraints, and other scheduling requirements. Nodes that cannot satisfy the Pod’s requirements are removed from consideration.
Second, scoring ranks the remaining feasible nodes using priority functions to choose the “best” placement. Scoring can consider factors like spreading Pods across nodes/zones, packing efficiency, affinity preferences, and other policies configured in the scheduler. The node with the highest score is selected (with tie-breaking), and the scheduler binds the Pod by setting spec.nodeName.
Option B (“filtering and selecting”) is close but misses the explicit scoring step that is central to scheduler design. The scheduler does “select” a node, but the canonical two-step wording in Kubernetes scheduling is filtering then scoring. Options A and D are not how scheduler internals are described.
Operationally, understanding filtering vs scoring helps troubleshoot scheduling failures. If a Pod can’t be scheduled, it failed in filtering—kubectl describe pod often shows “0/… nodes are available” reasons (insufficient CPU, taints, affinity mismatch). If it schedules but lands in unexpected places, it’s often about scoring preferences (affinity weights, topology spread preferences, default scheduler profiles).
So the verified correct answer is C: kube-scheduler uses Filtering and Scoring.
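The two phases can be sketched in a toy Python model (this is an illustration of the filtering-then-scoring idea, not the real scheduler):

```python
# Candidate nodes with free CPU and labels (made-up data)
nodes = [
    {"name": "node-a", "free_cpu": 2.0, "labels": {"disktype": "ssd"}},
    {"name": "node-b", "free_cpu": 0.5, "labels": {"disktype": "ssd"}},
    {"name": "node-c", "free_cpu": 4.0, "labels": {"disktype": "hdd"}},
]
pod = {"cpu_request": 1.0, "node_selector": {"disktype": "ssd"}}

def feasible(node, pod):
    # Filtering: hard constraints (enough CPU, matching nodeSelector labels)
    if node["free_cpu"] < pod["cpu_request"]:
        return False
    return all(node["labels"].get(k) == v for k, v in pod["node_selector"].items())

def score(node):
    # Scoring: toy priority that prefers the node with the most free CPU
    return node["free_cpu"]

candidates = [n for n in nodes if feasible(n, pod)]   # phase 1: filtering
best = max(candidates, key=score)                     # phase 2: scoring
print(best["name"])  # node-a
```

Here node-b fails filtering on CPU, node-c fails on the label, and node-a wins by default; with more survivors, the scoring function would break the tie.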
=========
How long should a stable API element in Kubernetes be supported (at minimum) after deprecation?
9 months
24 months
12 months
6 months
Kubernetes has a formal API deprecation policy to balance stability for users with the ability to evolve the platform. For a stable (GA) API element, Kubernetes commits to supporting that API for a minimum of 12 months (or three releases, whichever is longer) after it is deprecated. The correct minimum in this question is 12 months, which corresponds to option C.
In practice, Kubernetes releases occur roughly every three to four months, and the deprecation policy is commonly communicated in terms of “releases” as well as time. A GA API that is deprecated in one release is typically kept available for multiple subsequent releases, giving cluster operators and application teams time to migrate manifests, client libraries, controllers, and automation. This matters because Kubernetes is often at the center of production delivery pipelines; abrupt API removals would break deployments, upgrades, and tooling. By guaranteeing a minimum support window, Kubernetes enables predictable upgrades and safer lifecycle management.
This policy also encourages teams to track API versions and plan migrations. For example, workloads might start on a beta API (which can change), but once an API reaches stable, users can expect a stronger compatibility promise. Deprecation warnings help surface risk early. In many clusters, you’ll see API server warnings and tooling hints when manifests use deprecated fields/versions, allowing proactive remediation before the removal release.
Options of 6 or 9 months would be too short for many enterprises to coordinate changes across multiple teams and environments. 24 months may be true for some ecosystems, but the Kubernetes stated minimum in this exam-style framing is 12 months. The key operational takeaway is: don't ignore deprecation notices—they're your clock for migration planning. Treat API version upgrades as part of routine cluster lifecycle hygiene to avoid being blocked during Kubernetes version upgrades when deprecated APIs are finally removed.
=========
There is an application running in a logical chain: Gateway API → Service → EndpointSlice → Container.
What Kubernetes API object is missing from this sequence?
Proxy
Docker
Pod
Firewall
In Kubernetes, application traffic flows through a well-defined set of API objects and runtime components before reaching a running container. Understanding this logical chain is essential for grasping how Kubernetes networking works internally.
The given sequence is: Gateway API → Service → EndpointSlice → Container. While this looks close to correct, it is missing a critical Kubernetes abstraction: the Pod. Containers in Kubernetes do not run independently; they always run inside Pods. A Pod is the smallest deployable and schedulable unit in Kubernetes and serves as the execution environment for one or more containers that share networking and storage resources.
The correct logical chain should be:
Gateway API → Service → EndpointSlice → Pod → Container
The Gateway API defines how external or internal traffic enters the cluster. The Service provides a stable virtual IP and DNS name, abstracting a set of backend workloads. EndpointSlices then represent the actual network endpoints backing the Service, typically mapping to the IP addresses of Pods. Finally, traffic is delivered to containers running inside those Pods.
Option A (Proxy) is incorrect because while proxies such as kube-proxy or data plane proxies play a role in traffic forwarding, they are not Kubernetes API objects that represent application workloads in this logical chain. Option B (Docker) is incorrect because Docker is a container runtime, not a Kubernetes API object, and Kubernetes is runtime-agnostic. Option D (Firewall) is incorrect because firewalls are not core Kubernetes workload or networking API objects involved in service-to-container routing.
Option C (Pod) is the correct answer because Pods are the missing link between EndpointSlices and containers. EndpointSlices point to Pod IPs, and containers cannot exist outside of Pods. Kubernetes documentation clearly states that Pods are the fundamental unit of execution and networking, making them essential in any accurate representation of application traffic flow within a cluster.
Which type of Service requires manual creation of Endpoints?
LoadBalancer
Services without selectors
NodePort
ClusterIP with selectors
A Kubernetes Service without selectors requires you to manage its backend endpoints manually, so B is correct. Normally, a Service uses a selector to match a set of Pods (by labels). Kubernetes then automatically maintains the backend list (historically Endpoints, now commonly EndpointSlice) by tracking which Pods match the selector and are Ready. This automation is one of the key reasons Services provide stable connectivity to dynamic Pods.
When you create a Service without a selector, Kubernetes has no way to know which Pods (or external IPs) should receive traffic. In that pattern, you explicitly create an Endpoints object (or EndpointSlices, depending on your approach and controller support) that maps the Service name to one or more IP:port tuples. This is commonly used to represent external services (e.g., a database running outside the cluster) while still providing a stable Kubernetes Service DNS name for in-cluster clients. Another use case is advanced migration scenarios where endpoints are controlled by custom controllers rather than label selection.
Why the other options are wrong: Service types like ClusterIP, NodePort, and LoadBalancer describe how a Service is exposed, but they do not inherently require manual endpoint management. A ClusterIP Service with selectors (D) is the standard case where endpoints are automatically created and updated. NodePort and LoadBalancer Services also typically use selectors and therefore inherit automatic endpoint management; the difference is in how traffic enters the cluster, not how backends are discovered.
Operationally, when using Services without selectors, you must ensure endpoint IPs remain correct, health is accounted for (often via external tooling), and you update endpoints when backends change. The key concept is: no selector → Kubernetes can't auto-populate endpoints → you must provide them.
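A sketch of the selector-less pattern described above: a Service plus a manually managed Endpoints object pointing at an external database (the name, IP, and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 5432
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db   # must match the Service name for the mapping to apply
subsets:
  - addresses:
      - ip: 10.0.0.42   # external backend; not managed by any selector
    ports:
      - port: 5432
```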
=========
What is the main purpose of etcd in Kubernetes?
etcd stores all cluster data in a key value store.
etcd stores the containers running in the cluster for disaster recovery.
etcd stores copies of the Kubernetes config files that live in /etc/.
etcd stores the YAML definitions for all the cluster components.
The main purpose of etcd in Kubernetes is to store the cluster’s state as a distributed key-value store, so A is correct. Kubernetes is API-driven: objects like Pods, Deployments, Services, ConfigMaps, Secrets, Nodes, and RBAC rules are persisted by the API server into etcd. Controllers, schedulers, and other components then watch the API for changes and reconcile the cluster accordingly. This makes etcd the “source of truth” for desired and observed cluster state.
Options B, C, and D are misconceptions. etcd does not store the running containers; that’s the job of the kubelet/container runtime on each node, and container state is ephemeral. etcd does not store /etc configuration file copies. And while you may author objects as YAML manifests, Kubernetes stores them internally as API objects (serialized) in etcd—not as “YAML definitions for all components.” The data is structured key/value entries representing Kubernetes resources and metadata.
Because etcd is so critical, its performance and reliability directly affect the cluster. Slow disk I/O or poor network latency increases API request latency and can delay controller reconciliation, leading to cascading operational problems (slow rollouts, delayed scheduling, timeouts). That’s why etcd is typically run on fast, reliable storage and in an HA configuration (often 3 or 5 members) to maintain quorum and tolerate failures. Backups (snapshots) and restore procedures are also central to disaster recovery: if etcd is lost, the cluster loses its state.
Security is also important: etcd can contain sensitive information (especially Secrets unless encrypted at rest). Proper TLS, restricted access, and encryption-at-rest configuration are standard best practices.
So, the verified correct answer is A: etcd stores all cluster data/state in a key-value store.
=========
In a cloud native environment, how do containerization and virtualization differ in terms of resource management?
Containerization uses hypervisors to manage resources, while virtualization does not.
Containerization shares the host OS, while virtualization runs a full OS for each instance.
Containerization consumes more memory than virtualization by default.
Containerization allocates resources per container, virtualization does not isolate them.
The fundamental difference between containerization and virtualization in a cloud native environment lies in how they manage and isolate resources, particularly with respect to the operating system. The correct description is that containerization shares the host operating system, while virtualization runs a full operating system for each instance, making option B the correct answer.
In virtualization, each virtual machine (VM) includes its own complete guest operating system running on top of a hypervisor. The hypervisor virtualizes hardware resources—CPU, memory, storage, and networking—and allocates them to each VM. Because every VM runs a full OS, virtualization introduces significant overhead in terms of memory usage, disk space, and startup time. However, it provides strong isolation between workloads, which is useful for running different operating systems or untrusted workloads on the same physical hardware.
In contrast, containerization operates at the operating system level rather than the hardware level. Containers share the host OS kernel and isolate applications using kernel features such as namespaces and control groups (cgroups). This design makes containers much lighter weight than virtual machines. Containers start faster, consume fewer resources, and allow higher workload density on the same infrastructure. Resource limits and isolation are still enforced, but without duplicating the entire operating system for each application instance.
Option A is incorrect because hypervisors are a core component of virtualization, not containerization. Option C is incorrect because containers generally consume less memory than virtual machines due to the absence of a full guest OS. Option D is incorrect because virtualization does isolate resources very strongly, while containers rely on OS-level isolation rather than hardware-level isolation.
In cloud native architectures, containerization is preferred for microservices and scalable workloads because of its efficiency and portability. Virtualization is still valuable for stronger isolation and heterogeneous operating systems. Therefore, Option B accurately captures the key resource management distinction between the two models.
=========
Which of the following scenarios would benefit the most from a service mesh architecture?
A few applications with hundreds of Pod replicas running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in a single cluster, each one providing multiple services.
Tens of distributed applications running in multiple clusters, each one providing multiple services.
Thousands of distributed applications running in multiple clusters, each one providing multiple services.
A service mesh is most valuable when service-to-service communication becomes complex at large scale—many services, many teams, and often multiple clusters. That’s why D is the best fit: thousands of distributed applications across multiple clusters. In that scenario, the operational burden of securing, observing, and controlling east-west traffic grows dramatically. A service mesh (e.g., Istio, Linkerd) addresses this by introducing a dedicated networking layer (usually sidecar proxies such as Envoy) that standardizes capabilities across services without requiring each application to implement them consistently.
The common “mesh” value-adds are: mTLS for service identity and encryption, fine-grained traffic policy (retries, timeouts, circuit breaking), traffic shifting (canary, mirroring), and consistent telemetry (metrics, traces, access logs). Those features become increasingly beneficial as the number of services and cross-service calls rises, and as you add multi-cluster routing, failover, and policy management across environments. With thousands of applications, inconsistent libraries and configurations become a reliability and security risk; the mesh centralizes and standardizes these behaviors.
In smaller environments (A or C), you can often meet requirements with simpler approaches: Kubernetes Services, Ingress/Gateway, basic mTLS at the edge, and application-level libraries. A single large cluster (B) can still benefit from a mesh, but adding multiple clusters increases complexity: traffic management across clusters, identity trust domains, global observability correlation, and consistent policy enforcement. That’s where mesh architectures typically justify their additional overhead (extra proxies, control plane components, operational complexity).
So, the “most benefit” scenario is the largest, most distributed footprint—D.
=========
Which of the following options include resources cleaned by the Kubernetes garbage collection mechanism?
Stale or expired CertificateSigningRequests (CSRs) and old deployments.
Nodes deleted by a cloud controller manager and obsolete logs from the kubelet.
Unused container and container images, and obsolete logs from the kubelet.
Terminated pods, completed jobs, and objects without owner references.
Kubernetes garbage collection (GC) is about cleaning up API objects and related resources that are no longer needed, so the correct answer is D. Two big categories it targets are (1) objects that have finished their lifecycle (like terminated Pods and completed Jobs, depending on controllers and TTL policies), and (2) “dangling” objects that are no longer referenced properly—often described as objects without owner references (or where owners are gone), which can happen when a higher-level controller is deleted or when dependent resources are left behind.
A key Kubernetes concept here is OwnerReferences: many resources are created “owned” by a controller (e.g., a ReplicaSet owned by a Deployment, Pods owned by a ReplicaSet). When an owning object is deleted, Kubernetes’ garbage collector can remove dependent objects based on deletion propagation policies (foreground/background/orphan). This prevents resource leaks and keeps the cluster tidy and performant.
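As a sketch of what the garbage collector actually inspects, here is what the ownerReferences metadata on a ReplicaSet created by a Deployment might look like (the names, hash suffix, and UID are illustrative):

```yaml
# Hypothetical ReplicaSet metadata as created by a Deployment named "web".
# The ownerReferences entry is what the garbage collector consults: if the
# owning Deployment is deleted, this ReplicaSet (and, transitively, its
# Pods) becomes eligible for deletion per the chosen propagation policy.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-5d7c9f8b6d          # name and hash suffix are illustrative
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      kind: Deployment
      name: web
      uid: 1a2b3c4d-0000-0000-0000-000000000000   # placeholder UID
      controller: true
      blockOwnerDeletion: true
```

If this ownerReferences list were removed (or the owner never existed), the object would be an orphan — exactly the category of resource the garbage collector is designed to clean up.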
The other options are incorrect because they refer to cleanup tasks outside Kubernetes GC’s scope. Kubelet logs (B/C) are node-level files and log rotation is handled by node/runtime configuration, not the Kubernetes garbage collector. Unused container images (C) are managed by the container runtime’s image GC and kubelet disk pressure management, not the Kubernetes API GC. Nodes deleted by a cloud controller (B) aren’t “garbage collected” in the same sense; node lifecycle is handled by controllers and cloud integrations, but not as a generic GC cleanup category like ownerRef-based object deletion.
So, when the question asks specifically about “resources cleaned by Kubernetes garbage collection,” it’s pointing to Kubernetes object lifecycle cleanup: terminated Pods, completed Jobs, and orphaned objects—exactly what option D states.
=========
The IPv4/IPv6 dual stack in Kubernetes:
Translates an IPv4 request from a Service to an IPv6 Service.
Allows you to access the IPv4 address by using the IPv6 address.
Requires NetworkPolicies to prevent Services from mixing requests.
Allows you to create IPv4 and IPv6 dual stack Services.
The correct answer is D: Kubernetes dual-stack support allows you to create Services (and Pods, depending on configuration) that use both IPv4 and IPv6 addressing. Dual-stack means the cluster is configured to allocate and route traffic for both IP families. For Services, this can mean assigning both an IPv4 ClusterIP and an IPv6 ClusterIP so clients can connect using either family, depending on their network stack and DNS resolution.
Option A is incorrect because dual-stack is not about protocol translation (that would be NAT64/other gateway mechanisms, not the core Kubernetes dual-stack feature). Option B is also a form of translation/aliasing that isn’t what Kubernetes dual-stack implies; having both addresses available is different from “access IPv4 via IPv6.” Option C is incorrect: dual-stack does not inherently require NetworkPolicies to “prevent mixing requests.” NetworkPolicies are about traffic control, not IP family separation.
In Kubernetes, dual-stack requires support across components: the network plugin (CNI) must support IPv4/IPv6, the cluster must be configured with both Pod CIDRs and Service CIDRs, and DNS should return appropriate A and AAAA records for Service names. Once configured, you can specify preferences such as ipFamilyPolicy (e.g., PreferDualStack) and ipFamilies (IPv4, IPv6 order) for Services to influence allocation behavior.
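Putting those fields together, a dual-stack Service might look like the following sketch. It assumes the cluster, CNI, and Service CIDRs are already configured for both IP families; the name, labels, and ports are illustrative:

```yaml
# Dual-stack Service sketch; requires a dual-stack-capable cluster.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  ipFamilyPolicy: PreferDualStack   # fall back to single-stack if only one family is configured
  ipFamilies:                       # preference order for address allocation
    - IPv4
    - IPv6
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

With PreferDualStack, Kubernetes assigns ClusterIPs from both families when the cluster supports it, and DNS can return both A and AAAA records for the Service name.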
Operationally, dual-stack is useful for environments transitioning to IPv6, supporting IPv6-only clients, or running in mixed networks. But it adds complexity: address planning, firewalling, and troubleshooting need to consider two IP families. Still, the definition in the question is straightforward: Kubernetes dual-stack enables dual-stack Services, which is option D.
=========
What does the livenessProbe in Kubernetes help detect?
When a container is ready to serve traffic.
When a container has started successfully.
When a container exceeds resource limits.
When a container is unresponsive.
The liveness probe in Kubernetes is designed to detect whether a container is still running correctly or has entered a failed or unresponsive state. Its primary purpose is to determine whether a container should be restarted. When a liveness probe fails repeatedly, Kubernetes assumes the container is unhealthy and automatically restarts it to restore normal operation.
Option D correctly describes this behavior. Liveness probes are used to identify situations where an application is running but no longer functioning as expected—for example, a deadlock, infinite loop, or hung process that cannot recover on its own. In such cases, restarting the container is often the most effective remediation, and Kubernetes handles this automatically through the liveness probe mechanism.
Option A is incorrect because readiness probes—not liveness probes—determine whether a container is ready to receive traffic. A container can be alive but not ready, such as during startup or temporary maintenance. Option B is incorrect because startup success is handled by startup probes, which are specifically designed to manage slow-starting applications and delay liveness and readiness checks until initialization is complete. Option C is incorrect because exceeding resource limits is managed by the container runtime and kubelet (for example, OOMKills), not by probes.
Liveness probes can be implemented using HTTP requests, TCP socket checks, or command execution inside the container. If the probe fails beyond a configured threshold, Kubernetes restarts the container according to the Pod’s restart policy. This self-healing behavior is a core feature of Kubernetes and contributes significantly to application reliability.
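A minimal HTTP liveness probe might look like the sketch below; the image, health endpoint path, and port are illustrative and assume the application exposes such an endpoint:

```yaml
# Liveness probe sketch: restart the container if /healthz stops responding.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz           # health endpoint assumed to exist in the app
          port: 8080
        initialDelaySeconds: 10    # give the process time to start before probing
        periodSeconds: 5           # probe every 5 seconds
        failureThreshold: 3        # restart after 3 consecutive failures
```

Tuning initialDelaySeconds and failureThreshold (or using a startup probe for slow starters) is how you avoid the restart loops mentioned above.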
Kubernetes documentation emphasizes using liveness probes carefully, as misconfiguration can cause unnecessary restarts. However, when used correctly, they provide a powerful way to automatically recover from application-level failures that Kubernetes cannot otherwise detect.
In summary, the liveness probe’s role is to detect when a container is unresponsive and needs to be restarted, making option D the correct and fully verified answer.
=========
What's the most adopted way of conflict resolution and decision-making for the open-source projects under the CNCF umbrella?
Financial Analysis
Discussion and Voting
Flipism Technique
Project Founder Say
B (Discussion and Voting) is correct. CNCF-hosted open-source projects generally operate with open governance practices that emphasize transparency, community participation, and documented decision-making. While each project can have its own governance model (maintainers, technical steering committees, SIGs, TOC interactions, etc.), a very common and widely adopted approach to resolving disagreements and making decisions is to first pursue discussion (often on GitHub issues/PRs, mailing lists, or community meetings) and then use voting/consensus mechanisms when needed.
This approach is important because open-source communities are made up of diverse contributors across companies and geographies. “Project Founder Say” (D) is not a sustainable or typical CNCF governance norm for mature projects; CNCF explicitly encourages neutral, community-led governance rather than single-person control. “Financial Analysis” (A) is not a conflict resolution mechanism for technical decisions, and “Flipism Technique” (C) is not a real governance practice.
In Kubernetes specifically, community decisions are often made within structured groups (e.g., SIGs) using discussion and consensus-building, sometimes followed by formal votes where governance requires it. The goal is to ensure decisions are fair, recorded, and aligned with the project’s mission and contributor expectations. This also reduces risk of vendor capture and builds trust: anyone can review the rationale in meeting notes, issues, or PR threads, and decisions can be revisited with new evidence.
Therefore, the most adopted conflict resolution and decision-making method across CNCF open-source projects is discussion and voting, making B the verified correct answer.
=========
Which tool is used to streamline installing and managing Kubernetes applications?
apt
helm
service
brew
Helm is the Kubernetes package manager used to streamline installing and managing applications, so B is correct. Helm packages Kubernetes resources into charts, which contain templates, default values, and metadata. When you install a chart, Helm renders templates into concrete manifests and applies them to the cluster. Helm also tracks a “release,” enabling upgrades, rollbacks, and consistent lifecycle operations across environments.
This is why Helm is widely used for complex applications that require multiple Kubernetes objects (Deployments/StatefulSets, Services, Ingresses, ConfigMaps, RBAC, CRDs). Rather than manually maintaining many YAML files per environment, teams can parameterize configuration with values and reuse the same chart across dev/stage/prod with different overrides.
Option A (apt) and option D (brew) are OS-level package managers (for Debian/Ubuntu and macOS/Linux respectively), not Kubernetes application managers. Option C (service) is a legacy Linux command for managing init-script services and is not relevant here.
In cloud-native delivery pipelines, Helm often integrates with GitOps and CI/CD: the pipeline builds an image, updates chart values (image tag/digest), and deploys via Helm or via GitOps controllers that render/apply Helm charts. Helm also supports chart repositories and versioning, making it easier to standardize deployments and manage dependencies.
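To illustrate the templating idea, here is a fragment from a hypothetical chart's templates/deployment.yaml; the chart layout and value names are made up for the example:

```yaml
# templates/deployment.yaml in a hypothetical chart. Helm substitutes
# {{ .Values.* }} from values.yaml (or --set/--values overrides) at
# render time, so one chart serves dev/stage/prod with different values.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Installing it with something like `helm install web ./chart --set image.tag=1.1.0` renders the template with the override applied and records the result as a release that can later be upgraded or rolled back.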
So, the verified tool for streamlined Kubernetes app install/management is Helm (B).
=========
Which of the following sentences is true about container runtimes in Kubernetes?
If you let iptables see bridged traffic, you don't need a container runtime.
If you enable IPv4 forwarding, you don't need a container runtime.
Container runtimes are deprecated, you must install CRI on each node.
You must install a container runtime on each node to run pods on it.
A Kubernetes node must have a container runtime to run Pods, so D is correct. Kubernetes schedules Pods to nodes, but the actual execution of containers is performed by a runtime such as containerd or CRI-O. The kubelet communicates with that runtime via the Container Runtime Interface (CRI) to pull images, create sandboxes, and start/stop containers. Without a runtime, the node cannot launch container processes, so Pods cannot transition into running state.
Options A and B confuse networking kernel settings with runtime requirements. iptables bridged traffic visibility and IPv4 forwarding can be relevant for node networking, but they do not replace the need for a container runtime. Networking and container execution are separate layers: you need networking for connectivity, and you need a runtime for running containers.
Option C is also incorrect and muddled. Container runtimes are not deprecated; rather, Kubernetes removed the built-in Docker shim integration from kubelet in favor of CRI-native runtimes. CRI is an interface, not “something you install instead of a runtime.” In practice you install a CRI-compatible runtime (containerd/CRI-O), which implements CRI endpoints that kubelet talks to.
Operationally, the runtime choice affects node behavior: image management, logging integration, performance characteristics, and compatibility. Kubernetes installation guides explicitly list installing a container runtime as a prerequisite for worker nodes. If a cluster has nodes without a properly configured runtime, workloads scheduled there will fail to start (often stuck in ContainerCreating/ImagePullBackOff/Runtime errors).
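As a sketch of how the kubelet is wired to a runtime, the CRI socket is typically pointed at containerd's default endpoint. In recent Kubernetes versions this lives in the KubeletConfiguration file; older versions used the `--container-runtime-endpoint` kubelet flag instead (the socket path shown is containerd's default and may differ per distribution):

```yaml
# KubeletConfiguration fragment: tell the kubelet which CRI socket to use.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock  # containerd default
```

If this endpoint is missing or the runtime behind it is down, the kubelet cannot create Pod sandboxes and workloads on that node will not start.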
Therefore, the only fully correct statement is D: each node needs a container runtime to run Pods.
=========
What is the main purpose of the Ingress in Kubernetes?
Access HTTP and HTTPS services running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their IP address.
Access services different from HTTP or HTTPS running in the cluster based on their path.
Access HTTP and HTTPS services running in the cluster based on their path.
D is correct. Ingress is a Kubernetes API object that defines rules for external access to HTTP/HTTPS services in a cluster. The defining capability is Layer 7 routing—commonly host-based and path-based routing—so you can route requests like example.com/app1 to one Service and example.com/app2 to another. While the question mentions “based on their path,” that’s a classic and correct Ingress use case (and host routing is also common).
Ingress itself is only the specification of routing rules. An Ingress controller (e.g., NGINX Ingress Controller, HAProxy, Traefik, cloud-provider controllers) is what actually implements those rules by configuring a reverse proxy/load balancer. Ingress typically terminates TLS (HTTPS) and forwards traffic to internal Services, giving a more expressive alternative to exposing every service via NodePort/LoadBalancer.
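The example.com/app1 and example.com/app2 routing described above can be sketched as an Ingress manifest; the hostname, service names, ports, and ingress class are illustrative:

```yaml
# Path-based HTTP routing sketch; assumes an NGINX ingress controller is installed.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1        # hypothetical Service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2        # hypothetical Service
                port:
                  number: 80
```

The Ingress object itself does nothing until a controller watches it and configures its proxy accordingly.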
Why the other options are wrong:
A suggests routing by IP address; Ingress is fundamentally about HTTP(S) routing rules (host/path), not direct Service IP access.
B and C describe non-HTTP protocols; Ingress is specifically for HTTP/HTTPS. For TCP/UDP or other protocols, you generally use Services of type LoadBalancer/NodePort, Gateway API implementations, or controller-specific TCP/UDP configuration.
Ingress is a foundational building block for cloud-native application delivery because it centralizes edge routing, enables TLS management, and supports gradual adoption patterns (multiple services under one domain). Therefore, the main purpose described here matches D.
=========
Services and Pods in Kubernetes are ______ objects.
JSON
YAML
Java
REST
In Kubernetes, resources like Pods and Services are represented as API objects that you create, read, update, delete, and watch via the Kubernetes RESTful API. That makes D (REST) the correct answer.
Kubernetes is fundamentally API-driven: the API server exposes endpoints for each resource type (for example, /api/v1/namespaces/{ns}/pods and /api/v1/namespaces/{ns}/services). Clients such as kubectl, controllers, operators, and external systems interact with these resources by making REST-style calls using HTTP verbs (GET, POST, PUT/PATCH, DELETE) and using watch streams for event-driven updates. This API-first design is what enables Kubernetes’ declarative model—users submit desired state to the API server, and controllers reconcile the cluster to that desired state.
Options A and B (JSON and YAML) are common serialization formats used to represent Kubernetes objects, but they are not what the objects “are.” Kubernetes objects are logical API resources; they can be encoded as JSON (what the API uses) and are often authored as YAML for human convenience. YAML is a superset of JSON and converts to it directly. The underlying API object model remains the same regardless of whether you wrote YAML or JSON. Option C (Java) is unrelated; Java is a programming language that can interact with Kubernetes via client libraries, but Kubernetes objects are not “Java objects” in the platform’s definition.
So the accurate statement is: Pods and Services are Kubernetes REST API objects (resources) exposed and managed through the Kubernetes API server, which is why REST is the correct fill-in.
=========
Which component in Kubernetes is responsible to watch newly created Pods with no assigned node, and selects a node for them to run on?
etcd
kube-controller-manager
kube-proxy
kube-scheduler
The correct answer is D: kube-scheduler. The kube-scheduler is the control plane component responsible for assigning Pods to nodes. It watches for newly created Pods that do not have a spec.nodeName set (i.e., unscheduled Pods). For each such Pod, it evaluates the available nodes against scheduling constraints and chooses the best node, then performs a “bind” operation by setting the Pod’s spec.nodeName.
Scheduling decisions consider many factors: resource requests vs node allocatable capacity, taints/tolerations, node selectors and affinity/anti-affinity, topology spread constraints, and other policy inputs. The scheduler typically runs a two-phase process: filtering (find feasible nodes) and scoring (rank feasible nodes) before selecting one.
Option A (etcd) is the datastore that persists cluster state; it does not make scheduling decisions. Option B (kube-controller-manager) runs controllers (Deployment, Node, Job controllers, etc.) but not scheduling. Option C (kube-proxy) is a node component for Service networking; it doesn’t place Pods.
Understanding this separation is key for troubleshooting. If Pods are stuck Pending with “no nodes available,” the scheduler’s feasibility checks are failing (insufficient CPU/memory, taints not tolerated, affinity mismatch). If Pods schedule but land unexpectedly, it’s often due to scoring preferences or missing constraints. In all cases, the component that performs the node selection is the kube-scheduler.
Therefore, the verified correct answer is D.
=========
What is a Service?
A static network mapping from a Pod to a port.
A way to expose an application running on a set of Pods.
The network configuration for a group of Pods.
An NGINX load balancer that gets deployed for an application.
The correct answer is B: a Kubernetes Service is a stable way to expose an application running on a set of Pods. Pods are ephemeral—IPs can change when Pods are recreated, rescheduled, or scaled. A Service provides a consistent network identity (DNS name and usually a ClusterIP virtual IP) and a policy for routing traffic to the current healthy backends.
Typically, a Service uses a label selector to determine which Pods are part of the backend set. Kubernetes then maintains the corresponding endpoint data (Endpoints/EndpointSlice), and the cluster dataplane (kube-proxy or an eBPF-based implementation) forwards traffic from the Service IP/port to one of the Pod IPs. This enables reliable service discovery and load distribution across replicas, especially during rolling updates where Pods are constantly replaced.
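A minimal Service of this kind might look like the following sketch; the name, label, and ports are illustrative:

```yaml
# ClusterIP Service sketch: stable access to whatever Pods match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web            # any healthy Pod labeled app=web becomes a backend
  ports:
    - port: 80          # stable Service port clients connect to
      targetPort: 8080  # container port on the backend Pods
```

As Pods labeled app=web come and go during scaling or rolling updates, the EndpointSlice data is updated automatically; clients keep using the same Service name and port throughout.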
Option A is incorrect because Service routing is not a “static mapping from a Pod to a port.” It’s dynamic and targets a set of Pods. Option C is too vague and misstates the concept; while Services relate to networking, they are not “the network configuration for a group of Pods” (that’s closer to NetworkPolicy/CNI configuration). Option D is incorrect because Kubernetes does not automatically deploy an NGINX load balancer when you create a Service. NGINX might be used as an Ingress controller or external load balancer in some setups, but a Service is a Kubernetes API abstraction, not a specific NGINX component.
Services come in several types (ClusterIP, NodePort, LoadBalancer, ExternalName), but the core definition remains the same: stable access to a dynamic set of Pods. This is foundational for microservices and for decoupling clients from the churn of Pod lifecycles.
So, the verified correct definition is B.
=========
Which of the following is a challenge derived from running cloud native applications?
The operational costs of maintaining the data center of the company.
Cost optimization is complex to maintain across different public cloud environments.
The lack of different container images available in public image repositories.
The lack of services provided by the most common public clouds.
The correct answer is B. Cloud-native applications often run across multiple environments—different cloud providers, regions, accounts/projects, and sometimes hybrid deployments. This introduces real cost-management complexity: pricing models differ (compute types, storage tiers, network egress), discount mechanisms vary (reserved capacity, savings plans), and telemetry/charge attribution can be inconsistent. When you add Kubernetes, the abstraction layer can further obscure cost drivers because costs are incurred at the infrastructure level (nodes, disks, load balancers) while consumption happens at the workload level (namespaces, Pods, services).
Option A is less relevant because cloud-native adoption often reduces dependence on maintaining a private datacenter; many organizations adopt cloud-native specifically to avoid datacenter CapEx/ops overhead. Option C is generally untrue—public registries and vendor registries contain vast numbers of images; the challenge is more about provenance, security, and supply chain than “lack of images.” Option D is incorrect because major clouds offer abundant services; the difficulty is choosing among them and controlling cost/complexity, not a lack of services.
Cost optimization being complex is a recognized challenge because cloud-native architectures include microservices sprawl, autoscaling, ephemeral environments, and pay-per-use dependencies (managed databases, message queues, observability). Small misconfigurations can cause big bills: noisy logs, over-requested resources, unbounded HPA scaling, and egress-heavy architectures. That’s why practices like FinOps, tagging/labeling for allocation, and automated guardrails are emphasized.
So the best answer describing a real, common cloud-native challenge is B.
=========
Which of the following is a recommended security habit in Kubernetes?
Run the containers as the user with group ID 0 (root) and any user ID.
Disallow privilege escalation from within a container as the default option.
Run the containers as the user with user ID 0 (root) and any group ID.
Allow privilege escalation from within a container as the default option.
The correct answer is B. A widely recommended Kubernetes security best practice is to disallow privilege escalation inside containers by default. In Kubernetes Pod/Container security context, this is represented by allowPrivilegeEscalation: false. This setting prevents a process from gaining more privileges than its parent process—commonly via setuid/setgid binaries or other privilege-escalation mechanisms. Disallowing privilege escalation reduces the blast radius of a compromised container and aligns with least-privilege principles.
Options A and C are explicitly unsafe because they encourage running as root (UID 0 and/or GID 0). Running containers as root increases risk: if an attacker breaks out of the application process or exploits kernel/runtime vulnerabilities, having root inside the container can make privilege escalation and lateral movement easier. Modern Kubernetes security guidance strongly favors running as non-root (runAsNonRoot: true, explicit runAsUser), dropping Linux capabilities, using read-only root filesystems, and applying restrictive seccomp/AppArmor/SELinux profiles where possible.
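These habits combine into a securityContext like the sketch below; the image, UID, and extra hardening fields are illustrative additions beyond the single setting the question asks about:

```yaml
# Hardened Pod sketch: non-root user, no privilege escalation, minimal capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start if the image would run as UID 0
    runAsUser: 10001              # arbitrary non-root UID for illustration
  containers:
    - name: app
      image: example.com/app:1.0  # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false   # the habit named in option B
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```

Policy engines can then reject any Pod submitted without these settings, turning the habit into an enforced default.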
Option D is the opposite of best practice. Allowing privilege escalation by default increases the attack surface and violates the idea of secure defaults.
Operationally, this habit is often enforced via admission controls and policies (e.g., Pod Security Admission in “restricted” mode, or policy engines like OPA Gatekeeper/Kyverno). It’s also important for compliance: many security baselines require containers to run as non-root and to prevent privilege escalation.
So, the recommended security habit among the choices is clearly B: Disallow privilege escalation.
=========
To visualize data from Prometheus you can use expression browser or console templates. What is the other data visualization tool commonly used together with Prometheus?
Grafana
Graphite
Nirvana
GraphQL
The most common visualization tool used with Prometheus is Grafana, so A is correct. Prometheus includes a built-in expression browser that can graph query results, but Grafana provides a much richer dashboarding experience: reusable dashboards, variables, templating, annotations, alerting integrations, and multi-data-source support.
In Kubernetes observability stacks, Prometheus scrapes and stores time-series metrics (cluster and application metrics). Grafana queries Prometheus using PromQL and renders the results into dashboards for SREs and developers. This pairing is widespread because it cleanly separates concerns: Prometheus is the metrics store and query engine; Grafana is the UI and dashboard layer.
Option B (Graphite) is a separate metrics system with its own storage/query model; while Grafana can visualize Graphite too, the question asks what is commonly used together with Prometheus, which is Grafana. Option D (GraphQL) is an API query language, not a metrics visualization tool. Option C (“Nirvana”) is not a standard Prometheus visualization tool in common Kubernetes stacks.
In practice, this combo enables operational outcomes: dashboards for error rates and latency (often derived from histograms), capacity monitoring (node CPU/memory), workload behavior (Pod restarts, HPA scaling), and SLO reporting. Grafana dashboards often serve as the shared language during incidents: teams correlate alerts with time-series patterns and quickly identify when regressions began.
Therefore, the verified correct tool commonly used with Prometheus for visualization is Grafana (A).
=========
In Kubernetes, which abstraction defines a logical set of Pods and a policy by which to access them?
Service Account
NetworkPolicy
Service
Custom Resource Definition
The correct answer is C: Service. A Kubernetes Service is an abstraction that provides stable access to a logical set of Pods. Pods are ephemeral: they can be rescheduled, recreated, and scaled, which changes their IP addresses over time. A Service solves this by providing a stable identity—typically a virtual IP (ClusterIP) and a DNS name—and a traffic-routing policy that directs requests to the current set of backend Pods.
Services commonly select Pods using labels via a selector (e.g., app=web). Kubernetes then maintains the backend endpoint list (Endpoints/EndpointSlices). The cluster networking layer routes traffic sent to the Service IP/port to one of the Pod endpoints, enabling load distribution across replicas. This is fundamental to microservices architectures: clients call the Service name, not individual Pods.
Why the other options are incorrect:
A ServiceAccount is an identity for Pods to authenticate to the Kubernetes API; it doesn’t define a set of Pods nor traffic access policy.
A NetworkPolicy defines allowed network flows (who can talk to whom) but does not provide stable addressing or load-balanced access to Pods. It is a security policy, not an exposure abstraction.
A CustomResourceDefinition extends the Kubernetes API with new resource types; it’s unrelated to service discovery and traffic routing for a set of Pods.
Understanding Services is core Kubernetes fundamentals: they decouple backend Pod churn from client connectivity. Services also integrate with different exposure patterns via type (ClusterIP, NodePort, LoadBalancer, ExternalName) and can be paired with Ingress/Gateway for HTTP routing. But the essential definition in the question—“logical set of Pods and a policy to access them”—is exactly the textbook description of a Service.
Therefore, the verified correct answer is C.
=========
What factors influence the Kubernetes scheduler when it places Pods on nodes?
Pod memory requests, node taints, and Pod affinity.
Pod labels, node labels, and request labels.
Node taints, node level, and Pod priority.
Pod priority, container command, and node labels.
The Kubernetes scheduler chooses a node for a Pod by evaluating scheduling constraints and cluster state. Key inputs include resource requests (CPU/memory), taints/tolerations, and affinity/anti-affinity rules. Option A directly names three real, high-impact scheduling factors—Pod memory requests, node taints, and Pod affinity—so A is correct.
Resource requests are fundamental: the scheduler must ensure the target node has enough allocatable CPU/memory to satisfy the Pod’s requests. Requests (not limits) drive placement decisions. Taints on nodes repel Pods unless the Pod has a matching toleration, which is commonly used to reserve nodes for special workloads (GPU nodes, system nodes, restricted nodes) or to protect nodes under certain conditions. Affinity and anti-affinity allow expressing “place me near” or “place me away” rules—e.g., keep replicas spread across failure domains or co-locate components for latency.
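A single Pod spec can combine all three factors from option A, as in this sketch (the image, taint key/value, and affinity labels are illustrative):

```yaml
# Pod sketch combining requests, a toleration, and pod affinity.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: app
      image: example.com/app:1.0   # hypothetical image
      resources:
        requests:
          memory: "256Mi"          # the scheduler filters nodes on requests, not limits
          cpu: "250m"
  tolerations:                     # allows placement on nodes tainted dedicated=gpu:NoSchedule
    - key: dedicated
      operator: Equal
      value: gpu
      effect: NoSchedule
  affinity:
    podAffinity:                   # prefer co-location with Pods labeled app=cache
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: cache
            topologyKey: kubernetes.io/hostname
```

The requests and toleration act during filtering (feasibility), while the preferred affinity influences scoring among the nodes that remain.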
Option B includes labels, which do matter, but “request labels” is not a standard scheduler concept; labels influence scheduling mainly through selectors and affinity, not as a direct category called “request labels.” Option C mixes a real concept (taints, priority) with “node level,” which isn’t a standard scheduling factor term. Option D includes “container command,” which does not influence scheduling; the scheduler does not care what command the container runs, only placement constraints and resources.
Under the hood, kube-scheduler uses a two-phase process (filtering then scoring) to select a node, but the inputs it filters/scores include exactly the kinds of constraints in A. Therefore, the verified best answer is A.
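To make the three factors in option A concrete, here is a minimal Pod sketch that exercises all of them. The names (a `cache` Pod, a `dedicated=cache` taint, an `app=web` neighbor) are illustrative assumptions, not part of the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache
  labels:
    workload: cache
spec:
  containers:
  - name: cache
    image: redis:7
    resources:
      requests:                # factor 1: node must have this much allocatable memory/CPU
        memory: "256Mi"
        cpu: "250m"
  tolerations:                 # factor 2: tolerate a node taint that would otherwise repel the Pod
  - key: "dedicated"
    operator: "Equal"
    value: "cache"
    effect: "NoSchedule"
  affinity:                    # factor 3: prefer co-locating with Pods labeled app=web
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: web
          topologyKey: kubernetes.io/hostname
```

The requests and toleration act in the filtering phase, while the preferred affinity only influences scoring, which is why it is a weight rather than a hard requirement.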
=========
Which of the following statements is correct concerning Open Policy Agent (OPA)?
The policies must be written in Python language.
Kubernetes can use it to validate requests and apply policies.
Policies can only be tested when published.
It cannot be used outside Kubernetes.
Open Policy Agent (OPA) is a general-purpose policy engine used to define and enforce policy across different systems. In Kubernetes, OPA is commonly integrated through admission control (often via Gatekeeper or custom admission webhooks) to validate and/or mutate requests before they are persisted in the cluster. This makes B correct: Kubernetes can use OPA to validate API requests and apply policy decisions.
Kubernetes’ admission chain is where policy enforcement naturally fits. When a user or controller submits a request (for example, to create a Pod), the API server can call external admission webhooks. Those webhooks can evaluate the request against policy—such as “no privileged containers,” “images must come from approved registries,” “labels must include cost-center,” or “Ingress must enforce TLS.” OPA’s policy language (Rego) allows expressing these rules in a declarative form, and the decision (“allow/deny” and sometimes patches) is returned to the API server. This enforces governance consistently and centrally.
Option A is incorrect because OPA policies are written in Rego, not Python. Option C is incorrect because policies can be tested locally and in CI pipelines before deployment; in fact, testability is a key advantage. Option D is incorrect because OPA is designed to be platform-agnostic—it can be used with APIs, microservices, CI/CD pipelines, service meshes, and infrastructure tools, not only Kubernetes.
From a Kubernetes fundamentals view, OPA complements RBAC: RBAC answers “who can do what to which resources,” while OPA-style admission policies answer “even if you can create this resource, does it meet our organizational rules?” Together they help implement defense in depth: authentication + authorization + policy admission + runtime security controls. That is why OPA is widely used to enforce security and compliance requirements in Kubernetes environments.
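As a hedged illustration of the Rego point above, a minimal admission rule might look like the following. The package name and input shape assume a plain validating-webhook integration (Gatekeeper wraps policies in ConstraintTemplates instead), and the rule itself is just one example of the “no privileged containers” policy mentioned earlier:

```rego
# Sketch only: deny Pod creation requests that include a privileged container.
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("privileged container %q is not allowed", [container.name])
}
```

A key advantage referenced in the explanation is testability: rules like this can be exercised locally with `opa test` in CI before they are ever enforced in a cluster.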
=========
The Kubernetes project work is carried out primarily by SIGs. What does SIG stand for?
Special Interest Group
Software Installation Guide
Support and Information Group
Strategy Implementation Group
In Kubernetes governance and project structure, SIG stands for Special Interest Group, so A is correct. Kubernetes is a large open source project under the Cloud Native Computing Foundation (CNCF), and its work is organized into groups that focus on specific domains—such as networking, storage, node, scheduling, security, docs, testing, and many more. SIGs provide a scalable way to coordinate contributors, prioritize work, review design proposals (KEPs), triage issues, and manage releases in their area.
Each SIG typically has regular meetings, mailing lists, chat channels, and maintainers who guide the direction of that part of the project. For example, SIG Network focuses on Kubernetes networking architecture and components, SIG Storage on storage APIs and CSI integration, and SIG Scheduling on scheduler behavior and extensibility. This structure helps Kubernetes evolve while maintaining quality, review rigor, and community-driven decision making.
The other options are not part of Kubernetes project terminology. “Software Installation Guide” and the others might sound plausible, but they are not how Kubernetes defines SIGs.
Understanding SIGs matters operationally because many Kubernetes features and design changes originate from SIGs. When you read Kubernetes enhancement proposals, release notes, or documentation, you’ll often see SIG ownership and references. In short, SIGs are the primary organizational units for Kubernetes engineering and stewardship, and SIG = Special Interest Group.
=========
If a Pod was waiting for container images to download on the scheduled node, what state would it be in?
Failed
Succeeded
Unknown
Pending
If a Pod is waiting for its container images to be pulled to the node, it remains in the Pending phase, so D is correct. Kubernetes Pod “phase” is a high-level summary of where the Pod is in its lifecycle. Pending means the Pod has been accepted by the cluster but one or more of its containers has not started yet. That can occur because the Pod is waiting to be scheduled, waiting on volume attachment/mount, or—very commonly—waiting for the container runtime to pull the image.
When image pulling is the blocker, kubectl describe pod shows an Events section tracking the pull (for example, “Pulling image …”) and, if the pull fails, container waiting reasons such as ErrImagePull or ImagePullBackOff.
Why the other phases don’t apply:
Succeeded is for run-to-completion Pods that have finished successfully (typical for Jobs).
Failed means the Pod terminated and at least one container terminated in failure (and won’t be restarted, depending on restartPolicy).
Unknown is used when the node can’t be contacted and the Pod’s state can’t be reliably determined (rare in healthy clusters).
A subtle but important Kubernetes detail: status “Waiting” reasons like ImagePullBackOff are container states inside .status.containerStatuses, while the Pod phase can still be Pending. So, “waiting for images to download” maps to Pod Pending, with container waiting reasons providing the deeper diagnosis.
Therefore, the verified correct answer is D: Pending.
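The distinction between Pod phase and container waiting reason can be seen with a few read-only commands; the Pod name `web` here is a hypothetical example:

```shell
kubectl get pod web                                  # STATUS column (may show ImagePullBackOff)
kubectl get pod web -o jsonpath='{.status.phase}'    # the Pod phase itself: Pending
kubectl describe pod web                             # Events section shows pull progress or errors
```

Note how the first command's STATUS column surfaces the container-level reason while the jsonpath query still reports the phase as Pending, which is exactly the subtlety described above.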
=========
Scenario: You have a Kubernetes cluster hosted in a public cloud provider. When trying to create a Service of type LoadBalancer, the external-ip is stuck in the "Pending" state. Which Kubernetes component is failing in this scenario?
Cloud Controller Manager
Load Balancer Manager
Cloud Architecture Manager
Cloud Load Balancer Manager
When you create a Service of type LoadBalancer in a cloud environment, Kubernetes relies on cloud-provider integration to provision an external load balancer and allocate a public IP (or equivalent). The control plane component responsible for this integration is the cloud-controller-manager, so A is correct.
In Kubernetes, a LoadBalancer Service triggers a controller loop that calls the cloud provider APIs to create/update a load balancer that forwards traffic to the cluster (often via NodePorts on worker nodes, or via provider-specific mechanisms). The Service remains with EXTERNAL-IP: Pending until the cloud provider resource is successfully created and the controller updates the Service status with the assigned external address. If that status never updates, it usually indicates the cloud integration path is broken—commonly due to: missing cloud provider configuration, broken credentials/IAM permissions, the cloud-controller-manager not running/healthy, or a misconfigured cloud provider implementation.
The other options are not real Kubernetes components. Kubernetes does not include a “Load Balancer Manager” or “Cloud Architecture Manager” component name in its standard architecture. In many managed Kubernetes offerings, the cloud-controller-manager (or its equivalent) is provided/managed by the provider, but the responsibility remains the same: reconcile Kubernetes Service resources into cloud load balancer resources.
Therefore, in this scenario, the failing component is the Cloud Controller Manager, which is the Kubernetes control plane component that interfaces with the cloud provider to provision external load balancers and update the Service status.
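For reference, a minimal LoadBalancer Service looks like the following sketch; the name, selector, and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web              # traffic is balanced to Pods carrying this label
  ports:
  - port: 80              # the port the external load balancer exposes
    targetPort: 8080      # the port the backend Pods listen on
```

After applying it, `kubectl get service web` shows EXTERNAL-IP as `<pending>` until the cloud-controller-manager successfully provisions the provider load balancer and writes the address back into the Service status.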
=========
In the Kubernetes platform, which component is responsible for running containers?
etcd
CRI-O
cloud-controller-manager
kube-controller-manager
In Kubernetes, the actual act of running containers on a node is performed by the container runtime. The kubelet instructs the runtime via CRI, and the runtime pulls images, creates containers, and manages their lifecycle. Among the options provided, CRI-O is the only container runtime, so B is correct.
It’s important to be precise: the component that “runs containers” is not the control plane and not etcd. etcd (option A) stores cluster state (API objects) as the backing datastore. It never runs containers. cloud-controller-manager (option C) integrates with cloud APIs for infrastructure like load balancers and nodes. kube-controller-manager (option D) runs controllers that reconcile Kubernetes objects (Deployments, Jobs, Nodes, etc.) but does not execute containers on worker nodes.
CRI-O is a CRI implementation that is optimized for Kubernetes and typically uses an OCI runtime (like runc) under the hood to start containers. Another widely used runtime is containerd. The runtime is installed on nodes and is a prerequisite for kubelet to start Pods. When a Pod is scheduled to a node, kubelet reads the PodSpec and asks the runtime to create a “pod sandbox” and then start the container processes. Runtime behavior also includes pulling images, setting up namespaces/cgroups, and exposing logs/stdout streams back to Kubernetes tooling.
So while “the container runtime” is the most general answer, the question’s option list makes CRI-O the correct selection because it is a container runtime responsible for running containers in Kubernetes.
=========
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called:
Namespaces
Containers
Hypervisors
cgroups
Kubernetes provides “virtual clusters” within a single physical cluster primarily through Namespaces, so A is correct. Namespaces are a logical partitioning mechanism that scopes many Kubernetes resources (Pods, Services, Deployments, ConfigMaps, Secrets, etc.) into separate environments. This enables multiple teams, applications, or environments (dev/test/prod) to share a cluster while keeping their resource names and access controls separated.
Namespaces are often described as “soft multi-tenancy.” They don’t provide full isolation like separate clusters, but they do allow administrators to apply controls per namespace:
RBAC rules can grant different permissions per namespace (who can read Secrets, who can deploy workloads, etc.).
ResourceQuotas and LimitRanges can enforce fair usage and prevent one namespace from consuming all cluster resources.
NetworkPolicies can isolate traffic between namespaces (depending on the CNI).
Containers are runtime units inside Pods and are not “virtual clusters.” Hypervisors are virtualization components for VMs, not Kubernetes partitioning constructs. cgroups are Linux kernel primitives for resource control, not Kubernetes virtual cluster constructs.
While there are other “virtual cluster” approaches (like vcluster projects) that create stronger virtualized control planes, the built-in Kubernetes mechanism referenced by this question is namespaces. Therefore, the correct answer is A: Namespaces.
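The per-namespace controls listed above can be sketched with a namespace plus a ResourceQuota scoped to it; the name `team-a` and the quota values are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a       # the quota only constrains objects in this namespace
spec:
  hard:
    requests.cpu: "4"     # total CPU requests across all Pods in team-a
    requests.memory: 8Gi  # total memory requests across all Pods in team-a
    pods: "20"            # cap on the number of Pods in the namespace
```

This is the “soft multi-tenancy” pattern: team-a shares the physical cluster but cannot consume more than its quota or see other namespaces' resource names.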
=========
What function does kube-proxy provide to a cluster?
Implementing the Ingress resource type for application traffic.
Forwarding data to the correct endpoints for Services.
Managing data egress from the cluster nodes to the network.
Managing access to the Kubernetes API.
kube-proxy is a node-level networking component that helps implement the Kubernetes Service abstraction. Services provide a stable virtual IP and DNS name that route traffic to a set of Pods (endpoints). kube-proxy watches the API for Service and EndpointSlice/Endpoints changes and then programs the node’s networking rules so that traffic sent to a Service is forwarded (load-balanced) to one of the correct backend Pod IPs. This is why B is correct.
Conceptually, kube-proxy turns the declarative Service configuration into concrete dataplane behavior. Depending on the mode, it may use iptables rules, IPVS, or integrate with eBPF-capable networking stacks (sometimes kube-proxy is replaced or bypassed by CNI implementations, but the classic kube-proxy role remains the canonical answer). In iptables mode, kube-proxy creates NAT rules that rewrite traffic from the Service virtual IP to one of the Pod endpoints. In IPVS mode, it programs kernel load-balancing tables for more scalable service routing. In all cases, the job is to connect “Service IP/port” to “Pod IP/port endpoints.”
Option A is incorrect because Ingress is a separate API resource and requires an Ingress Controller (like NGINX Ingress, HAProxy, Traefik, etc.) to implement HTTP routing, TLS termination, and host/path rules. kube-proxy is not an Ingress controller. Option C is incorrect because general node egress management is not kube-proxy’s responsibility; egress behavior typically depends on the CNI plugin, NAT configuration, and network policies. Option D is incorrect because API access control is handled by the API server’s authentication/authorization layers (RBAC, webhooks, etc.), not kube-proxy.
So kube-proxy’s essential function is: keep node networking rules in sync so that Service traffic reaches the right Pods. It is one of the key components that makes Services “just work” across nodes without clients needing to know individual Pod IPs.
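The Service-to-endpoints mapping that kube-proxy programs into node networking rules can be inspected directly; `web` is a hypothetical Service name:

```shell
kubectl get service web                                          # the stable ClusterIP clients use
kubectl get endpointslices -l kubernetes.io/service-name=web     # the backend Pod IP:port pairs
```

kube-proxy watches exactly these two kinds of objects and rewrites traffic from the first (the virtual IP) to one of the second (a real Pod endpoint).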
=========
A CronJob is scheduled to run by a user every one hour. What happens in the cluster when it’s time for this CronJob to run?
Kubelet watches API Server for CronJob objects. When it’s time for a Job to run, it runs the Pod directly.
Kube-scheduler watches API Server for CronJob objects, and this is why it’s called kube-scheduler.
CronJob controller component creates a Pod and waits until it finishes to run.
CronJob controller component creates a Job. Then the Job controller creates a Pod and waits until it finishes to run.
CronJobs are implemented through Kubernetes controllers that reconcile desired state. When the scheduled time arrives, the CronJob controller (part of the controller-manager set of control plane controllers) evaluates the CronJob object’s schedule and determines whether a run should be started. Importantly, CronJob does not create Pods directly as its primary mechanism. Instead, it creates a Job object for each scheduled execution. That Job object then becomes the responsibility of the Job controller, which creates one or more Pods to complete the Job’s work and monitors them until completion. This separation of concerns is why option D is correct.
This design has practical benefits. Jobs encapsulate “run-to-completion” semantics: retries, backoff limits, completion counts, and tracking whether the work has succeeded. CronJob focuses on the temporal triggering aspect (schedule, concurrency policy, starting deadlines, history limits), while Job focuses on the execution aspect (create Pods, ensure completion, retry on failure).
Option A is incorrect because kubelet is a node agent; it does not watch CronJob objects and doesn’t decide when a schedule triggers. Kubelet reacts to Pods assigned to its node and ensures containers run there. Option B is incorrect because kube-scheduler schedules Pods to nodes after they exist (or are created by controllers); it does not trigger CronJobs. Option C is incorrect because CronJob does not usually create a Pod and wait directly; it delegates via a Job, which then manages Pods and completion.
So, at runtime: CronJob controller creates a Job; Job controller creates the Pod(s); scheduler assigns those Pods to nodes; kubelet runs them; Job controller observes success/failure and updates status; CronJob controller manages run history and concurrency rules.
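The chain described above starts from a manifest like this sketch of an hourly CronJob; the name, image, and command are illustrative assumptions:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hourly-report
spec:
  schedule: "0 * * * *"        # top of every hour
  concurrencyPolicy: Forbid    # CronJob-level rule: skip a run if the previous one is still going
  jobTemplate:                 # the CronJob controller stamps a Job out of this each hour
    spec:
      backoffLimit: 3          # Job-level retry semantics, not CronJob-level
      template:                # the Job controller creates Pod(s) from this template
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
```

The nesting mirrors the separation of concerns: the outer spec holds the temporal fields (schedule, concurrency), while jobTemplate holds the run-to-completion fields that belong to the Job.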
=========
Which storage operator in Kubernetes can help the system to self-scale, self-heal, etc?
Rook
Kubernetes
Helm
Container Storage Interface (CSI)
Rook is a Kubernetes storage operator that helps manage and automate storage systems in a Kubernetes-native way, so A is correct. The key phrase in the question is “storage operator … self-scale, self-heal.” Operators extend Kubernetes by using controllers to reconcile a desired state. Rook applies that model to storage, commonly by managing storage backends like Ceph (and other systems depending on configuration).
With an operator approach, you declare how you want storage to look (cluster size, pools, replication, placement, failure domains), and the operator works continuously to maintain that state. That includes operational behaviors that feel “self-healing” such as reacting to failed storage Pods, rebalancing, or restoring desired replication counts (the exact behavior depends on the backend and configuration). The important KCNA-level idea is that Rook uses Kubernetes controllers to automate day-2 operations for storage in a way consistent with Kubernetes’ reconciliation loops.
The other options do not match the question: “Kubernetes” is the orchestrator itself, not a storage operator. “Helm” is a package manager for Kubernetes apps—it can install storage software, but it is not an operator that continuously reconciles and self-manages. “CSI” (Container Storage Interface) is an interface specification that enables pluggable storage drivers; CSI drivers provision and attach volumes, but CSI itself is not a “storage operator” with the broader self-managing operator semantics described here.
So, for “storage operator that can help with self-* behaviors,” Rook is the correct choice.
=========
Which of the following is the correct command to run an nginx deployment with 2 replicas?
kubectl run deploy nginx --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --replicas=2
kubectl create nginx deployment --image=nginx --replicas=2
kubectl create deploy nginx --image=nginx --count=2
The correct answer is B: kubectl create deploy nginx --image=nginx --replicas=2. This uses kubectl create deployment (shorthand create deploy) to generate a Deployment resource named nginx with the specified container image. The --replicas=2 flag sets the desired replica count, so Kubernetes will create two Pod replicas (via a ReplicaSet) and keep that number stable.
Option A is incorrect because kubectl run is primarily intended to run a Pod (and in older versions could generate other resources, but it’s not the recommended/consistent way to create a Deployment in modern kubectl usage). Option C is invalid syntax: kubectl subcommand order is incorrect; you don’t say kubectl create nginx deployment. Option D uses a non-existent --count flag for Deployment replicas.
From a Kubernetes fundamentals perspective, this question tests two ideas: (1) Deployments are the standard controller for running stateless workloads with a desired number of replicas, and (2) kubectl create deployment is a common imperative shortcut for generating that resource. After running the command, you can confirm with kubectl get deploy nginx, kubectl get rs, and kubectl get pods -l app=nginx (label may vary depending on kubectl version). You’ll see a ReplicaSet created and two Pods brought up.
In production, teams typically use declarative manifests (kubectl apply -f) or GitOps, but knowing the imperative command is useful for quick labs and validation. The key is that replicas are managed by the controller, not by manually starting containers—Kubernetes reconciles the state continuously.
Therefore, B is the verified correct command.
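The command from option B, followed by the verification steps mentioned above, looks like this (the `app=nginx` label is what current kubectl versions set, but as noted it may vary):

```shell
kubectl create deploy nginx --image=nginx --replicas=2
kubectl get deploy nginx        # READY should reach 2/2
kubectl get rs                  # the ReplicaSet the Deployment manages
kubectl get pods -l app=nginx   # the two replica Pods
```

Deleting one of the Pods is a quick way to observe reconciliation: the ReplicaSet immediately creates a replacement to restore the desired count of 2.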
=========
What are the two essential operations that the kube-scheduler normally performs?
Pod eviction or starting
Resource monitoring and reporting
Filtering and scoring nodes
Starting and terminating containers
The kube-scheduler is a core control plane component in Kubernetes responsible for assigning newly created Pods to appropriate nodes. Its primary responsibility is decision-making, not execution. To make an informed scheduling decision, the kube-scheduler performs two essential operations: filtering and scoring nodes.
The scheduling process begins when a Pod is created without a node assignment. The scheduler first evaluates all available nodes and applies a set of filtering rules. During this phase, nodes that do not meet the Pod’s requirements are eliminated. Filtering criteria include resource availability (CPU and memory requests), node selectors, node affinity rules, taints and tolerations, volume constraints, and other policy-based conditions. Any node that fails one or more of these checks is excluded from consideration.
Once filtering is complete, the scheduler moves on to the scoring phase. In this step, each remaining eligible node is assigned a score based on a collection of scoring plugins. These plugins evaluate factors such as resource utilization balance, affinity preferences, topology spread constraints, and custom scheduling policies. The purpose of scoring is to rank nodes according to how well they satisfy the Pod’s placement preferences. The node with the highest total score is selected as the best candidate.
Option A is incorrect because Pod eviction is handled by other components such as the kubelet and controllers, and starting Pods is the responsibility of the kubelet. Option B is incorrect because resource monitoring and reporting are performed by components like metrics-server, not the scheduler. Option D is also incorrect because starting and terminating containers is entirely handled by the kubelet and the container runtime.
By separating filtering (eligibility) from scoring (preference), the kube-scheduler provides a flexible, extensible, and policy-driven scheduling mechanism. This design allows Kubernetes to support diverse workloads and advanced placement strategies while maintaining predictable scheduling behavior.
Therefore, the correct and verified answer is Option C: Filtering and scoring nodes, as documented in Kubernetes scheduling architecture.
=========
What is the purpose of the kubelet component within a Kubernetes cluster?
A dashboard for Kubernetes clusters that allows management and troubleshooting of applications.
A network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
A component that watches for newly created Pods with no assigned node, and selects a node for them to run on.
An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod.
The kubelet is the primary node agent in Kubernetes. It runs on every worker node (and often on control-plane nodes too if they run workloads) and is responsible for ensuring that containers described by PodSpecs are actually running and healthy on that node. The kubelet continuously watches the Kubernetes API (via the control plane) for Pods that have been scheduled to its node, then it collaborates with the node’s container runtime (through CRI) to pull images, create containers, start them, and manage their lifecycle. It also mounts volumes, configures the Pod’s networking (working with the CNI plugin), and reports Pod and node status back to the API server.
Option D captures the core: “an agent on each node that makes sure containers are running in a Pod.” That includes executing probes (liveness, readiness, startup), restarting containers based on the Pod’s restartPolicy, and enforcing resource constraints in coordination with the runtime and OS.
Why the other options are wrong: A describes the Kubernetes Dashboard (or similar UI tools), not kubelet. B describes kube-proxy, which programs node-level networking rules (iptables/ipvs/eBPF depending on implementation) to implement Service virtual IP behavior. C describes the kube-scheduler, which selects a node for Pods that do not yet have an assigned node.
A useful way to remember kubelet’s role is: scheduler decides where, kubelet makes it happen there. Once the scheduler binds a Pod to a node, kubelet becomes responsible for reconciling “desired state” (PodSpec) with “observed state” (running containers). If a container crashes, kubelet will restart it according to policy; if an image is missing, it will pull it; if a Pod is deleted, it will stop containers and clean up. This node-local reconciliation loop is fundamental to Kubernetes’ self-healing and declarative operation model.
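The probe and restart behavior described above is driven by fields like these; the image, port, paths, and timings in this sketch are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  restartPolicy: Always          # kubelet restarts failed containers per this policy
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
    livenessProbe:               # kubelet restarts the container if this check fails
      httpGet:
        path: /
        port: 80
      periodSeconds: 10
    readinessProbe:              # kubelet reports readiness; gates Service endpoint membership
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
```

Both probes are executed node-locally by the kubelet, never by the control plane, which is why probe traffic works even if the node temporarily loses contact with the API server.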
=========
Which statement about Ingress is correct?
Ingress provides a simple way to track network endpoints within a cluster.
Ingress is a Service type like NodePort and ClusterIP.
Ingress is a construct that allows you to specify how a Pod is allowed to communicate.
Ingress exposes routes from outside the cluster to Services in the cluster.
Ingress is the Kubernetes API resource for defining external HTTP/HTTPS routing into the cluster, so D is correct. An Ingress object specifies rules such as hostnames (e.g., app.example.com), URL paths (e.g., /api), and TLS configuration, mapping those routes to Kubernetes Services. This provides Layer 7 routing capabilities beyond what a basic Service offers.
Ingress is not a Service type (so B is wrong). Service types (ClusterIP, NodePort, LoadBalancer, ExternalName) are part of the Service API and operate at Layer 4. Ingress is a separate API object that depends on an Ingress Controller to actually implement routing. The controller watches Ingress resources and configures a reverse proxy/load balancer (like NGINX, HAProxy, or a cloud load balancer integration) to enforce the desired routing. Without an Ingress Controller, creating an Ingress object alone will not route traffic.
Option A describes endpoint tracking (that’s closer to Endpoints/EndpointSlice). Option C describes NetworkPolicy, which controls allowed network flows between Pods/namespaces. Ingress is about exposing and routing incoming application traffic from outside the cluster to internal Services.
So the verified correct statement is D: Ingress exposes routes from outside the cluster to Services in the cluster.
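A minimal Ingress expressing the host, path, and Service mapping described above might look like this sketch; the host, ingress class, and backend Service name are illustrative, and an Ingress controller must be installed for it to take effect:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx        # selects which Ingress controller implements this object
  rules:
  - host: app.example.com        # Layer 7 hostname match
    http:
      paths:
      - path: /api               # Layer 7 path match
        pathType: Prefix
        backend:
          service:
            name: api            # the in-cluster Service that receives the routed traffic
            port:
              number: 80
```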
=========
What helps an organization to deliver software more securely at a higher velocity?
Kubernetes
apt-get
Docker Images
CI/CD Pipeline
A CI/CD pipeline is a core practice/tooling approach that enables organizations to deliver software faster and more securely, so D is correct. CI (Continuous Integration) automates building and testing code changes frequently, reducing integration risk and catching defects early. CD (Continuous Delivery/Deployment) automates releasing validated builds into environments using consistent, repeatable steps—reducing manual errors and enabling rapid iteration.
Security improves because automation enables standardized checks on every change: static analysis, dependency scanning, container image scanning, policy validation, and signing/verification steps can be integrated into the pipeline. Instead of relying on ad-hoc human processes, security controls become repeatable gates. In Kubernetes environments, pipelines commonly build container images, run tests, publish artifacts to registries, and then deploy via manifests, Helm, or GitOps controllers—keeping deployments consistent and auditable.
Option A (Kubernetes) is a platform that helps run and manage workloads, but by itself it doesn’t guarantee secure high-velocity delivery. It provides primitives (rollouts, declarative config, RBAC), yet the delivery workflow still needs automation. Option B (apt-get) is a package manager for Debian-based systems and is not a delivery pipeline. Option C (Docker Images) are artifacts; they improve portability and repeatability, but they don’t provide the end-to-end automation of building, testing, promoting, and deploying across environments.
In cloud-native application delivery, the pipeline is the “engine” that turns code changes into safe production releases. Combined with Kubernetes’ declarative deployment model (Deployments, rolling updates, health probes), a CI/CD pipeline supports frequent releases with controlled rollouts, fast rollback, and strong auditability. That is exactly what the question is targeting. Therefore, the verified answer is D.
=========
What components are common in a service mesh?
Tracing and log storage
Circuit breaking and Pod scheduling
Data plane and runtime plane
Service proxy and control plane
A service mesh is an architectural pattern that manages service-to-service communication in a microservices environment by inserting a dedicated networking layer. The two most common building blocks you’ll see across service mesh implementations are (1) a data plane of proxies and (2) a control plane that configures and manages those proxies—this aligns best with “service proxy and control plane,” option D.
In practice, the data plane is usually implemented via sidecar proxies (or sometimes node/ambient proxies) that sit “next to” workloads and handle traffic functions such as mTLS encryption, retries, timeouts, load balancing policies, traffic splitting, and telemetry generation. These proxies can capture inbound and outbound traffic without requiring changes to application code, which is one of the defining benefits of a mesh.
The control plane provides the management layer: it distributes policy and configuration to the proxies (routing rules, security policies, identities/certificates), discovers services/endpoints, and often coordinates certificate rotation and workload identity. In Kubernetes environments, meshes typically integrate with the Kubernetes API for service discovery and configuration.
Option C is close in spirit but uses non-standard wording (“runtime plane” is not a typical service mesh term; “control plane” is). Options A and B describe capabilities that may exist in a mesh ecosystem (telemetry, circuit breaking), but they are not the universal “core components” across meshes. Tracing/log storage, for example, is usually handled by external observability backends (e.g., Jaeger, Tempo, Loki) rather than being intrinsic “mesh components.”
So, the most correct and broadly accepted answer is D: service proxy and control plane.
=========
A Kubernetes Pod is returning a CrashLoopBackOff status. What is the most likely reason for this behavior?
There are insufficient resources allocated for the Pod.
The application inside the container crashed after starting.
The container’s image is missing or cannot be pulled.
The Pod is unable to communicate with the Kubernetes API server.
A CrashLoopBackOff status in Kubernetes indicates that a container within a Pod is repeatedly starting, crashing, and being restarted by Kubernetes. This behavior occurs when the container process exits shortly after starting and Kubernetes applies an increasing back-off delay between restart attempts to prevent excessive restarts.
Option B is the correct answer because CrashLoopBackOff most commonly occurs when the application inside the container crashes after it has started. Typical causes include application runtime errors, misconfigured environment variables, missing configuration files, invalid command or entrypoint definitions, failed dependencies, or unhandled exceptions during application startup. Kubernetes itself is functioning as expected by restarting the container according to the Pod’s restart policy.
Option A is incorrect because insufficient resources usually lead to different symptoms. For example, if a container exceeds its memory limit, it may be terminated with an OOMKilled status rather than repeatedly crashing immediately. While resource constraints can indirectly cause crashes, they are not the defining reason for a CrashLoopBackOff state.
Option C is incorrect because an image that cannot be pulled results in statuses such as ImagePullBackOff or ErrImagePull, not CrashLoopBackOff. In those cases, the container never successfully starts.
Option D is incorrect because Pods do not need to communicate directly with the Kubernetes API server for normal application execution. Issues with API server communication affect control plane components or scheduling, not container restart behavior.
From a troubleshooting perspective, Kubernetes documentation recommends inspecting container logs using kubectl logs and reviewing Pod events with kubectl describe pod to identify the root cause of the crash. Fixing the underlying application error typically resolves the CrashLoopBackOff condition.
In summary, CrashLoopBackOff is a protective mechanism that signals a repeatedly failing container process. The most likely and verified cause is that the application inside the container is crashing after startup, making option B the correct answer.
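The two troubleshooting commands mentioned above are typically run like this; the Pod name `web` is a hypothetical example:

```shell
kubectl describe pod web       # restart count, last state, and back-off events
kubectl logs web --previous    # logs from the last crashed container instance
```

The `--previous` flag matters here: because the container keeps restarting, the current instance's logs may be empty, while the previous (crashed) instance's logs usually contain the actual failure.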
What is the goal of load balancing?
Automatically measure request performance across instances of an application.
Automatically distribute requests across different versions of an application.
Automatically distribute instances of an application across the cluster.
Automatically distribute requests across instances of an application.
The core goal of load balancing is to distribute incoming requests across multiple instances of a service so that no single instance becomes overloaded and so that the overall service is more available and responsive. That matches option D, which is the correct answer.
In Kubernetes, load balancing commonly appears through the Service abstraction. A Service selects a set of Pods using labels and provides stable access via a virtual IP (ClusterIP) and DNS name. Traffic sent to the Service is then forwarded to one of the healthy backend Pods. This spreads load across replicas and provides resilience: if one Pod fails, it is removed from endpoints (or becomes NotReady) and traffic shifts to remaining replicas. The actual traffic distribution mechanism depends on the networking implementation (kube-proxy using iptables/IPVS or an eBPF dataplane), but the intent remains consistent: distribute requests across multiple backends.
Option A describes monitoring/observability, not load balancing. Option B describes progressive delivery patterns like canary or A/B routing; that can be implemented with advanced routing layers (Ingress controllers, service meshes), but it’s not the general definition of load balancing. Option C describes scheduling/placement of instances (Pods) across cluster nodes, which is the role of the scheduler and controllers, not load balancing.
In cloud environments, load balancing may also be implemented by external load balancers (cloud LBs) in front of the cluster, then forwarded to NodePorts or ingress endpoints, and again balanced internally to Pods. At each layer, the objective is the same: spread request traffic across multiple service instances to improve performance and availability.
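A minimal sketch of the Service abstraction described above (names and ports are hypothetical) shows how a single stable endpoint fans traffic out across labeled Pods:

```yaml
# Sketch of a ClusterIP Service (hypothetical names) that
# load-balances requests across all Pods labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web            # stable DNS name: web.<namespace>.svc
spec:
  type: ClusterIP      # virtual IP inside the cluster
  selector:
    app: web           # selects the backend Pods by label
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 8080   # containerPort on the backend Pods
```

Any client sending requests to web:80 reaches one of the healthy matching Pods; which one is chosen depends on the dataplane (iptables, IPVS, or eBPF), as noted above.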
=========
Which of the following is a definition of Hybrid Cloud?
A combination of services running in public and private data centers, only including data centers from the same cloud provider.
A cloud native architecture that uses services running in public clouds, excluding data centers in different availability zones.
A cloud native architecture that uses services running in different public and private clouds, including on-premises data centers.
A combination of services running in public and private data centers, excluding serverless functions.
A hybrid cloud architecture combines public cloud and private/on-premises environments, often spanning multiple infrastructure domains while maintaining some level of portability, connectivity, and unified operations. Option C captures the commonly accepted definition: services run across public and private clouds, including on-premises data centers, so C is correct.
Hybrid cloud is not limited to a single cloud provider (which is why A is too restrictive). Many organizations adopt hybrid cloud to meet regulatory requirements, data residency constraints, latency needs, or to preserve existing investments while still using public cloud elasticity. In Kubernetes terms, hybrid strategies often include running clusters both on-prem and in one or more public clouds, then standardizing deployment through Kubernetes APIs, GitOps, and consistent security/observability practices.
Option B is incorrect because excluding data centers in different availability zones is not a defining property; in fact, hybrid deployments commonly use multiple zones/regions for resilience. Option D is a distraction: serverless inclusion or exclusion does not define hybrid cloud. Hybrid is about the combination of infrastructure environments, not a specific compute model.
A practical cloud-native view is that hybrid architectures introduce challenges around identity, networking, policy enforcement, and consistent observability across environments. Kubernetes helps because it provides a consistent control plane API and workload model regardless of where it runs. Tools like service meshes, federated identity, and unified monitoring can further reduce fragmentation.
So, the most accurate definition in the given choices is C: hybrid cloud combines public and private clouds, including on-premises infrastructure, to run services in a coordinated architecture.
=========
Which cloud native tool keeps Kubernetes clusters in sync with sources of configuration (like Git repositories), and automates updates to configuration when there is new code to deploy?
Flux and ArgoCD
GitOps Toolkit
Linkerd and Istio
Helm and Kustomize
Tools that continuously reconcile cluster state to match a Git repository’s desired configuration are GitOps controllers, and the best match here is Flux and ArgoCD, so A is correct. GitOps is the practice where Git is the source of truth for declarative system configuration. A GitOps tool continuously compares the desired state (manifests/Helm/Kustomize outputs stored in Git) with the actual state in the cluster and then applies changes to eliminate drift.
Flux and Argo CD both implement this reconciliation loop. They watch Git repositories, detect updates (new commits/tags), and apply the updated Kubernetes resources. They also surface drift and sync status, enabling auditable, repeatable deployments and easy rollbacks (revert Git). This model improves delivery velocity and security because changes flow through code review, and cluster changes can be restricted to the GitOps controller identity rather than ad-hoc human kubectl access.
Option B (“GitOps Toolkit”) is related—Flux uses a GitOps Toolkit internally—but the question asks for a “tool” that keeps clusters in sync; the recognized tools are Flux and Argo CD in this list. Option C lists service meshes (traffic/security/telemetry), not deployment synchronization tools. Option D lists packaging/templating tools; Helm and Kustomize help build manifests, but they do not, by themselves, continuously reconcile cluster state to a Git source.
In Kubernetes application delivery, GitOps tools become the deployment engine: CI builds artifacts, updates references in Git (image tags/digests), and the GitOps controller deploys those changes. This separation strengthens traceability and reduces configuration drift. Therefore, A is the verified correct answer.
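As an illustration of the reconciliation loop, here is a sketch of an Argo CD Application resource; the repository URL, paths, and names are hypothetical placeholders, not a definitive configuration:

```yaml
# Sketch of an Argo CD Application (hypothetical repo and names)
# that keeps a cluster namespace in sync with a path in Git.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/config-repo.git   # hypothetical repo
    targetRevision: main
    path: apps/web
  destination:
    server: https://kubernetes.default.svc
    namespace: web
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With automated sync enabled, every new commit to apps/web is applied to the cluster, and out-of-band kubectl changes are reverted, which is exactly the drift elimination described above.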
=========
Which of the following actions is supported when working with Pods in Kubernetes?
Managing static Pods directly through the API server.
Guaranteeing Pods always stay on the same node once scheduled.
Renaming containers in a Pod using kubectl patch.
Creating Pods through workload resources like Deployments.
In Kubernetes, Pods are the smallest deployable units and represent one or more containers that share networking and storage. While Pods can be created directly, Kubernetes strongly encourages users to manage Pods indirectly through higher-level workload resources. Among the options provided, creating Pods through workload resources like Deployments is a fully supported and recommended practice.
Workload resources such as Deployments, ReplicaSets, StatefulSets, and Jobs are designed to manage Pods declaratively. A Deployment, for example, defines a desired state—such as the number of replicas and the Pod template—and Kubernetes continuously works to maintain that state. If a Pod crashes, is deleted, or a node fails, the Deployment automatically creates a replacement Pod. This model provides self-healing, scalability, rolling updates, and rollback capabilities, which are not available when managing standalone Pods.
Option A is incorrect because static Pods are not managed through the API server. Static Pods are created and managed directly by the kubelet on a specific node using manifest files placed on disk. Although the API server becomes aware of static Pods, they cannot be created, modified, or deleted through it.
Option B is incorrect because Kubernetes does not guarantee that Pods will always remain on the same node. If a node becomes unhealthy or a Pod is evicted, the scheduler may place a replacement Pod on a different node. Only certain workload patterns, such as StatefulSets with persistent storage, attempt to preserve identity—not node placement.
Option C is also incorrect because container names within a Pod are immutable. Kubernetes does not allow renaming containers using kubectl patch or any other mechanism after the Pod has been created.
Therefore, the correct and verified answer is option D: creating Pods through workload resources like Deployments, which aligns with Kubernetes design principles and official documentation.
What is a Kubernetes service with no cluster IP address called?
Headless Service
Nodeless Service
IPLess Service
Specless Service
A Kubernetes Service normally provides a stable virtual IP (ClusterIP) and a DNS name that load-balances traffic across matching Pods. A headless Service is a special type of Service where Kubernetes does not allocate a ClusterIP. Instead, the Service’s DNS returns individual Pod IPs (or other endpoint records), allowing clients to connect directly to specific backends rather than through a single virtual IP. That is why the correct answer is A (Headless Service).
Headless Services are created by setting spec.clusterIP: None. When you do this, kube-proxy does not program load-balancing rules for a virtual IP because there isn’t one. Instead, service discovery is handled via DNS records that point to the actual endpoints. This behavior is especially important for stateful or identity-sensitive systems where clients must talk to a particular replica (for example, databases, leader/follower clusters, or StatefulSet members).
This is also why headless Services pair naturally with StatefulSets. StatefulSets provide stable network identities (pod-0, pod-1, etc.) and stable DNS names. The headless Service provides the DNS domain that resolves each Pod’s stable hostname to its IP, enabling peer discovery and consistent addressing even as Pods move between nodes.
The other options are distractors: “Nodeless,” “IPLess,” and “Specless” are not Kubernetes Service types. In the core API, the Service “types” are things like ClusterIP, NodePort, LoadBalancer, and ExternalName; “headless” is a behavioral mode achieved through the ClusterIP field.
In short: a headless Service removes the virtual IP abstraction and exposes endpoint-level discovery. It’s a deliberate design choice when load-balancing is not desired or when the application itself handles routing, membership, or sharding.
=========
In which framework do the developers no longer have to deal with capacity, deployments, scaling and fault tolerance, and OS?
Docker Swarm
Kubernetes
Mesos
Serverless
Serverless is the model where developers most directly avoid managing server capacity, OS operations, and much of the deployment/scaling/fault-tolerance mechanics, which is why D is correct. In serverless computing (commonly Function-as-a-Service, FaaS, and managed serverless container platforms), the provider abstracts away the underlying servers. You typically deploy code (functions) or a container image, define triggers (HTTP events, queues, schedules), and the platform automatically provisions the required compute, scales it based on demand, and handles much of the availability and fault tolerance behind the scenes.
It’s important to compare this to Kubernetes: Kubernetes does automate scheduling, self-healing, rolling updates, and scaling, but it still requires you (or your platform team) to design and operate cluster capacity, node pools, upgrades, runtime configuration, networking, and baseline reliability controls. Even in managed Kubernetes services, you still choose node sizes, scale policies, and operational configuration. Kubernetes reduces toil, but it does not eliminate infrastructure concerns in the same way serverless does.
Docker Swarm and Mesos are orchestration platforms that schedule workloads, but they still require teams to manage the underlying capacity and OS-level concerns, so neither frees developers from those responsibilities the way serverless does.
From a cloud native viewpoint, serverless is about consuming compute as an on-demand utility. Kubernetes can be a foundation for a serverless experience (for example, with event-driven autoscaling or serverless frameworks), but the pure framework that removes the most operational burden from developers is serverless.
If kubectl is failing to retrieve information from the cluster, where can you find Pod logs to troubleshoot?
/var/log/pods/
~/.kube/config
/var/log/k8s/
/etc/kubernetes/
The correct answer is A: /var/log/pods/. When kubectl logs can’t retrieve logs (for example, API connectivity issues, auth problems, or kubelet/API proxy issues), you can often troubleshoot directly on the node where the Pod ran. Kubernetes nodes typically store container logs on disk, and a common location is under /var/log/pods/, organized by namespace, Pod name/UID, and container. This directory contains symlinks or files that map to the underlying container runtime log location (often under /var/log/containers/ as well, depending on distro/runtime setup).
Option B (~/.kube/config) is your local kubeconfig file; it contains cluster endpoints and credentials, not Pod logs. Option D (/etc/kubernetes/) contains Kubernetes component configuration/manifests on some installations (especially control plane), not application logs. Option C (/var/log/k8s/) is not a standard Kubernetes log path.
Operationally, the node-level log locations depend on the container runtime and logging configuration, but the Kubernetes convention is that kubelet writes container logs to a known location and exposes them through the API so kubectl logs works. If the API path is broken, node access becomes your fallback. This is also why secure node access is sensitive: anyone with node root access can potentially read logs (and other data), which is part of the threat model.
So, the best answer for where to look on the node for Pod logs when kubectl can’t retrieve them is /var/log/pods/, option A.
=========
In Kubernetes, what is the primary responsibility of the kubelet running on each worker node?
To allocate persistent storage volumes and manage distributed data replication for Pods.
To manage cluster state information and handle all scheduling decisions for workloads.
To ensure that containers defined in Pod specifications are running and remain healthy on the node.
To provide internal DNS resolution and route service traffic between Pods and nodes.
The kubelet is the primary node-level agent in Kubernetes and plays a critical role in ensuring that workloads run correctly on each worker node. Its main responsibility is to ensure that the containers described in Pod specifications are running and remain healthy on that node, which makes option C the correct answer.
Once the Kubernetes scheduler assigns a Pod to a node, the kubelet on that node takes over execution responsibilities. It watches the API server for Pod specifications that are scheduled to its node and then interacts with the container runtime to start, stop, and manage the containers defined in those Pods. The kubelet continuously monitors container health and reports Pod and node status back to the API server, enabling Kubernetes to make informed decisions about restarts, rescheduling, or remediation.
Health checks are another key responsibility of the kubelet. It executes liveness, readiness, and startup probes as defined in the Pod specification. Based on probe results, the kubelet may restart containers or update Pod status to reflect whether the application is ready to receive traffic. This behavior directly supports Kubernetes’ self-healing capabilities.
Option A is incorrect because persistent storage allocation and data replication are handled by storage systems, CSI drivers, and controllers—not by the kubelet itself. Option B is incorrect because cluster state management and scheduling decisions are the responsibility of control plane components such as the API server, controller manager, and kube-scheduler. Option D is incorrect because DNS resolution and service traffic routing are handled by components like CoreDNS and kube-proxy.
In summary, the kubelet acts as the “node supervisor” for Kubernetes workloads. By ensuring containers are running as specified and continuously reporting their status, the kubelet forms the essential link between the Kubernetes control plane and the actual execution of applications on worker nodes. This clearly aligns with Option C as the correct and verified answer.
During a team meeting, a developer mentions the significance of open collaboration in the cloud native ecosystem. Which statement accurately reflects principles of collaborative development and community stewardship?
Open source projects succeed when contributors focus on code quality without the overhead of community engagement.
Maintainers of open source projects act independently to make technical decisions without requiring input from contributors.
Community stewardship emphasizes guiding project growth but does not necessarily include sustainability considerations.
Community events and working groups foster collaboration by bringing people together to share knowledge and build connections.
Open collaboration and community stewardship are foundational principles of the cloud native ecosystem, particularly within projects governed by organizations such as the Cloud Native Computing Foundation (CNCF). These principles emphasize that successful open source projects are not driven solely by code quality, but by healthy, inclusive, and sustainable communities.
Option D accurately reflects these principles. Community events, special interest groups, and working groups play a vital role in fostering collaboration. They provide structured and informal spaces where contributors, maintainers, and users can exchange ideas, share operational experiences, mentor new participants, and collectively guide the direction of projects. This collaborative approach helps ensure that projects evolve in ways that meet real-world needs and benefit from diverse perspectives.
Option A is incorrect because community engagement is not an “overhead” but a critical success factor. Kubernetes and other cloud native projects explicitly recognize that documentation, communication, governance, and contributor onboarding are just as important as writing high-quality code. Without active community participation, projects often struggle with adoption, contributor burnout, and long-term viability.
Option B is incorrect because modern open source governance values transparency and shared decision-making. While maintainers have responsibilities such as reviewing changes and ensuring project stability, they are expected to solicit feedback, encourage discussion, and incorporate contributor input through open processes. This approach builds trust and accountability within the community.
Option C is also incorrect because sustainability is a core aspect of community stewardship. Stewardship includes ensuring that projects can be maintained over time, preventing maintainer burnout, encouraging new contributors, and establishing governance models that support long-term health.
According to cloud native and Kubernetes documentation, strong communities enable innovation, resilience, and scalability—both technically and socially. By bringing people together through events and working groups, community stewardship reinforces collaboration and shared ownership, making option D the correct and fully verified answer.
What does SBOM stand for?
System Bill of Materials
Software Bill Operations Management
Security Baseline for Open Source Management
Software Bill of Materials
SBOM stands for Software Bill of Materials, a critical concept in modern cloud native application delivery and software supply chain security. An SBOM is a formal, structured inventory that lists all components included in a software artifact, such as libraries, frameworks, dependencies, and their versions. This includes both direct and transitive dependencies that are bundled into applications, containers, or container images.
In cloud native environments, applications are often built using numerous open source components and third-party libraries. While this accelerates development, it also increases the risk of hidden vulnerabilities. An SBOM provides transparency into what software is actually running in production, enabling organizations to quickly identify whether they are affected by newly disclosed vulnerabilities or license compliance issues.
Option A is incorrect because SBOM is specific to software, not systems or hardware materials. Option B is incorrect because it describes a management process rather than a standardized inventory of software components. Option C is incorrect because SBOM is not a security baseline or policy framework; instead, it is a factual record of software contents that supports security and compliance efforts.
SBOMs are especially important in containerized and Kubernetes-based workflows. Container images often bundle many dependencies into a single artifact, making it difficult to assess risk without a detailed inventory. By generating and distributing SBOMs alongside container images, teams can integrate vulnerability scanning, compliance checks, and risk assessment earlier in the delivery pipeline. This practice aligns with the principles of DevSecOps and shift-left security.
Kubernetes and cloud native security guidance emphasize SBOMs as a foundational element of software supply chain security. They support faster incident response, improved trust between software producers and consumers, and stronger governance across the lifecycle of applications. As a result, Software Bill of Materials is the correct and fully verified expansion of SBOM, making option D the accurate answer.
Which resource do you use to attach a volume in a Pod?
StorageVolume
PersistentVolume
StorageClass
PersistentVolumeClaim
In Kubernetes, Pods typically attach persistent storage by referencing a PersistentVolumeClaim (PVC), making D correct. A PVC is a user’s request for storage with specific requirements (size, access mode, storage class). Kubernetes then binds the PVC to a matching PersistentVolume (PV) (either pre-provisioned statically or created dynamically via a StorageClass and CSI provisioner). The Pod does not directly attach a PV; it references the PVC, and Kubernetes handles the binding and mounting.
This design separates responsibilities: administrators (or CSI drivers) manage PV provisioning and backend storage details, while developers consume storage via PVCs. In a Pod spec, you define a volume of type persistentVolumeClaim and set its claimName field.
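A sketch of this pattern (hypothetical names and sizes) — a claim requesting storage, and a Pod that mounts it by claimName:

```yaml
# Sketch (hypothetical names): a PVC requesting storage, and a
# Pod that mounts it via claimName. Kubernetes binds the claim
# to a matching PersistentVolume behind the scenes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-claim   # the Pod references the claim, never the PV
```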
Option B (PersistentVolume) is not directly referenced by Pods; PVs are cluster resources that represent actual storage. Pods don’t “pick” PVs; claims do. Option C (StorageClass) defines provisioning parameters (e.g., disk type, replication, binding mode) but is not what a Pod references to mount a volume. Option A is not a Kubernetes resource type.
Operationally, using PVCs enables dynamic provisioning and portability: the same Pod spec can be deployed across clusters where the StorageClass name maps to appropriate backend storage. It also supports lifecycle controls like reclaim policies (Delete/Retain) and snapshot/restore workflows depending on CSI capabilities.
So the Kubernetes resource you use in a Pod to attach a persistent volume is PersistentVolumeClaim, option D.
=========
Which are the core features provided by a service mesh?
Authentication and authorization
Distributing and replicating data
Security vulnerability scanning
Configuration management
A is the correct answer because a service mesh primarily focuses on securing and managing service-to-service communication, and a core part of that is authentication and authorization. In microservices architectures, internal (“east-west”) traffic can become a complex web of calls. A service mesh introduces a dedicated communication layer—commonly implemented with sidecar proxies or node proxies plus a control plane—to apply consistent security and traffic policies across services.
Authentication in a mesh typically means service identity: each workload gets an identity (often via certificates), enabling mutual TLS (mTLS) so services can verify each other and encrypt traffic in transit. Authorization then builds on identity to enforce “who can talk to whom” via policies (for example: service A can call service B only on certain paths or methods). These capabilities are central because they reduce the need for every development team to implement and maintain custom security libraries correctly.
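As one concrete example, using Istio as the mesh implementation (resource names, namespaces, and identities here are hypothetical), the two capabilities map to two policy resources:

```yaml
# Sketch using Istio as an example mesh (names hypothetical):
# require mTLS for all workloads in a namespace, then allow
# only service-a's identity to call service-b with GET requests.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT          # authentication: verified workload identities
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-service-a
  namespace: prod
spec:
  selector:
    matchLabels:
      app: service-b
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/service-a"]
    to:
    - operation:
        methods: ["GET"]  # authorization: who may call what
```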
Why the other answers are incorrect:
B (data distribution/replication) is a storage/database concern, not a mesh function.
C (vulnerability scanning) is typically part of CI/CD and supply-chain security tooling, not service-to-service runtime traffic management.
D (configuration management) is broader (GitOps, IaC, Helm/Kustomize); a mesh does have configuration, but “configuration management” is not the defining core feature tested here.
Service meshes also commonly provide traffic management (timeouts, retries, circuit breaking, canary routing) and telemetry (metrics/traces), but among the listed options, authentication and authorization best matches “core features.” It captures the mesh’s role in standardizing secure communications in a distributed system.
So, the verified correct answer is A.
=========
What do Deployments and StatefulSets have in common?
They manage Pods that are based on an identical container spec.
They support the OnDelete update strategy.
They support an ordered, graceful deployment and scaling.
They maintain a sticky identity for each of their Pods.
Both Deployments and StatefulSets are Kubernetes workload controllers that manage a set of Pods created from a Pod template, meaning they manage Pods based on an identical container specification (a shared Pod template). That is why A is correct. In both cases, you declare a desired state (replicas, container images, environment variables, volumes, probes, etc.) in spec.template, and the controller ensures the cluster converges toward that state by creating, updating, or replacing Pods.
The differences are what make the other options incorrect. OnDelete update strategy is associated with StatefulSets (it’s one of their update strategies), but it is not a shared, defining behavior across both controllers, so B is not “in common.” Ordered, graceful deployment and scaling is a hallmark of StatefulSets (ordered pod creation/termination and stable identities) rather than Deployments, so C is not shared. Sticky identity per Pod (stable network identity and stable storage identity per replica, commonly via StatefulSet + headless Service) is specifically a StatefulSet characteristic, not a Deployment feature, so D is not common.
A useful way to think about it is: both controllers manage replicas of a Pod template, but they differ in semantics. Deployments are designed primarily for stateless workloads and typically focus on rolling updates and scalable replicas where any instance is interchangeable. StatefulSets are designed for stateful workloads and add identity and ordering guarantees: each replica gets a stable name (like db-0, db-1) and often stable PersistentVolumeClaims.
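The shared pattern is easy to see side by side: a StatefulSet uses the same spec.template structure as a Deployment, plus identity-related fields. A sketch (hypothetical names and image):

```yaml
# Sketch (hypothetical names): a StatefulSet uses the same
# spec.template pattern as a Deployment; the extra serviceName
# field ties it to a headless Service for stable identities.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db        # headless Service providing stable DNS names
  replicas: 2            # Pods get sticky names: db-0, db-1
  selector:
    matchLabels:
      app: db
  template:              # the shared Pod-template pattern
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: registry.example.com/db:1.0   # hypothetical image
```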
So the shared commonality the question is testing is the basic workload-controller pattern: both controllers manage Pods created from a common template (identical container spec). Therefore, A is the verified answer.
=========
What is a sidecar container?
A Pod that runs next to another container within the same Pod.
A container that runs next to another Pod within the same namespace.
A container that runs next to another container within the same Pod.
A Pod that runs next to another Pod within the same namespace.
A sidecar container is an additional container that runs alongside the main application container within the same Pod, sharing network and storage context. That matches option C, so C is correct. The sidecar pattern is used to add supporting capabilities to an application without modifying the application code. Because both containers are in the same Pod, the sidecar can communicate with the main container over localhost and share volumes for files, sockets, or logs.
Common sidecar examples include: log forwarders that tail application logs and ship them to a logging system, proxies (service mesh sidecars like Envoy) that handle mTLS and routing policy, config reloaders that watch ConfigMaps and signal the main process, and local caching agents. Sidecars are especially powerful in cloud-native systems because they standardize cross-cutting concerns—security, observability, traffic policy—across many workloads.
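The log-forwarder example above can be sketched as a single Pod with two containers sharing an emptyDir volume (all names are hypothetical):

```yaml
# Sketch of the sidecar pattern (hypothetical names): two
# containers in one Pod share a volume; the sidecar tails the
# log file that the main container writes.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}          # shared scratch volume for the Pod
  containers:
  - name: app             # main application container
    image: busybox:1.36
    command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  - name: log-forwarder   # sidecar: reads what the app writes
    image: busybox:1.36
    command: ["sh", "-c", "tail -F /logs/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
```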
Options A and D incorrectly describe “a Pod running next to …” which is not how sidecars work; sidecars are containers, not separate Pods. Running separate Pods “next to” each other in a namespace does not give the same shared network namespace and tightly coupled lifecycle. Option B is also incorrect for the same reason: a sidecar is not a separate Pod; it is a container in the same Pod.
Operationally, sidecars share the Pod lifecycle: they are scheduled together, scaled together, and generally terminated together. This is both a benefit (co-location guarantees) and a responsibility (resource requests/limits should include the sidecar’s needs, and failure modes should be understood). Kubernetes is increasingly formalizing sidecar behavior (e.g., sidecar containers with ordered startup semantics), but the core definition remains: a helper container in the same Pod.
=========
Manual reclamation policy of a PV resource is known as:
claimRef
Delete
Retain
Recycle
The correct answer is C: Retain. In Kubernetes persistent storage, a PersistentVolume (PV) has a persistentVolumeReclaimPolicy that determines what happens to the underlying storage asset after its PersistentVolumeClaim (PVC) is deleted. The reclaim policy options historically include Delete and Retain (and Recycle, which is deprecated/removed in many modern contexts). “Manual reclamation” refers to the administrator having to manually clean up and/or rebind the storage after the claim is released—this behavior corresponds to Retain.
With Retain, when the PVC is deleted, the PV moves to a “Released” state, but the actual storage resource (cloud disk, NFS path, etc.) is not deleted automatically. Kubernetes will not automatically make that PV available for a new claim until an administrator takes action—typically cleaning the data, removing the old claim reference, and/or creating a new PV/PVC binding flow. This is important for data safety: you don’t want to automatically delete sensitive or valuable data just because a claim was removed.
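A sketch of a statically provisioned PV with this policy (the NFS server and path are hypothetical):

```yaml
# Sketch of a statically provisioned PV (hypothetical NFS
# details): Retain means the storage asset survives claim
# deletion and an administrator must reclaim it manually.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-data
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain   # manual reclamation
  nfs:
    server: nfs.example.com   # hypothetical storage backend
    path: /exports/data
```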
By contrast, Delete means Kubernetes (via the storage provisioner/CSI driver) will delete the underlying storage asset when the claim is deleted—useful for dynamic provisioning and disposable environments. Recycle used to scrub the volume contents and make it available again, but it’s not the recommended modern approach and has been phased out in favor of dynamic provisioning and explicit workflows.
So, the policy that implies manual intervention and manual cleanup/reuse is Retain, which is option C.
=========
Why is Cloud-Native Architecture important?
Cloud Native Architecture revolves around containers, microservices and pipelines.
Cloud Native Architecture removes constraints to rapid innovation.
Cloud Native Architecture is modern for application deployment and pipelines.
Cloud Native Architecture is a bleeding edge technology and service.
Cloud-native architecture is important because it enables organizations to build and run software in a way that supports rapid innovation while maintaining reliability, scalability, and efficient operations. Option B best captures this: cloud native removes constraints to rapid innovation, so B is correct.
In traditional environments, innovation is slowed by heavyweight release processes, tightly coupled systems, manual operations, and limited elasticity. Cloud-native approaches—containers, declarative APIs, automation, and microservices-friendly patterns—reduce those constraints. Kubernetes exemplifies this by offering a consistent deployment model, self-healing, automated rollouts, scaling primitives, and a large ecosystem of delivery and observability tools. This makes it easier to ship changes more frequently and safely: teams can iterate quickly, roll back confidently, and standardize operations across environments.
Option A is partly descriptive (containers/microservices/pipelines are common in cloud native), but it doesn’t explain why it matters; it lists ingredients rather than the benefit. Option C is vague (“modern”) and again doesn’t capture the core value proposition. Option D is incorrect because cloud native is not primarily about being “bleeding edge”—it’s about proven practices that improve time-to-market and operational stability.
A good way to interpret “removes constraints” is: cloud native shifts the bottleneck away from infrastructure friction. With automation (IaC/GitOps), standardized runtime packaging (containers), and platform capabilities (Kubernetes controllers), teams spend less time on repetitive manual work and more time delivering features. Combined with observability and policy automation, this results in faster delivery with better reliability—exactly the reason cloud-native architecture is emphasized across the Kubernetes ecosystem.
=========
What is the role of the ingressClassName field in a Kubernetes Ingress resource?
It defines the type of protocol (HTTP or HTTPS) that the Ingress Controller should process.
It specifies the backend Service used by the Ingress Controller to route external requests.
It determines how routing rules are prioritized when multiple Ingress objects are applied.
It indicates which Ingress Controller should implement the rules defined in the Ingress resource.
The ingressClassName field in a Kubernetes Ingress resource is used to explicitly specify which Ingress Controller is responsible for processing and enforcing the rules defined in that Ingress. This makes option D the correct answer.
In Kubernetes clusters, it is common to have multiple Ingress Controllers running at the same time. For example, a cluster might run an NGINX Ingress Controller, a cloud-provider-specific controller, and an internal-only controller simultaneously. Without a clear mechanism to select which controller should handle a given Ingress resource, multiple controllers could attempt to process the same rules, leading to conflicts or undefined behavior.
The ingressClassName field solves this problem by referencing an IngressClass object. The IngressClass defines the controller implementation (via the controller field), and the Ingress resource uses ingressClassName to declare which class—and therefore which controller—should act on it. This creates a clean and explicit binding between an Ingress and its controller.
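A hedged sketch of this binding is shown below; the class name, controller string, hostname, and Service name are placeholders, though the controller value shown is the one commonly used by ingress-nginx:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx   # identifies the controller implementation
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web                          # illustrative name
spec:
  ingressClassName: nginx            # binds this Ingress to the IngressClass above
  rules:
    - host: app.example.com          # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web            # placeholder backend Service
                port:
                  number: 80
```

Only the controller registered for the `nginx` class acts on this Ingress; other controllers in the cluster ignore it.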
Option A is incorrect because protocol handling (HTTP vs HTTPS) is defined through TLS configuration and service ports, not by ingressClassName. Option B is incorrect because backend Services are defined in the rules and backend sections of the Ingress specification. Option C is incorrect because routing priority is determined by path matching rules and controller-specific logic, not by ingressClassName.
Historically, annotations were used to select Ingress Controllers, but ingressClassName is now the recommended and standardized approach. It improves clarity, portability, and compatibility across different Kubernetes distributions and controllers.
In summary, the primary purpose of ingressClassName is to indicate which Ingress Controller should implement the routing rules for a given Ingress resource, making Option D the correct and verified answer.
=========
Which Kubernetes component is the smallest deployable unit of computing?
StatefulSet
Deployment
Pod
Container
In Kubernetes, the Pod is the smallest deployable and schedulable unit, making C correct. Kubernetes does not schedule individual containers directly; instead, it schedules Pods, each of which encapsulates one or more containers that must run together on the same node. This design supports both single-container Pods (the most common) and multi-container Pods (for sidecars, adapters, and co-located helper processes).
Pods provide shared context: containers in a Pod share the same network namespace (one IP address and port space) and can share storage volumes. This enables tight coupling where needed—for example, a service mesh proxy sidecar and the application container communicate via localhost, or a log-forwarding sidecar reads logs from a shared volume. Kubernetes manages lifecycle at the Pod level: kubelet ensures the containers defined in the PodSpec are running and uses probes to determine readiness and liveness.
StatefulSet and Deployment are controllers that manage sets of Pods. A Deployment manages ReplicaSets for stateless workloads and provides rollout/rollback features; a StatefulSet provides stable identities, ordered operations, and stable storage for stateful replicas. These are higher-level constructs, not the smallest units.
Option D (“Container”) is smaller in an abstract sense, but it is not the smallest Kubernetes deployable unit because Kubernetes APIs and scheduling work at the Pod boundary. You don’t “kubectl apply” a container; you apply a Pod template within a Pod object (often via controllers).
Understanding Pods as the atomic unit is crucial: Services select Pods, autoscalers scale Pods (replica counts), and scheduling decisions are made per Pod. That’s why Kubernetes documentation consistently refers to Pods as the fundamental building block for running workloads.
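For illustration, the smallest thing you can `kubectl apply` is a Pod manifest like this minimal single-container example (the name, labels, and image tag are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web             # illustrative name
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.27   # placeholder image tag
      ports:
        - containerPort: 80
```

Note that the container appears only as an entry inside the Pod's `spec.containers` list; there is no standalone Container object in the Kubernetes API.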
=========
Which Kubernetes Service type exposes a service only within the cluster?
ClusterIP
NodePort
LoadBalancer
ExternalName
In Kubernetes, a Service provides a stable network endpoint for a set of Pods and abstracts away their dynamic nature. Kubernetes offers several Service types, each designed for different exposure requirements. Among these, ClusterIP is the Service type that exposes an application only within the cluster, making it the correct answer.
When a Service is created with the ClusterIP type, Kubernetes assigns it a virtual IP address that is reachable exclusively from within the cluster’s network. This IP is used by other Pods and internal components to communicate with the Service through cluster DNS or environment variables. External traffic from outside the cluster cannot directly access a ClusterIP Service, which makes it ideal for internal APIs, backend services, and microservices that should not be publicly exposed.
Option B (NodePort) is incorrect because NodePort exposes the Service on a static port on each node’s IP address, allowing access from outside the cluster. Option C (LoadBalancer) is incorrect because it provisions an external load balancer—typically through a cloud provider—to expose the Service publicly. Option D (ExternalName) is incorrect because it does not create a proxy or internal endpoint at all; instead, it maps the Service name to an external DNS name outside the cluster.
ClusterIP is also the default Service type in Kubernetes. If no type is explicitly specified in a Service manifest, Kubernetes automatically assigns it as ClusterIP. This default behavior reflects the principle of least exposure, encouraging internal-only access unless external access is explicitly required.
From a cloud native architecture perspective, ClusterIP Services are fundamental to building secure, scalable microservices systems. They enable internal service-to-service communication while reducing the attack surface by preventing unintended external access.
According to Kubernetes documentation, ClusterIP Services are intended for internal communication within the cluster and are not reachable from outside the cluster network. Therefore, ClusterIP is the correct and fully verified answer, making option A the right choice.
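A minimal sketch of an internal-only Service follows; the Service name, selector label, and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # illustrative name
spec:
  type: ClusterIP        # may be omitted, since ClusterIP is the default type
  selector:
    app: backend         # routes traffic to Pods labeled app=backend
  ports:
    - port: 8080         # port exposed on the cluster-internal virtual IP
      targetPort: 8080   # container port on the selected Pods
```

Other Pods in the cluster can then reach this Service via cluster DNS (for example `backend.<namespace>.svc.cluster.local`), while nothing outside the cluster network can.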
3 Months Free Update
TESTED 24 Feb 2026