Which of the following challenges to embedded system security can be addressed through ongoing, remote maintenance?
Processors being overwhelmed by the demands of security processing
Deploying updated firmware as vulnerabilities are discovered and addressed
Resource constraints due to limitations on battery, memory, and other physical components
Physical security attacks that take advantage of vulnerabilities in the hardware
Ongoing, remote maintenance is one of the most effective ways to improve the security posture of embedded systems over time because it enables timely remediation of newly discovered weaknesses. Embedded devices frequently run firmware that includes operating logic, network stacks, and third-party libraries. As vulnerabilities are discovered in these components, organizations must be able to deploy fixes quickly to reduce exposure. Remote maintenance supports this by enabling over-the-air firmware and software updates, configuration changes, certificate and key rotation, and the rollout of compensating controls such as updated security policies or hardened settings.
Option B is correct because remote maintenance directly addresses the challenge of deploying updated firmware as issues are identified. Cybersecurity guidance for embedded and IoT environments emphasizes secure update mechanisms: authenticated update packages, integrity verification (such as digital signatures), secure distribution channels, rollback protection, staged deployment, and audit logging of update actions. These practices reduce the risk of attackers installing malicious firmware and help ensure devices remain supported throughout their operational life.
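As a minimal sketch of the integrity-check piece of a secure update mechanism, the snippet below recomputes a SHA-256 digest of a downloaded package and compares it to a vendor-published value before staging it. The file name and published digest are hypothetical, and a production OTA pipeline would also verify a digital signature over the package.

```python
import hashlib

def verify_update(package_path: str, expected_sha256: str) -> bool:
    """Recompute the package digest and compare it to the published value."""
    digest = hashlib.sha256()
    with open(package_path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# Hypothetical usage: stage the firmware only if the digest matches.
# if verify_update("fw-2.4.1.bin", published_digest):
#     stage_for_install("fw-2.4.1.bin")
```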
The other options are not primarily solved by remote maintenance. Limited CPU and memory are inherent design constraints that may require hardware redesign. Battery and component limitations are also physical constraints. Physical security attacks exploit device access and hardware weaknesses, which require tamper resistance, secure boot, and physical protections rather than remote maintenance alone.
Which of the following would qualify as a multi-factor authentication pair?
Thumbprint and Encryption
Something You Know and Something You Are
Password and Token
Encryption and Password
Multi-factor authentication requires a user to prove identity using two or more different factor types. Cybersecurity standards describe the main factor categories as something you know (for example, a password or PIN), something you have (for example, a hardware token, smart card, or authenticator app producing a one-time code), and something you are (biometrics such as fingerprint, face, or iris). A valid MFA pair must come from different categories, not just two items from the same category or a mix of authentication with non-authentication concepts.
Option B is correct because it explicitly combines two distinct factor types: a knowledge factor and an inherence factor. This pairing is widely recognized as MFA because compromising one factor does not automatically compromise the other: an attacker who steals a password still needs the biometric, and spoofing a biometric does not provide the secret knowledge factor.
Option A is incorrect because “encryption” is not an authentication factor; it is a protection mechanism for confidentiality and integrity of data. Option D has the same problem: encryption is not a user factor. Option C can represent MFA in many real implementations if “token” is truly a possession factor; however, training materials and exam items often prefer the clearest, unambiguous factor-language pairing, which is why “Something You Know and Something You Are” is the best single answer here.
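To make the category rule concrete, here is a minimal sketch assuming an illustrative mapping from presented credentials to factor categories (the names are hypothetical): a pair counts as MFA only when its items span two or more distinct categories.

```python
# Illustrative mapping only; real systems derive categories from the authenticator type.
FACTOR_CATEGORY = {
    "password": "something_you_know",
    "pin": "something_you_know",
    "hardware_token_otp": "something_you_have",
    "fingerprint": "something_you_are",
}

def is_multi_factor(presented: list[str]) -> bool:
    """True only if the presented credentials span two or more distinct factor categories."""
    categories = {FACTOR_CATEGORY[c] for c in presented if c in FACTOR_CATEGORY}
    return len(categories) >= 2

print(is_multi_factor(["password", "fingerprint"]))  # True  (know + are)
print(is_multi_factor(["password", "pin"]))          # False (same category)
```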
Compliance with regulations is generally demonstrated through:
independent audits of systems and security procedures.
review of security requirements by senior executives and/or the Board.
extensive QA testing prior to system implementation.
penetration testing by ethical hackers.
Regulatory compliance is generally demonstrated through independent audits because regulators, customers, and partners typically require objective evidence that required controls exist and operate effectively. An independent audit is performed by a qualified party that is not responsible for running the controls being assessed, which strengthens credibility and reduces conflicts of interest. Cybersecurity and governance documents describe audits as a formal method to verify compliance against defined criteria such as laws, regulations, contractual obligations, or control frameworks. Auditors review policies and procedures, inspect system configurations, sample access and change records, evaluate logging and monitoring, test incident response evidence, and validate that controls are consistently performed over time. The outcome is usually a report, attestation, or findings with remediation plans—artifacts commonly used to prove compliance.
A Board or executive review supports governance and oversight, but it does not, by itself, provide independent verification that controls are functioning. QA testing focuses on product quality and functional correctness; it may include security testing but does not typically satisfy regulatory evidence requirements for ongoing operational controls. Penetration testing is valuable for identifying exploitable weaknesses, yet it is a point-in-time technical exercise and does not comprehensively demonstrate compliance with procedural, administrative, and operational requirements such as access governance, retention, training, vendor oversight, and continuous monitoring. Therefore, independent audits are the standard mechanism to demonstrate compliance in a defensible, repeatable way.
The hash function supports data in transit by ensuring:
validation that a message originated from a particular user.
a message was modified in transit.
a public key is transitioned into a private key.
encrypted messages are not shared with another party.
A cryptographic hash function supports data in transit primarily by providing integrity assurance. When a sender computes a hash (digest) of a message and the receiver recomputes the hash after receipt, the two digests should match if the message arrived unchanged. If the message is altered in any way while traveling across the network—whether by an attacker, a faulty intermediary device, or transmission errors—the recomputed digest will differ from the original. This difference is the key signal that the message was modified in transit, which is what option B expresses. In practical secure-transport designs, hashes are typically combined with a secret key or digital signature so an attacker cannot simply modify the message and generate a new valid digest. Examples include HMAC for message authentication and digital signatures that hash the content and then sign the hash with a private key. These mechanisms provide integrity and, when keyed or signed, also provide authentication and non-repudiation properties.
Option A is more specifically about authentication of origin, which requires a keyed construction such as HMAC or a signature scheme; a plain hash alone cannot prove who sent the message. Option C is incorrect because keys are not “converted” from public to private. Option D relates to confidentiality, which is provided by encryption, not hashing. Therefore, the best answer is B because hashing enables detection of message modification during transit.
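As an illustration of keyed integrity checking, the sketch below uses Python's standard hmac and hashlib modules to tag a message with HMAC-SHA-256 and detect modification on receipt; the shared key and message contents are placeholders.

```python
import hashlib
import hmac

SHARED_KEY = b"example-shared-secret"  # illustrative key, not a real secret

def tag(message: bytes) -> bytes:
    """Sender computes a keyed digest (HMAC-SHA-256) over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, received_tag: bytes) -> bool:
    """Receiver recomputes the digest; any modification in transit changes it."""
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer $100 to account 42"
t = tag(msg)
print(verify(msg, t))                               # True  - message unchanged
print(verify(b"transfer $900 to account 42", t))    # False - tampering detected
```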
ITIL (Information Technology Infrastructure Library) defines:
a standard of best practices for IT Service Management.
how technology and hardware systems interface securely with one another.
the standard set of components used in every business technology system.
a set of security requirements that every business technology system must meet.
ITIL is a widely adopted framework that defines best-practice guidance for IT Service Management. Its focus is on how organizations design, deliver, operate, and continually improve IT services so they reliably support business outcomes. In cybersecurity and service delivery documentation, ITIL is often referenced because strong service management processes are foundational to secure operations. For example, ITIL practices such as incident management, problem management, change enablement, configuration management, and service continuity help ensure security controls are implemented consistently and that deviations are identified, tracked, and corrected.
ITIL does not define how hardware systems interface securely with one another; that is more aligned with architecture standards, security engineering, and network or platform design frameworks. It also does not prescribe a universal set of components for every technology system; that belongs to reference architectures and enterprise architecture standards. Likewise, ITIL is not primarily a security requirements standard. While ITIL supports security governance through practices like risk management, access management, and information security management integration, it does not itself serve as a mandatory security control catalog.
From a cybersecurity perspective, ITIL contributes by promoting repeatable processes, clear roles and responsibilities, measurable service levels, and continual improvement. These elements reduce operational risk, improve response effectiveness, and strengthen accountability—key requirements for maintaining confidentiality, integrity, and availability in production environments.
What is an embedded system?
A system that is located in a secure underground facility
A system placed in a location and designed so it cannot be easily removed
It provides computing services in a small form factor with limited processing power
It safeguards the cryptographic infrastructure by storing keys inside a tamper-resistant external device
An embedded system is a specialized computing system designed to perform a dedicated function as part of a larger device or physical system. Unlike general-purpose computers, embedded systems are built to support a specific mission such as controlling sensors, actuators, communications, or device logic in products like routers, printers, medical devices, vehicles, industrial controllers, and smart appliances. Cybersecurity documentation commonly highlights that embedded systems tend to operate with constrained resources, which may include limited CPU power, memory, storage, and user interface capabilities. These constraints affect both design and security: patching may be harder, logging may be minimal, and security features must be carefully engineered to fit the platform’s limitations.
Option C best matches this characterization by describing a small form factor and limited processing power, which are typical attributes of many embedded devices. While not every embedded system is “small,” the key idea is that it is purpose-built, resource-constrained, and tightly integrated into a larger product.
The other options describe different concepts. A secure underground facility relates to physical site security, not embedded computing. Being hard to remove is about physical installation or tamper resistance, which can apply to many systems but is not what defines “embedded.” Storing cryptographic keys in a tamper-resistant external device describes a hardware security module or secure element use case, not the general definition of an embedded system.
What term is defined as a fix to software programming errors and vulnerabilities?
Control
Release
Log
Patch
A patch is a vendor- or developer-provided update intended to correct defects in software, including programming errors and security vulnerabilities. Cybersecurity and IT operations documents describe patching as a primary method of vulnerability remediation because many attacks succeed by exploiting known weaknesses for which fixes already exist. When a vulnerability is disclosed, the vendor may publish a patch that changes code, updates components, adjusts configuration defaults, or replaces vulnerable libraries. Applying the patch reduces the likelihood that an attacker can use that weakness to gain unauthorized access, execute malicious code, elevate privileges, or disrupt availability.
A patch is different from a control, which is a broader safeguard (technical, administrative, or physical) used to reduce risk; patching itself can be part of a control, such as a patch management program. It is also different from a release, which is a broader software distribution that may include new features, improvements, and multiple fixes; a patch is usually more targeted and may be issued between major releases. A log is an audit record of events and is used for monitoring, troubleshooting, and incident investigation—not for fixing code defects.
Cybersecurity guidance emphasizes disciplined patch management: maintaining asset inventories, prioritizing patches by risk and exposure, testing changes, deploying promptly, verifying installation, and documenting exceptions to manage residual risk.
A significant benefit of role-based access is that it:
simplifies the assignment of correct access levels to a user based on the work they will perform.
makes it easier to audit and verify data access.
ensures that employee accounts will be shut down on departure or role change.
ensures that tasks and associated privileges for a specific business process are disseminated among multiple users.
Role-based access control assigns permissions to defined roles that reflect job functions, and users receive access by being placed into the appropriate role. The major operational and security benefit is that it simplifies and standardizes access provisioning. Instead of granting permissions individually to each user, administrators manage a smaller, controlled set of roles such as Accounts Payable Clerk, HR Specialist, or Application Administrator. When a new employee joins or changes responsibilities, access can be adjusted quickly and consistently by changing role membership. This reduces manual errors, limits over-provisioning, and helps enforce least privilege because each role is designed to include only the permissions required for that function.
RBAC also improves governance by making access decisions more repeatable and policy-driven. Security and compliance teams can review roles, validate that each role’s permissions match business needs, and require approvals for changes to role definitions. This approach supports segregation of duties by separating conflicting capabilities into different roles, which lowers fraud and misuse risk.
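A minimal sketch of the role-to-permission model described above, with illustrative role, user, and permission names: permissions attach to roles, and a user's access follows from role membership rather than from per-user grants.

```python
# Role and permission names are illustrative only.
ROLE_PERMISSIONS = {
    "accounts_payable_clerk": {"invoice.read", "invoice.create"},
    "hr_specialist": {"employee_record.read", "employee_record.update"},
    "app_administrator": {"app.configure", "app.deploy"},
}

USER_ROLES = {"alice": {"accounts_payable_clerk"}, "bob": {"hr_specialist"}}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles includes the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(is_authorized("alice", "invoice.create"))         # True
print(is_authorized("alice", "employee_record.read"))   # False
```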
Option B is a real advantage of RBAC, but it is typically a secondary outcome of having structured roles rather than the primary “significant benefit” emphasized in access-control design. Option C relates to identity lifecycle processes such as deprovisioning, which can be integrated with RBAC but is not guaranteed by RBAC alone. Option D describes distributing tasks among multiple users, which is more aligned with segregation of duties design, not the core benefit of RBAC.
Analyst B has discovered multiple attempts from unauthorized users to access confidential data. Who is most likely responsible for this activity?
Admin
Hacker
User
IT Support
Multiple attempts by unauthorized users to access confidential data most closely aligns with activity from a hacker, meaning an unauthorized actor attempting to gain access to systems or information. Cybersecurity operations commonly observe this pattern as repeated login failures, password-spraying, credential-stuffing, brute-force attempts, repeated probing of restricted endpoints, or abnormal access requests against protected repositories. While “user” is too generic and could include authorized individuals, the question explicitly states “unauthorized users,” pointing to malicious or illegitimate actors. “Admin” and “IT Support” are roles typically associated with legitimate privileged access and operational troubleshooting; repeated unauthorized access attempts from those roles would be atypical and would still represent compromise or misuse rather than normal operations. Cybersecurity documentation often classifies these attempts as indicators of malicious intent and potential precursor events to a breach. Controls recommended to counter such activity include strong authentication (multi-factor authentication), account lockout and throttling policies, anomaly detection, IP reputation filtering, conditional access, least privilege, and monitoring of authentication logs for patterns across accounts and geographies. The key distinction is that repeated unauthorized attempts represent hostile behavior by an external or rogue actor, which is best described as a hacker in the provided options.
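As a simple illustration of how such activity surfaces in monitoring, the sketch below counts failed authentication attempts per source and flags sources that exceed an example threshold; the log entries, field layout, and threshold are hypothetical.

```python
from collections import Counter

# Illustrative (source, outcome) log entries; real logs carry far more detail.
auth_events = [
    ("203.0.113.7", "failure"),
    ("203.0.113.7", "failure"),
    ("203.0.113.7", "failure"),
    ("10.0.0.5", "success"),
]

THRESHOLD = 3  # example alert/lockout threshold

failures = Counter(src for src, outcome in auth_events if outcome == "failure")
suspects = [src for src, count in failures.items() if count >= THRESHOLD]
print(suspects)  # ['203.0.113.7'] - repeated unauthorized attempts worth alerting on
```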
Analyst B has discovered multiple sources which can harm the organization’s systems. What has she discovered?
Breach
Hacker
Threat
Ransomware
Multiple sources that can harm an organization’s systems are classified as threats. In cybersecurity risk terminology, a threat is any circumstance, event, actor, or condition with the potential to adversely impact confidentiality, integrity, or availability. Threats can be human (external attackers, insiders, third-party compromises), technical (malware, ransomware campaigns, exploit kits), operational (misconfigurations, weak processes, inadequate monitoring), or environmental (power disruption, natural disasters). This differs from a breach, which is the realized outcome where unauthorized access or disclosure has already occurred. It also differs from hacker, which refers to one type of threat actor rather than the broader category of potential harm. Ransomware is a specific threat type (malware that encrypts data and demands payment), not a general term for multiple sources of harm. Cybersecurity documents commonly pair “threats” with “vulnerabilities” and “controls”: threats exploit vulnerabilities to create risk; controls reduce either the likelihood of exploitation or the impact if exploitation occurs. Identifying “multiple sources which can harm systems” is essentially threat identification—an early and ongoing step in risk management used to inform security architecture, monitoring, and incident preparedness. Therefore, the correct concept is threat.
Cybersecurity regulations typically require that enterprises demonstrate that they can protect:
applications and technology systems.
trade secrets and other intellectual property.
personal data of customers and employees.
business continuity and disaster recovery.
Cybersecurity regulations most commonly focus on the protection of personal data, because misuse or exposure can directly harm individuals through identity theft, fraud, discrimination, or loss of privacy. Privacy and data-protection laws typically require organizations to implement appropriate safeguards to protect personal information across its lifecycle, including collection, storage, processing, sharing, and disposal. In cybersecurity governance documentation, this obligation is often expressed through requirements to maintain confidentiality and integrity of personal data, limit access based on business need, and ensure accountability through logging, monitoring, and audits.
Demonstrating protection of personal data generally includes having a documented data classification scheme, clearly defined lawful purposes for processing, retention limits, and secure handling procedures. Technical controls commonly expected include strong authentication, least privilege and role-based access control, encryption for data at rest and in transit, secure key management, endpoint and server hardening, vulnerability management, and continuous monitoring for suspicious activity. Operational capabilities such as incident response, breach detection, and timely notification processes are also emphasized because regulators expect organizations to manage and report material data exposures appropriately.
While protecting applications, safeguarding intellectual property, and ensuring continuity are important security objectives, they are not the primary focus of many cybersecurity regulations in the same consistent way as personal data protection. Therefore, the best answer is personal data of customers and employees.
What is the purpose of Digital Rights Management (DRM)?
To ensure that all attempts to access information are tracked, logged, and auditable
To control the use, modification, and distribution of copyrighted works
To ensure that corporate files and data cannot be accessed by unauthorized personnel
To ensure that intellectual property remains under the full control of the originating enterprise
Digital Rights Management is a set of technical mechanisms used to enforce the permitted uses of digital content after it has been delivered to a user or device. Its primary purpose is to control how copyrighted works are accessed and used, including restricting copying, printing, screen capture, forwarding, offline use, device limits, and redistribution. DRM systems commonly apply encryption to content and then rely on a licensing and policy enforcement component that checks whether a user or device has the right to open the content and under what conditions. These conditions can include time-based access (expiry), geographic limitations, subscription status, concurrent use limits, or restrictions on modification and export.
This aligns precisely with option B because DRM is fundamentally about usage control of copyrighted digital works, such as music, movies, e-books, software, and protected media streams. In cybersecurity documentation, DRM is often discussed alongside content protection, anti-piracy measures, and license compliance. It differs from general access control and audit logging: access control determines who may enter a system or open a resource, while auditing records actions for accountability. DRM extends beyond simple access by enforcing what a legitimate user can do with the content once accessed.
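As a rough sketch of the licensing and policy-enforcement idea, the snippet below checks a hypothetical license record (expiry, device limit) before content would be decrypted; the field names and policy values are illustrative only.

```python
from datetime import date

# Illustrative license record; real DRM licenses are cryptographically bound to content.
license_record = {
    "expires": date(2026, 12, 31),
    "max_devices": 3,
    "registered_devices": {"tablet-01", "laptop-07"},
}

def may_open(device_id: str, today: date) -> bool:
    """Enforce the usage conditions before the protected content is decrypted."""
    if today > license_record["expires"]:
        return False
    devices = license_record["registered_devices"]
    if device_id not in devices and len(devices) >= license_record["max_devices"]:
        return False
    return True

print(may_open("laptop-07", date(2026, 6, 1)))   # True  - known device, license current
print(may_open("phone-99", date(2027, 1, 15)))   # False - license expired
```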
Option A describes audit logging, option C describes general authorization and data access control, and option D is closer to broad information rights management goals but is less precise than the standard definition focused on controlling use and distribution of copyrighted works.
NIST 800-30 defines cyber risk as a function of the likelihood of a given threat-source exercising a potential vulnerability, and:
the pre-disposing conditions of the vulnerability.
the probability of detecting damage to the infrastructure.
the effectiveness of the control assurance framework.
the resulting impact of that adverse event on the organization.
NIST SP 800-30 describes risk using a classic risk model: risk is a function of likelihood and impact. In this model, a threat-source may exploit a vulnerability, producing a threat event that results in adverse consequences. The likelihood component reflects how probable it is that a threat event will occur and successfully cause harm, considering factors such as threat capability and intent (or in non-adversarial cases, the frequency of hazards), the existence and severity of vulnerabilities, exposure, and the strength of current safeguards. However, likelihood alone does not define risk; a highly likely event that causes minimal harm may be less important than a less likely event that causes severe harm.
The second required component is the impact: the magnitude of harm to the organization if the adverse event occurs. Impact is commonly evaluated across mission and business outcomes, including financial loss, operational disruption, legal or regulatory consequences, reputational damage, and loss of confidentiality, integrity, or availability. This is why option D is correct: NIST’s definition explicitly ties the risk expression to the resulting impact on the organization.
The other options may influence likelihood assessment or control selection, but they are not the missing definitional element. Detection probability and control assurance relate to monitoring and governance; predisposing conditions can shape likelihood. None of them replaces the impact component that, together with likelihood, defines risk.
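To make the two-part definition concrete, here is a minimal sketch that combines qualitative likelihood and impact ratings into a risk level; the scale and the combination rule are illustrative simplifications, not the official NIST SP 800-30 lookup tables.

```python
# Qualitative scale ordered from lowest to highest; values are illustrative.
LEVELS = ["very_low", "low", "moderate", "high", "very_high"]

def risk_level(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact; here, the lower of the two caps the risk.

    Real programs define their own matrix; this rule simply illustrates that
    both dimensions must be high for risk to be high.
    """
    return LEVELS[min(LEVELS.index(likelihood), LEVELS.index(impact))]

print(risk_level("high", "very_low"))   # 'very_low' - likely but low-harm event
print(risk_level("low", "very_high"))   # 'low'      - severe but unlikely event
```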
What common mitigation tool is used for directly handling or treating cyber risks?
Exit Strategy
Standards
Control
Business Continuity Plan
In cybersecurity risk management, risk treatment is the set of actions used to reduce risk to an acceptable level. The most common tool used to directly treat or mitigate cyber risk is a control because controls are the specific safeguards that prevent, detect, or correct adverse events. Cybersecurity frameworks describe controls as measures implemented to reduce either the likelihood of a threat event occurring or the impact if it does occur. Controls can be technical (such as multifactor authentication, encryption, endpoint protection, network segmentation, logging and monitoring), administrative (policies, standards, training, access approvals, change management), or physical (badges, locks, facility protections). Regardless of type, controls are the direct mechanism used to mitigate identified risks.
An exit strategy is typically a vendor or outsourcing risk management concept focused on how to transition away from a provider or system; it supports resilience but is not the primary tool for directly mitigating a specific cyber risk. Standards guide consistency by defining required practices and configurations, but the standard itself is not the mitigation—controls implemented to meet the standard are. A business continuity plan supports availability and recovery after disruption, which is important, but it primarily addresses continuity and recovery rather than directly reducing the underlying cybersecurity risk in normal operations. Therefore, the best answer is the one that represents the direct implementation of safeguards: controls.
What risk factors should the analyst consider when assessing the Overall Likelihood of a threat?
Attack Initiation Likelihood and Initiated Attack Success Likelihood
Risk Level, Risk Impact, and Mitigation Strategy
Overall Site Traffic and Commerce Volume
Past Experience and Trends
In NIST-style risk assessment, overall likelihood is not a single guess; it is derived by considering two related likelihood components. First is the likelihood that a threat event will be initiated. This reflects how probable it is that a threat actor or source will attempt the attack or that a threat event will occur, considering factors such as adversary capability, intent, targeting, opportunity, and environmental conditions. Second is the likelihood that an initiated event will succeed, meaning the attempt results in the adverse outcome. This depends heavily on the organization’s existing protections and conditions, including control strength, system exposure, vulnerabilities, misconfigurations, detection and response capability, and user behavior.
Option A matches this structure: analysts evaluate both attack initiation likelihood and initiated attack success likelihood to reach an overall view of likelihood. A high initiation likelihood with low success likelihood might occur when an organization is frequently targeted but has strong defenses. Conversely, low initiation likelihood with high success likelihood might apply to niche systems that are rarely targeted but poorly protected.
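One simple way to see the interaction is to treat the two components as probabilities and multiply them; NIST's own method uses qualitative assessment tables rather than arithmetic, so the figures below are placeholders used purely for illustration.

```python
# Illustrative placeholder probabilities, not measured values.
p_initiation = 0.6  # likelihood a threat actor attempts the attack
p_success = 0.1     # likelihood an attempt succeeds, given current controls

overall_likelihood = p_initiation * p_success
print(f"Overall likelihood: {overall_likelihood:.2f}")  # 0.06 - targeted often, but well defended
```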
The other options are incomplete or misplaced. Risk impact is a separate dimension from likelihood, and mitigation strategy is an output of risk treatment, not an input to likelihood. Site traffic and commerce volume can influence exposure but do not define likelihood by themselves. Past experience and trends are useful evidence, but they support estimating the two likelihood components rather than replacing them.
The main phases of incident management are:
awareness, interest, desire, action.
reporting, investigation, assessment, corrective actions, review.
initiation, planning, action, closing.
assess, investigate, report, respond, legal compliance.
Incident management is a structured operational process used to ensure security issues are handled consistently, evidence is preserved, impact is reduced, and improvements are implemented to prevent recurrence. The phases listed in option B match how incident management is commonly documented in operational security programs.
Reporting is the entry point: users, monitoring tools, and service desks raise alerts or tickets, capturing what happened, when, and initial impact. Clear reporting channels and defined severity criteria ensure incidents are escalated quickly and handled by the right teams. Investigation follows, focusing on fact-finding and evidence collection such as logs, endpoint telemetry, network traces, and user statements. Assessment determines scope, business impact, affected assets and data, and the likelihood of continuing compromise. This step drives prioritization and selects the appropriate handling path.
Corrective actions implement containment, eradication, and recovery activities, such as isolating hosts, disabling compromised accounts, applying patches, rotating credentials, restoring from backups, and validating system integrity. Corrective actions also include communications, documentation, and coordination with legal, privacy, and business stakeholders when required. Finally, review is the lessons-learned phase that updates playbooks, improves detections, closes control gaps, and ensures root causes are addressed through durable fixes rather than temporary workarounds.
The other options do not represent standard incident management phases: A is a marketing model, while C and D are incomplete or mis-ordered compared to established incident management lifecycle documentation.
What is the first step of the forensic process?
Reporting
Examination
Analysis
Collection
The first step in a standard digital forensic process is collection because all later work depends on obtaining data in a way that preserves its integrity and evidentiary value. Collection involves identifying potential sources of relevant evidence and then acquiring it using controlled, repeatable methods. Typical sources include endpoint disk images, memory captures, mobile device extractions, server and application logs, cloud audit trails, email records, firewall and proxy logs, and authentication events. During collection, forensic guidance emphasizes maintaining a documented chain of custody, recording who handled the evidence, when it was acquired, how it was transported and stored, and what tools and settings were used. This documentation supports accountability and helps ensure evidence is admissible and defensible if used in disciplinary actions, regulatory inquiries, or legal proceedings.
Collection also includes steps to prevent evidence contamination or loss. Investigators may isolate systems to stop further changes, capture volatile data such as RAM before shutdown, use write blockers when imaging storage media, verify acquisitions with cryptographic hashes, and securely store originals while performing analysis on validated copies. Only after evidence is collected and preserved do teams move into examination and analysis, where artifacts are filtered, parsed, correlated, and interpreted to reconstruct timelines and determine cause and scope. Reporting comes later to communicate findings and support remediation.
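A minimal sketch of the hash-verification step at collection time: the script writes a tiny placeholder "image" so the example runs end to end, and the file name and digests are hypothetical. In practice the digest of the original acquisition is recorded in the chain-of-custody record and recomputed on every working copy before analysis.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream the acquired image and return its SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder evidence file so this sketch is self-contained.
with open("host42.img", "wb") as f:
    f.write(b"placeholder disk image bytes")

original_digest = sha256_of("host42.img")      # recorded at collection time
working_copy_digest = sha256_of("host42.img")  # recomputed before analysis
assert original_digest == working_copy_digest  # copy matches the original evidence
```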
What does non-repudiation mean in the context of web security?
Ensuring that all traffic between web servers must be securely encrypted
Providing permission to use web server resources according to security policies and specified procedures, so that the activity can be audited
Ensuring that all data has not been altered in an unauthorized manner while being transmitted between web servers
Providing the sender of a message with proof of delivery, and the receiver with proof of the sender's identity
Non-repudiation is a security property that provides verifiable evidence of an action or communication so that the parties involved cannot credibly deny their participation later. In web security, it most commonly means being able to prove who sent a message or performed a transaction and, in many cases, that the message was received and recorded. This is why option D is correct: it captures the idea of giving the receiver proof of the sender’s identity and giving the sender evidence that the message or transaction was delivered or accepted.
Cybersecurity guidance typically associates non-repudiation with digital signatures, strong identity binding, and protected audit evidence. A digital signature uses asymmetric cryptography so that only the holder of a private key can sign, while anyone with the public key can verify the signature. When combined with trusted certificates, accurate time sources, and protected logs, this creates strong accountability. Non-repudiation also depends on maintaining the integrity of supporting evidence, such as tamper-resistant audit logs, secure log retention, and controlled access to signing keys.
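A minimal sketch of signature-based non-repudiation, assuming the third-party pyca/cryptography package is installed: only the private-key holder can produce the signature, and any verifier with the public key can confirm that this exact message was signed. Full non-repudiation additionally requires binding the key to an identity (for example via certificates) and protecting the evidence over time.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held only by the signer
public_key = private_key.public_key()       # shared with verifiers

message = b"approve payment #1042"          # illustrative transaction
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)
    print("Valid: the private-key holder signed exactly this message")
except InvalidSignature:
    print("Invalid: message altered or not signed by this key")
```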
It is different from confidentiality (encryption of traffic), and different from integrity alone (preventing unauthorized modification). It is also different from authorization and auditing, which support accountability but do not, by themselves, provide cryptographic-grade proof that a specific entity performed a specific action. Non-repudiation is especially important for high-trust transactions such as approvals, payments, and legally binding communications.

