Which of the following should be addressed by functional security requirements?
System reliability
User privileges
Identified vulnerabilities
Performance and stability
Functional security requirements define what security capabilities a system must provide to protect information and enforce policy. They describe required security functions such as identification and authentication, authorization, role-based access control, privilege management, session handling, auditing/logging, segregation of duties, and account lifecycle processes. Because of this, user privileges are a direct and core concern of functional security requirements: the system must support controlling who can access what, under which conditions, and with what level of permission.
In cybersecurity requirement documentation, “privileges” include permission assignment (roles, groups, entitlements), enforcement of least privilege, privileged access restrictions, elevation workflows, administrative boundaries, and the ability to review and revoke permissions. These are functional because they require specific system behaviors and features—for example, the ability to define roles, prevent unauthorized actions, log privileged activities, and enforce timeouts or re-authentication for sensitive operations.
The other options are typically classified differently. System reliability and performance/stability are generally non-functional requirements (quality attributes) describing service levels, resilience, and operational characteristics rather than security functions. Identified vulnerabilities are findings from assessments that drive remediation work and risk treatment; they inform security improvements but are not themselves functional requirements. Therefore, the option best aligned with functional security requirements is user privileges.
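The privilege-related behaviors described above (defining roles, preventing unauthorized actions, logging privileged activity) can be sketched as a functional requirement in code. This is a minimal illustration; the role names, permissions, and data shapes are assumptions, not a standard API.

```python
# Minimal sketch of a functional security requirement: the system must
# enforce role-based privileges and log every authorization decision.
# Role names and permission sets below are illustrative assumptions.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "manage_users"},
}

audit_log = []  # auditing of privileged activity is itself a functional requirement

def authorize(user, role, action, resource):
    """Allow the action only if the user's role grants it; log the decision."""
    allowed = action in ROLE_PERMISSIONS.get(role, set())
    audit_log.append({"user": user, "role": role, "action": action,
                      "resource": resource, "allowed": allowed})
    return allowed

print(authorize("alice", "editor", "write", "report.docx"))    # True
print(authorize("bob", "viewer", "manage_users", "accounts"))  # False
```

Note that both the allow and the deny are logged: requirements documents typically demand an audit trail of decisions, not only of successes.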
If a threat is expected to have a serious adverse effect, according to NIST SP 800-30 it would be rated with a severity level of:
moderate.
severe.
severely low.
very severe.
NIST SP 800-30 Rev. 1 defines qualitative risk severity levels using consistent impact language. In its assessment scale, “Moderate” is explicitly tied to events that can be expected to have a serious adverse effect on organizational operations, organizational assets, individuals, other organizations, or the Nation.
A “serious adverse effect” is described as outcomes such as a significant degradation in mission capability where the organization can still perform its primary functions but with significantly reduced effectiveness, significant damage to organizational assets, significant financial loss, or significant harm to individuals that does not involve loss of life or life-threatening injuries. This phrasing distinguishes “Moderate” from “Low” (limited adverse effect) and from “High” (severe or catastrophic adverse effect).
This classification matters in enterprise risk because it drives prioritization and control selection. A “Moderate” rating typically triggers stronger treatment actions than “Low,” such as tighter access controls, enhanced monitoring, more frequent vulnerability remediation, stronger configuration management, and improved incident response readiness. It also helps leaders compare risks consistently across systems and business processes by anchoring severity to clear operational and harm-based criteria rather than subjective judgment.
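The mapping between NIST's impact wording and severity levels can be expressed as a simple lookup, which is roughly how assessment tooling encodes the scale. The phrasing below paraphrases the publication's qualitative scale; the function name is an illustrative assumption.

```python
# Sketch of the NIST SP 800-30 Rev. 1 qualitative impact scale:
# the adverse-effect wording determines the severity level.
IMPACT_LEVELS = {
    "limited": "Low",                   # limited adverse effect
    "serious": "Moderate",              # serious adverse effect
    "severe or catastrophic": "High",   # severe or catastrophic adverse effect
}

def rate_severity(adverse_effect):
    """Return the severity level for a given adverse-effect description."""
    return IMPACT_LEVELS.get(adverse_effect, "Unrated")

print(rate_severity("serious"))  # Moderate
```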
The process by which organizations assess the data they hold and the level of protection it should be given based on its risk to loss or harm from disclosure, is known as:
vulnerability assessment.
internal audit.
information classification.
information categorization.
Information classification is the formal process of evaluating the data an organization creates or holds and assigning it a sensitivity level so the organization can apply the right safeguards. Cybersecurity policies describe classification as the foundation for consistent protection because it links the potential harm from unauthorized disclosure, alteration, or loss to specific handling and control requirements. Typical classification labels include Public, Internal, Confidential, and Restricted, though names vary by organization. Once data is classified, required protections can be specified, such as encryption at rest and in transit, access restrictions based on least privilege, approved storage locations, monitoring requirements, retention periods, and secure disposal methods.
This is not a vulnerability assessment, which focuses on identifying weaknesses in systems, applications, or configurations. It is also not an internal audit, which evaluates whether controls and processes are being followed and are effective. Option D, information categorization, is often used in some frameworks to describe assigning impact levels (for example, confidentiality, integrity, availability impact) to information types or systems, mainly to drive control baselines. While related, the question specifically emphasizes assessing data and deciding the level of protection based on risk from disclosure, which aligns most directly with classification programs used to govern labeling and handling rules across the organization.
A strong classification program improves security consistency, supports compliance, reduces accidental exposure, and helps prioritize controls for the most sensitive information assets.
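The link between a classification label and its handling requirements can be sketched as a lookup table. The labels match the examples above; the specific control values are illustrative assumptions, since real handling matrices vary by organization.

```python
# Illustrative classification-to-handling matrix. Labels follow the common
# Public/Internal/Confidential/Restricted scheme; the control values are
# example assumptions, not a standard.
HANDLING = {
    "Public":       {"encrypt_at_rest": False, "access": "anyone"},
    "Internal":     {"encrypt_at_rest": False, "access": "employees"},
    "Confidential": {"encrypt_at_rest": True,  "access": "need-to-know"},
    "Restricted":   {"encrypt_at_rest": True,  "access": "named individuals"},
}

def required_controls(label):
    """Return the minimum handling requirements for a classification label."""
    return HANDLING[label]

print(required_controls("Confidential"))
```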
Analyst B has discovered multiple attempts from unauthorized users to access confidential data. Who is most likely responsible?
Admin
Hacker
User
IT Support
Multiple attempts by unauthorized users to access confidential data most closely aligns with activity from a hacker, meaning an unauthorized actor attempting to gain access to systems or information. Cybersecurity operations commonly observe this pattern as repeated login failures, password-spraying, credential-stuffing, brute-force attempts, repeated probing of restricted endpoints, or abnormal access requests against protected repositories. While “user” is too generic and could include authorized individuals, the question explicitly states “unauthorized users,” pointing to malicious or illegitimate actors. “Admin” and “IT Support” are roles typically associated with legitimate privileged access and operational troubleshooting; repeated unauthorized access attempts from those roles would be atypical and would still represent compromise or misuse rather than normal operations. Cybersecurity documentation often classifies these attempts as indicators of malicious intent and potential precursor events to a breach. Controls recommended to counter such activity include strong authentication (multi-factor authentication), account lockout and throttling policies, anomaly detection, IP reputation filtering, conditional access, least privilege, and monitoring of authentication logs for patterns across accounts and geographies. The key distinction is that repeated unauthorized attempts represent hostile behavior by an external or rogue actor, which is best described as a hacker in the provided options.
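The repeated-failure pattern described above is what monitoring of authentication logs looks for in practice. A minimal sketch, with fabricated sample events and an illustrative threshold:

```python
from collections import Counter

# Sketch: flag accounts with repeated failed logins, a common indicator of
# brute-force or credential-stuffing attempts. Events are fabricated samples
# and the threshold of 3 is an illustrative assumption.
events = [
    {"user": "svc-backup", "result": "fail"},
    {"user": "svc-backup", "result": "fail"},
    {"user": "svc-backup", "result": "fail"},
    {"user": "alice",      "result": "fail"},     # a single typo, not a pattern
    {"user": "bob",        "result": "success"},
]

def suspicious_accounts(auth_events, threshold=3):
    """Return accounts whose failed-login count meets the threshold."""
    fails = Counter(e["user"] for e in auth_events if e["result"] == "fail")
    return {user for user, count in fails.items() if count >= threshold}

print(suspicious_accounts(events))  # {'svc-backup'}
```

Real detections also correlate across source IPs and geographies, as the explanation notes, rather than counting failures per account alone.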
Which of the following challenges to embedded system security can be addressed through ongoing, remote maintenance?
Processors being overwhelmed by the demands of security processing
Deploying updated firmware as vulnerabilities are discovered and addressed
Resource constraints due to limitations on battery, memory, and other physical components
Physical security attacks that take advantage of vulnerabilities in the hardware
Ongoing, remote maintenance is one of the most effective ways to improve the security posture of embedded systems over time because it enables timely remediation of newly discovered weaknesses. Embedded devices frequently run firmware that includes operating logic, network stacks, and third-party libraries. As vulnerabilities are discovered in these components, organizations must be able to deploy fixes quickly to reduce exposure. Remote maintenance supports this by enabling over-the-air firmware and software updates, configuration changes, certificate and key rotation, and the rollout of compensating controls such as updated security policies or hardened settings.
Option B is correct because remote maintenance directly addresses the challenge of deploying updated firmware as issues are identified. Cybersecurity guidance for embedded and IoT environments emphasizes secure update mechanisms: authenticated update packages, integrity verification (such as digital signatures), secure distribution channels, rollback protection, staged deployment, and audit logging of update actions. These practices reduce the risk of attackers installing malicious firmware and help ensure devices remain supported throughout their operational life.
The other options are not primarily solved by remote maintenance. Limited CPU and memory are inherent design constraints that may require hardware redesign. Battery and component limitations are also physical constraints. Physical security attacks exploit device access and hardware weaknesses, which require tamper resistance, secure boot, and physical protections rather than remote maintenance alone.
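The authenticated-update idea above can be sketched with a keyed integrity check. Real deployments use asymmetric digital signatures so the signing key never leaves the vendor; an HMAC with a device-provisioned key is used here only to keep the example self-contained, and the key value is a placeholder assumption.

```python
import hmac
import hashlib

# Simplified sketch of authenticated firmware-update verification.
# Production systems use asymmetric signatures; HMAC stands in here.
DEVICE_KEY = b"example-provisioning-key"  # assumption: provisioned at manufacture

def sign_update(firmware: bytes) -> bytes:
    """Vendor side: compute an authentication tag over the firmware image."""
    return hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Device side: accept the image only if the tag verifies."""
    return hmac.compare_digest(sign_update(firmware), tag)

image = b"\x7fFIRMWARE v2.1"
tag = sign_update(image)
print(verify_update(image, tag))                  # True
print(verify_update(image + b"tampered", tag))    # False
```

The constant-time comparison (`hmac.compare_digest`) matters: naive byte comparison can leak timing information about the expected tag.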
Why is directory management important for cybersecurity?
It prevents outside agents from viewing confidential company information
It allows all application security to be managed through a single interface
It prevents outsiders from knowing personal information about employees
It controls access to folders and files on the network
Directory management is important because it provides a centralized way to define identities, groups, roles, and permissions, which directly determines who can access network resources. In most enterprises, directory services store user and service accounts and then integrate with file servers, applications, email platforms, VPN, and cloud services. This integration enables consistent enforcement of authorization rules such as group-based access to shared folders and files, role-based access control, and least privilege. Option D captures this core security purpose: directory management is a foundational control mechanism for governing access to networked resources.
From a cybersecurity controls perspective, directory management supports secure onboarding and offboarding, ensuring that new users receive only appropriate permissions and that departing users are disabled promptly to reduce insider and external risk. It also strengthens authentication by enabling enterprise-wide policies such as password rules, account lockouts, multi-factor authentication integration, and conditional access. In addition, centralized directories improve auditability: administrators can review memberships and entitlements, monitor privileged group changes, and generate logs that support investigations and compliance reporting.
The other options are either too broad or not primarily about directory management. While directories help protect confidential information indirectly, their direct function is not “preventing outside agents” by itself; it is enforcing access rules. They also do not manage all application security through one interface, and preventing outsiders from knowing employee personal information is a privacy objective, not the main purpose of directory management.
The hash function supports data in transit by ensuring:
validation that a message originated from a particular user.
a message was modified in transit.
a public key is transitioned into a private key.
encrypted messages are not shared with another party.
A cryptographic hash function supports data in transit primarily by providing integrity assurance. When a sender computes a hash (digest) of a message and the receiver recomputes the hash after receipt, the two digests should match if the message arrived unchanged. If the message is altered in any way while traveling across the network—whether by an attacker, a faulty intermediary device, or transmission errors—the recomputed digest will differ from the original. This difference is the key signal that the message was modified in transit, which is what option B expresses. In practical secure-transport designs, hashes are typically combined with a secret key or digital signature so an attacker cannot simply modify the message and generate a new valid digest. Examples include HMAC for message authentication and digital signatures that hash the content and then sign the hash with a private key. These mechanisms provide integrity and, when keyed or signed, also provide authentication and non-repudiation properties.
Option A is more specifically about authentication of origin, which requires a keyed construction such as HMAC or a signature scheme; a plain hash alone cannot prove who sent the message. Option C is incorrect because keys are not “converted” from public to private. Option D relates to confidentiality, which is provided by encryption, not hashing. Therefore, the best answer is B because hashing enables detection of message modification during transit.
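The integrity check described above, and the distinction between a plain hash and a keyed HMAC, can be demonstrated directly with Python's standard library. The message content and shared key are illustrative.

```python
import hashlib
import hmac

# Sender computes a digest of the message before transmission.
message = b"transfer $100 to account 42"
sent_digest = hashlib.sha256(message).hexdigest()

# Receiver recomputes the digest; a mismatch means the message changed
# in transit (option B).
received_ok = b"transfer $100 to account 42"
received_tampered = b"transfer $900 to account 42"
print(hashlib.sha256(received_ok).hexdigest() == sent_digest)        # True
print(hashlib.sha256(received_tampered).hexdigest() == sent_digest)  # False

# A plain hash alone cannot prove origin: an attacker can modify the message
# and simply rehash it. A keyed construction (HMAC) binds the digest to a
# secret the attacker does not hold. Key value is an illustrative assumption.
key = b"shared-secret"
mac = hmac.new(key, message, hashlib.sha256).digest()
check = hmac.new(key, received_ok, hashlib.sha256).digest()
print(hmac.compare_digest(mac, check))  # True
```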
Why would a Business Analyst include current technology when documenting the current state business processes surrounding a solution being replaced?
To ensure the future state business processes are included in user training
To identify potential security impacts to integrated systems within the value chain
To identify and meet internal security governance requirements
To classify the data elements so that information confidentiality, integrity, and availability are protected
A Business Analyst documents current technology in the “as-is” state because business processes are rarely isolated; they depend on applications, interfaces, data exchanges, identity services, and shared infrastructure. From a cybersecurity perspective, replacing one solution can unintentionally change trust boundaries, authentication flows, authorization decisions, logging coverage, and data movement across integrated systems. Option B is correct because understanding the current technology landscape helps identify where security impacts may occur across the value chain, including upstream data providers, downstream consumers, third-party services, and internal platforms that rely on the existing system.
Cybersecurity documents emphasize that integration points are common attack surfaces. APIs, file transfers, message queues, single sign-on, batch jobs, and shared databases can introduce risks such as broken access control, insecure data transmission, data leakage, privilege escalation, and gaps in monitoring. If the BA captures current integrations, dependencies, and data flows, the delivery team can properly perform threat modeling, define security requirements, and avoid breaking compensating controls that other systems depend on. This also supports planning for secure decommissioning, migration, and cutover, ensuring credentials, keys, service accounts, and network paths are rotated or removed appropriately.
The other options are less precise for the question. Training is not the core driver for documenting current technology. Governance requirements apply broadly but do not explain why current tech must be included. Data classification is important, but it is a separate activity from capturing technology dependencies needed to assess integration security impacts.
NIST 800-30 defines cyber risk as a function of the likelihood of a given threat-source exercising a potential vulnerability, and:
the pre-disposing conditions of the vulnerability.
the probability of detecting damage to the infrastructure.
the effectiveness of the control assurance framework.
the resulting impact of that adverse event on the organization.
NIST SP 800-30 describes risk using a classic risk model: risk is a function of likelihood and impact. In this model, a threat-source may exploit a vulnerability, producing a threat event that results in adverse consequences. The likelihood component reflects how probable it is that a threat event will occur and successfully cause harm, considering factors such as threat capability and intent (or in non-adversarial cases, the frequency of hazards), the existence and severity of vulnerabilities, exposure, and the strength of current safeguards. However, likelihood alone does not define risk; a highly likely event that causes minimal harm may be less important than a less likely event that causes severe harm.
The second required component is the impact: the magnitude of harm to the organization if the adverse event occurs. Impact is commonly evaluated across mission and business outcomes, including financial loss, operational disruption, legal or regulatory consequences, reputational damage, and loss of confidentiality, integrity, or availability. This is why option D is correct: NIST’s definition explicitly ties the risk expression to the resulting impact on the organization.
The other options may influence likelihood assessment or control selection, but they are not the missing definitional element. Detection probability and control assurance relate to monitoring and governance; predisposing conditions can shape likelihood. None of them replaces impact, the second component that completes the definition of risk.
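The likelihood-and-impact relationship can be made concrete with a small worked example. The 1–3 numeric scale and the product rule are simplifying assumptions for illustration; NIST's actual assessment uses qualitative tables rather than multiplication.

```python
# Worked sketch of the risk model: risk is a function of likelihood and
# impact. The numeric scale and product rule are illustrative assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_score(likelihood, impact):
    """Combine likelihood and impact into a comparable score."""
    return LEVELS[likelihood] * LEVELS[impact]

# A highly likely but low-impact event can rank below a less likely
# event with severe consequences -- which is why impact is essential.
print(risk_score("high", "low"))       # 3
print(risk_score("moderate", "high"))  # 6
```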
In the OSI model for network communication, the Session Layer is responsible for:
establishing a connection and terminating it when it is no longer needed.
presenting data to the receiver in a form that it recognizes.
adding appropriate network addresses to packets.
transmitting the data on the medium.
The OSI Session Layer (Layer 5) is responsible for establishing, managing, and terminating sessions between communicating applications. A session is the logical dialogue that allows two endpoints to coordinate how communication starts, how it continues, and how it ends. This includes controlling the “conversation” state, such as who can transmit at what time, maintaining the session so it stays active, and closing it cleanly when it is no longer needed. Because of this, option A best matches the Session Layer’s core responsibilities.
In contrast, presenting data to the receiver in a recognizable form is the job of the Presentation Layer (Layer 6), which deals with formatting, encoding, compression, and often cryptographic transformation concepts. Adding appropriate network addresses to packets aligns to the Network Layer (Layer 3), where logical addressing and routing decisions occur, typically associated with IP addressing. Transmitting the data on the medium is handled at the Physical Layer (Layer 1), which concerns signals, cabling, and the actual movement of bits.
From a cybersecurity perspective, session management is important because weaknesses can enable session hijacking, replay, or fixation, especially when session identifiers are predictable, not protected, or not properly invalidated. Controls commonly include strong authentication, secure session token generation, timeout and reauthentication rules, and proper session termination to reduce exposure.
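Two of the controls just mentioned, unpredictable session identifiers and idle timeouts, can be sketched with the standard library. The 900-second timeout is an illustrative assumption.

```python
import secrets
import time

# Sketch of basic session-management hygiene: cryptographically random
# session tokens and an idle timeout. Timeout value is an assumption.
SESSION_TIMEOUT = 900  # seconds of allowed inactivity

def new_session():
    """Create a session with an unpredictable identifier."""
    return {"token": secrets.token_urlsafe(32), "last_seen": time.time()}

def is_active(session, now=None):
    """A session expires once it has been idle longer than the timeout."""
    now = time.time() if now is None else now
    return (now - session["last_seen"]) < SESSION_TIMEOUT

session = new_session()
print(is_active(session))                                              # True
print(is_active(session, now=session["last_seen"] + SESSION_TIMEOUT))  # False
```

Using `secrets` rather than `random` matters here: predictable identifiers are exactly what makes hijacking and fixation attacks practical.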
What is a Recovery Point Objective (RPO)?
The point in time prior to the outage to which business and process data must be recovered
The maximum time a system may be out of service before a significant business impact occurs
The target time to restore a system without experiencing any significant business impact
The target time to restore systems to operational status following an outage
A Recovery Point Objective defines the acceptable amount of data loss measured in time. It answers the question: “After an outage or disruptive event, how far back in time can we restore data and still meet business needs?” If the RPO is 4 hours, the organization is stating it can tolerate losing up to 4 hours of data changes, meaning backups, replication, journaling, or snapshots must be frequent enough to restore to a point no older than 4 hours before the incident. That is exactly what option A describes: the specific point in time prior to the outage to which data must be recovered.
RPO is often paired with the Recovery Time Objective, but they are not the same. RTO focuses on how quickly service must be restored, while RPO focuses on how much data the organization can afford to lose. Options B, C, and D all describe time-to-restore concepts, which align with RTO or related recovery targets rather than RPO.
In operational resilience and disaster recovery planning, RPO drives technical design choices: backup frequency, replication methods, storage and retention strategies, and validation testing. Lower RPO values generally require more robust and often more expensive solutions, such as near-real-time replication and strong change capture controls. RPO also influences incident response and recovery procedures to ensure restoration steps reliably meet the agreed data-loss tolerance.
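The 4-hour RPO example above reduces to simple time arithmetic: data loss equals the gap between the incident and the newest restore point. A minimal sketch with illustrative timestamps:

```python
from datetime import datetime, timedelta

# Sketch: check whether the newest restore point satisfies a 4-hour RPO.
RPO = timedelta(hours=4)

def meets_rpo(last_restore_point, incident_time, rpo=RPO):
    """Data loss is the gap between the incident and the newest restore point."""
    return (incident_time - last_restore_point) <= rpo

incident = datetime(2026, 4, 4, 12, 0)
print(meets_rpo(datetime(2026, 4, 4, 9, 0), incident))  # True: 3 h of loss
print(meets_rpo(datetime(2026, 4, 4, 6, 0), incident))  # False: 6 h of loss
```

This is also why lower RPOs force more frequent backups or continuous replication: the restore point must always sit inside the RPO window.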
Controls that are put in place to address specific risks may include:
only initial reviews.
technology or process solutions.
partial coverage of one or more risks.
coverage for partial extent and scope of the risk.
Cybersecurity controls are the safeguards an organization implements to reduce risk to an acceptable level. In standard risk-management language, a control is not limited to a one-time review; it is an ongoing capability that is designed, implemented, and operated to prevent, detect, or correct unwanted events. That capability is typically delivered through technology solutions (technical controls) and process solutions (administrative or procedural controls), which is why option B is correct.
Technology controls include items like firewalls, endpoint protection, encryption, multifactor authentication, logging and monitoring, vulnerability scanning, secure configuration baselines, and data-loss prevention. These controls directly enforce security requirements through system behavior and automation, helping reduce the likelihood or impact of threats.
Process controls include policies, standards, access approval workflows, segregation of duties, change management, secure development practices, incident response playbooks, training, and periodic access recertification. These ensure people consistently perform security-critical tasks correctly and create accountability and repeatability.
Options C and D describe possible outcomes or limitations (controls may not fully eliminate risk and may only mitigate part of it), but they are not what controls include. Option A is incorrect because “only initial reviews” are insufficient; reviews can be a component of a control, but effective controls require sustained operation, evidence, and reassessment as systems, threats, and business needs change.
There are three states in which data can exist:
at dead, in action, in use.
at dormant, in mobile, in use.
at sleep, in awake, in use.
at rest, in transit, in use.
Data is commonly categorized into three states because the threats and protections change depending on where the data is and what is happening to it. Data at rest is stored on a device or system, such as databases, file shares, endpoints, backups, and cloud storage. The main risks are unauthorized access, theft of storage media, misconfigured permissions, and improper disposal. Controls typically include strong access control, encryption at rest with sound key management, secure configuration and hardening, segmentation, and resilient backup protections including restricted access and immutability.
Data in transit is data moving between systems, such as client-to-server traffic, service-to-service connections, API calls, and email routing. The primary risks are interception, alteration, and impersonation through man-in-the-middle techniques. Standard controls include transport encryption (such as TLS), strong authentication and certificate validation, secure network architecture, and monitoring for anomalous connections or data flows.
Data in use is actively processed in memory by applications and users, for example when a document is opened, a record is processed by an application, or data is displayed to a user. This state is challenging because data may be decrypted for processing. Controls include least privilege, strong authentication and session management, endpoint protection, application security controls, and secure development practices, with hardware-backed isolation when required.
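For the data-in-transit state, the TLS control mentioned above is largely a matter of using safe defaults. Python's standard `ssl` module illustrates this: its default client context already enforces the certificate validation and hostname checking that defeat man-in-the-middle interception.

```python
import ssl

# Sketch for the data-in-transit state: Python's default TLS client context
# enforces certificate validation and hostname checking out of the box.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: peer certificate required
print(ctx.check_hostname)                    # True: hostname must match cert

# A real connection would then wrap a socket, e.g.:
#   tls_sock = ctx.wrap_socket(sock, server_hostname="example.com")
```

Disabling either of those two settings is a common misconfiguration that silently reopens the interception and impersonation risks described above.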
What is the purpose of Digital Rights Management (DRM)?
To ensure that all attempts to access information are tracked, logged, and auditable
To control the use, modification, and distribution of copyrighted works
To ensure that corporate files and data cannot be accessed by unauthorized personnel
To ensure that intellectual property remains under the full control of the originating enterprise
Digital Rights Management is a set of technical mechanisms used to enforce the permitted uses of digital content after it has been delivered to a user or device. Its primary purpose is to control how copyrighted works are accessed and used, including restricting copying, printing, screen capture, forwarding, offline use, device limits, and redistribution. DRM systems commonly apply encryption to content and then rely on a licensing and policy enforcement component that checks whether a user or device has the right to open the content and under what conditions. These conditions can include time-based access (expiry), geographic limitations, subscription status, concurrent use limits, or restrictions on modification and export.
This aligns precisely with option B because DRM is fundamentally about usage control of copyrighted digital works, such as music, movies, e-books, software, and protected media streams. In cybersecurity documentation, DRM is often discussed alongside content protection, anti-piracy measures, and license compliance. It differs from general access control and audit logging: access control determines who may enter a system or open a resource, while auditing records actions for accountability. DRM extends beyond simple access by enforcing what a legitimate user can do with the content once accessed.
Option A describes audit logging, option C describes general authorization and data access control, and option D is closer to broad information rights management goals but is less precise than the standard definition focused on controlling use and distribution of copyrighted works.
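The license-check component described above can be sketched as a policy evaluation over a license record. The record fields (expiry, device limit, print right) are hypothetical examples of the conditions DRM systems enforce.

```python
from datetime import datetime, timezone

# Hypothetical license record: the conditions a DRM license server might
# enforce after content delivery. All field values are illustrative.
license_rec = {
    "expires": datetime(2026, 12, 31, tzinfo=timezone.utc),
    "max_devices": 2,
    "allow_print": False,
}

def may_use(lic, action, device_count, now):
    """Permit an action only within the license's usage conditions."""
    if now > lic["expires"] or device_count > lic["max_devices"]:
        return False
    if action == "print":
        return lic["allow_print"]
    return action == "view"

now = datetime(2026, 6, 1, tzinfo=timezone.utc)
print(may_use(license_rec, "view", 1, now))   # True: within all conditions
print(may_use(license_rec, "print", 1, now))  # False: printing not licensed
print(may_use(license_rec, "view", 3, now))   # False: device limit exceeded
```

Note how the check governs what a legitimate user may do, which is the distinction from plain access control drawn above.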
What is the first step of the forensic process?
Reporting
Examination
Analysis
Collection
The first step in a standard digital forensic process is collection because all later work depends on obtaining data in a way that preserves its integrity and evidentiary value. Collection involves identifying potential sources of relevant evidence and then acquiring it using controlled, repeatable methods. Typical sources include endpoint disk images, memory captures, mobile device extractions, server and application logs, cloud audit trails, email records, firewall and proxy logs, and authentication events. During collection, forensic guidance emphasizes maintaining a documented chain of custody, recording who handled the evidence, when it was acquired, how it was transported and stored, and what tools and settings were used. This documentation supports accountability and helps ensure evidence is admissible and defensible if used in disciplinary actions, regulatory inquiries, or legal proceedings.
Collection also includes steps to prevent evidence contamination or loss. Investigators may isolate systems to stop further changes, capture volatile data such as RAM before shutdown, use write blockers when imaging storage media, verify acquisitions with cryptographic hashes, and securely store originals while performing analysis on validated copies. Only after evidence is collected and preserved do teams move into examination and analysis, where artifacts are filtered, parsed, correlated, and interpreted to reconstruct timelines and determine cause and scope. Reporting comes later to communicate findings and support remediation.
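The hash-verification and chain-of-custody steps above can be sketched briefly. The evidence bytes, handler name, and log structure are illustrative placeholders.

```python
import hashlib
from datetime import datetime, timezone

# Sketch of acquisition-time integrity verification: hash the evidence at
# collection, record a custody entry, and re-verify before analysis.
def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

evidence = b"example disk-image bytes"  # placeholder for an acquired image
acquisition_hash = sha256_of(evidence)

# One chain-of-custody entry per handling event; fields are illustrative.
custody_log = [{
    "handler": "Analyst B",
    "acquired_utc": datetime.now(timezone.utc).isoformat(),
    "sha256": acquisition_hash,
}]

# Before analysis, confirm the working copy still matches the original hash.
working_copy = bytes(evidence)
print(sha256_of(working_copy) == acquisition_hash)  # True: copy is intact
```

The same re-hash check would expose any alteration of the working copy, which is what makes the later examination and analysis defensible.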
Which scenario is an example of the principle of least privilege being followed?
An application administrator has full permissions to only the applications they support
All application and database administrators have full permissions to every application in the company
Certain users are granted administrative access to their network account, in case they need to install a web-app
A manager who is conducting performance appraisals is granted access to HR files for all employees
The principle of least privilege requires that users, administrators, services, and applications are granted only the minimum access necessary to perform authorized job functions, and nothing more. Option A follows this principle because the administrator’s elevated permissions are limited in scope to the specific applications they are responsible for supporting. This reduces the attack surface and limits blast radius: if that administrator account is compromised, the attacker’s reach is constrained to only those applications rather than the entire enterprise environment.
Least privilege is typically implemented through role-based access control, separation of duties, and privileged access management practices. These controls ensure privileges are assigned based on defined roles, reviewed regularly, and removed when no longer required. They also promote using standard user accounts for routine tasks and reserving administrative actions for controlled, auditable sessions. In addition, least privilege supports stronger accountability through logging and change tracking, because fewer people have the ability to make high-impact changes across systems.
The other scenarios violate least privilege. Option B grants excessive enterprise-wide permissions, creating unnecessary risk and enabling widespread damage from mistakes or compromise. Option C provides “just in case” administrative access, which cybersecurity guidance explicitly discourages because it increases exposure without a validated business need. Option D is overly broad because access to all HR files exceeds what is required for performance appraisals, which typically should be limited to relevant employee records only.
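Option A's scoping can be sketched as an access check bounded by the applications an administrator supports. Names and application identifiers are illustrative assumptions.

```python
# Sketch of option A: an application administrator with full permissions
# scoped to only the applications they support. Names are illustrative.
ADMIN_SCOPE = {
    "app_admin_1": {"payroll-app", "timesheet-app"},
}

def can_administer(admin, application):
    """Allow administration only within the admin's supported scope."""
    return application in ADMIN_SCOPE.get(admin, set())

print(can_administer("app_admin_1", "payroll-app"))  # True: in scope
print(can_administer("app_admin_1", "crm-app"))      # False: out of scope
```

If `app_admin_1`'s account were compromised, the reachable blast radius is exactly the two applications in scope, which is the risk-limiting effect the explanation describes.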
Analyst B has discovered unauthorized access to data. What has she discovered?
Breach
Hacker
Threat
Ransomware
Unauthorized access to data is the defining condition of a data breach. In standard cybersecurity terminology, a breach occurs when confidentiality is compromised—meaning data is accessed, acquired, viewed, or exfiltrated by an entity that is not authorized to do so. This is distinct from a “threat,” which is only the potential for harm, and distinct from a “hacker,” which describes an actor rather than the security outcome. A breach can result from external attackers, malicious insiders, credential theft, misconfigurations, unpatched vulnerabilities, or poor access controls. Cybersecurity guidance typically frames breaches as realized security incidents with measurable impact: exposure of regulated data, loss of intellectual property, fraud risk, reputational harm, and legal/regulatory consequences. Once unauthorized access is confirmed, incident response procedures generally require containment (limit further access), preservation of evidence (logs, system images where appropriate), eradication (remove persistence), and recovery (restore secure operations). Organizations also assess scope—what data types were accessed, how many records, which systems, and the dwell time—and then determine notification obligations where laws or contracts apply. In short, the discovery describes an actual compromise of data confidentiality, which is precisely a breach.
What is a risk owner?
The person accountable for resolving a risk
The person who is responsible for creating the risk
The person who will take the action to mitigate a risk
The person who identified the risk
A risk owner is the individual who is accountable for a specific risk being properly managed to an acceptable level. Accountability means the risk owner has the authority and obligation to ensure the risk is assessed, an appropriate treatment decision is made, and the organization follows through—whether that decision is to mitigate, transfer, avoid, or accept the risk. In many governance models, the risk owner is typically a business or technology leader who “owns” the process, asset, or outcome most affected by the risk, and who can commit resources or approve changes needed to address it.
This is different from the person who performs the mitigation work. A risk owner may delegate tasks to control owners, engineers, or project teams, but they remain accountable for ensuring actions are completed, deadlines are met, residual risk is understood, and exceptions are documented and approved according to policy. The risk owner is also the person who should review changes in risk conditions over time, such as new vulnerabilities, changes in threat activity, or business/process changes that alter impact.
Option C describes an implementer or control owner, not necessarily the accountable party. Option D is simply the discoverer of the risk, and option B is incorrect because risks are often created by circumstances, design choices, or external factors rather than a single person.
Compliance with regulations is generally demonstrated through:
independent audits of systems and security procedures.
review of security requirements by senior executives and/or the Board.
extensive QA testing prior to system implementation.
penetration testing by ethical hackers.
Regulatory compliance is generally demonstrated through independent audits because regulators, customers, and partners typically require objective evidence that required controls exist and operate effectively. An independent audit is performed by a qualified party that is not responsible for running the controls being assessed, which strengthens credibility and reduces conflicts of interest. Cybersecurity and governance documents describe audits as a formal method to verify compliance against defined criteria such as laws, regulations, contractual obligations, or control frameworks. Auditors review policies and procedures, inspect system configurations, sample access and change records, evaluate logging and monitoring, test incident response evidence, and validate that controls are consistently performed over time. The outcome is usually a report, attestation, or findings with remediation plans—artifacts commonly used to prove compliance.
A Board or executive review supports governance and oversight, but it does not, by itself, provide independent verification that controls are functioning. QA testing focuses on product quality and functional correctness; it may include security testing but does not typically satisfy regulatory evidence requirements for ongoing operational controls. Penetration testing is valuable for identifying exploitable weaknesses, yet it is a point-in-time technical exercise and does not comprehensively demonstrate compliance with procedural, administrative, and operational requirements such as access governance, retention, training, vendor oversight, and continuous monitoring. Therefore, independent audits are the standard mechanism to demonstrate compliance in a defensible, repeatable way.
TESTED 04 Apr 2026

