Which of the following is a MAJOR consideration in implementing a Voice over IP (VoIP) network?
Use of unified messaging.
Use of separation for the voice network.
Use of Network Access Control (NAC) on switches.
Use of Request for Comments (RFC) 1918 addressing.
The use of Network Access Control (NAC) on switches is a major consideration in implementing a Voice over IP (VoIP) network. NAC is a mechanism that enforces security policies on the network devices, such as switches, routers, firewalls, and servers. NAC can prevent unauthorized or compromised devices from accessing the network, or limit their access to specific segments or resources. NAC can also monitor and remediate the devices for compliance with the security policies, such as patch level, antivirus status, or configuration settings. NAC can enhance the security and performance of a VoIP network, as well as reduce the operational costs and risks. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 473; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 353.
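To make the posture-checking idea concrete, below is a minimal Python sketch of the kind of compliance decision a NAC system makes before admitting a device to a network segment. The policy fields, threshold, and VLAN names are invented for illustration and do not reflect any particular vendor's product; real deployments enforce this at the switch port, typically via 802.1X.

```python
# Conceptual sketch of a NAC-style posture check. Illustrative only: the
# policy values and VLAN names are invented, and real NAC enforcement
# happens at the switch port (e.g., via 802.1X/RADIUS).

from dataclasses import dataclass

@dataclass
class DevicePosture:
    patch_level: int          # e.g., installed patch bundle number
    antivirus_enabled: bool
    is_voip_phone: bool       # device type learned during authentication

MIN_PATCH_LEVEL = 42          # hypothetical minimum for data VLAN access

def assign_vlan(device: DevicePosture) -> str:
    """Decide which network segment a device may join."""
    if device.is_voip_phone:
        return "voice-vlan"   # phones go to the dedicated voice segment
    if device.patch_level >= MIN_PATCH_LEVEL and device.antivirus_enabled:
        return "data-vlan"    # compliant workstation
    return "quarantine-vlan"  # non-compliant: remediate before access

print(assign_vlan(DevicePosture(patch_level=40, antivirus_enabled=True,
                                is_voip_phone=False)))
# -> quarantine-vlan
```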
An organization publishes and periodically updates its employee policies in a file on their intranet. Which of the following is a PRIMARY security concern?
Availability
Confidentiality
Integrity
Ownership
The primary security concern for an organization that publishes and periodically updates its employee policies in a file on their intranet is integrity. Integrity is the property that ensures that the data or the information is accurate, complete, consistent, and authentic, and that it has not been modified, altered, or corrupted by unauthorized or malicious parties. Integrity is a primary security concern for the employee policies file on the intranet, as it can affect the compliance, trust, and reputation of the organization, and the rights and responsibilities of the employees. The employee policies file must reflect the current and valid policies of the organization, and must not be changed or tampered with by anyone who is not authorized or qualified to do so. Availability, confidentiality, and ownership are not the primary security concerns for the employee policies file on the intranet, as they are related to the accessibility, protection, or attribution of the data or the information, not the accuracy or the authenticity of the data or the information. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 20. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 33.
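As a concrete illustration of protecting integrity, the sketch below verifies published content against a known-good SHA-256 digest using Python's standard hashlib. The policy text and the verification workflow are illustrative assumptions, not part of the question; in practice the file bytes would be hashed and the known-good digest stored where it cannot be tampered with.

```python
# Verifying the integrity of published content with a SHA-256 digest,
# using only Python's standard library. The policy text is illustrative.

import hashlib

def sha256_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

published = b"Employee policies, revision 7 ..."
known_good = sha256_digest(published)              # recorded at publication time

downloaded = b"Employee policies, revision 7 ..."  # copy retrieved from the intranet
if sha256_digest(downloaded) == known_good:
    print("integrity verified")
else:
    print("file was modified: integrity violated")
```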
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
Organizational policy requires the deletion of user data from Personal Digital Assistant (PDA) devices before disposal. It may not be possible to delete the user data if the device is malfunctioning. Which destruction method below provides the BEST assurance that the data has been removed?
Knurling
Grinding
Shredding
Degaussing
The destruction method that provides the best assurance that data has been removed from a malfunctioning PDA is shredding. Shredding physically destroys the media, such as flash memory chips or cards, by cutting them into pieces small enough to make the data unrecoverable. It is effective for a PDA whose data cannot be deleted by software or firmware methods, because it does not depend on the functionality of the device or the media, and it also prevents reuse or recycling of the media by rendering it unusable. Knurling, grinding, and degaussing do not provide the same assurance: knurling and grinding only alter the surface or shape of the media, and degaussing removes data by eliminating a magnetic field, which has no effect on the solid-state flash memory typically found in PDAs. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 889. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 905.
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster, and it defines the strategies, procedures, and resources for doing so.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because this ensures that the DRP is feasible and suitable, and that it can achieve the desired outcomes in a cost-effective and efficient manner. A cost/benefit analysis compares the costs and benefits of different recovery strategies and determines the one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy aligned with both ensures that recovery spending is justified by the losses it avoids and that the recovery capability directly supports the organization’s goals.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
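To illustrate the cost/benefit comparison described above, here is a toy Python sketch that compares recovery strategies by their net annual benefit, using annualized loss expectancy (ALE) figures; every dollar amount is invented for the example.

```python
# Illustrative cost/benefit comparison of recovery strategies.
# Net benefit = (risk reduced, as a drop in annualized loss expectancy)
# minus the strategy's annual cost. All figures are invented.

strategies = {
    #               (annual cost, residual ALE once the strategy is in place)
    "hot site":     (500_000,  50_000),
    "warm site":    (150_000, 200_000),
    "cold site":    ( 40_000, 600_000),
}

baseline_ale = 1_000_000  # expected annual loss with no DR strategy

for name, (cost, residual_ale) in strategies.items():
    net_benefit = (baseline_ale - residual_ale) - cost
    print(f"{name:10s} net annual benefit: ${net_benefit:,}")

# The strategy with the best net benefit that still meets the business's
# recovery objectives is the one the DRP should align with.
```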
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of IT systems or data. Incident response is the structured process of detecting, analyzing, containing, eradicating, recovering from, and learning from an incident, using various methods and tools.
Investigating all symptoms to confirm the incident is the first step in the incident response process, because it ensures that the incident is verified and validated before the response is initiated and escalated. A symptom is a sign or indication that an incident may have occurred or is occurring, such as an alert, a log entry, or a user report. Investigating all symptoms involves collecting and analyzing the relevant data from sources such as the IT systems, the network, the users, or external parties, and determining whether an incident has actually happened and how serious or urgent it is. Confirming the incident first also prevents wasted effort on false positives and ensures the response is scoped and prioritized correctly.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident comes after confirmation, because identifying and analyzing the root cause and source of the incident is what directs and focuses the response. It involves examining and testing the affected IT systems and data, and tracing the origin and path of the incident, using techniques and tools such as forensics, malware analysis, or reverse engineering.
Disconnecting the system involved from the network is a step that should be done along with investigating all symptoms to confirm the incident, because it isolates and protects the system from external or internal interference, so that the incident response is conducted in a safe and controlled environment.
Isolating and containing the system involved is a step that should be done after investigating all symptoms to confirm the incident, because it confines and restricts the incident so that the response can continue in a controlled way. It involves applying and enforcing security measures and controls that limit or stop the activity and impact of the incident on the IT systems and data, such as firewall rules, access policies, or encryption keys.
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster. Business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test. Common types include the walkthrough (a review and discussion of the plans), the simulation (a practice response to a hypothetical scenario), the parallel test (activating the alternate site while the primary site keeps operating), and the full interruption test (shutting down the primary site and failing over completely).
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not types of business continuity tests that assess resilience to internal and external risks without endangering live operations. A walkthrough is only a review and discussion of the BCP and DRP, without any actual testing or practice, so it does not assess resilience. A parallel test does not endanger live operations, but it maintains them while activating and operating the alternate site, rather than assessing responses to simulated risks. White box is not a business continuity test type at all; it is a software testing approach in which the tester has full knowledge of the internal design or code. (A full interruption test, by contrast, does endanger live operations, by shutting them down and transferring them to the alternate site.)
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
What would be the MOST cost effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. A DR site can have different levels of readiness and functionality, depending on the organization’s recovery objectives and budget. The main types of DR sites are the hot site (fully equipped and kept ready to take over within hours), the warm site (partially equipped, typically ready within about a day), the cold site (basic space and utilities only, requiring days or weeks to activate), and the mirror site (a fully redundant, continuously synchronized duplicate of the primary site).
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
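A small sketch of the selection logic described above: pick the cheapest site type whose typical recovery time still meets the 24-hour requirement. The recovery-time and relative-cost figures are rough rules of thumb assumed for illustration, not fixed standards.

```python
# Choosing the cheapest DR site type whose typical recovery time meets a
# 24-hour availability requirement. Figures are illustrative assumptions.

sites = [
    # (name, typical recovery time in hours, relative annual cost)
    ("mirror site", 0,   5),
    ("hot site",    2,   4),
    ("warm site",   24,  2),
    ("cold site",   168, 1),
]

rto_hours = 24  # systems cannot be unavailable for more than 24 hours

feasible = [s for s in sites if s[1] <= rto_hours]   # meets the RTO
cheapest = min(feasible, key=lambda s: s[2])          # lowest cost among those
print(f"Most cost-effective option meeting a {rto_hours}h RTO: {cheapest[0]}")
# -> warm site
```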
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the internet.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer. An SLA typically includes components such as service level indicators (the metrics that measure the quality, performance, and availability of the service), service level objectives (the target values for those indicators), service level reporting, and the penalties or remedies that apply when targets are missed.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather factors that could affect or improve the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution is a factor that could affect the ability of the organization to analyze and utilize the data and information from the Web hosting solution, such as the web traffic, behavior, or conversion. A BI solution is a system that involves the collection, integration, processing, and presentation of the data and information from various sources, such as the Web hosting solution, to support the decision making and planning of the organization. However, it is not the most probable cause, because it affects the analysis and usage of the performance indicators, not their definition or specification. Inadequate cost modeling is a factor that could affect the ability of the organization to estimate and optimize the cost and value of the Web hosting solution, such as the web hosting fees, maintenance costs, or return on investment. A cost model is a tool or a method that helps the organization to calculate and compare the cost and value of the Web hosting solution, and to identify and implement the most efficient option. However, it is not the most probable cause, because it affects the estimation and optimization of cost and value, not the definition or specification of the performance indicators. Improper deployment of the Service-Oriented Architecture (SOA) is a factor that could affect the ability of the organization to design and develop the Web hosting solution, such as the web services, components, or interfaces. A SOA is a software architecture that involves the modularization, standardization, and integration of the software components or services that provide the functionality or logic of the Web hosting solution.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
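For illustration, the sketch below computes the kind of service level indicator a sufficient SLA would define for a Web hosting solution: measured availability over a reporting period compared against a contracted target. The uptime figures and the 99.9% target are assumptions.

```python
# Computing a simple SLA performance indicator: measured availability
# versus the contracted target. Figures are illustrative.

total_minutes = 30 * 24 * 60   # one 30-day reporting period
downtime_minutes = 90          # measured outage time in the period

availability = 100 * (total_minutes - downtime_minutes) / total_minutes
sla_target = 99.9              # e.g., "three nines" written into the SLA

print(f"measured availability: {availability:.3f}%")
print("SLA met" if availability >= sla_target
      else "SLA breached: reporting/penalty clauses apply")
```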
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting security data and information, using various methods and tools.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
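As a rough illustration of volatility-driven monitoring, the sketch below maps an assumed volatility rating to an assessment interval; the ratings, intervals, and example controls are all invented, not drawn from any standard.

```python
# Sketch of volatility-driven monitoring frequency: more volatile controls
# are assessed more often. Ratings and intervals are illustrative only.

monitor_interval_days = {
    "high":   1,    # e.g., firewall rule sets that change frequently
    "medium": 7,    # e.g., account provisioning processes
    "low":    90,   # e.g., physical controls such as fences or locks
}

controls = [("perimeter firewall rules", "high"),
            ("badge reader configuration", "low")]

for name, volatility in controls:
    print(f"assess '{name}' every {monitor_interval_days[volatility]} day(s)")
```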
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service is a service that provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems, using a single identity provider (IdP), and thereby enables capabilities such as single sign-on (SSO) and federation across those applications or systems.
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Segmentation reduces broadcast traffic, improves performance, helps contain security incidents, and limits the visibility and access that any single host has to the rest of the network.
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
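The broadcast-domain boundary can be illustrated with Python's standard ipaddress module: a sniffer can only capture traffic inside its own segment. The VLAN names and RFC 1918 addresses below are invented for the example.

```python
# Two hosts can exchange traffic without crossing a router only if they sit
# in the same subnet, and a sniffer's capture is bounded the same way.
# Uses only the standard library; addresses are illustrative RFC 1918 space.

from ipaddress import ip_address, ip_network

voice_segment = ip_network("10.0.10.0/24")  # VLAN 10: VoIP phones
data_segment  = ip_network("10.0.20.0/24")  # VLAN 20: workstations

sniffer = ip_address("10.0.20.15")          # compromised workstation
target  = ip_address("10.0.10.7")           # VoIP phone

same_segment = any(sniffer in net and target in net
                   for net in (voice_segment, data_segment))
print("sniffer can capture target's traffic directly:", same_segment)
# -> False: the segmentation boundary hides the voice traffic
```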
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator, and that may have been installed or executed without the user’s knowledge or consent. Its purpose may range from benign but unapproved software to malware such as spyware, a backdoor, or a data-stealing trojan.
Forensic analysis is a process that involves examining and investigating the system or network for any evidence or traces of the unknown application, such as its origin, nature, behavior, and impact.
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application, because it ensures that the system is protected from any external or internal influences or interferences, and that the forensic analysis is conducted in a safe and controlled environment. Isolation also prevents the unknown application from spreading, communicating with an external controller, or destroying evidence while it is being analyzed.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster.
A BCP is considered to be valid when it has been validated by realistic exercises, because this demonstrates that the BCP is practical and applicable, and that it can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises involve performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios, such as a fire drill, a power outage, or a cyberattack.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it can ensure that the organization and its staff have a clear and consistent guidance and direction on how to respond and act during a disruption or disaster, and avoid any confusion, uncertainty, or inconsistency that might worsen the situation or impact. A BCP/DRP can also help to reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer. It records who collected and handled the evidence, when and where it was collected, how it was stored and transported, and every transfer of custody along the way.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
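A minimal sketch of the hash-value step described above, using Python's standard hashlib to show that a working copy matches the acquired image; the file paths are illustrative, and the acquisition itself would still go through a write blocker.

```python
# Hashing a forensic disk image with SHA-256 so the working copy can later
# be shown to match the original acquisition. Paths are illustrative.

import hashlib

def image_digest(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so a multi-gigabyte image never loads into memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original = image_digest("evidence/disk0.img")  # computed at acquisition time
copy     = image_digest("working/disk0.img")   # re-computed before analysis
assert original == copy, "working copy does not match acquired image"
```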
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program is a process that involves maintaining ongoing awareness of the security status, events, and activities of a system or network, by collecting, analyzing, and reporting security data and information, using various methods and tools.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because this ensures that the program is holistic and comprehensive, covering all aspects and elements of the system or network security. People, process, and technology are the three pillars of such a program: people are the roles, responsibilities, and skills of those who operate and oversee the program; process is the policies, procedures, and workflows that govern it; and technology is the tools and systems that collect, analyze, and report the security data.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise.
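A toy version of the statistical baselining an NBA tool performs is sketched below: learn the mean and spread of normal traffic volume, then flag observations that deviate sharply. The traffic numbers and the 3-sigma threshold are invented for illustration; real NBA products use far richer models.

```python
# Toy statistical baselining: learn normal per-minute traffic volume, then
# flag outliers by z-score. Sample numbers are invented.

from statistics import mean, stdev

baseline = [980, 1010, 995, 1005, 990, 1002, 998, 1015]  # bytes/min, normal period
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observed: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from baseline."""
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(1004))    # False: within normal variation
print(is_anomalous(250000))  # True: e.g., an exfiltration-sized burst
```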
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as launching denial-of-service (DoS) or distributed denial-of-service (DDoS) attacks, hijacking or intercepting TCP sessions, or bypassing IP address filtering.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
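To show how simple the forgery itself is, here is a minimal sketch using the scapy packet-crafting library to set a forged source address on a TCP SYN. The addresses are reserved documentation ranges, and actually transmitting such packets requires raw-socket privileges and is only lawful in authorized testing.

```python
# Setting a forged source address on a crafted packet with scapy
# (pip install scapy). Illustration / authorized testing only; many
# networks drop spoofed sources via egress filtering (BCP 38).

from scapy.all import IP, TCP, send

pkt = (IP(src="198.51.100.7",        # forged source: a "trusted" host we don't own
          dst="203.0.113.10")        # target
       / TCP(dport=80, flags="S"))   # TCP SYN

# send(pkt)  # commented out: transmitting requires root and authorization
print(pkt.summary())
```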
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows: the initiating node sends a SYN segment carrying an initial sequence number; the receiving node replies with a SYN-ACK segment that acknowledges that sequence number and supplies its own; and the initiating node completes the connection with a final ACK segment.
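In ordinary applications the handshake is invisible: the operating system's TCP stack performs the SYN, SYN-ACK, and ACK exchange when a program asks for a connection, as in the Python sketch below (the host and port are illustrative).

```python
# One connect() call triggers the full SYN / SYN-ACK / ACK handshake in the
# OS's TCP stack. Host and port are illustrative; requires network access.

import socket

with socket.create_connection(("example.com", 80), timeout=5) as s:
    # At this point the handshake has completed and the connection is ESTABLISHED.
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
    print(s.recv(128).decode(errors="replace"))
```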
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
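A toy rule evaluator mirroring the header checks described above is sketched below: each rule matches on source network, destination port, and protocol, the first match wins, and the default is deny. The rules and addresses are invented for illustration.

```python
# Toy first-match packet filter over network/transport header fields.
# Rules and addresses are invented; real filters live in routers/firewalls.

from ipaddress import ip_address, ip_network

RULES = [
    # (source network,             dest port, protocol, action)
    (ip_network("10.0.0.0/8"),     443,       "tcp",    "allow"),
    (ip_network("192.168.1.0/24"), 22,        "tcp",    "allow"),
    (ip_network("0.0.0.0/0"),      None,      None,     "deny"),  # default deny
]

def filter_packet(src: str, dport: int, proto: str) -> str:
    for net, port, protocol, action in RULES:
        if (ip_address(src) in net
                and port in (None, dport)        # None = any port
                and protocol in (None, proto)):  # None = any protocol
            return action
    return "deny"

print(filter_packet("10.1.2.3", 443, "tcp"))     # allow
print(filter_packet("203.0.113.9", 443, "tcp"))  # deny (default rule)
```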
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
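As an illustration of the frame layout LCP negotiates, here is a small sketch that packs an LCP Configure-Request carrying a single Maximum-Receive-Unit (MRU) option, following the code/identifier/length layout described in RFC 1661; the identifier and MRU value are arbitrary example values:

```python
import struct

def lcp_configure_request(identifier: int, mru: int = 1500) -> bytes:
    """Build an LCP Configure-Request carrying a single MRU option."""
    # LCP option: Type (1 = Maximum-Receive-Unit), Length (4), 2-byte MRU value
    option = struct.pack("!BBH", 1, 4, mru)
    # LCP header: Code (1 = Configure-Request), Identifier, total Length
    header = struct.pack("!BBH", 1, identifier, 4 + len(option))
    return header + option

frame = lcp_configure_request(identifier=0x01)
# -> 01010008 010405dc: header (code=1, id=1, len=8) + MRU option (type=1, len=4, value=1500)
print(frame.hex())
```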
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
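To show how the base metrics combine, here is a hedged sketch of the CVSS v3.1 base-score formula for the unchanged-scope case, using the metric weights published in the specification; for example, the vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H works out to 9.8:

```python
import math

def roundup(x: float) -> float:
    """CVSS v3.1 rounds scores up to one decimal place."""
    return math.ceil(x * 10) / 10

def cvss_base_score(av, ac, pr, ui, c, i, a):
    """CVSS v3.1 base score, Scope: Unchanged only (simplified sketch)."""
    iss = 1 - (1 - c) * (1 - i) * (1 - a)       # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Published weights for AV:N (0.85), AC:L (0.77), PR:N (0.85), UI:N (0.85),
# and C/I/A all High (0.56 each)
print(cvss_base_score(av=0.85, ac=0.77, pr=0.85, ui=0.85,
                      c=0.56, i=0.56, a=0.56))  # -> 9.8
```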
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase typically involves registering the end entity with the RA, generating the key pair, collecting the information required for the certificate, and submitting the certificate request to the certification authority (CA).
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows: the code provider signs the mobile code with its private key and distributes the signature, usually with a digital certificate issued by a trusted certificate authority, along with the code; the code consumer verifies the signature with the provider's public key and then decides, based on its trust in the provider, whether to run the code.
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
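At its core, the consumer's trust decision reduces to a signature check. Below is a minimal sketch using the Python cryptography package, assuming the provider signed the code with an RSA key and the consumer already holds, and trusts, the provider's public key; verification proves authenticity and integrity, not that the code is safe:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def is_code_trusted(code: bytes, signature: bytes, pubkey_pem: bytes) -> bool:
    """Verify the provider's signature over the mobile code."""
    public_key = serialization.load_pem_public_key(pubkey_pem)
    try:
        public_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        return True   # authentic and unmodified -- says nothing about code quality
    except InvalidSignature:
        return False  # tampered with, or signed by a different key
```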
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
A business has implemented Payment Card Industry Data Security Standard (PCI-DSS) compliant handheld credit card processing on their Wireless Local Area Network (WLAN) topology. The network team partitioned the WLAN to create a private segment for credit card processing using a firewall to control device access and route traffic to the card processor on the Internet. What components are in the scope of PCI-DSS?
The entire enterprise network infrastructure.
The handheld devices, wireless access points and border gateway.
The end devices, wireless access points, WLAN, switches, management console, and firewall.
The end devices, wireless access points, WLAN, switches, management console, and Internet
The components that are in the scope of PCI-DSS are the end devices, wireless access points, WLAN, switches, management console, and firewall. PCI-DSS is a set of standards and requirements that aim to ensure the security of the cardholder data and the payment transactions. PCI-DSS applies to any entity that stores, processes, or transmits cardholder data, or that provides services or devices that affect the security of the cardholder data. The scope of PCI-DSS includes all the system components that are connected to or support the cardholder data environment, such as the hardware, the software, the network, or the personnel. In this question, the end devices, wireless access points, WLAN, switches, management console, and firewall are all part of the system components that are connected to or support the cardholder data environment, as they are used to process the credit card transactions on the WLAN. Therefore, they are in the scope of PCI-DSS, and they must comply with the PCI-DSS requirements. The entire enterprise network infrastructure and the Internet are not in the scope of PCI-DSS, as they are not directly connected to or support the cardholder data environment, and they are separated from the private segment for credit card processing by the firewall. The border gateway is not a system component, but a term that refers to a device that connects two networks with different protocols, such as a router or a proxy server. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 548. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 564.
A security manager has noticed an inconsistent application of server security controls resulting in vulnerabilities on critical systems. What is the MOST likely cause of this issue?
A lack of baseline standards
Improper documentation of security guidelines
A poorly designed security policy communication program
Host-based Intrusion Prevention System (HIPS) policies are ineffective
The most likely cause of the inconsistent application of server security controls resulting in vulnerabilities on critical systems is a lack of baseline standards. Baseline standards are the minimum level of security controls and measures that must be applied to the servers or other assets to ensure their protection and compliance. Baseline standards help to establish a consistent and uniform security posture across the organization, and to prevent or reduce the exposure to threats and risks. If there is a lack of baseline standards, the server security controls may vary in quality, effectiveness, or completeness, resulting in vulnerabilities on critical systems. Improper documentation of security guidelines, a poorly designed security policy communication program, and ineffective Host-based Intrusion Prevention System (HIPS) policies are not the most likely causes of this issue, as they do not directly affect the application of server security controls or the existence of baseline standards. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 35. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
From a security perspective, which of the following is a best practice to configure a Domain Name Service (DNS) system?
Configure secondary servers to use the primary server as a zone forwarder.
Block all Transmission Control Protocol (TCP) connections.
Disable all recursive queries on the name servers.
Limit zone transfers to authorized devices.
From a security perspective, the best practice to configure a DNS system is to limit zone transfers to authorized devices. Zone transfers are the processes of replicating the DNS data from one server to another, usually from a primary server to a secondary server. Zone transfers can expose sensitive information about the network topology, hosts, and services to attackers, who can use this information to launch further attacks. Therefore, zone transfers should be restricted to only the devices that need them, and authenticated and encrypted to prevent unauthorized access or modification. The other options are not as good as limiting zone transfers, as they either do not provide sufficient security for the DNS system (A and B), or do not address the zone transfer issue (C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 156; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 166.
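One way to verify the restriction is to attempt a zone transfer from a host that should not be authorized. The sketch below uses the dnspython library; the server address and zone name are placeholders, and such a test should only be run against name servers you are authorized to assess:

```python
import dns.query
import dns.zone

def zone_transfer_allowed(server_ip: str, zone_name: str) -> bool:
    """Attempt an AXFR; an exposed server will hand back the whole zone."""
    try:
        zone = dns.zone.from_xfr(dns.query.xfr(server_ip, zone_name))
        print(f"Transfer succeeded: {len(zone.nodes)} names exposed")
        return True
    except Exception:
        return False  # refused, timed out, or otherwise restricted

# Hypothetical test against your own name server only
zone_transfer_allowed("192.0.2.53", "example.com")
```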
What is the MOST critical factor to achieve the goals of a security program?
Capabilities of security resources
Executive management support
Effectiveness of security management
Budget approved for security resources
The most critical factor to achieve the goals of a security program is the executive management support. The executive management is the highest level of authority or decision-making in the organization, such as the board of directors, the chief executive officer, or the chief information officer. The executive management support is the endorsement, the sponsorship, or the involvement of the executive management in the security program, such as the security planning, the security implementation, the security monitoring, or the security auditing. The executive management support is the most critical factor to achieve the goals of the security program, as it can provide the vision, the direction, or the strategy for the security program, and it can align the security program with the business needs and requirements. The executive management support can also provide the resources, the budget, or the authority for the security program, and it can foster the security culture, the security awareness, or the security governance in the organization. The executive management support can also influence the stakeholders, the customers, or the regulators, and it can demonstrate the commitment, the accountability, or the responsibility for the security program. Capabilities of security resources, effectiveness of security management, and budget approved for security resources are not the most critical factors to achieve the goals of the security program, as they are related to the skills, the performance, or the funding of the security program, not the endorsement, the sponsorship, or the involvement of the executive management in the security program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, Security and Risk Management, page 33. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, Security and Risk Management, page 48.
What is the PRIMARY reason for ethics awareness and related policy implementation?
It affects the workflow of an organization.
It affects the reputation of an organization.
It affects the retention rate of employees.
It affects the morale of the employees.
The primary reason for ethics awareness and related policy implementation is to affect the reputation of an organization positively, by demonstrating its commitment to ethical principles, values, and standards in its business practices, services, and products. Ethics awareness and policy implementation can also help the organization avoid legal liabilities, fines, or sanctions for unethical conduct, and foster trust and loyalty among its customers, partners, and employees. The other options are not as important as affecting the reputation, as they either do not directly relate to ethics (A), or are secondary outcomes of ethics (C and D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 28.
With data labeling, which of the following MUST be the key decision maker?
Information security
Departmental management
Data custodian
Data owner
With data labeling, the data owner must be the key decision maker. The data owner is the person or entity that has the authority and responsibility for the data, including its classification, protection, and usage. The data owner must decide how to label the data according to its sensitivity, criticality, and value, and communicate the labeling scheme to the data custodians and users. The data owner must also review and update the data labels as needed. The other options are not the key decision makers for data labeling, as they either do not have the authority or responsibility for the data (A, B, and C), or do not have the knowledge or interest in the data (B and C). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 63; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 69.
What is the PRIMARY advantage of using automated application security testing tools?
The application can be protected in the production environment.
Large amounts of code can be tested using fewer resources.
The application will fail less when tested using these tools.
Detailed testing of code functions can be performed.
Automated application security testing tools are software tools that can scan, analyze, and test the code of an application for vulnerabilities, errors, or flaws. The primary advantage of using these tools is that they can test large amounts of code using fewer resources, such as time, money, and human effort, than manual testing. This can improve the efficiency, effectiveness, and coverage of the testing process. The application can be protected in the production environment, the application will fail less when tested using these tools, and detailed testing of code functions can be performed are all possible outcomes of using automated application security testing tools, but they are not the primary advantage of using them. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1017. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1039.
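The efficiency argument is easiest to see in miniature. The following toy static-analysis sketch scans an entire source tree mechanically; the flagged patterns are illustrative examples, not the rule set of any real tool:

```python
import re
from pathlib import Path

# Hypothetical rule set: pattern -> finding description
RULES = {
    r"\beval\(":             "use of eval() - possible code injection",
    r"\bpickle\.loads\(":    "unpickling untrusted data - possible RCE",
    r"password\s*=\s*['\"]": "hard-coded credential",
}

def scan(root: str) -> list[tuple[str, int, str]]:
    """Scan every .py file under root and report (file, line, finding)."""
    findings = []
    for path in Path(root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            for pattern, message in RULES.items():
                if re.search(pattern, line):
                    findings.append((str(path), lineno, message))
    return findings

for file, line, msg in scan("src"):
    print(f"{file}:{line}: {msg}")
```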
Refer to the information below to answer the question.
Desktop computers in an organization were sanitized for re-use in an equivalent security environment. The data was destroyed in accordance with organizational policy and all marking and other external indications of the sensitivity of the data that was formerly stored on the magnetic drives were removed.
After magnetic drives were degaussed twice according to the product manufacturer's directions, what is the MOST LIKELY security issue with degaussing?
Commercial products often have serious weaknesses of the magnetic force available in the degausser product.
Degausser products may not be properly maintained and operated.
The inability to turn the drive around in the chamber for the second pass due to human error.
Inadequate record keeping when sanitizing media.
The most likely security issue with degaussing is that the degausser products may not be properly maintained and operated. Degaussing is a method of sanitizing magnetic media, such as hard disk drives, by applying a strong magnetic field that erases the data stored on the media. Degaussing can be effective in destroying the data, but it requires that the degausser products are calibrated, tested, and used according to the manufacturer’s specifications and instructions. If the degausser products are not properly maintained and operated, they may not generate a sufficient magnetic force to erase the data completely, or they may damage the media or the device. Commercial products often have serious weaknesses of the magnetic force available in the degausser product, the inability to turn the drive around in the chamber for the second pass due to human error, and inadequate record keeping when sanitizing media are not the most likely security issues with degaussing, as they are related to the quality, the technique, or the documentation of the degaussing process, not the maintenance or the operation of the degausser products. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 888. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 904.
Which of the following is a critical factor for implementing a successful data classification program?
Executive sponsorship
Information security sponsorship
End-user acceptance
Internal audit acceptance
The critical factor for implementing a successful data classification program is executive sponsorship. Executive sponsorship is the support and commitment from the senior management of the organization for the data classification program. Executive sponsorship can provide the necessary resources, authority, and guidance for the data classification program, and ensure that the program aligns with the organization’s goals, policies, and culture. Executive sponsorship can also influence and motivate the data owners, custodians, and users to participate and comply with the data classification program. The other options are not as critical as executive sponsorship, as they either do not have the same level of influence or authority (B, C, and D), or do not directly contribute to the data classification program (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 66; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 72.
Which of the following is critical for establishing an initial baseline for software components in the operation and maintenance of applications?
Application monitoring procedures
Configuration control procedures
Security audit procedures
Software patching procedures
Configuration control procedures are critical for establishing an initial baseline for software components in the operation and maintenance of applications. Configuration control procedures are the processes and activities that ensure the integrity, consistency, and traceability of the software components throughout the SDLC. Configuration control procedures include identifying, documenting, storing, reviewing, approving, and updating the software components, as well as managing the changes and versions of the components. By establishing an initial baseline, the organization can have a reference point for measuring and evaluating the performance, quality, and security of the software components, and for applying and tracking the changes and updates to the components. The other options are not as critical as configuration control procedures, as they either do not establish an initial baseline (A and C), or do not apply to all software components (D). References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 468; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 568.
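At its simplest, an initial baseline is a recorded snapshot that later states can be compared against. Here is a hedged sketch that fingerprints software components with SHA-256 hashes and reports drift; the paths and file names are illustrative:

```python
import hashlib
import json
from pathlib import Path

def build_baseline(root: str) -> dict[str, str]:
    """Record a SHA-256 fingerprint for every file under root."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(root).rglob("*")) if p.is_file()
    }

def drift(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Report components that were added, removed, or modified."""
    return sorted(
        path for path in baseline.keys() | current.keys()
        if baseline.get(path) != current.get(path)
    )

# Hypothetical usage: snapshot at release, compare during maintenance
Path("baseline.json").write_text(json.dumps(build_baseline("app/")))
```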
Refer to the information below to answer the question.
A large organization uses unique identifiers and requires them at the start of every system session. Application access is based on job classification. The organization is subject to periodic independent reviews of access controls and violations. The organization uses wired and wireless networks and remote access. The organization also uses secure connections to branch offices and secure backup and recovery strategies for selected information and processes.
In addition to authentication at the start of the user session, best practice would require re-authentication
periodically during a session.
for each business process.
at system sign-off.
after a period of inactivity.
The best practice would require re-authentication after a period of inactivity, in addition to authentication at the start of the user session. Authentication is a process of verifying the identity or the credentials of a user or a device that requests access to a system or a resource. Re-authentication is a process of repeating the authentication after a certain condition or event, such as a change of location, a change of role, a change of privilege, or a period of inactivity. Re-authentication can help to enhance the security and the accountability of the access control, as it can prevent or detect the unauthorized or malicious use of the user or the device credentials, and it can ensure that the user or the device is still active and valid. Re-authenticating after a period of inactivity can help to prevent the unauthorized or malicious access by someone who may have gained physical access to the user or the device session, such as a co-worker, a visitor, or a thief. Re-authenticating periodically during a session, for each business process, or at system sign-off are not the best practices, as they may not be necessary or effective for the security or the accountability of the access control, and they may cause inconvenience or frustration to the user or the device. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 685. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 701.
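The inactivity rule itself is simple to express. Below is a minimal sketch that tracks the last activity timestamp per session; the 15-minute window is an illustrative policy value, not a prescribed standard:

```python
import time

IDLE_LIMIT_SECONDS = 15 * 60  # illustrative policy: 15 minutes of inactivity

class Session:
    def __init__(self, user: str):
        self.user = user
        self.last_activity = time.monotonic()

    def touch(self) -> None:
        """Record user activity, resetting the inactivity clock."""
        self.last_activity = time.monotonic()

    def needs_reauthentication(self) -> bool:
        """True once the session has been idle longer than the limit."""
        return time.monotonic() - self.last_activity > IDLE_LIMIT_SECONDS

session = Session("alice")
if session.needs_reauthentication():
    print("Session idle too long: prompt for credentials again")
else:
    session.touch()
```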
What is the BEST first step for determining if the appropriate security controls are in place for protecting data at rest?
Identify regulatory requirements
Conduct a risk assessment
Determine business drivers
Review the security baseline configuration
A risk assessment is the best first step for determining if the appropriate security controls are in place for protecting data at rest. A risk assessment involves identifying the assets, threats, vulnerabilities, and impacts related to the data, as well as the likelihood and severity of potential breaches. Based on the risk assessment, the appropriate security controls can be selected and implemented to mitigate the risks to an acceptable level. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 35; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 41.
Refer to the information below to answer the question.
A security practitioner detects client-based attacks on the organization’s network. A plan will be necessary to address these concerns.
What is the BEST reason for the organization to pursue a plan to mitigate client-based attacks?
Client privilege administration is inherently weaker than server privilege administration.
Client hardening and management is easier on clients than on servers.
Client-based attacks are more common and easier to exploit than server and network based attacks.
Client-based attacks have higher financial impact.
The best reason for the organization to pursue a plan to mitigate client-based attacks is that client-based attacks are more common and easier to exploit than server and network based attacks. Client-based attacks are the attacks that target the client applications or systems, such as web browsers, email clients, or media players, and that can exploit the vulnerabilities or weaknesses of the client software or configuration, or the user behavior or interaction. Client-based attacks are more common and easier to exploit than server and network based attacks, because the client applications or systems are more exposed and accessible to the attackers, the client software or configuration is more diverse and complex to secure, and the user behavior or interaction is more unpredictable and prone to errors or mistakes. Therefore, the organization needs to pursue a plan to mitigate client-based attacks, as they pose a significant security threat or risk to the organization’s data, systems, or network. Client privilege administration is inherently weaker than server privilege administration, client hardening and management is easier on clients than on servers, and client-based attacks have higher financial impact are not the best reasons for the organization to pursue a plan to mitigate client-based attacks, as they are not supported by the facts or evidence, or they are not relevant or specific to the client-side security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, Software Development Security, page 1050. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, Software Development Security, page 1066.
Which of the following is a detective access control mechanism?
Log review
Least privilege
Password complexity
Non-disclosure agreement
The access control mechanism that is detective is log review. Log review is a process of examining and analyzing the records or events of the system or network activity, such as user login, file access, or network traffic, that are stored in log files. Log review can help to detect and identify any unauthorized, abnormal, or malicious access or behavior, and to provide evidence or clues for further investigation or response. Log review is a detective access control mechanism, as it can discover or reveal the occurrence or the source of the security incidents or violations, after they have happened. Least privilege, password complexity, and non-disclosure agreement are not detective access control mechanisms, as they are related to the restriction, protection, or confidentiality of the access or information, not the detection or identification of the security incidents or violations. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, Security Operations, page 932. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, Security Operations, page 948.
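As a concrete illustration of the detective nature of log review, the following sketch scans authentication records for repeated failures after the fact; the log format and threshold are hypothetical:

```python
from collections import Counter

# Hypothetical log format: "<timestamp> <result> user=<name> src=<ip>"
LOG_LINES = [
    "2024-05-01T09:00:01 FAILED user=alice src=203.0.113.9",
    "2024-05-01T09:00:03 FAILED user=alice src=203.0.113.9",
    "2024-05-01T09:00:05 FAILED user=alice src=203.0.113.9",
    "2024-05-01T09:01:00 SUCCESS user=bob src=198.51.100.4",
]

failures = Counter()
for line in LOG_LINES:
    if " FAILED " in line:
        user = line.split("user=")[1].split()[0]
        failures[user] += 1

# Flag anything above an illustrative threshold -- after the events occurred
for user, count in failures.items():
    if count >= 3:
        print(f"detected: {count} failed logins for {user} - investigate")
```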
Given the various means to protect physical and logical assets, match the access management area to the technology.
In the context of protecting physical and logical assets, the access management areas and the technologies can be matched as follows:
- Facilities are the physical buildings or locations that house the organization’s assets, such as servers, computers, or documents. Facilities can be protected by using windows that are resistant to breakage, intrusion, or eavesdropping, and that can prevent the leakage of light or sound from inside the facilities.
- Devices are the hardware or software components that enable the communication or processing of data, such as routers, switches, firewalls, or applications. Devices can be protected by using firewalls that can filter, block, or allow network traffic based on predefined rules or policies, and that can prevent unauthorized or malicious access or attacks on the devices or the data.
- Information Systems are the systems that store, process, or transmit data, such as databases, servers, or applications. Information Systems can be protected by using authentication mechanisms that verify the identity or credentials of the users or devices requesting access, and that prevent impersonation or spoofing of the users or the devices.
- Encryption is a technology that can be applied across these areas, such as Devices or Information Systems, to protect the confidentiality and integrity of the data. Encryption transforms the data into an unreadable form using a secret key and an algorithm, and prevents the interception, disclosure, or modification of the data by unauthorized parties.
What does secure authentication with logging provide?
Data integrity
Access accountability
Encryption logging format
Segregation of duties
Secure authentication with logging provides access accountability, which means that the actions of users can be traced and audited. Logging can help identify unauthorized or malicious activities, enforce policies, and support investigations.
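On the generating side, accountability requires that every authentication attempt, successful or not, leaves an attributable record. A minimal sketch using Python's standard logging module follows; the credential check is a stub for illustration:

```python
import logging

logging.basicConfig(
    filename="auth.log",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

def check_credentials(username: str, password: str) -> bool:
    """Stub for illustration; a real system verifies against a credential store."""
    return False

def authenticate(username: str, password: str, source_ip: str) -> bool:
    """Authenticate and leave an attributable audit record either way."""
    ok = check_credentials(username, password)
    if ok:
        logging.info("LOGIN SUCCESS user=%s src=%s", username, source_ip)
    else:
        logging.warning("LOGIN FAILED user=%s src=%s", username, source_ip)
    return ok

authenticate("alice", "wrong-password", "203.0.113.9")
```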
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as strong isolation between guests, better utilization of the physical hardware through consolidation, and flexibility in provisioning, snapshotting, and migrating the VMs.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs. Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. Guest OS access controls can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files. Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. Host VM access controls can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, host VM access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the VMs.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and repeatable starting configuration, a documented definition of the approved security settings, and a reference point for detecting unauthorized changes or configuration drift.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
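The comparison an auditor (or an automated compliance check) performs can be sketched as a simple diff against the baseline; the setting names and values below are illustrative:

```python
# Illustrative baseline: the approved, secure state of the OS settings
BASELINE = {
    "password_min_length": 12,
    "ssh_root_login": "disabled",
    "firewall": "enabled",
    "auto_updates": "enabled",
}

def audit(current: dict) -> list[str]:
    """List every setting that deviates from the baseline."""
    return [
        f"{key}: expected {expected!r}, found {current.get(key)!r}"
        for key, expected in BASELINE.items()
        if current.get(key) != expected
    ]

observed = {"password_min_length": 8, "ssh_root_login": "enabled",
            "firewall": "enabled", "auto_updates": "enabled"}
for finding in audit(observed):
    print("non-compliant -", finding)
```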
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as visibility into the security posture of the system, evidence of compliance with policies and regulations, and a quantitative basis for evaluating and improving the security processes.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying and prioritizing vulnerabilities before they can be exploited, validating the effectiveness of the security controls, and supporting compliance with security requirements and standards.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in the Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as an executive summary, an introduction, the methodology, the results of each test phase, the conclusions and recommendations, and supporting appendices.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
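One common safeguard against this kind of log-driven resource exhaustion is to cap and rotate the audit log files so they cannot fill the disk. A minimal sketch using Python's standard RotatingFileHandler follows; the size and count limits are illustrative:

```python
import logging
from logging.handlers import RotatingFileHandler

# Cap remote-access audit logs at ~10 MB x 5 files so a flood of
# bogus requests cannot fill the disk under the authentication system.
handler = RotatingFileHandler(
    "remote_access_audit.log", maxBytes=10 * 1024 * 1024, backupCount=5
)
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

audit_log = logging.getLogger("remote_access_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(handler)

audit_log.info("remote login attempt user=%s src=%s result=%s",
               "alice", "203.0.113.9", "failed")
```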
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
A company is attempting to enhance the security of its user authentication processes. After evaluating several options, the company has decided to utilize Identity as a Service (IDaaS).
Which of the following factors leads the company to choose an IDaaS as their solution?
In-house development provides more control.
In-house team lacks resources to support an on-premise solution.
Third-party solutions are inherently more secure.
Third-party solutions are known for transferring the risk to the vendor.
The factor that leads the company to choose an IDaaS as their solution is that the in-house team lacks resources to support an on-premise solution. IDaaS is a cloud-based service that provides identity and access management capabilities, such as single sign-on, multi-factor authentication, or identity federation, to the users and applications of an organization. IDaaS can offer several advantages over an on-premise solution, which is a solution that is installed and managed by the organization itself on its own servers or infrastructure: lower upfront and maintenance costs, faster deployment, elastic scalability, and access to the vendor's specialized expertise and support. These advantages make IDaaS a natural fit when the in-house team lacks the resources to support an on-premise solution.
An organization plans to acquire a commercial off-the-shelf (COTS) system to replace their aging home-built reporting system. When should the organization's security team FIRST get involved in this acquisition’s life cycle?
When the system is being designed, purchased, programmed, developed, or otherwise constructed
When the system is verified and validated
When the system is deployed into production
When the need for a system is expressed and the purpose of the system is documented
The security team should be involved in the acquisition life cycle as early as possible, preferably when the need for a system is expressed and the purpose of the system is documented. This will ensure that the security requirements are identified and incorporated into the system design, purchase, development, and testing phases. Waiting until the system is verified and validated or deployed into production may be too late to address any security issues or risks that could have been prevented or mitigated earlier. References: CISSP - Certified Information Systems Security Professional, Domain 1. Security and Risk Management, 1.3 Understand and apply security governance principles, 1.3.2 Due diligence/due care; CISSP Exam Outline, Domain 1. Security and Risk Management, 1.3 Understand and apply security governance principles, 1.3.2 Due diligence/due care
Which of the following Disaster recovery (DR) testing processes is LEAST likely to disrupt normal business operations?
Parallel
Simulation
Table-top
Cut-over
A table-top DR testing process is the least likely to disrupt normal business operations, as it involves only a discussion or a walkthrough of the DR plan with the key stakeholders and participants. No actual systems or resources are involved in the test, and no disruption or downtime is expected. A parallel DR testing process involves activating the backup site and running the systems in parallel with the primary site, which may cause some performance issues or conflicts. A simulation DR testing process involves simulating a disaster scenario and testing the response capabilities of the staff and the systems, which may cause some stress or confusion. A cut-over DR testing process involves switching the operations entirely to the backup site and shutting down the primary site, which causes the most disruption and carries the most risk of all the DR testing processes. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 19: Security Operations, page 1869.
Which is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Issued Phase
Cancellation Phase
Implementation phase
Initialization Phase
The second phase of public key infrastructure (PKI) key/certificate life-cycle management is the issued phase, where the certificate authority (CA) issues a digital certificate to the requester after verifying their identity and public key. The certificate contains the public key, the identity of the owner, the validity period, the serial number, and the digital signature of the CA. The certificate is then published in a repository or directory for others to access and validate. References: CISSP Study Guide: Key Management Life Cycle, Key Management - OWASP Cheat Sheet Series, CISSP 2021: Software Development Lifecycles & Ecosystems
Which of the following is a security weakness in the evaluation of Common Criteria (CC) products?
The manufacturer can state what configuration of the product is to be evaluated.
The product can be evaluated by labs in other countries.
The Target of Evaluation's (TOE) testing environment is identical to the operating environment
The evaluations are expensive and time-consuming to perform.
The security weakness in the evaluation of Common Criteria (CC) products is that the manufacturer can state what configuration of the product is to be evaluated. CC is an international standard that defines a framework for evaluating the security, functionality, and assurance of information technology (IT) products, systems, and components, such as software, hardware, or firmware. CC provides consistency, interoperability, and transparency for manufacturers, consumers, and evaluators by offering a common, objective, and independent way to assess, measure, and compare products. The evaluation is structured around constructs such as the Evaluation Assurance Level (EAL), the Protection Profile (PP), and the Security Target (ST), which define the functional requirements, assurance requirements, and security objectives against which a product is tested. The weakness is that the manufacturer selects and specifies the features, settings, and parameters of the configuration that is evaluated. A manufacturer can exploit this to its advantage by choosing a configuration that emphasizes the product's security strengths while concealing or minimizing its vulnerabilities, so the evaluated configuration may not match the configuration that customers actually deploy.
The other options are not security weaknesses in the evaluation of CC products. Evaluation by labs in other countries concerns the location and mutual recognition of the evaluation process rather than the security of the evaluated products. A Target of Evaluation (TOE) testing environment that is identical to the operating environment relates to the realism of the testing conditions and, if anything, makes the evaluation results more representative. The expense and time required for evaluations are practical limitations that affect the feasibility and accessibility of the process, not the security of the products it evaluates. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 516; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.15, page 307.
Which of the following strategies would MOST comprehensively address the risk of malicious insiders leaking sensitive information?
Data Loss Protection (DLP), firewalls, data classification
Least privilege access, Data Loss Protection (DLP), physical access controls
Staff vetting, least privilege access, Data Loss Protection (DLP)
Background checks, data encryption, web proxies
Staff vetting, least privilege access, and Data Loss Protection (DLP) are the strategies that would most comprehensively address the risk of malicious insiders leaking sensitive information. Staff vetting is the process of verifying the background, qualifications, and trustworthiness of the employees or contractors who have access to the organization’s information and assets. Staff vetting can help prevent hiring or retaining individuals who may pose a security risk or have malicious intentions. Least privilege access is the principle of granting the minimum level of access necessary for a user or process to perform their assigned tasks. Least privilege access can help limit the exposure and damage of sensitive information in case of a breach or misuse by an insider. Data Loss Protection (DLP) is a technology that monitors, detects, and prevents the unauthorized transfer or leakage of sensitive data from the organization’s network or systems. DLP can help protect the confidentiality and integrity of the data and enforce the organization’s security policies and compliance requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 205. CISSP Testking ISC Exam Questions.
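To illustrate the detection side of DLP, the sketch below scans outbound text for sensitive-data patterns. The regular expressions are deliberately simplified stand-ins; production DLP products use validated detectors (for example, Luhn checks for card numbers), data fingerprinting, and contextual rules:

```python
import re

# Simplified illustrative patterns only, not production-grade detectors.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outbound(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in outbound text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan_outbound("Please refund card 4111 1111 1111 1111"))  # ['payment_card']
```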
Which of the following explains why classifying data is an important step in performing a Risk assessment?
To provide a framework for developing good security metrics
To justify the selection of costly security controls
To classify the security controls sensitivity that helps scope the risk assessment
To help determine the appropriate level of data security controls
Classifying data is an important step in performing a risk assessment, because it helps to determine the appropriate level of data security controls. Data classification is a process of assigning labels or categories to data based on their sensitivity, value, or criticality. Data classification helps to identify the potential impact of data loss, disclosure, or modification, and the corresponding level of protection required. Data classification also helps to prioritize the data assets and allocate the resources for risk management. The other options are not the main reasons why data classification is important for risk assessment. Data classification may provide a framework for developing security metrics, justify the selection of costly security controls, or classify the security controls sensitivity, but these are secondary benefits or outcomes of data classification, not the primary purpose. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 75-76; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 53-54.
Which of the following documents specifies services from the client's viewpoint?
Service level report
Business impact analysis (BIA)
Service level agreement (SLA)
Service Level Requirement (SLR)
The document that specifies services from the client’s viewpoint is the Service Level Requirement (SLR). An SLR defines the client’s expectations and needs for a service, such as its quality, availability, performance, and cost. It specifies services from the client’s viewpoint because it captures what the client requires before any agreement is negotiated, and it forms the basis from which the Service Level Agreement (SLA) is later drafted.
The other options do not specify services from the client’s viewpoint. A service level report presents data about the actual performance and effectiveness of delivered services compared with the agreed targets and indicators, so it reports services from the service provider’s viewpoint. A business impact analysis (BIA) assesses the potential impact and consequences of disruption to critical business functions or processes caused by incidents such as disasters, emergencies, or threats, so it analyzes services from the business viewpoint. A service level agreement (SLA) records the performance targets agreed between the provider and the client, together with the remedies or penalties for non-compliance; it documents the negotiated agreement rather than the client’s initial requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 900. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations.
Which of the following is the BEST approach for a forensic examiner to obtain the greatest amount of relevant information from malicious software?
Analyze the behavior of the program.
Examine the file properties and permissions.
Review the code to identify its origin.
Analyze the logs generated by the software.
Malicious software, or malware, is any software that is designed to harm, disrupt, or compromise the security, functionality, or data of a system or network. Malware can take various forms, such as viruses, worms, trojans, ransomware, spyware, rootkits, or backdoors. A forensic examiner can obtain the greatest amount of relevant information from malware by analyzing the behavior of the program, which involves observing and documenting how the malware interacts with the system, network, and other programs, what actions it performs, what files it creates, modifies, or deletes, what registry keys it changes, what network connections it establishes, what data it exfiltrates, and so on. Analyzing the behavior of the malware can help the examiner to identify the purpose, functionality, and impact of the malware, as well as the possible indicators of compromise (IOCs) that can be used to detect and remove the malware from other systems. References: Portable Malware Lab for Beginners, Logging and monitoring: What you need to know for the CISSP, An Overview of Available Malware Analyst Certification Options
Employee training, risk management, and data handling procedures and policies could be characterized as which type of security measure?
Non-essential
Management
Preventative
Administrative
Employee training, risk management, and data handling procedures and policies could be characterized as administrative security measures. Administrative security measures are the policies, procedures, standards, guidelines, and practices that define and govern the roles, responsibilities, and actions of personnel in relation to the security of information systems and data. Employee training, risk management, and data handling procedures and policies all fit this definition because they direct human behavior and organizational processes rather than relying on technical or physical mechanisms.
The other options do not describe these measures. Non-essential security measures are controls that could be removed or reduced without compromising security objectives; training, risk management, and data handling policies are essential to protecting information systems and data. Management security measures concern the planning, organizing, directing, and controlling of security activities and resources by organizational leadership, which is narrower than the administrative category these measures fall into. Preventive security measures are controls such as encryption, authentication, or firewalls that are designed to stop incidents or attacks from occurring; training and policies instead define and govern roles, responsibilities, and actions rather than directly blocking attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 19. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1: Security and Risk Management, page 19.
Which is the PRIMARY mechanism for providing the workforce with the information needed to protect an agency’s vital information resources?
Incorporating security awareness and training as part of the overall information security program
An information technology (IT) security policy to preserve the confidentiality, integrity, and availability of systems
Implementation of access provisioning process for coordinating the creation of user accounts
Execution of periodic security and privacy assessments to the organization
Security awareness and training are essential components of the overall information security program, as they provide the workforce with the information needed to protect the agency’s vital information resources. Security awareness is the process of informing the workforce about the security policies, procedures, standards, and guidelines of the agency, as well as the current threats, vulnerabilities, and best practices of information security. Security awareness aims to increase the security knowledge and awareness of the workforce, and to influence their behavior and attitude towards security. Security training is the process of educating the workforce about the specific skills and competencies required to perform their security roles and responsibilities. Security training aims to enhance the security capabilities and performance of the workforce, and to ensure their compliance with the security requirements of the agency. Security awareness and training are the primary mechanisms for providing the workforce with the information needed to protect the agency’s vital information resources, as they enable the workforce to understand the security objectives, risks, and controls of the agency, and to act accordingly. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 38. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, page 81.
Which of the following is the PRIMARY reason a sniffer operating on a network is collecting packets only from its own host?
An Intrusion Detection System (IDS) has dropped the packets.
The network is connected using switches.
The network is connected using hubs.
The network’s firewall does not allow sniffing.
The primary reason a sniffer operating on a network is collecting packets only from its own host is that the network is connected using switches. A sniffer is a tool that captures and analyzes the network traffic by intercepting the packets that flow through the network. A sniffer can be used for various purposes, such as network troubleshooting, performance monitoring, security auditing, or malicious activities. The network topology and the devices used to connect the network affect the ability and the scope of the sniffer to capture the packets. A switch is a device that connects multiple devices on a network and forwards the packets to the destination device based on the MAC address. A switch creates separate collision domains for each port, which means that the packets are only sent to the intended device and not to the others. Therefore, a sniffer connected to a switch can only capture the packets that are destined for or originated from its own host, unless the switch is configured to allow port mirroring or broadcast the packets to all ports. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 156. CISSP Practice Exam – FREE 20 Questions and Answers, Question 15.
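A minimal sniffer sketch using the Scapy library (an assumption: Scapy is installed and the script runs with packet-capture privileges) makes the point observable. Run on a host attached to an ordinary switch port, the summaries will almost all involve the local host's own addresses plus broadcast and multicast frames, unless port mirroring (SPAN) is configured on the switch:

```python
from scapy.all import sniff  # pip install scapy; needs capture privileges

def show(pkt):
    # On a switched port without mirroring, expect mostly traffic to or from
    # this host, plus broadcasts - unlike a hub, where all traffic is visible.
    print(pkt.summary())

sniff(filter="ip", prn=show, count=20)
```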
What is the MOST common cause of Remote Desktop Protocol (RDP) compromise?
Port scan
Brute force attack
Remote exploit
Social engineering
The most common cause of Remote Desktop Protocol (RDP) compromise is a brute force attack. RDP is a protocol that allows a user to remotely access and control another computer or device over a network or the internet using a graphical user interface. RDP provides convenience and efficiency for remote administration, support, or collaboration, but it also poses a security risk, as it exposes the remote computer to attacks such as port scans, remote exploits, social engineering, or brute force attacks. A brute force attack tries many combinations of usernames and passwords until it finds a valid one, gaining unauthorized access to a system or service such as RDP. Brute force is the most common cause of RDP compromise because it exploits weak or default credentials and the absence of multifactor authentication on the RDP service. References: CISSP CBK, Fifth Edition, Chapter 4, page 365; 100 CISSP Questions, Answers and Explanations, Question 20.
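A common defensive response is to detect brute-force behavior in authentication logs and block or lock out the offending source. The sketch below assumes a hypothetical log format (timestamp, result, source IP); real RDP auditing would parse the Windows Security event log instead:

```python
import re
from collections import Counter

# Hypothetical log lines for illustration only.
LOG = """\
2024-05-01T10:00:01 FAILED 198.51.100.7
2024-05-01T10:00:02 FAILED 198.51.100.7
2024-05-01T10:00:03 FAILED 198.51.100.7
2024-05-01T10:00:09 SUCCESS 10.0.0.5
"""

# Count failed logons per source address.
failures = Counter(m.group(1) for m in re.finditer(r"FAILED (\S+)", LOG))

THRESHOLD = 3  # arbitrary illustrative cutoff
for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"possible RDP brute force from {ip}: {count} failed logons")
```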
Why would a security architect specify that a default route pointing to a sinkhole be injected into internal networks?
To have firewalls route all network traffic
To detect the traffic destined to non-existent network destinations
To exercise authority over the network department
To re-inject the route into external networks
A sinkhole is a device or system that attracts and redirects unwanted or malicious traffic to a dead end, where it can be analyzed or discarded. A default route is a route that is used when no other route matches the destination address of a packet. A security architect may specify that a default route pointing to a sinkhole be injected into internal networks to detect the traffic destined to non-existent network destinations. This traffic may indicate the presence of malware, misconfigured systems, or unauthorized devices on the network. By sending this traffic to a sinkhole, the security architect can monitor and investigate the source and nature of the traffic and take appropriate actions. Having firewalls route all network traffic, exercising authority over the network department, or re-injecting the route into external networks are not valid reasons for injecting a default route pointing to a sinkhole into internal networks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Secure Network Architecture and Securing Network Components, page 355; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 4: Communication and Network Security, Question 4.18, page 190.
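As a sketch of how sinkholed traffic might be triaged (the flow records and threshold below are invented for illustration), internal hosts that repeatedly send packets to destinations with no legitimate route stand out immediately:

```python
from collections import Counter

# Hypothetical (source, destination) flow records captured at the sinkhole;
# every destination here matched only the injected default route.
flows = [
    ("10.1.4.23", "203.0.113.9"),
    ("10.1.4.23", "203.0.113.10"),
    ("10.2.8.5", "198.51.100.77"),
    ("10.1.4.23", "203.0.113.11"),
]

suspects = Counter(src for src, _ in flows)
for host, count in suspects.most_common():
    if count >= 3:  # arbitrary illustrative threshold
        print(f"{host}: {count} flows to sinkholed destinations - investigate")
```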
Digital non-repudiation requires which of the following?
A trusted third-party
Appropriate corporate policies
Symmetric encryption
Multifunction access cards
Digital non-repudiation requires a trusted third-party. Non-repudiation is a security property that prevents a party from denying or disputing the validity or authenticity of a digital document, such as a message, transaction, or contract. Non-repudiation can help to ensure the accountability, reliability, and integrity of the digital document, as well as to provide the evidence and proof of the digital document. Non-repudiation can be achieved by using various methods or techniques, such as digital signatures, timestamps, or certificates. Digital non-repudiation is a type of non-repudiation that uses a cryptographic technique to verify the authenticity, integrity, and non-repudiation of a digital document. Digital non-repudiation requires a trusted third-party, which is a person or entity that is independent, impartial, and reliable, and that provides a service or function that facilitates or supports the digital non-repudiation process. A trusted third-party can help to provide digital non-repudiation, by issuing, managing, or verifying the cryptographic keys, certificates, or signatures that are used to create or validate the digital document, as well as by maintaining or providing the records, logs, or timestamps that are used to prove or confirm the digital document. A trusted third-party can also help to resolve any disputes or conflicts that may arise from the digital document, by acting as an arbitrator, mediator, or witness. Appropriate corporate policies, symmetric encryption, or multifunction access cards are not the requirements for digital non-repudiation, as they are either irrelevant, insufficient, or ineffective for digital non-repudiation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Cryptography and Symmetric Key Algorithms, page 259; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 3: Security Engineering, Question 3.10, page 136.
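A minimal signing-and-verification sketch with Python's cryptography library shows the mechanism that a trusted third-party certifies. The message and key are illustrative; non-repudiation comes from a CA binding the public key to the signer's identity, since the signature alone only proves possession of the private key:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())  # signer's key pair
message = b"contract: alice agrees to pay bob 100"

# The signer produces a signature with the private key...
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# ...and anyone holding the (CA-certified) public key can verify it.
try:
    private_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("signature valid - signer cannot plausibly deny signing")
except InvalidSignature:
    print("signature invalid")
```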
The MAIN task of promoting security for Personal Computers (PC) is
understanding the technical controls and ensuring they are correctly installed.
understanding the required systems and patching processes for different Operating Systems (OS).
making sure that users are using only valid, authorized software, so that the chance of virus infection is reduced.
making users understand the risks to the machines and data, so they will take appropriate steps to protect them.
Personal Computers (PC) are devices that can store, process, or transmit digital data that can be used by individuals or groups for personal or professional purposes. PC security is the process of ensuring the confidentiality, integrity, and availability of the data and the PC. The main task of promoting security for PC is making users understand the risks to the machines and data, so they will take appropriate steps to protect them. Users are the primary users and owners of the PC, and they are responsible for the security of the data and the PC. Users should be aware of the potential threats and vulnerabilities that can affect the data and the PC, such as malware, phishing, theft, loss, or unauthorized access. Users should also be educated and trained on the best practices and techniques to protect the data and the PC, such as using encryption, authentication, antivirus, firewall, backup, or recovery. Users should also be motivated and incentivized to follow the security policies and procedures of the organization or the PC vendor, and to report any security incidents or issues. Understanding the technical controls and ensuring they are correctly installed, understanding the required systems and patching processes for different Operating Systems (OS), or making sure that users are using only valid, authorized software are not the main tasks of promoting security for PC, as they are more related to technical, operational, or compliance aspects of PC security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Malicious Code and Application Attacks, page 403; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 3: Security Architecture and Engineering, Question 3.9, page 156.
Which of the following is the PRIMARY purpose of due diligence when an organization embarks on a merger or acquisition?
Assess the business risks.
Formulate alternative strategies.
Determine that all parties are equally protected.
Provide adequate capability for all parties.
Due diligence is the process of gathering and analyzing information about the target organization before a merger or acquisition. The primary purpose of due diligence is to assess the business risks and opportunities associated with the transaction, such as financial, legal, operational, technical, and security aspects. Due diligence helps the acquiring organization to make informed decisions, negotiate better terms, and avoid potential liabilities or pitfalls. Due diligence is not meant to formulate alternative strategies, determine that all parties are equally protected, or provide adequate capability for all parties, although these may be secondary outcomes or objectives of the process. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 49. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, page 75.
Why should Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS) Level 1 be considered a MINIMUM level of protection for any web application?
ASVS Level 1 ensures that applications are invulnerable to OWASP top 10 threats.
Opportunistic attackers will look for any easily exploitable vulnerable applications.
Most regulatory bodies consider ASVS Level 1 as a baseline set of controls for applications.
Securing applications at ASVS Level 1 provides adequate protection for sensitive data.
OWASP Application Security Verification Standards (ASVS) Level 1 is the lowest level of protection for any web application, as it only requires automated verification of the security controls. ASVS Level 1 should be considered a minimum level of protection, because opportunistic attackers will look for any easily exploitable vulnerable applications, and automated verification may not detect all the possible flaws or weaknesses. Option A, ASVS Level 1 ensures that applications are invulnerable to OWASP top 10 threats, is incorrect, as ASVS Level 1 does not guarantee that the applications are immune to the most common web application security risks. Option C, most regulatory bodies consider ASVS Level 1 as a baseline set of controls for applications, is incorrect, as most regulatory bodies require higher levels of verification and assurance for applications that handle sensitive or regulated data. Option D, securing applications at ASVS Level 1 provides adequate protection for sensitive data, is incorrect, as ASVS Level 1 is not sufficient for protecting sensitive data, and higher levels of verification and encryption are needed. References: CISSP practice exam questions and answers | TechTarget, CISSP All-in-One Exam Guide, Eighth Edition
Which of the following is a correct feature of a virtual local area network (VLAN)?
A VLAN segregates network traffic therefore information security is enhanced significantly.
Layer 3 routing is required to allow traffic from one VLAN to another.
VLAN has certain security features such as where the devices are physically connected.
There is no broadcast allowed within a single VLAN due to network segregation.
A virtual local area network (VLAN) is a logical grouping of network devices that share the same broadcast domain, regardless of their physical location or connection. A VLAN can improve network performance, security, and management by segregating network traffic based on criteria such as function, department, or security level. A VLAN operates at layer 2 of the OSI model, which means that it can only communicate within the same VLAN by default. To allow traffic from one VLAN to another, layer 3 routing is required, which involves using a router or a layer 3 switch to route packets based on their IP addresses. Layer 3 routing enables inter-VLAN communication and connectivity to other networks, such as the internet or a WAN. Layer 3 routing also provides additional security and control features, such as access control lists, firewalls, and quality of service. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9: Communication and Network Security, page 591. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, page 677.
Which of the following authorization standards is built to handle Application Programming Interface (API) access for Federated Identity Management (FIM)?
Security Assertion Markup Language (SAML)
Open Authentication (OAUTH)
Remote Authentication Dial-in User service (RADIUS)
Terminal Access Control Access Control System Plus (TACACS+)
The authorization standard that is built to handle Application Programming Interface (API) access for Federated Identity Management (FIM) is Open Authentication (OAuth). OAuth is a standard protocol that enables the delegation of authorization to access resources or services from one party to another, without sharing the credentials. OAuth can be used for FIM, which is a mechanism that allows the users to use a single identity across multiple domains or systems, such as social media platforms, cloud services, or web applications. OAuth can handle API access for FIM, which means that the users can authorize the applications to access their data or services from other providers, such as contacts, calendars, or photos, through the APIs. Security Assertion Markup Language (SAML), Remote Authentication Dial-in User Service (RADIUS), and Terminal Access Control Access Control System Plus (TACACS+) are not authorization standards that are built to handle API access for FIM, but they are standards or protocols that can be used or supported by FIM for authentication, authorization, or accounting purposes. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 656; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 439.
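A short sketch of the OAuth 2.0 pattern with the requests library shows why it suits API access: the application presents a scoped bearer token instead of the user's credentials. The endpoints, client credentials, and scope below are hypothetical placeholders:

```python
import requests

TOKEN_URL = "https://auth.example.com/oauth/token"   # hypothetical
API_URL = "https://api.example.com/v1/contacts"      # hypothetical

# Exchange client credentials for a scoped access token.
token_resp = requests.post(TOKEN_URL, data={
    "grant_type": "client_credentials",
    "client_id": "my-client-id",
    "client_secret": "my-client-secret",
    "scope": "contacts.read",
})
access_token = token_resp.json()["access_token"]

# Call the API with the bearer token; the user's password is never shared.
contacts = requests.get(API_URL, headers={"Authorization": f"Bearer {access_token}"})
print(contacts.status_code)
```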
A security engineer is required to integrate security into a software project that is implemented by small groups that quickly, continuously, and independently develop, test, and deploy code to the cloud. The engineer will MOST likely integrate with which software development process?
Service-oriented architecture (SOA)
Spiral Methodology
Structured Waterfall Programming Development
DevOps Integrated Product Team (IPT)
DevOps Integrated Product Team (IPT) is a software development process that integrates development, testing, and deployment into a continuous and collaborative cycle, using agile methodologies, automation tools, and cloud services. A security engineer who is required to integrate security into a software project that is implemented by small groups that quickly, continuously, and independently develop, test, and deploy code to the cloud will most likely integrate with the DevOps IPT process. This process can enable the security engineer to embed security practices and controls into each stage of the software development life cycle, such as code analysis, vulnerability scanning, configuration management, and incident response. The other options are not software development processes that match the description of the project. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 8: Software Development Security, pp. 1405-1406; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2099-2100.
From an asset security perspective, what is the BEST countermeasure to prevent data theft due to data remanence when a sensitive data storage media is no longer needed?
Return the media to the system owner.
Delete the sensitive data from the media.
Physically destroy the retired media.
Encrypt data before it is stored on the media.
From an asset security perspective, the best countermeasure to prevent data theft due to data remanence when a sensitive data storage media is no longer needed is to physically destroy the retired media. Data remanence is the residual data that remains on a storage device after it has been erased or formatted. Data remanence poses a risk of unauthorized disclosure or recovery of sensitive information by malicious actors. Physical destruction is the most secure method of eliminating data remanence, as it involves rendering the storage media unusable and irreparable by methods such as shredding, pulverizing, burning, or melting. Physical destruction ensures that no data can be retrieved from the media, even with advanced forensic tools or techniques. References: Data States and Data Remanence, Data Remanence and Decommissioning, Data Remanence (CISSP Free by Skillset.com)
An engineer notices some late collisions on a half-duplex link. The engineer verifies that the devices on both ends of the connection are configured for half duplex. Which of the following is the MOST likely cause of this issue?
The link is improperly terminated
One of the devices is misconfigured
The cable length is excessive.
One of the devices has a hardware issue.
The most likely cause of the late collisions on a half-duplex link is that the cable length is excessive. A half-duplex link is a communication channel that allows data transmission in one direction at a time. A collision occurs when two devices try to transmit data at the same time on the same channel, resulting in corrupted or lost data. A late collision occurs when a collision is detected after the first 64 bytes of the frame have been transmitted, indicating a problem with the physical layer of the network. One possible cause of late collisions is that the cable length is too long, exceeding the maximum distance allowed by the network standard. This can cause signal degradation, propagation delay, and synchronization issues, leading to late collisions. The other options are less likely to cause late collisions on a half-duplex link, as they may not affect the timing or quality of the signal transmission. References: CISSP - Certified Information Systems Security Professional, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.1 Transmission methods; CISSP Exam Outline, Domain 4. Communication and Network Security, 4.2 Secure network components, 4.2.1 Establish secure communication channels, 4.2.1.1 Transmission methods
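The link between cable length and late collisions can be made concrete with slot-time arithmetic. On classic 10 Mb/s Ethernet the collision window is the 512-bit slot time, and the round-trip signal propagation must fit inside it; on an overly long cable, collisions arrive after the first 64 bytes and are therefore late. The figures below are a simplified upper bound assuming a typical copper propagation speed of about 2 x 10^8 m/s and ignoring repeater and encoding delays, which is why the actual 802.3 segment limits are far shorter:

```python
BIT_RATE = 10_000_000   # 10 Mb/s classic Ethernet
SLOT_BITS = 512         # 64-byte minimum frame = collision window
PROP_SPEED = 2.0e8      # assumed signal speed in copper, m/s (~2/3 c)

slot_time = SLOT_BITS / BIT_RATE              # 51.2 microseconds
max_round_trip_m = PROP_SPEED * slot_time     # distance covered in slot time
max_one_way_m = max_round_trip_m / 2

print(f"slot time: {slot_time * 1e6:.1f} us")                # 51.2 us
print(f"loose one-way length bound: {max_one_way_m:.0f} m")  # 5120 m before delays
```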
Which of the following types of hosts should be operating in the demilitarized zone (DMZ)?
Hosts intended to provide limited access to public resources
Database servers that can provide useful information to the public
Hosts that store unimportant data such as demographical information
File servers containing organizational data
A demilitarized zone (DMZ) is a network segment that is separated from both the internal and external networks by firewalls. The purpose of a DMZ is to provide limited access to public resources, such as web servers, email servers, or DNS servers, while protecting the internal network from unauthorized access. A DMZ should not contain database servers, file servers, or hosts that store sensitive or unimportant data, as these could be compromised by attackers who gain access to the DMZ. References: CISSP Official Study Guide, 9th Edition, page 414; CISSP All-in-One Exam Guide, 8th Edition, page 1130
Which layer handles packet fragmentation and reassembly in the Open Systems Interconnection (OSI) Reference Model?
Session
Transport
Data Link
Network
The layer that handles packet fragmentation and reassembly in the Open Systems Interconnection (OSI) reference model is the network layer. The network layer is the third layer of the OSI model, and it is responsible for routing and forwarding packets across different networks. The network layer also performs packet fragmentation and reassembly, which divide a large packet into smaller fragments to fit the maximum transmission unit (MTU) of the underlying network and reassemble the fragments back into the original packet at the destination. Packet fragmentation and reassembly improve the efficiency and reliability of data transmission, and help avoid congestion and errors. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 151; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 225.
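The fragmentation arithmetic the network layer performs can be sketched directly: every fragment's data length must be a multiple of 8 bytes except the last, and the header's fragment-offset field counts 8-byte units. The helper below is illustrative, assuming a 20-byte IP header with no options:

```python
def fragment_offsets(payload_len: int, mtu: int = 1500, ihl: int = 20):
    """Compute IPv4 (offset-in-8-byte-units, data-length) pairs for a payload."""
    max_data = ((mtu - ihl) // 8) * 8  # usable payload per fragment
    frags, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        frags.append((offset // 8, size))
        offset += size
    return frags

# A 4000-byte payload over a 1500-byte MTU link splits into three fragments.
print(fragment_offsets(4000))  # [(0, 1480), (185, 1480), (370, 1040)]
```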
Which of the following is the MOST common cause of system or security failures?
Lack of system documentation
Lack of physical security controls
Lack of change control
Lack of logging and monitoring
The most common cause of system or security failures is lack of change control. Change control is a process that ensures that any changes to the system or the environment are authorized, documented, tested, and approved before implementation. Change control helps to prevent errors, conflicts, inconsistencies, and vulnerabilities that may arise from unauthorized or uncontrolled changes. Lack of change control can result in system instability, performance degradation, functionality loss, security breaches, or compliance violations. Lack of system documentation, lack of physical security controls, and lack of logging and monitoring are also potential causes of system or security failures, but they are not as common or as critical as lack of change control. References: CISSP CBK Reference, 5th Edition, Chapter 3, page 145; CISSP All-in-One Exam Guide, 8th Edition, Chapter 3, page 113
What is the correct order of execution for security architecture?
Governance, strategy and program management, project delivery, operations
Strategy and program management, governance, project delivery, operations
Governance, strategy and program management, operations, project delivery
Strategy and program management, project delivery, governance, operations
Security architecture is the design and implementation of the security controls, mechanisms, and processes that protect the confidentiality, integrity, and availability of the information and systems of an organization. Security architecture is aligned with the business goals, objectives, and requirements of the organization, and supports its security policies, standards, and guidelines. Security architecture follows a systematic and structured approach, executed in the following order: governance, which establishes the direction, accountability, and oversight for security; strategy and program management, which translates that direction into a security strategy and manages the overall program; project delivery, which designs, builds, and deploys the security controls and solutions; and operations, which runs, monitors, and maintains what has been delivered.
A firm within the defense industry has been directed to comply with contractual requirements for encryption of a government client’s Controlled Unclassified Information (CUI). What encryption strategy represents how to protect data at rest in the MOST efficient and cost-effective manner?
Perform physical separation of program information and encrypt only information deemed critical by the defense client
Perform logical separation of program information, using virtualized storage solutions with built-in encryption at the virtualization layer
Perform logical separation of program information, using virtualized storage solutions with encryption management in the back-end disk systems
Implement data at rest encryption across the entire storage area network (SAN)
The encryption strategy that protects data at rest in the most efficient and cost-effective manner is to perform logical separation of program information, using virtualized storage solutions with built-in encryption at the virtualization layer. Data at rest is data stored or archived on media such as hard disks, flash drives, or tape; it is exposed to unauthorized access or theft, which can compromise its confidentiality, integrity, or availability. Encryption transforms data into an unreadable form using a key and an algorithm, protecting data at rest against threats such as eavesdropping, interception, modification, or deletion. Logical separation divides the data into isolated logical units using virtualization software rather than dedicated physical hardware, and applying encryption at the virtualization layer (rather than at the application layer or in the back-end disk systems) lets a single mechanism cover all virtualized storage. This strategy combines the benefits of logical separation and encryption, such as security, performance, scalability, and manageability, without the cost of physically separate infrastructure or of encrypting the entire storage area network. References: CISSP CBK, Fifth Edition, Chapter 3, page 245; 2024 Pass4itsure CISSP Dumps, Question 18.
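Whatever layer performs it, data-at-rest encryption reduces to transforming data with a key before it reaches the media. A minimal sketch with the cryptography library's Fernet recipe (the plaintext is illustrative, and in practice the key would be held by a key-management service rather than stored beside the data):

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, managed by a KMS, not kept locally
f = Fernet(key)

plaintext = b"controlled unclassified information"   # illustrative payload
ciphertext = f.encrypt(plaintext)                    # what lands on the disk

assert f.decrypt(ciphertext) == plaintext
```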
A security practitioner has been tasked with establishing organizational asset handling procedures. What should be considered that would have the GREATEST impact to the development of these procedures?
Media handling procedures
User roles and responsibilities
Acceptable Use Policy (AUP)
Information classification scheme
A security practitioner who has been tasked with establishing organizational asset handling procedures should consider the information classification scheme as the factor that would have the greatest impact to the development of these procedures. An information classification scheme is a set of policies and rules that define how the organization’s information assets are categorized and labeled according to their sensitivity, value, and criticality. The information classification scheme also determines the appropriate security controls, access rights, retention periods, and disposal methods for each category of information. By applying an information classification scheme, the organization can ensure that its asset handling procedures are consistent, effective, and aligned with its security objectives and compliance requirements. References: CISSP domain 2: Asset security, 2024 CISSP Detailed Content Outline With Weights Final (Public Use Only), CISSP 2021: Asset Classification & Lifecycle
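A toy sketch shows how a classification scheme can drive handling procedures programmatically; the labels and control values below are entirely hypothetical:

```python
# Hypothetical scheme: classification label -> required handling controls.
HANDLING = {
    "public":       {"encrypt_at_rest": False, "shred_on_disposal": False, "retention_years": 1},
    "internal":     {"encrypt_at_rest": False, "shred_on_disposal": True,  "retention_years": 3},
    "confidential": {"encrypt_at_rest": True,  "shred_on_disposal": True,  "retention_years": 7},
}

def handling_for(label: str) -> dict:
    """Look up the handling controls an asset's classification requires."""
    return HANDLING[label]

print(handling_for("confidential"))
```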
Which security architecture strategy could be applied to secure an operating system (OS) baseline for deployment within the corporate enterprise?
Principle of Least Privilege
Principle of Separation of Duty
Principle of Secure Default
Principle of Fail Secure
The security architecture strategy that could be applied to secure an operating system (OS) baseline for deployment within the corporate enterprise is the principle of secure default. An OS baseline is a set of minimum security standards or configurations, such as password, permission, firewall, patch, and encryption settings, applied to an OS before it is deployed on devices or systems within an organization. A baseline improves the security, performance, and functionality of the OS and reduces the risk of vulnerabilities, attacks, and errors. The principle of secure default states that a system should ship with its most secure settings and options enabled by default, with the user or administrator able to relax them deliberately if needed. Applying it to an OS baseline ensures the OS starts from the highest practical level of protection, which administrators can then adjust to operational needs. The principle of least privilege, the principle of separation of duty, and the principle of fail secure do not address the default settings of an OS baseline; they serve different purposes, such as limiting access rights, dividing critical tasks among multiple people, and ensuring a system denies access when it fails.
How does an organization verify that an information system's current hardware and software match the standard system configuration?
By reviewing the configuration after the system goes into production
By running vulnerability scanning tools on all devices in the environment
By comparing the actual configuration of the system against the baseline
By verifying all the approved security patches are implemented
A baseline is a standard or reference point against which something can be measured or compared. A system configuration baseline is a documented set of specifications for the hardware and software components of an information system, such as operating system, applications, patches, and settings. A system configuration baseline can be used to ensure that the system meets the security and performance requirements of the organization, and to detect any unauthorized or unwanted changes to the system. To verify that an information system’s current hardware and software match the standard system configuration, the organization can compare the actual configuration of the system against the baseline, using tools such as configuration management software, checksums, or hashes. Reviewing the configuration after the system goes into production is not sufficient, as it does not account for any changes that may occur after the initial deployment. Running vulnerability scanning tools on all devices in the environment is not specific, as it does not compare the system configuration against the baseline, but rather against a database of known vulnerabilities. Verifying all the approved security patches are implemented is not comprehensive, as it does not cover other aspects of the system configuration, such as applications and settings. References: System Configuration Baseline; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Security Architecture and Engineering.
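A minimal sketch of baseline comparison using hashes: assuming the baseline is stored as a JSON map of file path to expected SHA-256 digest (an assumed format, not a standard), any file whose current digest differs, or that is missing, has drifted from the standard configuration:

```python
import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    """SHA-256 of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def configuration_drift(baseline_file: str) -> list[str]:
    """Return paths whose current hash differs from the recorded baseline."""
    baseline = json.loads(Path(baseline_file).read_text())
    return [
        path for path, expected in baseline.items()
        if not Path(path).exists() or file_hash(Path(path)) != expected
    ]
```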
What is the process called when impact values are assigned to the security objectives for information types?
Qualitative analysis
Quantitative analysis
Remediation
System security categorization
The process called when impact values are assigned to the security objectives for information types is system security categorization. System security categorization is a process of determining the potential impact on an organization if a system or information is compromised, based on the security objectives of confidentiality, integrity, and availability. System security categorization helps to identify the security requirements and controls for the system or information, as well as to prioritize the resources and efforts for protecting them. System security categorization can be based on the standards or guidelines provided by the organization or the relevant authorities, such as the Federal Information Processing Standards (FIPS) Publication 199 or the National Institute of Standards and Technology (NIST) Special Publication 800-60. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 1: Security and Risk Management, p. 31.
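FIPS 199 assigns each security objective (confidentiality, integrity, availability) an impact level of low, moderate, or high, and the overall system category is the highest of the three, the so-called high water mark. A one-function sketch:

```python
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def security_category(confidentiality: str, integrity: str, availability: str) -> str:
    """FIPS 199 high water mark: overall impact is the maximum of the three."""
    return max((confidentiality, integrity, availability), key=LEVELS.__getitem__)

print(security_category("low", "moderate", "high"))  # 'high'
```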
A company has decided that they need to begin maintaining assets deployed in the enterprise. What approach should be followed to determine and maintain ownership information to bring the company into compliance?
Enterprise asset management framework
Asset baseline using commercial off the shelf software
Asset ownership database using domain login records
A script to report active user logins on assets
According to the CISSP CBK Official Study Guide, the approach that should be followed to determine and maintain ownership information to bring the company into compliance is an enterprise asset management framework. Such a framework is a set of principles, processes, and practices for managing the hardware, software, data, and information assets deployed in the organization. It enforces policies and standards governing the identification, classification, ownership, valuation, allocation, utilization, maintenance, protection, and disposal of assets, and it supports compliance with the legal, contractual, and ethical obligations that apply to them. Following such a framework provides a systematic method for assigning owners or custodians to each asset and for documenting ownership details such as name, description, location, status, and value. Accurate ownership records establish accountability and help prevent disputes and incidents such as theft, loss, misuse, or abuse of assets.
The other options may be useful by-products of an asset management framework, but none is the approach to follow. An asset baseline built with commercial off-the-shelf software provides a reference point for measuring asset performance and quality, such as availability, reliability, and efficiency, but it does not identify, document, or verify asset owners. An asset ownership database populated from domain login records infers ownership from who logs in to what, which is incomplete: it can miss assets that users do not log in to and users who do not authenticate against the domain, leading to gaps, errors, and inconsistencies in the ownership information. A script that reports active user logins on assets is an efficient monitoring mechanism, but like the login-derived database it addresses usage rather than the identification, documentation, and verification of owners, which are the essential elements of ownership information.
Which of the following has the GREATEST impact on an organization's security posture?
International and country-specific compliance requirements
Security violations by employees and contractors
Resource constraints due to increasing costs of supporting security
Audit findings related to employee access and permissions process
The factor that has the greatest impact on an organization's security posture is international and country-specific compliance requirements. Compliance requirements are the laws, regulations, and standards that an organization must adhere to in order to meet the expectations of authorities and stakeholders such as governments, customers, and auditors. These requirements vary by location, industry, and type of organization, and they shape the organization's security policies, controls, and practices. They have the greatest impact on security posture because they influence the organization's security objectives, risk decisions, and resource allocation, and because non-compliance can result in penalties or sanctions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 31
Which of the following is the PRIMARY reason to perform regular vulnerability scanning of an organization network?
Provide vulnerability reports to management.
Validate vulnerability remediation activities.
Prevent attackers from discovering vulnerabilities.
Remediate known vulnerabilities.
According to the CISSP Official (ISC)2 Practice Tests, the primary reason to perform regular vulnerability scanning of an organization network is to remediate known vulnerabilities. Vulnerability scanning is the process of identifying and measuring the weaknesses and exposures in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets; it can be performed with automated scanners, manual tests, or penetration tests. The primary reason to scan regularly is to remediate known vulnerabilities, that is, to fix, mitigate, or eliminate the weaknesses the scans discover, which improves the security posture of the system, network, or application and reduces overall risk to an acceptable level. Providing vulnerability reports to management is a benefit or outcome of scanning rather than its primary reason: reports document the scope, methods, results, and recommendations of a scan and support decision making and remediation planning. Validating vulnerability remediation activities is a part or step of the process rather than its primary reason: it verifies that remediation actions such as patching, updating, configuring, or replacing components were effective and complete, and that no new or residual vulnerabilities remain. Preventing attackers from discovering vulnerabilities is also not the primary reason: techniques such as encryption, obfuscation, or deception may reduce an attacker's opportunity to find weaknesses, but they do not address the root cause or the impact of the vulnerabilities themselves.
Which of the following is the MOST effective method of mitigating data theft from an active user workstation?
Implement full-disk encryption
Enable multifactor authentication
Deploy file integrity checkers
Disable use of portable devices
The most effective method of mitigating data theft from an active user workstation is to disable use of portable devices. Portable devices are the devices that can be easily connected to or disconnected from a workstation, such as USB drives, external hard drives, flash drives, or smartphones. Portable devices can pose a risk of data theft from an active user workstation, as they can be used to copy, transfer, or exfiltrate data from the workstation, either by malicious insiders or by unauthorized outsiders. By disabling use of portable devices, the data theft from an active user workstation can be prevented or reduced.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 330; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 291
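One common way to implement this control on Windows endpoints is to disable the USB mass-storage driver. The sketch below assumes a Windows host, administrative rights, and the standard USBSTOR service key; in practice the setting is usually pushed centrally through Group Policy or an endpoint management tool rather than a local script:

```python
# Sketch: disable the USB mass-storage driver on Windows by setting the
# USBSTOR service start type to 4 (disabled). Requires admin rights.
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def set_usb_storage(enabled: bool) -> None:
    start_value = 3 if enabled else 4  # 3 = load on demand, 4 = disabled
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, start_value)

if __name__ == "__main__":
    set_usb_storage(False)  # block USB mass-storage devices on this host
```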
A database administrator is asked by a high-ranking member of management to perform specific changes to the accounting system database. The administrator is specifically instructed to not track or evidence the change in a ticket. Which of the following is the BEST course of action?
Ignore the request and do not perform the change.
Perform the change as requested, and rely on the next audit to detect and report the situation.
Perform the change, but create a change ticket regardless to ensure there is complete traceability.
Inform the audit committee or internal audit directly using the corporate whistleblower process.
According to the CISSP CBK Official Study Guide1, the best course of action for the database administrator in this scenario is to inform the audit committee or internal audit directly using the corporate whistleblower process. A whistleblower is a person who reports or exposes any wrongdoing, fraud, corruption, or illegal activity within an organization to the appropriate authorities or parties. A whistleblower process is a mechanism that enables and protects the whistleblowers from any retaliation or discrimination, and ensures that their reports are handled properly and confidentially. The database administrator should inform the audit committee or internal audit directly using the corporate whistleblower process, as this would demonstrate their professional ethics and responsibility, as well as their compliance with the organizational policies and standards. The database administrator should not ignore the request and do not perform the change, as this would be unprofessional and irresponsible, and may also expose them to potential pressure or threats from the high-ranking member of management. The database administrator should not perform the change as requested, and rely on the next audit to detect and report the situation, as this would be unethical and illegal, and may also compromise the integrity and reliability of the accounting system database. The database administrator should not perform the change, but create a change ticket regardless to ensure there is complete traceability, as this would be dishonest and risky, and may also create a conflict or discrepancy with the high-ranking member of management. References: 1
Regarding asset security and appropriate retention, which of the following INITIAL top three areas are important to focus on?
Security control baselines, access controls, employee awareness and training
Human resources, asset management, production management
Supply chain lead-time, inventory control, and encryption
Polygraphs, crime statistics, forensics
Regarding asset security and appropriate retention, the initial top three areas that are important to focus on are security control baselines, access controls, employee awareness and training. Asset security and appropriate retention are the processes of identifying, classifying, protecting, and disposing of the assets of an organization, such as data, systems, devices, or facilities. These processes help prevent or reduce the loss, theft, damage, or misuse of the assets, as well as comply with legal and regulatory requirements. The initial top three areas that can help achieve this are: security control baselines, which define the minimum set of controls required for each class of asset; access controls, which restrict who can view, modify, or dispose of the assets; and employee awareness and training, which ensure that staff understand how to classify, handle, and retain the assets properly.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, pp. 61-62; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 163-164.
Which of the following restricts the ability of an individual to carry out all the steps of a particular process?
Job rotation
Separation of duties
Least privilege
Mandatory vacations
According to the CISSP For Dummies3, the concept that restricts the ability of an individual to carry out all the steps of a particular process is separation of duties. Separation of duties is a security principle that divides the tasks and responsibilities of a process among different individuals or roles, so that no one person has complete control over the process; it helps prevent or detect fraud, errors, abuse, and collusion by requiring multiple approvals, checks, or verifications at each step. Job rotation supports separation of duties but is a different practice: it periodically switches individuals between tasks so that no one performs the same duty for long, exposing each person's activities to review by others and limiting insider-threat opportunity. Least privilege is a related principle stating that users and processes should have only the minimum access required to perform their tasks, and no more; it limits unauthorized actions and the impact of incidents, but it does not by itself prevent one person from performing every step of a process. Mandatory vacations likewise support separation of duties rather than implement it: requiring individuals to take a leave of absence allows their activities to be reviewed and audited during their absence, and disrupts the routine needed to sustain a fraud. References: 3
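To make the principle concrete, here is a minimal sketch of a change-approval workflow that enforces separation of duties by refusing to let the requester approve their own change; the class and field names are illustrative, not taken from any particular product:

```python
# Sketch: enforce separation of duties in a change-approval workflow by
# refusing to let the same person both request and approve a change.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    change_id: str
    requester: str
    approvers: set[str] = field(default_factory=set)

    def approve(self, approver: str) -> None:
        if approver == self.requester:
            raise PermissionError("Separation of duties: requester may not approve")
        self.approvers.add(approver)

    def is_authorized(self, required_approvals: int = 1) -> bool:
        return len(self.approvers) >= required_approvals

change = ChangeRequest("CHG-1001", requester="alice")
change.approve("bob")          # allowed: a different person approves
print(change.is_authorized())  # True
# change.approve("alice")      # would raise PermissionError
```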
Which technology is a prerequisite for populating the cloud-based directory in a federated identity solution?
Notification tool
Message queuing tool
Security token tool
Synchronization tool
A synchronization tool is the prerequisite for populating the cloud-based directory in a federated identity solution. Directory synchronization replicates user identities and attributes from the organization's authoritative on-premises directory to the cloud-based directory, so that the cloud directory contains the accounts required for federation and single sign-on. Notification tools, message queuing tools, and security token tools support communication or authentication functions, but they do not populate the directory.
A vulnerability in which of the following components would be MOST difficult to detect?
Kernel
Shared libraries
Hardware
System application
According to the CISSP CBK Official Study Guide, a vulnerability in hardware would be the most difficult to detect. A vulnerability is a weakness or exposure in a system, network, or application that may be exploited by threats and cause harm to the organization or its assets, and it can exist in any component, such as the kernel, the shared libraries, the hardware, or a system application. A hardware vulnerability is the most difficult to detect because identifying and measuring it may require physical access, specialized tools, or advanced skills. Hardware is the physical component that provides the basic functionality, performance, and support for a system, such as the processor, memory, disk, or network card; it may have vulnerabilities due to design flaws, manufacturing defects, configuration errors, or physical damage, which can cause data leakage, performance degradation, or system failure. A vulnerability in the kernel would not be the most difficult to detect, although it may still be difficult. The kernel is the core component that provides basic functionality and control for a system, such as the operating system, the hypervisor, or the firmware; kernel vulnerabilities arise from design flaws, coding errors, configuration errors, or malicious modifications, may cause privilege escalation, system compromise, or system crashes, and can be found with code analysis, vulnerability scanning, or penetration testing. A vulnerability in the shared libraries would not be the most difficult to detect either. Shared libraries are the reusable, common components of a system, such as dynamic link libraries, application programming interfaces, or frameworks; like kernel flaws, their vulnerabilities can be detected with software tools such as code analysis, dependency scanning, or vulnerability scanning. A vulnerability in a system application would likewise not be the most difficult to detect, as applications are directly accessible to standard testing tools.
The PRIMARY characteristic of a Distributed Denial of Service (DDoS) attack is that it
exploits weak authentication to penetrate networks.
can be detected with signature analysis.
looks like normal network activity.
is commonly confused with viruses or worms.
The primary characteristic of a Distributed Denial of Service (DDoS) attack is that it looks like normal network activity. A DDoS attack aims to disrupt or degrade the availability or performance of a system or service by flooding it with a high volume of traffic or requests from many distributed sources, such as compromised computers, devices, or networks controlled by the attacker. Because this traffic closely resembles legitimate activity, a DDoS attack is difficult to detect and prevent: it is hard to distinguish malicious requests from authentic ones, and hard to block or filter the attack traffic without also affecting legitimate users. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 115; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 172
What is the BEST way to encrypt web application communications?
Secure Hash Algorithm 1 (SHA-1)
Secure Sockets Layer (SSL)
Cipher Block Chaining Message Authentication Code (CBC-MAC)
Transport Layer Security (TLS)
TLS is the successor to SSL and is considered to be the best option for encrypting web application communications. It provides secure communication between web browsers and servers, ensuring data integrity, confidentiality, and authentication. References: ISC2 CISSP
What does an organization FIRST review to assure compliance with privacy requirements?
Best practices
Business objectives
Legal and regulatory mandates
Employee's compliance to policies and standards
The first thing that an organization reviews to assure compliance with privacy requirements is the legal and regulatory mandates that apply to its business operations and data processing activities. Legal and regulatory mandates are the laws, regulations, standards, and contracts that govern how an organization must protect the privacy of personal information and the rights of data subjects. An organization must identify and understand the relevant mandates that affect its jurisdiction, industry, and data types, and implement the appropriate controls and measures to comply with them. The other options are not the first thing that an organization reviews, but rather part of the privacy compliance program. Best practices are the recommended methods and techniques for achieving privacy objectives, but they are not mandatory or binding. Business objectives are the goals and strategies that an organization pursues to create value and competitive advantage, but they may not align with privacy requirements. Employee’s compliance to policies and standards is the degree to which the organization’s staff adhere to the internal rules and guidelines for privacy protection, but it is not a review activity, but rather a measurement and enforcement activity. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, p. 105; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
Discretionary Access Control (DAC) restricts access according to
data classification labeling.
page views within an application.
authorizations granted to the user.
management accreditation.
Discretionary Access Control (DAC) restricts access according to authorizations granted to the user. DAC is a type of access control that allows the owner or creator of a resource to decide who can access it and what level of access they can have. DAC uses access control lists (ACLs) to assign permissions to resources, and users can pass on or change their permissions to other users.
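A minimal sketch of the idea, with illustrative names only: each object carries an owner-managed ACL, and anyone holding the discretionary "grant" permission can extend access to others:

```python
# Sketch: Discretionary Access Control, where each object's owner manages
# an access control list (ACL) of user -> permissions.
class DacObject:
    def __init__(self, name: str, owner: str):
        self.name = name
        self.owner = owner
        self.acl: dict[str, set[str]] = {owner: {"read", "write", "grant"}}

    def grant(self, grantor: str, user: str, perm: str) -> None:
        # Discretionary: the owner (or anyone holding "grant") may extend access.
        if "grant" not in self.acl.get(grantor, set()):
            raise PermissionError(f"{grantor} may not grant access to {self.name}")
        self.acl.setdefault(user, set()).add(perm)

    def check(self, user: str, perm: str) -> bool:
        return perm in self.acl.get(user, set())

doc = DacObject("payroll.xlsx", owner="alice")
doc.grant("alice", "bob", "read")
print(doc.check("bob", "read"))   # True
print(doc.check("bob", "write"))  # False
```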
Which of the following would BEST describe the role directly responsible for data within an organization?
Data custodian
Information owner
Database administrator
Quality control
According to the CISSP For Dummies, the role that is directly responsible for data within an organization is the information owner. The information owner is the person or role that has the authority and accountability for the data or information that the organization owns, creates, uses, or maintains, such as data, documents, records, or intellectual property. The information owner is responsible for defining the classification, value, and sensitivity of the data or information, as well as the security requirements, policies, and standards for the data or information. The information owner is also responsible for granting or revoking the access rights and permissions to the data or information, as well as for monitoring and auditing the compliance and effectiveness of the security controls and mechanisms for the data or information. The data custodian is not the role that is directly responsible for data within an organization, although it may be a role that supports or assists the information owner. The data custodian is the person or role that has the responsibility for implementing and maintaining the security controls and mechanisms for the data or information, as defined by the information owner. The data custodian is responsible for performing the technical and operational tasks and activities for the data or information, such as backup, recovery, encryption, or disposal. The database administrator is not the role that is directly responsible for data within an organization, although it may be a role that supports or assists the information owner or the data custodian. The database administrator is the person or role that has the responsibility for managing and administering the database system that stores and processes the data or information. The database administrator is responsible for performing the technical and operational tasks and activities for the database system, such as installation, configuration, optimization, or troubleshooting.
Which of the following roles has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization?
Data Custodian
Data Owner
Data Creator
Data User
The role that has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization is the data owner. A data owner is a person or an entity that has the authority or the responsibility for the data or the information within an organization, and that determines or defines the classification, the usage, the protection, or the retention of the data or the information. A data owner has the obligation to ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization, as the data owner is ultimately accountable or liable for the security or the quality of the data or the information, regardless of who processes or handles the data or the information. A data owner can ensure that a third party provider is capable of processing and handling data in a secure manner and meeting the standards set by the organization, by performing the tasks or the functions such as conducting due diligence, establishing service level agreements, defining security requirements, monitoring performance, or auditing compliance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2, page 61; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 2, page 67
If an identification process using a biometric system detects a 100% match between a presented template and a stored template, what is the interpretation of this result?
User error
Suspected tampering
Accurate identification
Unsuccessful identification
If an identification process using a biometric system detects a 100% match between a presented template and a stored template, the interpretation of this result is suspected tampering. A biometric system is a system that uses physical or behavioral characteristics of a person to verify their identity, such as fingerprint, iris, voice, or face. A biometric system compares the presented template, which is the biometric data captured from the person at the time of identification, with the stored template, which is the biometric data enrolled and stored in the system database. A biometric system usually does not produce a 100% match, as there are always some variations or errors in the biometric data due to environmental, physiological, or technical factors. A biometric system uses a threshold or a tolerance level to determine whether the match is acceptable or not. A 100% match is very unlikely and suspicious, as it may indicate that someone has tampered with the biometric system or the biometric data, such as by copying, modifying, or spoofing the stored template or the presented template12 References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, p. 267; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management, p. 647.
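A small sketch of how a matching engine might interpret scores makes the point; the normalized score range and the 0.90 threshold are assumptions for illustration:

```python
# Sketch: interpret a biometric match score against a tolerance threshold.
# Scores are assumed normalized to [0.0, 1.0]; real systems tune the
# threshold to balance false accepts against false rejects.
def interpret_match(score: float, threshold: float = 0.90) -> str:
    if score == 1.0:
        # Live biometric captures always vary slightly; a perfect match
        # suggests a replayed or copied template, i.e. suspected tampering.
        return "suspected tampering"
    if score >= threshold:
        return "accepted identification"
    return "unsuccessful identification"

print(interpret_match(0.96))  # accepted identification
print(interpret_match(1.00))  # suspected tampering
```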
What is an important characteristic of Role Based Access Control (RBAC)?
Supports Mandatory Access Control (MAC)
Simplifies the management of access rights
Relies on rotation of duties
Requires two factor authentication
An important characteristic of Role Based Access Control (RBAC) is that it simplifies the management of access rights. RBAC is a model of access control that assigns permissions to roles, rather than individual users. Users are then assigned to roles based on their job functions or responsibilities. RBAC simplifies the management of access rights by reducing the complexity and overhead of granting, revoking, or modifying permissions for each user. RBAC also improves the consistency and security of access control by enforcing the principle of least privilege and separation of duties. The other options are not characteristics of RBAC, but rather different models or concepts of access control. Supports Mandatory Access Control (MAC) is a characteristic of MAC, which is a model of access control that assigns security labels to subjects and objects, and enforces access decisions based on the comparison of the labels. Relies on rotation of duties is a concept of access control that involves changing the roles or tasks of users periodically to prevent fraud or collusion. Requires two factor authentication is a concept of access control that involves using two or more factors of authentication to verify the identity of the user. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 267; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, p. 334.
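A minimal sketch of the model, with illustrative role and permission names: permissions attach to roles, and users receive permissions only through role membership:

```python
# Sketch: Role Based Access Control, where permissions attach to roles and
# users acquire permissions only through role membership.
ROLE_PERMISSIONS = {
    "accounts_clerk": {"invoice:create", "invoice:read"},
    "accounts_manager": {"invoice:read", "invoice:approve"},
    "auditor": {"invoice:read"},
}

USER_ROLES = {
    "alice": {"accounts_clerk"},
    "bob": {"accounts_manager", "auditor"},
}

def has_permission(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "invoice:create"))   # True
print(has_permission("alice", "invoice:approve"))  # False

# Management is simplified: changing a role's permissions updates every
# member at once, instead of editing each user's rights individually.
```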
The PRIMARY security concern for handheld devices is the
strength of the encryption algorithm.
spread of malware during synchronization.
ability to bypass the authentication mechanism.
strength of the Personal Identification Number (PIN).
The primary security concern for handheld devices is the spread of malware during synchronization. Handheld devices are often synchronized with other devices, such as desktops or laptops, to exchange data and update applications. This process can introduce malware from one device to another, or vice versa, if proper security controls are not in place.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 635; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 557
What is the PRIMARY goal for using Domain Name System Security Extensions (DNSSEC) to sign records?
Integrity
Confidentiality
Accountability
Availability
The primary goal for using Domain Name System Security Extensions (DNSSEC) to sign records is integrity. DNSSEC is a set of extensions to the Domain Name System (DNS) protocol, which resolves domain names to IP addresses and vice versa. DNSSEC protects DNS by using digital signatures to sign and verify DNS records, such as A, AAAA, or MX records. The goal of signing records is integrity: it ensures that DNS data is authentic and accurate and has not been modified, altered, or corrupted by attackers who intercept or manipulate DNS queries and responses. DNSSEC achieves this with public key (asymmetric) cryptography: signatures attached to DNS record sets can be validated against the zone's published public keys to prove the origin and validity of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 113; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, page 170
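The following sketch illustrates the underlying integrity mechanism, signing a record and detecting modification, using the third-party cryptography package; it is a conceptual illustration only and does not reproduce the real DNSSEC record formats or key hierarchy:

```python
# Conceptual sketch of integrity via digital signatures (not real DNSSEC
# wire format). Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

record = b"www.example.com. 3600 IN A 192.0.2.10"
signature = private_key.sign(record)

# A resolver holding the zone's public key can confirm integrity:
public_key.verify(signature, record)
print("record verified: integrity intact")

# Any modification in transit invalidates the signature:
tampered = b"www.example.com. 3600 IN A 203.0.113.66"
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("tampered record rejected")
```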
From a cryptographic perspective, the service of non-repudiation includes which of the following features?
Validity of digital certificates
Validity of the authorization rules
Proof of authenticity of the message
Proof of integrity of the message
From a cryptographic perspective, the service of non-repudiation includes proof of integrity of the message. Non-repudiation ensures that the sender of a message cannot deny sending it and the receiver cannot deny receiving it, by providing evidence that the message has not been altered or tampered with during transmission. It is typically achieved with digital signatures and certificates, which bind the identity of the sender to the content of the message and allow verification that the message has not been modified. Non-repudiation does not include the validity of digital certificates, which is a service that ensures certificates are authentic, current, and trustworthy by checking their expiration dates, revocation status, and issuing authorities. It does not include the validity of authorization rules, which govern whether access to a resource is granted or denied based on the policies and permissions defined by the owner or administrator. And it is distinguished here from proof of authenticity of the message, which is the service of verifying that the message comes from the claimed sender by checking their identity and credentials.
An organization regularly conducts its own penetration tests. Which of the following scenarios MUST be covered for the test to be effective?
Third-party vendor with access to the system
System administrator access compromised
Internal attacker with access to the system
Internal user accidentally accessing data
According to the CXL blog1, the scenario that must be covered for the penetration test to be effective is the third-party vendor with access to the system. A third-party vendor is an external entity that provides a service or product to the organization, such as a software developer, cloud provider, or payment processor. A vendor with access to the system is a significant source of risk: it may introduce weaknesses in areas such as configuration, authentication, or encryption, and it can also be a target or vector of attack, since a compromised vendor account can be exploited to gain unauthorized access and to steal, modify, or delete data. Covering this scenario lets the test identify the security gaps that arise from third-party access and drive the appropriate safeguards and countermeasures.
The other scenarios could make the test more comprehensive, but they are not the one that must be covered. A compromised system administrator account, in which an administrator's credentials are stolen or misused to act with administrative privileges, is worth testing but is not the focus of this test. An internal attacker with access to the system, such as a malicious employee, contractor, or partner who abuses legitimate or illegitimate access, is likewise a useful scenario but not the required one. An internal user accidentally accessing data is an unintentional event rather than a malicious attack; it can reveal access control gaps, but it does not pose the same level of threat and is not the scenario that must be covered. References: 1
A proxy firewall operates at what layer of the Open System Interconnection (OSI) model?
Transport
Data link
Network
Application
According to the CISSP Official (ISC)2 Practice Tests2, a proxy firewall operates at the application layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across a network, by dividing the functions into seven layers: physical, data link, network, transport, session, presentation, and application. A proxy firewall is a type of firewall that acts as an intermediary between the source and the destination of a network connection, by intercepting and inspecting the data packets at the application layer, which is the highest layer of the OSI model. The application layer is responsible for providing the interface and services for the applications and processes that communicate over the network, such as HTTP, FTP, SMTP, and DNS. A proxy firewall can filter and control the network traffic based on the content and context of the application layer protocols and messages, as well as perform caching, authentication, encryption, and logging functions. A proxy firewall does not operate at the transport layer, the data link layer, or the network layer of the OSI model, as these are lower layers that provide different functions, such as reliable and ordered delivery of data, physical and logical addressing of devices, and routing and forwarding of packets. References: 2
What is the PRIMARY difference between security policies and security procedures?
Policies are used to enforce violations, and procedures create penalties
Policies point to guidelines, and procedures are more contractual in nature
Policies are included in awareness training, and procedures give guidance
Policies are generic in nature, and procedures contain operational details
The primary difference between security policies and security procedures is that policies are generic in nature, and procedures contain operational details. Security policies are the high-level statements or rules that define the goals, objectives, and requirements of security for an organization. Security procedures are the low-level steps or actions that specify how to implement, enforce, and comply with the security policies.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 17; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 13
The World Trade Organization's (WTO) agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) requires authors of computer software to be given the
right to refuse or permit commercial rentals.
right to disguise the software's geographic origin.
ability to tailor security parameters based on location.
ability to confirm license authenticity of their works.
The World Trade Organization’s (WTO) agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) requires authors of computer software to be given the right to refuse or permit commercial rentals. TRIPS is an international treaty that sets the minimum standards and rules for the protection and enforcement of intellectual property rights, such as patents, trademarks, or copyrights. TRIPS requires authors of computer software to be given the right to refuse or permit commercial rentals, which means that they can control whether their software can be rented or leased to others for profit. This right is intended to prevent the unauthorized copying or distribution of the software, and to ensure that the authors receive fair compensation for their work. The other options are not the rights that TRIPS requires authors of computer software to be given, but rather different or irrelevant concepts. The right to disguise the software’s geographic origin is not a right, but rather a violation, of TRIPS, as it can mislead or deceive the consumers or authorities about the source or quality of the software. The ability to tailor security parameters based on location is not a right, but rather a feature, of some software, such as encryption or authentication software, that can adjust the security settings or functions according to the location or jurisdiction of the user or device. The ability to confirm license authenticity of their works is not a right, but rather a benefit, of some software, such as digital rights management or anti-piracy software, that can verify or validate the license or ownership of the software. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 40; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 302.
Which of the following BEST represents the concept of least privilege?
Access to an object is denied unless access is specifically allowed.
Access to an object is only available to the owner.
Access to an object is allowed unless it is protected by the information security policy.
Access to an object is only allowed to authenticated users via an Access Control List (ACL).
According to the CISSP CBK Official Study Guide1, the concept of least privilege means that users and processes should only have the minimum access required to perform their tasks, and no more. This reduces the risk of unauthorized or malicious actions, as well as the impact of potential incidents. One way to implement the principle of least privilege is to use a default-deny policy, which means that access to an object is denied unless access is specifically allowed. This is also known as a whitelist approach, which only grants access to predefined and authorized entities. Access to an object is only available to the owner is not a good representation of the concept of least privilege, as it may prevent legitimate access by other authorized users or processes. Access to an object is allowed unless it is protected by the information security policy is not a good representation of the concept of least privilege, as it may allow unnecessary or excessive access by default. This is also known as a blacklist approach, which only denies access to predefined and unauthorized entities. Access to an object is only allowed to authenticated users via an Access Control List (ACL) is not a good representation of the concept of least privilege, as it may not consider the authorization and accountability aspects of access control. Authentication is the process of verifying the identity of a user or process, while authorization is the process of granting or denying access based on the identity and the access policy. An ACL is a mechanism that defines the permissions and restrictions for accessing an object, but it does not necessarily enforce the principle of least privilege. References: 1
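A minimal sketch of a default-deny (whitelist) decision function, with illustrative subjects and objects: any request without an explicit allow rule is refused:

```python
# Sketch: default-deny (whitelist) access decision -- access is denied
# unless a rule specifically allows it.
ALLOW_RULES = {
    ("alice", "hr_database", "read"),
    ("bob", "hr_database", "read"),
    ("bob", "hr_database", "write"),
}

def is_allowed(subject: str, obj: str, action: str) -> bool:
    # No matching allow rule means the request is denied by default.
    return (subject, obj, action) in ALLOW_RULES

print(is_allowed("alice", "hr_database", "read"))   # True
print(is_allowed("alice", "hr_database", "write"))  # False: denied by default
```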
Which of the following is the PRIMARY concern when using an Internet browser to access a cloud-based service?
Insecure implementation of Application Programming Interfaces (API)
Improper use and storage of management keys
Misconfiguration of infrastructure allowing for unauthorized access
Vulnerabilities within protocols that can expose confidential data
The primary concern when using an Internet browser to access a cloud-based service is the vulnerabilities within protocols that can expose confidential data. Protocols are the rules and formats that govern the communication and exchange of data between systems or applications. Protocols can have vulnerabilities or flaws that can be exploited by attackers to intercept, modify, or steal the data. For example, some protocols may not provide adequate encryption, authentication, or integrity for the data, or they may have weak or outdated algorithms, keys, or certificates. When using an Internet browser to access a cloud-based service, the data may be transmitted over various protocols, such as HTTP, HTTPS, SSL, TLS, etc. If any of these protocols are vulnerable, the data may be compromised, especially if the data is sensitive or confidential. Therefore, it is important to use secure and updated protocols, as well as to monitor and patch any vulnerabilities12 References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 338; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 456.
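As a practical illustration, a client can refuse weak protocol versions outright; the sketch below uses Python's standard ssl module and a placeholder hostname:

```python
# Sketch: mitigate protocol weaknesses by refusing anything older than
# TLS 1.2 when connecting to a (hypothetical) cloud service endpoint.
import socket
import ssl

context = ssl.create_default_context()            # verifies certificates by default
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject SSL 3.0 / TLS 1.0 / 1.1

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version())       # e.g. "TLSv1.3"
```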
Which of the following media sanitization techniques is MOST likely to be effective for an organization using public cloud services?
Low-level formatting
Secure-grade overwrite erasure
Cryptographic erasure
Drive degaussing
Media sanitization is the process of rendering the data on a storage device inaccessible or unrecoverable by a given level of effort. For an organization using public cloud services, the most effective media sanitization technique is cryptographic erasure, which involves encrypting the data on the device with a strong key and then deleting the key, making the data unreadable. Cryptographic erasure is suitable for cloud environments because it does not require physical access to the device, it can be performed remotely and quickly, and it does not affect the performance or lifespan of the device. Low-level formatting, secure-grade overwrite erasure, and drive degaussing are media sanitization techniques that require physical access to the device, which may not be possible or feasible for cloud users. Additionally, these techniques may not be compatible with some cloud storage technologies, such as solid-state drives (SSDs) or flash memory, and they may reduce the performance or lifespan of the device.
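A minimal sketch of the technique using the third-party cryptography package: data exists only as ciphertext, so destroying the key is equivalent to destroying the data. Real cryptographic erasure must purge every copy of the key, typically held in a key management service or HSM:

```python
# Sketch of cryptographic erasure: data is stored only in encrypted form,
# so destroying the key renders the ciphertext unrecoverable.
from cryptography.fernet import Fernet

key = Fernet.generate_key()                 # in practice, held in a key manager
ciphertext = Fernet(key).encrypt(b"customer records")

# Normal operation: the key decrypts the stored ciphertext.
print(Fernet(key).decrypt(ciphertext))      # b'customer records'

# Sanitization: securely destroy every copy of the key. The cloud provider's
# disks still hold ciphertext, but without the key it is unreadable.
key = None  # illustrative only; real erasure must purge all key copies
```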
The BEST method to mitigate the risk of a dictionary attack on a system is to
use a hardware token.
use complex passphrases.
implement password history.
encrypt the access control list (ACL).
The best method to mitigate the risk of a dictionary attack on a system is to use complex passphrases. A dictionary attack is a type of brute force attack that tries to guess or crack a password or a passphrase by using a list or a database of common or frequently used words, phrases, or combinations, such as names, dates, or dictionary words. A complex passphrase is a type of password or a passphrase that consists of a long and random sequence of characters, words, or symbols, that is hard to guess or crack by a dictionary attack or any other attack. A complex passphrase can provide a high level of security and entropy for a system, as it increases the possible combinations and reduces the probability of a successful attack.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 328; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 289
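A short worked example shows why complex passphrases resist dictionary attacks; the wordlist sizes are rough assumptions for illustration:

```python
# Sketch: compare the guessing space of a single dictionary word against a
# long random passphrase, measured in bits of entropy (log2 of the keyspace).
import math

DICTIONARY_WORDS = 100_000      # rough size of a cracking wordlist (assumption)
entropy_word = math.log2(DICTIONARY_WORDS)
print(f"one dictionary word: ~{entropy_word:.1f} bits")   # ~16.6 bits

# A passphrase of several randomly chosen words multiplies the keyspace:
WORDS_IN_PASSPHRASE = 5
entropy_phrase = WORDS_IN_PASSPHRASE * math.log2(7_776)   # diceware-style list
print(f"five random words:  ~{entropy_phrase:.1f} bits")  # ~64.6 bits
```

Each added random word multiplies the number of candidate passphrases the attacker must try, so a wordlist that exhausts single dictionary words in seconds becomes useless against a five-word random passphrase.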
Match the objectives to the assessment questions in the governance domain of Software Assurance Maturity Model (SAMM).
The correct matches follow from the definitions and objectives of the governance domain practices in the Software Assurance Maturity Model (SAMM). SAMM is a framework that helps organizations assess and improve their software security posture. The governance domain covers the organizational aspects of software security, such as policies, metrics, and roles; its practices include Strategy & Metrics, Policy & Compliance, and Education & Guidance.
References: SAMM Governance Domain; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 8, page 452
Backup information that is critical to the organization is identified through a
Vulnerability Assessment (VA).
Business Continuity Plan (BCP).
Business Impact Analysis (BIA).
data recovery analysis.
A BIA is a process that identifies and evaluates the potential effects of natural and human-caused disasters on critical business operations1. A BIA helps to determine which data, systems, and processes are essential for the continuity and recovery of the organization2. A BIA also helps to prioritize the backup and restoration of critical data and systems3.
Which of the following BEST describes a rogue Access Point (AP)?
An AP that is not protected by a firewall
An AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES)
An AP connected to the wired infrastructure but not under the management of authorized network administrators
An AP infected by any kind of Trojan or Malware
A rogue Access Point (AP) is an AP connected to the wired infrastructure but not under the management of authorized network administrators. A rogue AP can pose a serious security threat, as it can allow unauthorized access to the network, bypass security controls, and expose sensitive data. The other options are not correct descriptions of a rogue AP. Option A is a description of an unsecured AP, which is an AP that is not protected by a firewall or other security measures. Option B is a description of an outdated AP, which is an AP not configured to use Wired Equivalent Privacy (WEP) with Triple Data Encryption Algorithm (3DES), which are weak encryption methods that can be easily cracked. Option D is a description of a compromised AP, which is an AP infected by any kind of Trojan or Malware, which can cause malicious behavior or damage to the network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, p. 325; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, p. 241.
Which of the following activities BEST identifies operational problems, security misconfigurations, and malicious attacks?
Policy documentation review
Authentication validation
Periodic log reviews
Interface testing
The activity that best identifies operational problems, security misconfigurations, and malicious attacks is periodic log reviews. Log reviews are the process of examining and analyzing the records of events or activities that occur on a system or network, such as user actions, system errors, security alerts, or network traffic. Periodic log reviews can help to identify operational problems, such as system failures, performance issues, or configuration errors, by detecting anomalies, trends, or patterns in the log data. Periodic log reviews can also help to identify security misconfigurations, such as weak passwords, open ports, or missing patches, by comparing the log data with the security policies, standards, or baselines. Periodic log reviews can also help to identify malicious attacks, such as unauthorized access, data breaches, or denial of service, by recognizing signs of intrusion, compromise, or exploitation in the log data. The other options are not the best activities to identify operational problems, security misconfigurations, and malicious attacks, but rather different types of activities. Policy documentation review is the process of examining and evaluating the documents that define the rules and guidelines for the system or network security, such as policies, procedures, or standards. Policy documentation review can help to ensure the completeness, consistency, and compliance of the security documents, but not to identify the actual problems or attacks. Authentication validation is the process of verifying and confirming the identity and credentials of a user or device that requests access to a system or network, such as passwords, tokens, or certificates. Authentication validation can help to prevent unauthorized access, but not to identify the existing problems or attacks. Interface testing is the process of checking and evaluating the functionality, usability, and reliability of the interfaces between different components or systems, such as modules, applications, or networks. Interface testing can help to ensure the compatibility, interoperability, and integration of the interfaces, but not to identify the problems or attacks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, p. 377; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 405.
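As a small illustration of a periodic log review, the sketch below flags possible brute-force activity by counting failed logins per source address; the log format and threshold are hypothetical:

```python
# Sketch: a periodic log review that flags possible brute-force attacks by
# counting failed logins per source address. Log format is hypothetical.
from collections import Counter

def flag_failed_logins(lines: list[str], threshold: int = 5) -> list[str]:
    """Expects lines like 'FAILED_LOGIN user=bob src=203.0.113.9'."""
    failures = Counter()
    for line in lines:
        if line.startswith("FAILED_LOGIN"):
            src = line.split("src=")[1].split()[0]
            failures[src] += 1
    return [src for src, count in failures.items() if count >= threshold]

sample = ["FAILED_LOGIN user=bob src=203.0.113.9"] * 6
print(flag_failed_logins(sample))  # ['203.0.113.9']
```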
When designing a vulnerability test, which one of the following is likely to give the BEST indication of what components currently operate on the network?
Topology diagrams
Mapping tools
Asset register
Ping testing
According to the CISSP All-in-One Exam Guide2, when designing a vulnerability test, mapping tools are likely to give the best indication of what components currently operate on the network. Mapping tools are software applications that scan and discover the network topology, devices, services, and protocols. They can provide a graphical representation of the network structure and components, as well as detailed information about each node and connection. Mapping tools can help identify potential vulnerabilities and weaknesses in the network configuration and architecture, as well as the exposure and attack surface of the network. Topology diagrams are not likely to give the best indication of what components currently operate on the network, as they may be outdated, inaccurate, or incomplete. Topology diagrams are static and abstract representations of the network layout and design, but they may not reflect the actual and dynamic state of the network. Asset register is not likely to give the best indication of what components currently operate on the network, as it may be outdated, inaccurate, or incomplete. Asset register is a document that lists and categorizes the assets owned by an organization, such as hardware, software, data, and personnel. However, it may not capture the current status, configuration, and interconnection of the assets, as well as the changes and updates that occur over time. Ping testing is not likely to give the best indication of what components currently operate on the network, as it is a simple and limited technique that only checks the availability and response time of a host. Ping testing is a network utility that sends an echo request packet to a target host and waits for an echo reply packet. It can measure the connectivity and latency of the host, but it cannot provide detailed information about the host’s characteristics, services, and vulnerabilities. References: 2
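A toy version of such a mapping tool can be sketched with a TCP connect scan; the addresses and ports below are illustrative, and scanning should only ever be run against networks you are authorized to test:

```python
# Sketch: a very small "mapping tool" that discovers listening services by
# attempting TCP connections to a list of hosts and ports.
import socket

def scan(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

for host in ["192.0.2.10", "192.0.2.11"]:
    print(host, scan(host, [22, 80, 443, 3389]))
```

Real mapping tools add service fingerprinting and topology discovery, but even this sketch shows why live scanning reflects what actually operates on the network better than a static diagram or register.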
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
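For illustration only, account provisioning through an IDaaS platform is commonly exposed via SCIM 2.0 (RFC 7644); a minimal Python sketch of creating a contractor account might look like the following, where the endpoint and token are placeholders, not a real service.

    import json
    import urllib.request

    SCIM_BASE = "https://idaas.example.com/scim/v2"   # hypothetical endpoint
    TOKEN = "REPLACE_WITH_API_TOKEN"                  # placeholder credential

    def provision_contractor(user_name, display_name):
        """Create a user via a SCIM 2.0 POST /Users call."""
        payload = {
            "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
            "userName": user_name,
            "displayName": display_name,
            "active": True,
        }
        req = urllib.request.Request(
            SCIM_BASE + "/Users",
            data=json.dumps(payload).encode(),
            headers={
                "Authorization": "Bearer " + TOKEN,
                "Content-Type": "application/scim+json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

Deprovisioning when a contractor leaves is the mirror image (a DELETE, or a PATCH setting "active" to false), which is exactly the account-management churn the question describes offloading.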
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. In outline, the reader sends a fresh random challenge to the card; the card signs the challenge with its private CAK, which never leaves the card; and the reader verifies the signature with the card's public key (the card can likewise verify the reader).
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
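A minimal Python sketch of this challenge-response flow, using the third-party cryptography package and ECDSA as an assumed signature algorithm, might look like this:

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # The card holds a private CAK; the reader trusts the matching public key,
    # e.g. obtained from a certificate stored on the card.
    card_private_key = ec.generate_private_key(ec.SECP256R1())
    card_public_key = card_private_key.public_key()

    # 1. The reader sends a fresh random challenge (defeats replay).
    challenge = os.urandom(32)

    # 2. The card signs the challenge; the private key never leaves the chip.
    signature = card_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # 3. The reader verifies the signature with the card's public key.
    try:
        card_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("card is genuine")
    except InvalidSignature:
        print("possible clone: signature did not verify")

A cloned card that copied only the public data could not produce a valid signature for a new challenge, which is the property the answer relies on.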
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center, is preparing a companywide Business Continuity Planning (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted.
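As an illustrative sketch, not any vendor's actual tape-encryption implementation, backup data can be protected with authenticated encryption such as AES-256-GCM, with the key generated and held apart from the media; this uses the third-party cryptography package.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # The key must live in a key management system, never on the tape itself.
    key = AESGCM.generate_key(bit_length=256)

    def encrypt_backup(plaintext):
        """Encrypt a backup blob with AES-256-GCM; the nonce is prepended."""
        nonce = os.urandom(12)          # must be unique per encryption
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def decrypt_backup(blob):
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    assert decrypt_backup(encrypt_backup(b"payroll records")) == b"payroll records"

Because GCM is authenticated encryption, tampering with the ciphertext is also detected at decryption time, so integrity is protected alongside confidentiality.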
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
Identify the operational impacts of a business interruption
Identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
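A simplified Python sketch of overwrite-before-delete follows; note that on SSDs and flash media, wear leveling means file-level overwriting is unreliable, which is why full re-imaging or cryptographic erase of the whole drive is the preferred approach.

    import os

    def overwrite_and_delete(path, passes=3):
        """Overwrite a file with random bytes before unlinking it.

        Illustrative only: effective on traditional magnetic disks, but not a
        substitute for re-imaging or cryptographic erase on SSDs.
        """
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())    # force each pass onto the media
        os.remove(path)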
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses (such as the MIT, BSD, or Apache licenses), which allow the software to be used, modified, and redistributed with few restrictions, and copyleft licenses (such as the GNU General Public License), which require that modified or derivative works be released under the same license terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, security assessment, security authorization, security monitoring, and configuration management and control.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as maintaining the integrity and traceability of the system configuration, preventing unauthorized or undocumented changes, and supporting analysis of the security impact of proposed changes.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or other malicious behavior before the software reaches production, verifying that the software functions as intended, and containing any infection within the isolated test environment.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
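A short Python sketch of the hash-verification idea, assuming the vendor publishes a SHA-256 digest through a trusted channel separate from the download itself:

    import hashlib
    import hmac

    def verify_update(path, published_sha256):
        """Compare a downloaded update's SHA-256 against the published value."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        # Constant-time comparison avoids leaking how far the values match.
        return hmac.compare_digest(digest.hexdigest(), published_sha256.lower())

As the surrounding text notes, this proves the delivered file matches the published digest, but not that the digest itself is trustworthy or that the software is malware-free.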
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing the security patch level of the environment helps prevent exploitation of OS bugs because it can provide several benefits, such as verifying that the underlying OS has current patches for known bugs, reducing the attack surface available to exploit code, and identifying missing patches before attackers can take advantage of them.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
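A minimal Python sketch of a patch-level gate, assuming a hypothetical deployment policy of requiring at least Linux kernel 5.15, might look like this:

    import platform

    REQUIRED_KERNEL = (5, 15)   # hypothetical minimum patch level

    def kernel_version():
        """Numeric prefix of platform.release(), e.g. '5.15.0-91-generic' -> (5, 15, 0)."""
        release = platform.release().split("-")[0]
        return tuple(int(p) for p in release.split(".") if p.isdigit())

    def patch_level_ok():
        """Fail closed when the host is below the required patch level."""
        current = kernel_version()
        return current[:len(REQUIRED_KERNEL)] >= REQUIRED_KERNEL

    if not patch_level_ok():
        raise SystemExit("host is below the required patch level; refusing to deploy")

A production control would query the OS patch inventory rather than the kernel string alone; the sketch only shows the fail-closed comparison.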
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, which controls how classes are loaded and kept separate; the bytecode verifier, which checks code for illegal or unsafe operations before execution; the security manager and access controller, which enforce the runtime security policy; and the policy files, which grant permissions based on the code's source, signer, or location.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unpatchable vulnerabilities, weak or outdated encryption and authentication mechanisms, dependence on unsupported or end-of-life components, and incompatibility with modern security controls and standards.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as receiving current vendor support and security patches, using modern encryption and authentication mechanisms, and integrating with up-to-date security tools and standards.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
The stringency of an Information Technology (IT) security assessment will be determined by the
system's past security record.
size of the system's database.
sensitivity of the system's data.
age of the system.
The stringency of an Information Technology (IT) security assessment will be determined by the sensitivity of the system's data, as this reflects the level of risk and impact that a security breach could have on the organization and its stakeholders. The more sensitive the data, the more stringent the security assessment should be, as it should cover more aspects of the system, use more rigorous methods and tools, and provide more detailed and accurate results and recommendations. The system's past security record, the size of the system's database, and the age of the system are not the main factors that determine the stringency of the security assessment, as they do not directly relate to the value and importance of the data that the system processes, stores, or transmits. References: Common Criteria for Information Technology Security Evaluation; Information technology security assessment, Wikipedia.
Which of the following is an attacker MOST likely to target to gain privileged access to a system?
Programs that write to system resources
Programs that write to user directories
Log files containing sensitive information
Log files containing system calls
An attacker is most likely to target programs that write to system resources to gain privileged access to a system. System resources are the hardware and software components that are essential for the operation and functionality of a system, such as the CPU, memory, disk, network, operating system, drivers, libraries, etc. Programs that write to system resources may have higher privileges or permissions than programs that write to user directories or log files. An attacker may exploit vulnerabilities or flaws in these programs to execute malicious code, escalate privileges, or bypass security controls. Programs that write to user directories or log files are less likely to be targeted by an attacker, as they may have lower privileges or permissions, and may not contain sensitive information or system calls. User directories are the folders or locations where users store their personal files or data. Log files are the records of events or activities that occur in a system or application.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the
confidentiality of the traffic is protected.
opportunity to sniff network traffic exists.
opportunity for device identity spoofing is eliminated.
storage devices are protected against availability attacks.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the opportunity to sniff network traffic exists. A SAN is a dedicated network that connects storage devices, such as disk arrays, tape libraries, or servers, to provide high-speed data access and transfer. A SAN may use different protocols or technologies to communicate with storage devices, such as Fibre Channel, Fibre Channel over Ethernet (FCoE), or iSCSI. By allowing storage communications to run on top of TCP/IP, a common network protocol that supports internet and intranet communications, a SAN may leverage the existing network infrastructure and reduce costs and complexity. However, this also exposes the storage communications to the same risks and threats that affect network communications, such as sniffing, spoofing, or denial-of-service attacks. Sniffing is the act of capturing or monitoring network traffic, which may reveal sensitive or confidential information, such as passwords, encryption keys, or data. The confidentiality of the traffic is therefore not protected, unless encryption or other security measures are applied. The opportunity for device identity spoofing is not eliminated, as an attacker may still impersonate a legitimate storage device or server by using a forged or stolen IP address or MAC address. The storage devices are not protected against availability attacks, as an attacker may still disrupt or overload the network or the storage devices by sending malicious or excessive packets or requests.
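To make the sniffing opportunity concrete, the following Python sketch uses the third-party scapy package to capture iSCSI segments (TCP port 3260); anyone positioned on the network path with packet-capture privileges could run the same capture, which is why storage traffic over TCP/IP should be isolated or encrypted (for example with IPsec).

    from scapy.all import Raw, sniff

    def show_iscsi(packet):
        """Print a summary of each captured iSCSI segment (TCP port 3260)."""
        if packet.haslayer(Raw):
            payload = bytes(packet[Raw].load)
            print(packet.summary(), "| first bytes:", payload[:16].hex())

    # Requires capture privileges; the BPF filter keeps only iSCSI traffic.
    sniff(filter="tcp port 3260", prn=show_iscsi, count=10)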
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include
hardened building construction with consideration of seismic factors.
adequate distance from and lack of access to adjacent buildings.
curved roads approaching the data center.
proximity to high crime areas of the city.
When building a data center, site location and construction factors that increase the level of vulnerability to physical threats include proximity to high crime areas of the city. This factor increases the risk of theft, vandalism, sabotage, or other malicious acts that could damage or disrupt the data center operations. The other options are factors that decrease the level of vulnerability to physical threats, as they provide protection or deterrence against natural or human-made hazards. Hardened building construction with consideration of seismic factors (A) reduces the impact of earthquakes or other natural disasters. Adequate distance from and lack of access to adjacent buildings (B) prevents unauthorized entry or fire spread from neighboring structures. Curved roads approaching the data center (C) slow down the speed of vehicles and make it harder for attackers to ram or bomb the data center. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 637; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 10, page 699.
Two companies wish to share electronic inventory and purchase orders in a supplier and client relationship. What is the BEST security solution for them?
Write a Service Level Agreement (SLA) for the two companies.
Set up a Virtual Private Network (VPN) between the two companies.
Configure a firewall at the perimeter of each of the two companies.
Establish a File Transfer Protocol (FTP) connection between the two companies.
The best security solution for two companies that wish to share electronic inventory and purchase orders in a supplier and client relationship is to set up a Virtual Private Network (VPN) between the two companies. A VPN is a secure and encrypted connection that allows the two companies to exchange data over a public network, such as the internet, as if they were on a private network. A VPN protects the confidentiality, integrity, and availability of the data, and prevents unauthorized access, interception, or modification by third parties. A VPN also provides authentication, authorization, and accounting of the users and devices that access the data. References: What is a VPN and how does it work? Your guide to internet privacy and security; What is a VPN?
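As a purely illustrative sketch, a site-to-site tunnel between the two companies could be expressed as a WireGuard configuration such as the following, where every key, address, and hostname is a placeholder; other VPN technologies (IPsec, TLS-based) would serve equally well.

    # Company A's gateway (placeholders throughout)
    [Interface]
    PrivateKey = <company-A-private-key>
    Address    = 10.99.0.1/30
    ListenPort = 51820

    [Peer]
    # Company B's gateway
    PublicKey  = <company-B-public-key>
    Endpoint   = vpn.company-b.example:51820
    # Route only the subnet hosting the inventory and ordering systems.
    AllowedIPs = 10.20.30.0/24

Restricting AllowedIPs to a specific subnet keeps the tunnel scoped to the shared inventory and purchase-order systems rather than exposing either company's whole network.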
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system. Social engineering can exploit the human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
What is the FIRST step in developing a security test and its evaluation?
Determine testing methods
Develop testing procedures
Identify all applicable security requirements
Identify people, processes, and products not in compliance
The first step in developing a security test and its evaluation is to identify all applicable security requirements. Security requirements are the specifications or criteria that define the security objectives, expectations, and needs of the system or network. Security requirements may be derived from various sources, such as business goals, user needs, regulatory standards, contractual obligations, or best practices. Identifying all applicable security requirements is essential to establish the scope, purpose, and criteria of the security test and its evaluation. Determining testing methods, developing testing procedures, and identifying people, processes, and products not in compliance are subsequent steps that should be done after identifying the security requirements, as they depend on the security requirements to be defined and agreed upon. References: Security Testing - Overview; Security Testing - Planning.
The type of authorized interactions a subject can have with an object is
control.
permission.
procedure.
protocol.
Permission is the type of authorized interactions a subject can have with an object. Permission is a rule or a setting that defines the specific actions or operations that a subject can perform on an object, such as read, write, execute, or delete. Permission is usually granted by the owner or the administrator of the object, and can be based on the identity, role, or group membership of the subject. Control, procedure, and protocol are not types of authorized interactions a subject can have with an object, as they are related to different aspects of access control or security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 355.
Which of the following statements is TRUE for point-to-point microwave transmissions?
They are not subject to interception due to encryption.
Interception only depends on signal strength.
They are too highly multiplexed for meaningful interception.
They are subject to interception by an antenna within proximity.
They are subject to interception by an antenna within proximity. Point-to-point microwave transmissions are line-of-sight media, which means that they can be intercepted by any antenna that is in the direct path of the signal. The interception does not depend on encryption, multiplexing, or signal strength, as long as the antenna is close enough to receive the signal.
When constructing an Information Protection Policy (IPP), it is important that the stated rules are necessary, adequate, and
flexible.
confidential.
focused.
achievable.
An Information Protection Policy (IPP) is a document that defines the objectives, scope, roles, responsibilities, and rules for protecting the information assets of an organization. An IPP should be aligned with the business goals and legal requirements, and should be communicated and enforced throughout the organization. When constructing an IPP, it is important that the stated rules are necessary, adequate, and achievable, meaning that they are relevant, sufficient, and realistic for the organization’s context and capabilities. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 23; CISSP For Dummies, 7th Edition, Chapter 1, page 15.
The BEST method of demonstrating a company's security level to potential customers is
a report from an external auditor.
responding to a customer's security questionnaire.
a formal report from an internal auditor.
a site visit by a customer's security team.
The best method of demonstrating a company’s security level to potential customers is a report from an external auditor, who is an independent and qualified third party that evaluates the company’s security policies, procedures, controls, and practices against a set of standards or criteria, such as ISO 27001, NIST, or COBIT. A report from an external auditor provides an objective and credible assessment of the company’s security posture, and may also include recommendations for improvement or certification. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 47; CISSP For Dummies, 7th Edition, Chapter 1, page 29.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as reducing the attack surface, limiting the damage that a compromised or misused account can cause, and simplifying the audit of access to sensitive information.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
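To make the distinction concrete, the following minimal Python sketch shows a least-privilege check in which two equally cleared users hold different permissions; the users, projects, and permission strings are hypothetical.

# Minimal sketch of a least-privilege check (all names hypothetical).
# Each user is granted only the permissions needed for their own tasks,
# regardless of clearance level.

PERMISSIONS = {
    "alice": {"project_x:read"},                      # works on Project X only
    "bob":   {"project_y:read", "project_y:write"},   # works on Project Y only
}

def can_access(user: str, resource: str, action: str) -> bool:
    """Return True only if the user holds the exact permission required."""
    return f"{resource}:{action}" in PERMISSIONS.get(user, set())

# Both users may hold the same clearance, but each can reach only
# the sensitive data tied to their own project.
assert can_access("alice", "project_x", "read")
assert not can_access("alice", "project_y", "read")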
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as enforcing consistent and restricted access to the data, preventing users from constructing ad hoc queries against sensitive fields, and simplifying the review and audit of data access.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.
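As a rough illustration of the predefined-query control, the following Python sketch (using the standard sqlite3 module; the table, columns, and data are hypothetical) exposes only a stored aggregate query, so users can obtain a group average but never an individual salary.

# Hypothetical sketch of the predefined-query control using sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ann", "IT", 70000), ("Ben", "IT", 90000), ("Cy", "HR", 60000)])

# The only statement users may execute: a stored, parameterized aggregate.
AVG_SALARY_BY_DEPT = "SELECT AVG(salary) FROM employees WHERE dept = ?"

def average_salary(dept: str) -> float:
    # Users call this function; they never submit SQL of their own,
    # so a query such as "SELECT salary ... WHERE name = 'Ann'" is impossible.
    (avg,) = conn.execute(AVG_SALARY_BY_DEPT, (dept,)).fetchone()
    return avg

print(average_salary("IT"))  # 80000.0 -- group average only

In practice the stored query should also enforce a minimum group size, since an aggregate over a group of one would still reveal an individual salary.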
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows: the user first authenticates to the organization using the smart card and PIN; a new key pair and certificate are then generated from that authenticated session and provisioned to secure storage on the mobile device; and the mobile device subsequently presents the derived credential, typically unlocked with a PIN or a biometric, in place of the smart card.
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as reducing the administrative overhead of managing multiple accounts, improving the user experience through single sign-on, and strengthening security by limiting password proliferation across the participating organizations.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal, which is the user who requests access; the identity provider (IdP), which authenticates the user and issues the assertions; and the service provider (SP), which consumes the assertions and grants or denies access to the resources or services.
SAML works as follows: the user requests a resource from the service provider; the service provider redirects the user to the identity provider; the identity provider authenticates the user and returns a digitally signed assertion; and the service provider validates the assertion and grants or denies access accordingly.
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
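The following Python sketch is a conceptual model of the SAML roles and the assertion flow, not a real SAML implementation; all class names, fields, and values are illustrative only.

# Conceptual sketch of the SAML roles and message flow (illustrative only).
from dataclasses import dataclass

@dataclass
class Assertion:
    subject: str          # the authenticated principal
    issuer: str           # the identity provider that issued it
    attributes: dict      # e.g. role, supplier company
    signature: str        # in real SAML, an XML digital signature

class IdentityProvider:
    def authenticate(self, user: str, password: str) -> Assertion:
        # A real IdP verifies credentials and signs an XML assertion.
        return Assertion(user, "manufacturer-idp", {"role": "buyer"}, "sig")

class ServiceProvider:
    def grant_access(self, assertion: Assertion) -> bool:
        # A real SP validates the signature and checks the trusted issuer.
        return assertion.issuer == "manufacturer-idp" and assertion.signature == "sig"

idp, sp = IdentityProvider(), ServiceProvider()
print(sp.grant_access(idp.authenticate("alice", "pw")))  # True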
An Information Technology (IT) professional attends a cybersecurity seminar on current incident response methodologies.
What code of ethics canon is being observed?
Provide diligent and competent service to principals
Protect society, the commonwealth, and the infrastructure
Advance and protect the profession
Act honorably, honestly, justly, responsibly, and legally
Attending a cybersecurity seminar to learn about current incident response methodologies aligns with the ethical canon of advancing and protecting the profession. It involves enhancing one’s knowledge and skills, contributing to the growth and integrity of the field, and staying abreast of the latest developments and best practices in information security. References: ISC² Code of Ethics
Due to system constraints, a group of system administrators must share a high-level access set of credentials.
Which of the following would be MOST appropriate to implement?
Increased console lockout times for failed logon attempts
Reduce the group in size
A credential check-out process for a per-use basis
Full logging on affected systems
The most appropriate measure to implement when a group of system administrators must share a high-level access set of credentials due to system constraints is a credential check-out process for a per-use basis. This means that the system administrators must request and obtain the credentials from a secure source each time they need to use them, and return them after they finish their tasks. This can help to reduce the risk of unauthorized access, misuse, or compromise of the credentials, as well as to enforce accountability and traceability of the system administrators’ actions. Increasing console lockout times, reducing the group size, and enabling full logging are not as effective as a credential check-out process, as they do not address the root cause of the problem, which is the sharing of the credentials. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 633; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 412.
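A minimal Python sketch of the check-out idea follows; the class and field names are hypothetical, and real privileged access management tools typically also rotate the secret at check-in.

# Minimal sketch of a per-use credential check-out ledger (hypothetical).
import datetime

class CredentialVault:
    def __init__(self, secret: str):
        self._secret = secret
        self._holder = None
        self.audit_log = []   # records who held the credential, and when

    def check_out(self, admin: str) -> str:
        if self._holder is not None:
            raise RuntimeError(f"credential already checked out to {self._holder}")
        self._holder = admin
        self.audit_log.append((admin, "check_out", datetime.datetime.now()))
        return self._secret

    def check_in(self, admin: str) -> None:
        if self._holder != admin:
            raise RuntimeError("only the current holder can check the credential in")
        self._holder = None
        self.audit_log.append((admin, "check_in", datetime.datetime.now()))

vault = CredentialVault("root-password")
pw = vault.check_out("admin1")   # admin1 is accountable until check-in
vault.check_in("admin1")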
An organization has discovered that users are visiting unauthorized websites using anonymous proxies.
Which of the following is the BEST way to prevent future occurrences?
Remove the anonymity from the proxy
Analyze Internet Protocol (IP) traffic for proxy requests
Disable the proxy server on the firewall
Block the Internet Protocol (IP) address of known anonymous proxies
Anonymous proxies are servers that act as intermediaries between the user and the internet, hiding the user’s real IP address and allowing them to bypass network restrictions and access unauthorized websites. The best way to prevent users from visiting unauthorized websites using anonymous proxies is to block the IP address of known anonymous proxies on the firewall or router. This will prevent the user from establishing a connection with the proxy server and accessing the blocked content. Removing the anonymity from the proxy, analyzing IP traffic for proxy requests, or disabling the proxy server on the firewall are not effective ways to prevent future occurrences, as they either do not address the root cause of the problem or require more resources and time to implement. References: The 17 Best Proxy Sites to Help You Browse Anonymously; Buy HTTP proxies and Socks5 | Anonymous Proxies; The Best Free Proxy Server List: Tested & Working! (2024).
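As a simplified illustration of the blocklist approach, the following Python sketch checks destination addresses against a set of known proxy addresses; the addresses shown are reserved documentation values, not real proxies.

# Hypothetical sketch of blocklist enforcement for known anonymous proxies.
import ipaddress

# Addresses of known anonymous proxies (illustrative documentation values).
BLOCKED_PROXIES = {
    ipaddress.ip_address("203.0.113.10"),
    ipaddress.ip_address("198.51.100.7"),
}

def allow_outbound(dest: str) -> bool:
    """Firewall-style check: drop connections to known proxy addresses."""
    return ipaddress.ip_address(dest) not in BLOCKED_PROXIES

print(allow_outbound("203.0.113.10"))  # False -- connection dropped
print(allow_outbound("192.0.2.1"))     # True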
Which of the following is the MOST effective method to mitigate Cross-Site Scripting (XSS) attacks?
Use Software as a Service (SaaS)
Whitelist input validation
Require client certificates
Validate data output
The most effective method to mitigate Cross-Site Scripting (XSS) attacks is to use whitelist input validation. XSS attacks occur when an attacker injects malicious code, usually in the form of a script, into a web application that is then executed by the browser of an unsuspecting user. XSS attacks can compromise the confidentiality, integrity, and availability of the web application and the user’s data. Whitelist input validation is a technique that checks the user input against a predefined set of acceptable values or characters, and rejects any input that does not match the whitelist. Whitelist input validation can prevent XSS attacks by filtering out any malicious or unexpected input that may contain harmful scripts. Whitelist input validation should be applied at the point of entry of the user input, and should be combined with output encoding or sanitization to ensure that any input that is displayed back to the user is safe and harmless. Use Software as a Service (SaaS), require client certificates, and validate data output are not the most effective methods to mitigate XSS attacks, although they may be related or useful techniques. Use Software as a Service (SaaS) is a model that delivers software applications over the Internet, usually on a subscription or pay-per-use basis. SaaS can provide some benefits for web security, such as reducing the attack surface, outsourcing the maintenance and patching of the software, and leveraging the expertise and resources of the service provider. However, SaaS does not directly address the issue of XSS attacks, as the service provider may still have vulnerabilities or flaws in their web applications that can be exploited by XSS attackers. Require client certificates is a technique that uses digital certificates to authenticate the identity of the clients who access a web application. Client certificates are issued by a trusted certificate authority (CA), and contain the public key and other information of the client. Client certificates can provide some benefits for web security, such as enhancing the confidentiality and integrity of the communication, preventing unauthorized access, and enabling mutual authentication. However, client certificates do not directly address the issue of XSS attacks, as the client may still be vulnerable to XSS attacks if the web application does not properly validate and encode the user input. Validate data output is a technique that checks the data that is sent from the web application to the client browser, and ensures that it is correct, consistent, and safe. Validate data output can provide some benefits for web security, such as detecting and correcting any errors or anomalies in the data, preventing data leakage or corruption, and enhancing the quality and reliability of the web application. However, validate data output is not sufficient to prevent XSS attacks, as the data output may still contain malicious scripts that can be executed by the client browser. Validate data output should be complemented with output encoding or sanitization to ensure that any data output that is displayed to the user is safe and harmless.
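A minimal Python sketch of whitelist input validation combined with output encoding follows; the field name, character whitelist, and length limit are assumptions that would be tuned to the application.

# Minimal sketch of whitelist input validation plus output encoding.
import html
import re

# Whitelist: letters, digits, spaces, and a few name characters only.
NAME_WHITELIST = re.compile(r"^[A-Za-z0-9 .'-]{1,50}$")

def accept_name(value: str) -> str:
    if not NAME_WHITELIST.fullmatch(value):
        raise ValueError("input rejected: contains characters outside the whitelist")
    return value

def render_greeting(name: str) -> str:
    # Defense in depth: encode on output even after validating on input.
    return f"<p>Hello, {html.escape(name)}</p>"

print(render_greeting(accept_name("Alice O'Neil")))
# accept_name("<script>alert(1)</script>")  # raises ValueError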
The organization would like to deploy an authorization mechanism for an Information Technology (IT)
infrastructure project with high employee turnover.
Which access control mechanism would be preferred?
Attribute Based Access Control (ABAC)
Discretionary Access Control (DAC)
Mandatory Access Control (MAC)
Role-Based Access Control (RBAC)
The preferred access control mechanism for an IT infrastructure project with high employee turnover is Role-Based Access Control (RBAC). RBAC is a type of access control model that assigns permissions to users based on their roles or functions within the organization, rather than on their individual identities or attributes. RBAC can be preferred for an IT infrastructure project with high employee turnover because it can simplify the management and the administration of the user accounts and access rights. RBAC can reduce the administrative overhead and ensure the consistency and accuracy of the user accounts and access rights, by using predefined roles or groups that have defined privileges. RBAC can also facilitate the identity lifecycle management activities, such as provisioning, review, or revocation, by adding or removing users from the roles or groups based on their current jobs. RBAC can also provide some benefits for security, such as enforcing the principle of least privilege, facilitating the separation of duties, and supporting the audit and compliance activities. Attribute Based Access Control (ABAC), Discretionary Access Control (DAC), and Mandatory Access Control (MAC) are not the preferred access control mechanisms for an IT infrastructure project with high employee turnover, although they may be related or useful access control models. ABAC is a type of access control model that assigns permissions to users with policies that combine attributes together. Attributes are characteristics or properties of the users, the objects, the environment, or the actions. ABAC can provide some benefits for access control, such as enhancing the flexibility and the granularity of the permissions, supporting the dynamic and complex scenarios, and enabling the interoperability and scalability of the systems or the network. However, ABAC is not preferred for an IT infrastructure project with high employee turnover, as it can be difficult and costly to implement and manage, due to the large number and variety of attributes and policies, and the lack of standardization and validation of the attributes and policies. DAC is a type of access control model that assigns permissions to users based on their identities or their ownership of the objects. DAC is enforced by the owner or the creator of the object, who can grant or revoke permissions to other users at their discretion. DAC can provide some benefits for access control, such as enhancing the flexibility and the usability of the permissions, supporting the user collaboration and sharing, and enabling the user autonomy and responsibility. However, DAC is not preferred for an IT infrastructure project with high employee turnover, as it can be insecure and inconsistent, due to the lack of centralized control and oversight of the permissions, and the potential for excessive or unauthorized permissions. MAC is a type of access control model that assigns permissions to users and objects based on their security labels, which indicate their level of sensitivity or trustworthiness. MAC is enforced by the system or the network, rather than by the owner or the creator of the object, and it cannot be modified or overridden by the users. MAC can provide some benefits for access control, such as enhancing the confidentiality and the integrity of the data, preventing unauthorized access or disclosure, and supporting the audit and compliance activities. 
However, MAC is not preferred for an IT infrastructure project with high employee turnover, as it can be rigid and complex, due to the strict and predefined rules and policies of the permissions, and the difficulty and overhead of assigning and maintaining the security labels.
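To illustrate why RBAC suits high turnover, the following Python sketch shows provisioning and revocation as single role operations; the roles and permission strings are hypothetical.

# Sketch of RBAC simplifying high turnover (hypothetical roles/permissions).
ROLE_PERMISSIONS = {
    "db_admin":  {"db:backup", "db:restore"},
    "net_admin": {"router:configure", "firewall:configure"},
}

user_roles = {}   # maps each user to their set of roles

def hire(user, role):
    user_roles.setdefault(user, set()).add(role)   # one-step provisioning

def terminate(user):
    user_roles.pop(user, None)                     # one-step revocation

def has_permission(user, permission):
    return any(permission in ROLE_PERMISSIONS[r] for r in user_roles.get(user, set()))

hire("dana", "db_admin")
print(has_permission("dana", "db:backup"))   # True
terminate("dana")                            # turnover: a single operation
print(has_permission("dana", "db:backup"))   # False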
What is the MAIN purpose of a change management policy?
To assure management that changes to the Information Technology (IT) infrastructure are necessary
To identify the changes that may be made to the Information Technology (IT) infrastructure
To verify that changes to the Information Technology (IT) infrastructure are approved
To determine the necessity of implementing modifications to the Information Technology (IT) infrastructure
The main purpose of a change management policy is to ensure that all changes made to the IT infrastructure are approved, documented, and communicated effectively across the organization. This helps to minimize the risks associated with unauthorized or poorly planned changes, such as security breaches, system failures, or compliance issues. A change management policy does not assure management that changes are necessary, identify the changes that may be made, or determine the necessity for implementing modifications, although these may be part of the change management process. References: CISSP CBK Reference
The MAIN use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data
through a firewall at the Session layer
through a firewall at the Transport layer
in the Point-to-Point Protocol (PPP)
in the Payload Compression Protocol (PCP)
The main use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data in the Point-to-Point Protocol (PPP). L2TP is a tunneling protocol that operates at the data link layer (Layer 2) of the OSI model, and is used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. L2TP does not provide encryption or authentication by itself, but it can be combined with IPsec to provide security and confidentiality for the tunneled data. L2TP is commonly used to tunnel PPP sessions over an IP network, such as the Internet. PPP is a protocol that establishes a direct connection between two nodes, and provides authentication, encryption, and compression for the data transmitted over the connection. PPP is often used to connect a remote client to a corporate network, or a user to an ISP. By using L2TP to encapsulate PPP packets, the connection can be extended over a public or shared network, creating a VPN. This way, the user can access the network resources and services securely and transparently, as if they were directly connected to the network. The other options are not the main use of L2TP, as they involve different protocols or layers. L2TP does not tunnel data through a firewall, but rather over an IP network. L2TP does not operate at the session layer or the transport layer, but at the data link layer. L2TP does not use the Payload Compression Protocol (PCP), but rather the Point-to-Point Protocol (PPP). References: Layer 2 Tunneling Protocol - Wikipedia; What is the Layer 2 Tunneling Protocol (L2TP)? - NordVPN; Understanding VPN protocols: OpenVPN, L2TP, WireGuard & more.
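The following Python sketch illustrates the encapsulation order only (IP carrying UDP on L2TP's registered port 1701, carrying L2TP, carrying PPP); it builds labeled strings, not real packet headers.

# Conceptual sketch of the L2TP encapsulation order (no real header formats).
def encapsulate(payload: str) -> str:
    ppp = f"PPP[{payload}]"        # auth/compression negotiated end to end
    l2tp = f"L2TP[{ppp}]"          # Layer 2 tunnel; no encryption by itself
    udp = f"UDP:1701[{l2tp}]"      # L2TP's registered UDP port
    return f"IP[{udp}]"            # routable across the public internet

print(encapsulate("user data"))
# IP[UDP:1701[L2TP[PPP[user data]]]]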
As part of the security assessment plan, the security professional has been asked to use a negative testing strategy on a new website. Which of the following actions would be performed?
Use a web scanner to scan for vulnerabilities within the website.
Perform a code review to ensure that the database references are properly addressed.
Establish a secure connection to the web server to validate that only the approved ports are open.
Enter only numbers in the web form and verify that the website prompts the user to enter a valid input.
A negative testing strategy is a type of software testing that aims to verify how the system handles invalid or unexpected inputs, errors, or conditions. A negative testing strategy can help identify potential bugs, vulnerabilities, or failures that could compromise the functionality, security, or usability of the system. One example of a negative testing strategy is to enter only numbers in a web form that expects a text input, such as a name or an email address, and verify that the website prompts the user to enter a valid input. This can help ensure that the website has proper input validation and error handling mechanisms, and that it does not accept or process any malicious or malformed data. A web scanner, a code review, and a secure connection are not examples of a negative testing strategy, as they do not involve providing invalid or unexpected inputs to the system.
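A minimal sketch of such a negative test in Python's unittest framework is shown below; validate_name_field is a hypothetical stand-in for the website's server-side validation logic.

# Hedged sketch of a negative test (the validation function is hypothetical).
import unittest

def validate_name_field(value: str) -> bool:
    """Stand-in for the website's server-side validation logic."""
    return value.isalpha()

class NegativeInputTest(unittest.TestCase):
    def test_numeric_input_is_rejected(self):
        # Negative test: feed invalid input and expect rejection,
        # rather than confirming that valid input is accepted.
        self.assertFalse(validate_name_field("12345"))

if __name__ == "__main__":
    unittest.main()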
What is the PRIMARY role of a scrum master in agile development?
To choose the primary development language
To choose the integrated development environment
To match the software requirements to the delivery plan
To project manage the software delivery
The primary role of a scrum master in agile development is to match the software requirements to the delivery plan. A scrum master is a facilitator who helps the development team and the product owner to collaborate and deliver the software product incrementally and iteratively, following the agile principles and practices. A scrum master is responsible for ensuring that the team follows the scrum framework, which includes defining the product backlog, planning the sprints, conducting the daily stand-ups, reviewing the deliverables, and reflecting on the process. A scrum master is not responsible for choosing the primary development language, the integrated development environment, or project managing the software delivery, although they may provide guidance and support to the team on these aspects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 933; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 855.
A security analyst for a large financial institution is reviewing network traffic related to an incident. The analyst determines the traffic is irrelevant to the investigation but in the process of the review, the analyst also finds that an application's data, which includes full cardholder data, is transferred in clear text between the server and the user’s desktop. The analyst knows this violates the Payment Card Industry Data Security Standard (PCI-DSS). Which of the following is the analyst’s next step?
Send the log file to co-workers for peer review
Include the full network traffic logs in the incident report
Follow organizational processes to alert the proper teams to address the issue.
Ignore data as it is outside the scope of the investigation and the analyst’s role.
The analyst’s next step is to follow organizational processes to alert the proper teams to address the issue. Although the clear-text transmission of full cardholder data is outside the scope of the original investigation, it is a serious compliance issue under the Payment Card Industry Data Security Standard (PCI-DSS) and cannot be ignored. The analyst should not circulate logs containing cardholder data to co-workers for peer review, and should not include the irrelevant traffic in the incident report; instead, the analyst should escalate the finding through the organization’s established reporting channels so that the responsible teams can remediate it.
What Is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
Which of the following is the BEST metric to obtain when gaining support for an Identity and Access
Management (IAM) solution?
Application connection successes resulting in data leakage
Administrative costs for restoring systems after connection failure
Employee system timeouts from implementing wrong limits
Help desk costs required to support password reset requests
Identity and Access Management (IAM) is the process of managing the identities and access rights of users and devices in an organization. IAM solutions can provide various benefits, such as improving security, compliance, productivity, and user experience. However, implementing an IAM solution may also require significant investment and resources, and therefore, it is important to obtain support from the stakeholders and decision-makers. One of the best metrics to obtain when gaining support for an IAM solution is the help desk costs required to support password reset requests, because password resets are typically among the most frequent and most easily quantified help desk expenses, and an IAM solution with self-service password reset or single sign-on can reduce them dramatically, giving decision-makers a concrete, measurable return on investment.
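A back-of-the-envelope sketch of this metric in Python follows; every input value is an assumption chosen purely for illustration, not real data.

# Cost model with clearly hypothetical inputs (illustration only).
resets_per_month = 500       # assumed volume of password reset tickets
cost_per_reset = 20.0        # assumed fully loaded help desk cost (USD)
self_service_rate = 0.80     # assumed share handled by IAM self-service

annual_cost_before = resets_per_month * cost_per_reset * 12
annual_savings = annual_cost_before * self_service_rate
print(f"Annual reset cost: ${annual_cost_before:,.0f}")   # $120,000
print(f"Projected savings: ${annual_savings:,.0f}")       # $96,000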
When determining who can accept the risk associated with a vulnerability, which of the following is the MOST important?
Countermeasure effectiveness
Type of potential loss
Incident likelihood
Information ownership
Information ownership is the most important factor when determining who can accept the risk associated with a vulnerability. Information ownership is the concept that assigns the roles and responsibilities for the creation, maintenance, protection, and disposal of information assets within an organization. Information owners are the individuals or entities who have the authority and accountability for the information assets, and who can make decisions regarding the information lifecycle, classification, access, and usage. Information owners are also responsible for accepting or rejecting the risk associated with the information assets, and for ensuring that the risk is managed and communicated appropriately. Information owners can delegate some of their responsibilities to other roles, such as information custodians, information users, or information stewards, but they cannot delegate their accountability for the information assets and the associated risk. Countermeasure effectiveness, type of potential loss, and incident likelihood are not the most important factors when determining who can accept the risk associated with a vulnerability, although they are relevant or useful factors. Countermeasure effectiveness is the measure of how well a security control reduces or eliminates the risk. Countermeasure effectiveness can help to evaluate the cost-benefit and performance of the security control, and to determine the level of residual risk. Type of potential loss is the measure of the adverse impact or consequence that can result from a risk event. Type of potential loss can include financial, operational, reputational, legal, or strategic losses. Type of potential loss can help to assess the severity and priority of the risk, and to justify the investment and implementation of the security control. Incident likelihood is the measure of the probability or frequency of a risk event occurring. Incident likelihood can be influenced by various factors, such as the threat capability, the vulnerability exposure, the environmental conditions, or the historical data. Incident likelihood can help to estimate the level and trend of the risk, and to select the appropriate risk response and security control.
When developing a business case for updating a security program, the security program owner MUST do
which of the following?
Identify relevant metrics
Prepare performance test reports
Obtain resources for the security program
Interview executive management
When developing a business case for updating a security program, the security program owner must identify relevant metrics that can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment and the return of the security program. A business case is a document or a presentation that provides the rationale or the argument for initiating or continuing a project or a program, such as a security program, by analyzing and comparing the costs and the benefits, the risks and the opportunities, and the alternatives and the recommendations of the project or the program. A business case can provide some benefits for security, such as enhancing the visibility and the accountability of the security program, preventing or detecting any unauthorized or improper activities or changes, and supporting the audit and the compliance activities. A business case can involve various elements and steps, such as defining the problem or the opportunity, analyzing the costs and the benefits, assessing the risks and the alternatives, and presenting the recommendation and the expected outcomes.
Identifying relevant metrics is a key element or step of developing a business case for updating a security program, as it can help to measure and evaluate the performance and the effectiveness of the security program, as well as to justify and support the investment and the return of the security program. Metrics are measures or indicators that can quantify or qualify the attributes or the outcomes of a process or an activity, such as the security program, and that can provide the information or the feedback that can facilitate the decision making or the improvement of the process or the activity. Metrics can provide some benefits for security, such as enhancing the accuracy and the reliability of the security program, preventing or detecting fraud or errors, and supporting the audit and the compliance activities. Identifying relevant metrics can involve various tasks or duties, such as selecting metrics that align with the objectives of the security program, defining how and when the metrics are collected and reported, and establishing the baselines and the targets against which progress can be measured.
Preparing performance test reports, obtaining resources for the security program, and interviewing executive management are not the tasks or duties that the security program owner must do when developing a business case for updating a security program, although they may be related or possible tasks or duties. Preparing performance test reports is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to verify or validate the functionality and the quality of the security program, according to the standards and the criteria of the security program, and to detect and report any errors, bugs, or vulnerabilities in the security program. Obtaining resources for the security program is a task or a technique that can be used by the security program owner, the security program sponsor, or the security program manager, to acquire or allocate the necessary or the sufficient resources for the security program, such as the financial, human, or technical resources, and to manage or optimize the use or the distribution of the resources for the security program. Interviewing executive management is a task or a technique that can be used by the security program owner, the security program team, or the security program auditor, to collect and analyze the information and the feedback about the security program, from the executive management, who are the primary users or recipients of the security program, and who have the authority and the accountability to implement or execute the security program.
A control to protect from a Denial-of-Service (DoS) attack has been determined to stop 50% of attacks, and additionally reduces the impact of an attack by 50%. What is the residual risk?
25%
50%
75%
100%
The residual risk is 25% in this scenario. Residual risk is the portion of risk that remains after security measures have been applied to mitigate the risk, and it can be calculated by subtracting the risk reduction from the total risk. In this scenario, the total risk is 100% and the risk reduction is 75%, because the control stops 50% of attacks and reduces the impact of the attacks that get through by 50%. Therefore, the residual risk is 100% - 75% = 25%. Alternatively, the residual risk can be calculated by multiplying the probability and the impact of the remaining risk: the probability that an attack gets through is 50%, and the remaining impact of such an attack is 50%, so the residual risk is 50% x 50% = 25%. 50%, 75%, and 100% are not the correct answers, as they do not reflect the correct calculation of the residual risk.
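The same arithmetic expressed as a short Python calculation:

# Worked residual-risk arithmetic from the scenario.
p_attack_passes = 1 - 0.50   # the control stops 50% of attacks
impact_remaining = 1 - 0.50  # the control halves the impact of attacks that get through

residual_risk = p_attack_passes * impact_remaining
print(f"Residual risk: {residual_risk:.0%}")       # 25%
print(f"Risk reduction: {1 - residual_risk:.0%}")  # 75%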
Even though a particular digital watermark is difficult to detect, which of the following represents a way it might still be inadvertently removed?
Truncating parts of the data
Applying Access Control Lists (ACL) to the data
Appending non-watermarked data to watermarked data
Storing the data in a database
A digital watermark is a hidden signal embedded in a data file that can be used to identify the owner, source, or authenticity of the data. A watermark is difficult to detect and remove without degrading the quality of the data. However, one way that a watermark might still be inadvertently removed is by truncating parts of the data, such as cropping an image or cutting a video. This might affect the location or size of the watermark and make it unreadable or invalid. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, page 507; CISSP For Dummies, 7th Edition, page 344.
Which of the following is the BEST internationally recognized standard for evaluating security products and systems?
Payment Card Industry Data Security Standards (PCI-DSS)
Common Criteria (CC)
Health Insurance Portability and Accountability Act (HIPAA)
Sarbanes-Oxley (SOX)
The best internationally recognized standard for evaluating security products and systems is Common Criteria (CC), which is a framework or a methodology that defines and describes the criteria or the guidelines for the evaluation or the assessment of the security functionality and the security assurance of information technology (IT) products and systems, such as hardware, software, firmware, or network devices. Common Criteria (CC) can provide some benefits for security, such as enhancing the confidence and the trust in the security products and systems, preventing or mitigating some types of attacks or vulnerabilities, and supporting the audit and the compliance activities. Common Criteria (CC) can involve various elements and roles, such as the Target of Evaluation (TOE), which is the product or system being evaluated; the Protection Profile (PP) and the Security Target (ST), which define the security requirements and the claims; the Evaluation Assurance Levels (EAL1 through EAL7), which indicate the depth and rigor of the evaluation; and the accredited evaluation facilities and certification bodies, which perform and validate the evaluation.
Payment Card Industry Data Security Standard (PCI-DSS), Health Insurance Portability and Accountability Act (HIPAA), and Sarbanes-Oxley (SOX) are not internationally recognized standards for evaluating security products and systems, although they may be related or relevant regulations or frameworks for security. Payment Card Industry Data Security Standard (PCI-DSS) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the cardholder data or the payment card information, such as the credit card number, the expiration date, or the card verification value, and that applies to the entities or the organizations that are involved or engaged in the processing, the storage, or the transmission of the cardholder data or the payment card information, such as the merchants, the service providers, or the acquirers. Health Insurance Portability and Accountability Act (HIPAA) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the protected health information (PHI) or the personal health information, such as the medical records, the diagnosis, or the treatment, and that applies to the entities or the organizations that are involved or engaged in the provision, the payment, or the operation of the health care services or the health care plans, such as the health care providers, the health care clearinghouses, or the health plans. Sarbanes-Oxley (SOX) is a regulation or a framework that defines and describes the security requirements or the objectives for the protection and the management of the financial information or the financial reports, such as the income statement, the balance sheet, or the cash flow statement, and that applies to the entities or the organizations that are publicly traded in the United States, such as the corporations, their boards, and their external auditors, with respect to the accuracy and the integrity of their financial reporting.
In an organization where Network Access Control (NAC) has been deployed, a device trying to connect to the network is being placed into an isolated domain. What could be done on this device in order to obtain proper
connectivity?
Connect the device to another network jack
Apply remediations according to security requirements
Apply Operating System (OS) patches
Change the Media Access Control (MAC) address of the network interface
Network Access Control (NAC) is a technology that enforces security policies and controls on the devices that attempt to access a network. NAC can verify the identity and compliance of the devices, and grant or deny access based on predefined rules and criteria. NAC can also place the devices into different domains or segments, depending on their security posture and role. One of the domains that NAC can create is the isolated domain, which is a restricted network segment that isolates the devices that do not meet the security requirements or pose a potential threat to the network. The devices in the isolated domain have limited or no access to the network resources, and are subject to remediation actions. Remediation is the process of fixing or improving the security status of the devices, by applying the necessary updates, patches, configurations, or software. Remediation can be performed automatically by the NAC system, or manually by the device owner or administrator. Therefore, the best thing that can be done on a device that is placed into an isolated domain by NAC is to apply remediations according to the security requirements, which can restore the device’s compliance and enable it to access the network normally.
A company receives an email threat informing of an imminent Distributed Denial of Service (DDoS) attack
targeting its web application, unless ransom is paid. Which of the following techniques BEST addresses that threat?
Deploying load balancers to distribute inbound traffic across multiple data centers
Set up Web Application Firewalls (WAFs) to filter out malicious traffic
Implementing reverse web-proxies to validate each new inbound connection
Coordinate with and utilize capabilities within Internet Service Provider (ISP)
The best technique to address the threat of an imminent DDoS attack targeting a web application is to coordinate with and utilize the capabilities within the ISP. A DDoS attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. A DDoS attack can cause severe damage to the availability, performance, and reputation of the web application, as well as incur financial losses and legal liabilities. Therefore, it is important to have a DDoS mitigation strategy in place to prevent or minimize the impact of such attacks. One of the most effective ways to mitigate DDoS attacks is to leverage the capabilities of the ISP, as they have more resources, bandwidth, and expertise to handle large volumes of traffic and filter out malicious packets. The ISP can also provide additional services such as traffic monitoring, alerting, reporting, and analysis, as well as assist with the investigation and prosecution of the attackers. The ISP can also work with other ISPs and network operators to coordinate the response and share information about the attack. The other options are not the best techniques to address the threat of an imminent DDoS attack, as they may not be sufficient, timely, or scalable to handle the attack. Deploying load balancers, setting up web application firewalls, and implementing reverse web-proxies are some of the measures that can be taken at the application level to improve the resilience and security of the web application, but they may not be able to cope with the magnitude and complexity of a DDoS attack, especially if the attack targets the network layer or the infrastructure layer. Moreover, these measures may require more time, cost, and effort to implement and maintain, and may not be feasible to deploy in a short notice. References: What is a distributed denial-of-service (DDoS) attack?; What is a DDoS Attack? DDoS Meaning, Definition & Types | Fortinet; Denial-of-service attack - Wikipedia.
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) only provides which of the following?
Mutual authentication
Server authentication
User authentication
Streaming ciphertext data
Extensible Authentication Protocol-Message Digest 5 (EAP-MD5) is a type of EAP method that uses the MD5 hashing algorithm to provide user authentication. EAP is a framework that allows different authentication methods to be used in network access scenarios, such as wireless, VPN, or dial-up. EAP-MD5 only provides user authentication, which means that it verifies the identity of the user who is requesting access to the network, but not the identity of the network server who is granting access. Therefore, EAP-MD5 does not provide mutual authentication, server authentication, or streaming ciphertext data. EAP-MD5 is considered insecure and vulnerable to various attacks, such as offline dictionary attacks, man-in-the-middle attacks, or replay attacks, and should not be used in modern networks.
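To illustrate the offline dictionary attack risk, the following Python sketch models the CHAP-style response computation that EAP-MD5 uses (an MD5 hash over the identifier, the shared secret, and the challenge); it is a simplified model, not a protocol implementation.

# Sketch of why EAP-MD5 is open to offline dictionary attack.
import hashlib
import os

challenge = os.urandom(16)   # sent in the clear during authentication
identifier = b"\x01"

def eap_md5_response(password: bytes) -> bytes:
    # CHAP-style computation: MD5(identifier || secret || challenge).
    return hashlib.md5(identifier + password + challenge).digest()

captured = eap_md5_response(b"letmein")   # response sniffed from the network

# The attacker replays the public challenge against a word list offline.
for guess in [b"password", b"letmein", b"123456"]:
    if eap_md5_response(guess) == captured:
        print(f"password recovered: {guess.decode()}")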
Which of the following is the BEST reason for writing an information security policy?
To support information security governance
To reduce the number of audit findings
To deter attackers
To implement effective information security controls
The best reason for writing an information security policy is to support information security governance. Information security governance is the process or the framework of establishing and enforcing the policies and standards for the protection and the management of the information and the systems within an organization, as well as for overseeing and evaluating the performance and the effectiveness of the information security program and the information security controls. Information security governance can provide some benefits for security, such as enhancing the visibility and the accountability of the information security program and the information security controls, preventing or detecting any unauthorized or improper activities or changes, and supporting the audit and the compliance activities. Information security governance can involve various elements and roles, such as the board and senior management, who set the direction and the tone; the security steering committee, which oversees the program; the information security officer, who develops and maintains the policies and the controls; and the information security stakeholders, who implement and follow them.
Writing an information security policy supports information security governance because the policy is the foundation and core of the governance framework: it provides the guidance and direction for the information security program, its controls, and its stakeholders. Writing the policy involves tasks such as defining its scope and objectives, assigning roles and responsibilities, aligning the policy with business requirements and applicable regulations, obtaining senior management approval, and communicating the policy to the organization.
Reducing the number of audit findings, deterring attackers, and implementing effective information security controls are not the best reasons for writing an information security policy, although each may be a related outcome or benefit of having one. Fewer audit findings, for example, may indicate that the policy has improved the performance and effectiveness of the security program and its controls, helped the organization comply with regulations and best practices, and supplied the evidence needed for audit and compliance activities. However, reducing audit findings is a by-product rather than the primary purpose of the policy, and it may not hold true for every policy.
Which of the following is the BEST way to reduce the impact of an externally sourced flood attack?
Have the service provider block the source address.
Have the source service provider block the address.
Block the source address at the firewall.
Block all inbound traffic until the flood ends.
The best way to reduce the impact of an externally sourced flood attack is to have the service provider block the source address. A flood attack is a type of denial-of-service attack that aims to overwhelm the target system or network with a large amount of traffic, such as SYN packets, ICMP packets, or UDP packets. An externally sourced flood attack is a flood attack that originates from outside the target’s network, such as from the internet. Having the service provider block the source address can help to reduce the impact of an externally sourced flood attack, as it can prevent the malicious traffic from reaching the target’s network, and thus conserve the network bandwidth and resources. Having the source service provider block the address, blocking the source address at the firewall, or blocking all inbound traffic until the flood ends are not the best ways to reduce the impact of an externally sourced flood attack, as they may not be feasible, effective, or efficient, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 745; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 525.
When conducting a security assessment of access controls, which activity is part of the data analysis phase?
Present solutions to address audit exceptions.
Conduct statistical sampling of data transactions.
Categorize and identify evidence gathered during the audit.
Collect logs and reports.
The activity that is part of the data analysis phase when conducting a security assessment of access controls is to categorize and identify evidence gathered during the audit. A security assessment of access controls is a process that evaluates the effectiveness and compliance of the access controls implemented in a system or an organization. A security assessment of access controls typically consists of four phases: planning, data collection, data analysis, and reporting. The data analysis phase is the phase where the collected data is processed, interpreted, and evaluated, based on the audit objectives, criteria, and standards. The data analysis phase involves activities such as categorizing and identifying evidence gathered during the audit, which means sorting and labeling the data according to their type, source, and relevance, and verifying their validity, reliability, and sufficiency. Presenting solutions to address audit exceptions, conducting statistical sampling of data transactions, and collecting logs and reports are not activities that are part of the data analysis phase, but of the reporting, data collection, and data collection phases, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 75; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 67.
Which of the following is a characteristic of an internal audit?
An internal audit is typically shorter in duration than an external audit.
The internal audit schedule is published to the organization well in advance.
The internal auditor reports to the Information Technology (IT) department
Management is responsible for reading and acting upon the internal audit results
A characteristic of an internal audit is that management is responsible for reading and acting upon the internal audit results. An internal audit is an independent and objective evaluation of an organization's internal controls, processes, or activities, performed by auditors who are part of the organization, such as the internal audit department reporting to the audit committee. An internal audit can enhance the accuracy and reliability of operations, help prevent or detect fraud or errors, and support audit and compliance activities. A typical internal audit involves steps and roles such as planning the engagement, performing the fieldwork, reporting the findings, and following up on remediation, carried out by the internal auditors under the oversight of the audit committee.
Management is responsible for reading and acting upon the internal audit results, as managers are the primary recipients of the internal audit report: they have the authority and accountability to implement the report's recommendations and, where required, to disclose the results to external parties such as regulators, shareholders, or customers. The other statements are not defining characteristics of an internal audit. An internal audit may well be shorter than an external audit, since internal auditors are more familiar with, and have readier access to, the organization's controls and processes, but duration varies with the audit's objectives, scope, criteria, and methodology, so it is not a distinguishing feature. Publishing the internal audit schedule well in advance is a good practice that promotes transparency and coordination among stakeholders, but it is likewise not a defining characteristic. Finally, to preserve independence, the internal auditor should report to the audit committee or the board, not to the Information Technology (IT) department.
When developing solutions for mobile devices, in which phase of the Software Development Life Cycle (SDLC) should technical limitations related to devices be specified?
Implementation
Initiation
Review
Development
The technical limitations related to devices should be specified in the initiation phase of the Software Development Life Cycle (SDLC) when developing solutions for mobile devices. The initiation phase is the first phase of the SDLC, where the project scope, objectives, requirements, and constraints are defined and documented. The technical limitations related to devices are part of the constraints that affect the design and development of the software solutions for mobile devices, such as the screen size, memory capacity, battery life, network connectivity, or security features. The technical limitations should be identified and addressed early in the SDLC, to avoid rework, delays, or failures in the later phases. The implementation, review, and development phases are not the phases where the technical limitations should be specified, but where they should be considered and tested. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 922; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 7: Software Development Security, page 844.
Which Identity and Access Management (IAM) process can be used to maintain the principle of least privilege?
identity provisioning
access recovery
multi-factor authentication (MFA)
user access review
The Identity and Access Management (IAM) process that can be used to maintain the principle of least privilege is user access review. User access review is the process of periodically reviewing and verifying the user accounts and access rights on a system or a network, and ensuring that they are appropriate, necessary, and compliant with the policies and standards. User access review can help to maintain the principle of least privilege by identifying and removing any excessive, obsolete, or unauthorized access rights that may pose a security risk or violate the regulations. User access review can also help to support the audit and compliance activities, as well as the identity lifecycle management activities. Identity provisioning, access recovery, and multi-factor authentication (MFA) are not the IAM processes that can be used to maintain the principle of least privilege, although they may be related or useful processes. Identity provisioning is the process of creating, modifying, or deleting the user accounts and access rights on a system or a network. Identity provisioning can help to establish the principle of least privilege by granting the user accounts and access rights that are aligned with the user roles or functions within the organization. However, identity provisioning is not sufficient to maintain the principle of least privilege, as the user accounts and access rights may change or become outdated over time, due to various factors, such as role changes, transfers, promotions, or terminations. Access recovery is the process of restoring the user accounts and access rights on a system or a network, after they have been lost, corrupted, or compromised. Access recovery can help to ensure the availability and integrity of the user accounts and access rights, as well as to mitigate the impact of a security incident or a disaster. However, access recovery is not a process that can be used to maintain the principle of least privilege, as it does not involve reviewing or verifying the appropriateness or necessity of the user accounts and access rights. Multi-factor authentication (MFA) is a technique that uses two or more factors of authentication to verify the identity of the user who accesses a system or a network. MFA can help to enhance the security and reliability of the authentication process, by requiring the user to provide something they know (e.g., password), something they have (e.g., token), or something they are (e.g., biometric). However, MFA is not a process that can be used to maintain the principle of least privilege, as it does not affect the user accounts and access rights, but only the user access credentials.
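As an illustration only, part of a user access review can be automated by comparing each user's current entitlements against what their role should allow and flagging the excess. The role matrix and user records in this Python sketch are invented for the example.

# Hypothetical role matrix: the entitlements each role should have.
ROLE_ENTITLEMENTS = {
    "accountant": {"ledger:read", "ledger:write"},
    "auditor": {"ledger:read", "reports:read"},
}

# Hypothetical snapshot of what users actually hold today.
current_access = {
    "alice": ("accountant", {"ledger:read", "ledger:write", "payroll:admin"}),
    "bob": ("auditor", {"ledger:read"}),
}

for user, (role, held) in current_access.items():
    excess = held - ROLE_ENTITLEMENTS[role]
    if excess:
        # Excess rights violate least privilege and should be revoked or justified.
        print(f"{user}: excess rights {sorted(excess)}")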
In a change-controlled environment, which of the following is MOST likely to lead to unauthorized changes to production programs?
Modifying source code without approval
Promoting programs to production without approval
Developers checking out source code without approval
Developers using Rapid Application Development (RAD) methodologies without approval
In a change-controlled environment, the activity most likely to lead to unauthorized changes to production programs is promoting programs to production without approval. A change-controlled environment follows a defined process for managing and tracking changes to the hardware and software components of a system or network, covering its configuration, functionality, and security. Change control improves performance and functionality, helps prevent or mitigate certain attacks and vulnerabilities, and supports audit and compliance activities. A typical change-control process involves steps and roles such as submitting a change request, assessing its impact and risk, obtaining approval from the change manager or change advisory board, and testing, implementing, and documenting the change.
Promoting programs to production without approval violates that process and introduces risk: code moves from the development or test environment into production without authorization from the responsible parties, such as the change manager, the change review board, or the change advisory board. The likely consequences include untested, defective, or even malicious code running in production, configuration drift between environments, loss of traceability between deployed code and approved change requests, and audit or compliance failures. A simple technical control is to make the promotion step itself check for an approval record, as in the sketch below.
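A minimal Python sketch of that compensating control: a promotion routine that refuses to move a build into production unless an approval record exists for the change. The ticket identifiers and function names are invented for illustration.

# Hypothetical approval records keyed by change request ID.
approved_changes = {
    "CHG-1042": {"approver": "change-advisory-board", "status": "approved"},
}

def promote_to_production(build_id: str, change_id: str) -> None:
    record = approved_changes.get(change_id)
    if record is None or record["status"] != "approved":
        # Fail closed: no approval, no promotion.
        raise PermissionError(f"{change_id} has no approval; promotion of {build_id} blocked")
    print(f"promoting {build_id} under {change_id}, approved by {record['approver']}")

promote_to_production("build-7.3.1", "CHG-1042")    # succeeds
# promote_to_production("build-7.3.2", "CHG-9999")  # would raise PermissionError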
An organization’s security policy delegates to the data owner the ability to assign which user roles have access to a particular resource. What type of authorization mechanism is being used?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Media Access Control (MAC)
Mandatory Access Control (MAC)
Discretionary Access Control (DAC) is a type of authorization mechanism that grants or denies access to resources based on the identity of the user and the permissions assigned by the owner of the resource. The owner of the resource has the discretion to decide who can access the resource and what level of access they can have. For example, the owner of a file can assign read, write, or execute permissions to different users or groups. DAC is flexible and easy to implement, but it also poses security risks, such as unauthorized access, data leakage, or privilege escalation, if the owner is not careful or knowledgeable about the security implications of their decisions.
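A toy Python model of DAC, where each resource records an owner and the owner alone may change the access control list. The class and names are illustrative, not a real API.

class DacResource:
    def __init__(self, name: str, owner: str):
        self.name = name
        self.owner = owner
        self.acl: dict[str, set[str]] = {}  # user -> granted permissions

    def grant(self, requester: str, user: str, permission: str) -> None:
        # Only the owner may change the ACL -- the "discretionary" part.
        if requester != self.owner:
            raise PermissionError("only the owner can grant access")
        self.acl.setdefault(user, set()).add(permission)

    def check(self, user: str, permission: str) -> bool:
        return user == self.owner or permission in self.acl.get(user, set())

doc = DacResource("plan.txt", owner="alice")
doc.grant("alice", "bob", "read")
print(doc.check("bob", "read"))   # True
print(doc.check("bob", "write"))  # False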
What protocol is often used between gateway hosts on the Internet?
Exterior Gateway Protocol (EGP)
Border Gateway Protocol (BGP)
Open Shortest Path First (OSPF)
Internet Control Message Protocol (ICMP)
Border Gateway Protocol (BGP) is a protocol that is often used between gateway hosts on the Internet. A gateway host is a network device that connects two or more different networks, such as a router or a firewall. BGP is a routing protocol that exchanges routing information between autonomous systems (ASes), which are groups of networks under a single administrative control. BGP is a path-vector protocol: it selects the best path to a destination network based on path attributes such as AS-path length, origin, and locally configured policy, rather than on link metrics like bandwidth or latency. BGP is also used to implement interdomain routing policies, such as traffic engineering, load balancing, and security. BGP is the de facto standard for Internet routing and is widely deployed by Internet service providers (ISPs) and large enterprises. The other options are not protocols that are often used between gateway hosts on the Internet. Exterior Gateway Protocol (EGP) is an obsolete protocol that was used to exchange routing information between ASes before BGP. Open Shortest Path First (OSPF) is a protocol that is used to exchange routing information within an AS, not between ASes. Internet Control Message Protocol (ICMP) is a protocol that is used to send error and control messages between hosts and routers, not to exchange routing information. References: Border Gateway Protocol - Wikipedia; What is Border Gateway Protocol (BGP)? - Definition from WhatIs.com; What is BGP? | How BGP Routing Works | Cloudflare.
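As a rough illustration of path-vector routing, this Python toy picks the best of several advertised routes by preferring the shortest AS path, one of the early tie-breakers in BGP best-path selection. Real BGP weighs many more attributes (local preference, origin, MED, local policy); the addresses and AS numbers below come from documentation/private ranges.

# Each advertised route carries the sequence of ASes it traversed (AS_PATH).
routes_to_prefix = [
    {"next_hop": "203.0.113.1", "as_path": [64500, 64510, 64520]},
    {"next_hop": "198.51.100.7", "as_path": [64501, 64520]},
]

# Simplified best-path selection: the shortest AS_PATH wins.
best = min(routes_to_prefix, key=lambda route: len(route["as_path"]))
print("forward via", best["next_hop"], "path", best["as_path"])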
Assessing a third party’s risk by counting bugs in the code may not be the best measure of an attack surface within the supply chain.
Which of the following is LEAST associated with the attack surface?
Input protocols
Target processes
Error messages
Access rights
Error messages are not part of the attack surface, which is the sum of all the points where an attacker can try to enter or extract data from a system. Error messages are the output of the system when something goes wrong, and they can reveal useful information to an attacker, such as the system version, configuration, or vulnerabilities. However, they are not directly associated with the attack surface. Input protocols, target processes, and access rights are all factors that can affect the attack surface, as they determine how the system interacts with the external environment and what resources are exposed or protected. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Security Engineering, page 587; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 3: Security Architecture and Engineering, page 375.
Which of the following is the MOST efficient mechanism to account for all staff during a speedy nonemergency evacuation from a large security facility?
Large mantrap where groups of individuals leaving are identified using facial recognition technology
Radio Frequency Identification (RFID) sensors worn by each employee scanned by sensors at each exit door
Emergency exits with push bars, with coordinators at each exit checking off each individual against a predefined list
Card-activated turnstile where individuals are validated upon exit
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on this process. It works as follows: the sender encrypts the message (in practice, a hash of the message) with their private key, which only the sender possesses; the receiver decrypts it with the sender’s public key, which is openly available; if the decryption succeeds and the result matches the message, the ciphertext must have been produced by the holder of the private key, which identifies the sender.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
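In practice, "encrypting with the private key" corresponds to creating a digital signature. A minimal sketch with the Python cryptography package: sign with the private key, verify with the public key; verification raises an exception on any mismatch.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()
message = b"wire transfer #4421 approved"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

try:
    # Verifying with the sender's public key identifies the sender
    # and supports non-repudiation.
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid: message came from the private key holder")
except InvalidSignature:
    print("signature invalid")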
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
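A minimal sketch of the compress-then-encrypt ordering, using Python's zlib and the Fernet recipe from the cryptography package. One caveat worth noting: compressing attacker-influenced data before encryption can itself leak information (as in the CRIME/BREACH attacks), so this ordering is a trade-off, not a universal rule.

import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"the same phrase repeats, the same phrase repeats, the same phrase repeats"

# Compress first: redundancy is squeezed out before the cipher sees the data.
ciphertext = f.encrypt(zlib.compress(plaintext))

# Reverse the order on the way back: decrypt, then decompress.
recovered = zlib.decompress(f.decrypt(ciphertext))
assert recovered == plaintext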
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows:
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
The other options are not protocols or algorithms whose implementation rests on private and public encryption keys in this sense. The Diffie-Hellman algorithm is a method for generating a shared secret key between two parties: each party keeps a private value and exchanges only a public value, and those values are used to derive the shared secret rather than to encrypt data (see the sketch below). Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same single secret key for both encryption and decryption. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input via a one-way mathematical function, and uses no keys at all.
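To make the distinction concrete, here is the Diffie-Hellman exchange in Python with deliberately tiny, insecure numbers: each side keeps a private value, exchanges only public values, and both arrive at the same shared secret without ever encrypting data with a key pair.

# Toy parameters -- real deployments use primes of 2048 bits or more.
p, g = 23, 5

a = 6                      # Alice's private value (kept secret)
b = 15                     # Bob's private value (kept secret)

A = pow(g, a, p)           # Alice sends g^a mod p
B = pow(g, b, p)           # Bob sends g^b mod p

# Each side combines its own private value with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
print("shared secret:", shared_alice)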
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the most suitable way to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability occurs when a web-based system does not properly check, filter, or sanitize the input data it receives from users or other sources, or does not properly handle the errors or exceptions the system generates. Such a vulnerability can lead to attacks such as SQL injection, cross-site scripting (XSS), command injection, buffer overflows, and information leakage through verbose error messages.
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload, that is, the content of the data packets. An application layer firewall can provide functions such as filtering requests against signatures or rules, validating protocol conformance, blocking or sanitizing suspicious input, and logging and alerting on attack attempts.
Adding a new rule to the application layer firewall is the most suitable quick control because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input that exploits the vulnerability. For example, a new rule can be added to reject requests containing SQL metacharacters or script tags in the affected parameters, to enforce length and type limits on the vulnerable input fields, or to suppress detailed error messages returned to the client.
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
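For contrast with the firewall rule, input validation in the application itself usually means an allow-list check before the data is used. A minimal Python sketch; the form fields and their rules are invented for the example.

import re

# Allow-list patterns for the fields this hypothetical form accepts.
FIELD_RULES = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "age": re.compile(r"^\d{1,3}$"),
}

def validate(form: dict[str, str]) -> dict[str, str]:
    clean = {}
    for field, pattern in FIELD_RULES.items():
        value = form.get(field, "")
        if not pattern.fullmatch(value):
            # Reject anything outside the allow-list instead of trying to
            # strip out "bad" characters (deny-lists are easy to bypass).
            raise ValueError(f"invalid value for {field!r}")
        clean[field] = value
    return clean

print(validate({"username": "alice_01", "age": "34"}))
# validate({"username": "alice'; DROP TABLE users;--", "age": "34"}) raises ValueError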
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
WEP uses a small range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities: the 24-bit IV space is so small that IVs inevitably repeat on a busy network, causing keystream reuse; weaknesses in RC4's key scheduling allow key-recovery attacks (such as the Fluhrer-Mantin-Shamir attack) from captured traffic; and CRC-32 is a linear, non-cryptographic checksum, so attackers can modify packets and fix up the checksum without detection. The sketch below demonstrates the keystream-reuse problem directly.
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
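Because WEP seeds RC4 with IV || key, a repeated 24-bit IV repeats the keystream, and XORing two such ciphertexts cancels the keystream entirely. A self-contained Python sketch with a textbook RC4 implementation; the key and messages are made up.

def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA), XORed with the data
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

iv = b"\x00\x01\x02"        # only 2^24 possible IVs, so repeats are inevitable
secret = b"wep-key"
c1 = rc4(iv + secret, b"attack at dawn")
c2 = rc4(iv + secret, b"defend at dusk")

# With the keystream cancelled, the XOR of the ciphertexts equals the XOR
# of the plaintexts -- structure leaks without ever knowing the key.
xored = bytes(x ^ y for x, y in zip(c1, c2))
print(xored == bytes(x ^ y for x, y in zip(b"attack at dawn", b"defend at dusk")))  # True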