Match the name of access control model with its associated restriction.
Drag each access control model to its appropriate restriction access on the right.
Explanation: The image shows a table with two columns: the left column lists four types of Access Control Models, and the right column lists their associated restrictions. The correct matches follow from the definitions and characteristics of each Access Control Model.
References: ISC2 CISSP, 2
What type of wireless network attack BEST describes an Electromagnetic Pulse (EMP) attack?
Radio Frequency (RF) attack
Denial of Service (DoS) attack
Data modification attack
Application-layer attack
A Denial of Service (DoS) attack is a type of wireless network attack that aims to prevent legitimate users from accessing or using a wireless network or service. An Electromagnetic Pulse (EMP) attack is a specific form of DoS attack that involves generating a powerful burst of electromagnetic energy that can damage or disrupt electronic devices and systems, including wireless networks. An EMP attack can cause permanent or temporary loss of wireless network availability, functionality, or performance. A Radio Frequency (RF) attack is a type of wireless network attack that involves interfering with or jamming the radio signals used by wireless devices and networks, but it does not necessarily involve an EMP. A data modification attack is a type of wireless network attack that involves altering or tampering with the data transmitted or received over a wireless network, but it does not necessarily cause a DoS. An application-layer attack is a type of wireless network attack that targets the applications or services running on a wireless network, such as web servers or email servers, but it does not necessarily involve an EMP.
Which Web Services Security (WS-Security) specification negotiates how security tokens will be issued, renewed and validated? Click on the correct specification in the image below.
WS-Trust
WS-Trust is a Web Services Security (WS-Security) specification that negotiates how security tokens will be issued, renewed, and validated. WS-Trust defines a framework for establishing trust relationships between different parties, and a protocol for requesting and issuing security tokens that can be used to authenticate and authorize the parties. WS-Trust also supports different types of security tokens, such as Kerberos tickets, X.509 certificates, and SAML assertions. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 346; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 465.
Regarding asset security and appropriate retention, which of the following INITIAL top three areas are important to focus on?
Security control baselines, access controls, employee awareness and training
Human resources, asset management, production management
Supply chain lead-time, inventory control, and encryption
Polygraphs, crime statistics, forensics
Regarding asset security and appropriate retention, the initial top three areas that are important to focus on are security control baselines, access controls, and employee awareness and training. Asset security and appropriate retention are the processes of identifying, classifying, protecting, and disposing of the assets of an organization, such as data, systems, devices, or facilities. They help prevent or reduce the loss, theft, damage, or misuse of the assets, as well as comply with legal and regulatory requirements. The initial top three areas that can help achieve asset security and appropriate retention are: security control baselines, which define the minimum security configurations that systems handling the assets must meet; access controls, which restrict who can view, modify, or dispose of the assets; and employee awareness and training, which ensure that staff understand how to classify, handle, and retain the assets properly.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 2: Asset Security, pp. 61-62; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 2: Asset Security, pp. 163-164.
The BEST method to mitigate the risk of a dictionary attack on a system is to
use a hardware token.
use complex passphrases.
implement password history.
encrypt the access control list (ACL).
The best method to mitigate the risk of a dictionary attack on a system is to use complex passphrases. A dictionary attack is a type of brute force attack that tries to guess a password or passphrase by working through a list or database of common or frequently used words, phrases, or combinations, such as names, dates, or dictionary words. A complex passphrase is a long, random sequence of characters, words, or symbols that is hard to guess or crack by a dictionary attack or any other attack. A complex passphrase provides a high level of security and entropy, as it increases the number of possible combinations and reduces the probability of a successful attack.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 328; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, page 289
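The entropy argument can be made concrete by comparing search spaces. The following is a minimal sketch, not from the cited sources; the 95-character and 7776-word alphabet sizes are common conventions (printable ASCII, a Diceware-style word list), not values from the text:

```python
import math

def search_space_bits(alphabet_size: int, length: int) -> float:
    """Entropy in bits of a uniformly random string: log2(alphabet^length)."""
    return length * math.log2(alphabet_size)

# An 8-character password drawn from 95 printable ASCII characters:
print(f"8-char password  : {search_space_bits(95, 8):.1f} bits")    # ~52.6 bits

# A 5-word passphrase drawn from a 7776-word Diceware-style list:
print(f"5-word passphrase: {search_space_bits(7776, 5):.1f} bits")  # ~64.6 bits

# A dictionary attack only tries a small candidate list, e.g. 10 million entries:
print(f"10M-entry dictionary: {math.log2(10_000_000):.1f} bits")    # ~23.3 bits
```

A dictionary attack succeeds only when the credential falls inside its candidate list; a long random passphrase sits in a search space tens of bits larger than any practical dictionary.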
Which of the following is the MOST important consideration when developing a Disaster Recovery Plan (DRP)?
The dynamic reconfiguration of systems
The cost of downtime
A recovery strategy for all business processes
A containment strategy
According to the CISSP All-in-One Exam Guide, the most important consideration when developing a Disaster Recovery Plan (DRP) is to have a recovery strategy for all business processes. A DRP is a document that defines the procedures and actions to be taken in the event of a disaster that disrupts the normal operations of an organization. A recovery strategy specifies how the organization will restore the critical business processes and functions, as well as the supporting resources, such as data, systems, personnel, and facilities, within the predefined recovery objectives and time frames. A recovery strategy should cover all business processes, not just the IT-related ones, as they may have interdependencies and impacts on each other. It should also be aligned with the business continuity plan (BCP), which defines the procedures and actions to be taken to ensure the continuity of essential business operations during and after a disaster.

The dynamic reconfiguration of systems is not the most important consideration, although it may be a useful technique to enhance the resilience and availability of the systems. Dynamic reconfiguration is the ability to change the configuration and functionality of systems without interrupting their operations, such as adding, removing, or replacing components, modules, or services. It may help to reduce downtime and recovery time, but it does not address the recovery of the business processes and functions.

The cost of downtime is not the most important consideration, although it may be a factor that influences the recovery objectives and priorities. The cost of downtime is the amount of money the organization loses or spends due to the disruption of its normal operations, such as loss of revenue, productivity, reputation, or customers, as well as the expenses for recovery, restoration, or compensation. It may help to justify the investment and budget for the DRP, but it does not address the recovery of the business processes and functions.

A containment strategy is not the most important consideration, although it may be part of the incident response plan (IRP), which defines the procedures and actions to detect, analyze, contain, eradicate, and recover from a security incident. A containment strategy specifies how the organization will isolate and control an incident, such as disconnecting affected systems, blocking malicious traffic, or changing passwords. It may help to prevent or limit the damage and spread of an incident, but it does not address the recovery of the business processes and functions.
In configuration management, what baseline configuration information MUST be maintained for each computer system?
Operating system and version, patch level, applications running, and versions.
List of system changes, test reports, and change approvals
Last vulnerability assessment report and initial risk assessment report
Date of last update, test report, and accreditation certificate
Baseline configuration information is the set of data that describes the state of a computer system at a specific point in time. It is used to monitor and control changes to the system, as well as to assess its compliance with security standards and policies. Baseline configuration information must include the operating system and version, patch level, applications running, and versions, because these are the essential components that define the functionality and security of the system. These components can also affect the compatibility and interoperability of the system with other systems and networks. Therefore, it is important to maintain accurate and up-to-date records of these components for each computer system.
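A minimal sketch of capturing and comparing such a baseline follows. The application inventory is hard-coded for illustration (real tooling would query the package manager), and platform.release() stands in loosely for patch level:

```python
import json
import platform

def capture_baseline() -> dict:
    """Record the configuration items the text says MUST be maintained."""
    return {
        "os": platform.system(),
        "os_version": platform.version(),
        "patch_level": platform.release(),  # rough stand-in for patch level
        # A static dict stands in for a real package-manager inventory:
        "applications": {"nginx": "1.24.0", "openssl": "3.0.13"},
    }

baseline = capture_baseline()
with open("baseline.json", "w") as f:
    json.dump(baseline, f, indent=2)

# Later, drift detection is a comparison against the stored baseline:
current = capture_baseline()
drift = {k: (baseline[k], current[k]) for k in baseline if baseline[k] != current[k]}
print(drift or "no drift detected")
```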
In the network design below, where is the MOST secure Local Area Network (LAN) segment to deploy a Wireless Access Point (WAP) that provides contractors access to the Internet and authorized enterprise services?
LAN 4
The most secure LAN segment to deploy a WAP that provides contractors access to the Internet and authorized enterprise services is LAN 4. A WAP is a device that enables wireless devices to connect to a wired network using Wi-Fi, Bluetooth, or other wireless standards. A WAP can provide convenience and mobility for the users, but it can also introduce security risks, such as unauthorized access, eavesdropping, interference, or rogue access points. Therefore, a WAP should be deployed in a secure LAN segment that can isolate the wireless traffic from the rest of the network and apply appropriate security controls and policies. LAN 4 is connected to the firewall that separates it from the other LAN segments and the Internet. This firewall can provide network segmentation, filtering, and monitoring for the WAP and the wireless devices. The firewall can also enforce the access rules and policies for the contractors, such as allowing them to access the Internet and some authorized enterprise services, but not the other LAN segments that may contain sensitive or critical data or systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, p. 317; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 4: Communication and Network Security, p. 437.
What is an important characteristic of Role Based Access Control (RBAC)?
Supports Mandatory Access Control (MAC)
Simplifies the management of access rights
Relies on rotation of duties
Requires two factor authentication
An important characteristic of Role Based Access Control (RBAC) is that it simplifies the management of access rights. RBAC is a model of access control that assigns permissions to roles, rather than individual users. Users are then assigned to roles based on their job functions or responsibilities. RBAC simplifies the management of access rights by reducing the complexity and overhead of granting, revoking, or modifying permissions for each user. RBAC also improves the consistency and security of access control by enforcing the principle of least privilege and separation of duties. The other options are not characteristics of RBAC, but rather different models or concepts of access control. Supports Mandatory Access Control (MAC) is a characteristic of MAC, which is a model of access control that assigns security labels to subjects and objects, and enforces access decisions based on the comparison of the labels. Relies on rotation of duties is a concept of access control that involves changing the roles or tasks of users periodically to prevent fraud or collusion. Requires two factor authentication is a concept of access control that involves using two or more factors of authentication to verify the identity of the user. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, p. 267; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 6, p. 334.
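A minimal sketch of the role-permission indirection described above (all user, role, and permission names are hypothetical):

```python
# Minimal RBAC sketch: permissions attach to roles, users attach to roles.
ROLE_PERMISSIONS = {
    "accountant": {"ledger:read", "ledger:write"},
    "auditor":    {"ledger:read", "audit_log:read"},
}

USER_ROLES = {
    "alice": {"accountant"},
    "bob":   {"auditor"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Access is granted if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "ledger:write"))  # True
print(is_authorized("bob", "ledger:write"))    # False

# Revoking a duty for every accountant is one change to the role, not N users:
ROLE_PERMISSIONS["accountant"].discard("ledger:write")
print(is_authorized("alice", "ledger:write"))  # False
```

The last three lines show why management is simplified: changing a role's permission set updates every user holding that role in one operation.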
Order the steps below to create an effective vulnerability management process.
Which of the following information MUST be provided for user account provisioning?
Full name
Unique identifier
Security question
Date of birth
According to the CISSP CBK Official Study Guide, the information that must be provided for user account provisioning is the unique identifier. User account provisioning is the process of creating, managing, and deleting user accounts or identities in a system or network. The unique identifier, such as a username, email address, or employee number, is the essential element of an account: it distinguishes the account from every other account, supports identification, authentication, authorization, and accountability, and prevents the duplication, confusion, or collision of accounts that could enable impersonation, spoofing, or masquerading.

The other options are optional attributes rather than mandatory provisioning information. A full name (e.g., John Smith or Jane Doe) personalizes the account and aids communication, but duplicate names are common, so a name cannot by itself identify or distinguish an account. A security question (e.g., "What is your mother’s maiden name?") adds a verification layer for account recovery or reset, but it is not required to create the account. A date of birth is likewise a descriptive attribute that may be associated with an account, but it does not identify or distinguish it. References: CISSP CBK Official Study Guide.
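A minimal sketch of why the unique identifier, not the name, anchors provisioning; a UUID stands in here for whatever identifier scheme an organization actually uses:

```python
import uuid

provisioned = {}  # unique identifier -> account attributes

def provision_account(full_name: str) -> str:
    """Create an account; only the unique identifier is mandatory."""
    account_id = str(uuid.uuid4())        # guaranteed-unique identifier
    assert account_id not in provisioned  # collision is practically impossible
    provisioned[account_id] = {"full_name": full_name}  # optional attribute
    return account_id

alice_1 = provision_account("Alice Cooper")
alice_2 = provision_account("Alice Cooper")  # same name, distinct identities
print(alice_1 != alice_2)  # True: accountability survives duplicate names
```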
Which of the following is an essential step before performing Structured Query Language (SQL) penetration tests on a production system?
Verify countermeasures have been deactivated.
Ensure firewall logging has been activated.
Validate target systems have been backed up.
Confirm warm site is ready to accept connections.
An essential step before performing SQL penetration tests on a production system is to validate that the target systems have been backed up. SQL penetration tests are a type of security testing that involves injecting malicious SQL commands or queries into a database or application to exploit vulnerabilities or gain unauthorized access. Performing SQL penetration tests on a production system can cause data loss, corruption, or modification, as well as system downtime or instability. Therefore, it is important to ensure that the target systems have been backed up before conducting the tests, so that the data and system can be restored in case of any damage or disruption. The other options are not essential steps, but rather optional or irrelevant steps. Verifying countermeasures have been deactivated is not an essential step, but rather a risky and unethical step, as it can expose the system to other attacks or compromise the validity of the test results. Ensuring firewall logging has been activated is not an essential step, but rather a good practice, as it can help to monitor and record the test activities and outcomes. Confirming warm site is ready to accept connections is not an essential step, but rather a contingency plan, as it can provide an alternative site for continuing the system operations in case of a major failure or disaster. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 9, p. 471; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, p. 417.
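As an illustration of the backup step, here is a sketch using SQLite's online backup API; the database names are hypothetical, and other DBMSs have their own dump tools such as pg_dump or mysqldump:

```python
import sqlite3

# Take a consistent snapshot of the target database before injecting test
# payloads, so any damage done by the test can be rolled back afterwards.
source = sqlite3.connect("production.db")          # hypothetical target
snapshot = sqlite3.connect("pre_pentest_backup.db")
source.backup(snapshot)   # online, page-by-page copy (Python 3.7+)
snapshot.close()
source.close()
```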
Which of the following is a remote access protocol that uses a static authentication?
Point-to-Point Tunneling Protocol (PPTP)
Routing Information Protocol (RIP)
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP)
Password Authentication Protocol (PAP) is a remote access protocol that uses a static authentication method, which means that the username and password are sent in clear text over the network. PAP is considered insecure and vulnerable to eavesdropping and replay attacks, as anyone who can capture the network traffic can obtain the credentials. PAP is supported by Point-to-Point Protocol (PPP), which is a common protocol for establishing remote connections over dial-up, broadband, or wireless networks. PAP is usually used as a fallback option when more secure protocols, such as Challenge Handshake Authentication Protocol (CHAP) or Extensible Authentication Protocol (EAP), are not available or compatible.
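To make the contrast concrete, a hedged sketch: PAP transmits the static credential itself, while CHAP (per RFC 1994) answers a fresh server challenge with an MD5 digest, so the secret never crosses the wire and a captured response cannot be replayed against a new challenge. Names and values below are illustrative:

```python
import hashlib
import os

# PAP: the credential itself crosses the wire (static authentication).
pap_frame = b"alice:s3cret"  # visible to anyone sniffing the link

# CHAP (RFC 1994): response = MD5(identifier || shared_secret || challenge).
identifier = b"\x01"
secret = b"s3cret"           # known to both ends, never transmitted
challenge = os.urandom(16)   # fresh random challenge from the server

chap_response = hashlib.md5(identifier + secret + challenge).digest()
print(chap_response.hex())   # differs for every challenge
```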
In order for a security policy to be effective within an organization, it MUST include
strong statements that clearly define the problem.
a list of all standards that apply to the policy.
owner information and date of last revision.
disciplinary measures for non compliance.
In order for a security policy to be effective within an organization, it must include disciplinary measures for non-compliance. A security policy is a document that defines and communicates the security goals, objectives, and expectations of the organization, and provides direction for its security activities, processes, and functions. Disciplinary measures for non-compliance are the actions or consequences the organization imposes on users who violate or disregard the policy. They help ensure the policy’s effectiveness by deterring behavior that could jeopardize the security of the organization and by enforcing accountability and responsibility for it. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 18; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 26
Which of the following represents the GREATEST risk to data confidentiality?
Network redundancies are not implemented
Security awareness training is not completed
Backup tapes are generated unencrypted
Users have administrative privileges
Generating backup tapes unencrypted represents the greatest risk to data confidentiality, as it exposes the data to unauthorized access or disclosure if the tapes are lost, stolen, or intercepted. Backup tapes are often stored off-site or transported to remote locations, which increases the chances of them falling into the wrong hands. If the backup tapes are unencrypted, anyone who obtains them can read the data without any difficulty. Therefore, backup tapes should always be encrypted using strong algorithms and keys, and the keys should be protected and managed separately from the tapes.
The other options do not pose as much risk to data confidentiality as generating backup tapes unencrypted. Network redundancies are not implemented will affect the availability and reliability of the network, but not necessarily the confidentiality of the data. Security awareness training is not completed will increase the likelihood of human errors or negligence that could compromise the data, but not as directly as generating backup tapes unencrypted. Users have administrative privileges will grant users more access and control over the system and the data, but not as widely as generating backup tapes unencrypted.
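As an illustration of the countermeasure, a minimal sketch using the third-party Python cryptography package (file names are hypothetical; a production scheme would also address key escrow, rotation, and storing the key separately from the tapes):

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # store and manage this key apart from the media
fernet = Fernet(key)

with open("backup.tar", "rb") as f:          # hypothetical backup archive
    plaintext = f.read()

with open("backup.tar.enc", "wb") as f:
    f.write(fernet.encrypt(plaintext))       # ciphertext written to tape/file

# A lost tape is unreadable without the key; recovery is:
# fernet.decrypt(open("backup.tar.enc", "rb").read())
```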
Which of the following types of technologies would be the MOST cost-effective method to provide a reactive control for protecting personnel in public areas?
Install mantraps at the building entrances
Enclose the personnel entry area with polycarbonate plastic
Supply a duress alarm for personnel exposed to the public
Hire a guard to protect the public area
Supplying a duress alarm for personnel exposed to the public is the most cost-effective method to provide a reactive control for protecting personnel in public areas. A duress alarm is a device that allows a person to signal for help in case of an emergency, such as an attack, a robbery, or a medical condition. A duress alarm can be activated by pressing a button, pulling a cord, or speaking a code word. A duress alarm can alert security personnel, law enforcement, or other responders to the location and nature of the emergency, and initiate appropriate actions. A duress alarm is a reactive control because it responds to an incident after it has occurred, rather than preventing it from happening.
The other options are not as cost-effective as supplying a duress alarm, as they involve more expensive or complex technologies or resources. Installing mantraps at the building entrances is a preventive control that restricts the access of unauthorized persons to the facility, but it also requires more space, maintenance, and supervision. Enclosing the personnel entry area with polycarbonate plastic is a preventive control that protects the personnel from physical attacks, but it also reduces the visibility and ventilation of the area. Hiring a guard to protect the public area is a deterrent control that discourages potential attackers, but it also involves paying wages, benefits, and training costs.
Which of the following actions will reduce risk to a laptop before traveling to a high risk area?
Examine the device for physical tampering
Implement more stringent baseline configurations
Purge or re-image the hard disk drive
Change access codes
Purging or re-imaging the hard disk drive of a laptop before traveling to a high risk area will reduce the risk of data compromise or theft in case the laptop is lost, stolen, or seized by unauthorized parties. Purging or re-imaging the hard disk drive will erase all the data and applications on the laptop, leaving only the operating system and the essential software. This will minimize the exposure of sensitive or confidential information that could be accessed by malicious actors. Purging or re-imaging the hard disk drive should be done using secure methods that prevent data recovery, such as overwriting, degaussing, or physical destruction.
The other options will not reduce the risk to the laptop as effectively as purging or re-imaging the hard disk drive. Examining the device for physical tampering will only detect if the laptop has been compromised after the fact, but will not prevent it from happening. Implementing more stringent baseline configurations will improve the security settings and policies of the laptop, but will not protect the data if the laptop is bypassed or breached. Changing access codes will make it harder for unauthorized users to log in to the laptop, but will not prevent them from accessing the data if they use other methods, such as booting from a removable media or removing the hard disk drive.
What is the MOST important consideration from a data security perspective when an organization plans to relocate?
Ensure the fire prevention and detection systems are sufficient to protect personnel
Review the architectural plans to determine how many emergency exits are present
Conduct a gap analysis of the new facilities against existing security requirements
Revise the Disaster Recovery and Business Continuity (DR/BC) plan
When an organization plans to relocate, the most important consideration from a data security perspective is to conduct a gap analysis of the new facilities against the existing security requirements. A gap analysis is a process that identifies and evaluates the differences between the current state and the desired state of a system or a process. In this case, the gap analysis would compare the security controls and measures implemented in the old and new locations, and identify any gaps or weaknesses that need to be addressed. The gap analysis would also help to determine the costs and resources needed to implement the necessary security improvements in the new facilities.
The other options are not as important as conducting a gap analysis, as they do not directly address the data security risks associated with relocation. Ensuring the fire prevention and detection systems are sufficient to protect personnel is a safety issue, not a data security issue. Reviewing the architectural plans to determine how many emergency exits are present is also a safety issue, not a data security issue. Revising the Disaster Recovery and Business Continuity (DR/BC) plan is a good practice, but it is not a preventive measure, rather a reactive one. A DR/BC plan is a document that outlines how an organization will recover from a disaster and resume its normal operations. A DR/BC plan should be updated regularly, not only when relocating.
All of the following items should be included in a Business Impact Analysis (BIA) questionnaire EXCEPT questions that
determine the risk of a business interruption occurring
determine the technological dependence of the business processes
identify the operational impacts of a business interruption
identify the financial impacts of a business interruption
A Business Impact Analysis (BIA) is a process that identifies and evaluates the potential effects of natural and man-made disasters on business operations. The BIA questionnaire is a tool that collects information from business process owners and stakeholders about the criticality, dependencies, recovery objectives, and resources of their processes. The BIA questionnaire should include questions that determine the technological dependence of the business processes, identify the operational impacts of a business interruption, and identify the financial impacts of a business interruption.
The BIA questionnaire should not include questions that determine the risk of a business interruption occurring, as this is part of the risk assessment process, which is a separate activity from the BIA. The risk assessment process identifies and analyzes the threats and vulnerabilities that could cause a business interruption, and estimates the likelihood and impact of such events. The risk assessment process also evaluates the existing controls and mitigation strategies, and recommends additional measures to reduce the risk to an acceptable level.
Intellectual property rights are PRIMARILY concerned with which of the following?
Owner’s ability to realize financial gain
Owner’s ability to maintain copyright
Right of the owner to enjoy their creation
Right of the owner to control delivery method
Intellectual property rights are primarily concerned with the owner’s ability to realize financial gain from their creation. Intellectual property is a category of intangible assets that are the result of human creativity and innovation, such as inventions, designs, artworks, literature, music, software, etc. Intellectual property rights are the legal rights that grant the owner the exclusive control over the use, reproduction, distribution, and modification of their intellectual property. Intellectual property rights aim to protect the owner’s interests and incentives, and to reward them for their contribution to the society and economy.
The other options are not the primary concern of intellectual property rights, but rather the secondary or incidental benefits or aspects of them. The owner’s ability to maintain copyright is a means of enforcing intellectual property rights, but not the end goal of them. The right of the owner to enjoy their creation is a personal or moral right, but not a legal or economic one. The right of the owner to control the delivery method is a specific or technical aspect of intellectual property rights, but not a general or fundamental one.
When assessing an organization’s security policy according to standards established by the International Organization for Standardization (ISO) 27001 and 27002, when can management responsibilities be defined?
Only when assets are clearly defined
Only when standards are defined
Only when controls are put in place
Only when procedures are defined
When assessing an organization’s security policy according to standards established by the ISO 27001 and 27002, management responsibilities can be defined only when standards are defined. Standards are the specific rules, guidelines, or procedures that support the implementation of the security policy. Standards define the minimum level of security that must be achieved by the organization, and provide the basis for measuring compliance and performance. Standards also assign roles and responsibilities to different levels of management and staff, and specify the reporting and escalation procedures.
Management responsibilities are the duties and obligations that managers have to ensure the effective and efficient execution of the security policy and standards. Management responsibilities include providing leadership, direction, support, and resources for the security program, establishing and communicating the security objectives and expectations, ensuring compliance with the legal and regulatory requirements, monitoring and reviewing the security performance and incidents, and initiating corrective and preventive actions when needed.
Management responsibilities cannot be defined without standards, as standards provide the framework and criteria for defining what managers need to do and how they need to do it. Management responsibilities also depend on the scope and complexity of the security policy and standards, which may vary depending on the size, nature, and context of the organization. Therefore, standards must be defined before management responsibilities can be defined.
The other options are not correct, as they are not prerequisites for defining management responsibilities. Assets are the resources that need to be protected by the security policy and standards, but they do not determine the management responsibilities. Controls are the measures that are implemented to reduce the security risks and achieve the security objectives, but they do not determine the management responsibilities. Procedures are the detailed instructions that describe how to perform the security tasks and activities, but they do not determine the management responsibilities.
A company whose Information Technology (IT) services are being delivered from a Tier 4 data center is preparing a companywide Business Continuity Plan (BCP). Which of the following failures should the IT manager be concerned with?
Application
Storage
Power
Network
A company whose IT services are being delivered from a Tier 4 data center should be most concerned with application failures when preparing a companywide BCP. A BCP is a document that describes how an organization will continue its critical business functions in the event of a disruption or disaster. A BCP should include a risk assessment, a business impact analysis, a recovery strategy, and a testing and maintenance plan.
A Tier 4 data center is the highest level of data center classification, according to the Uptime Institute. A Tier 4 data center has the highest level of availability, reliability, and fault tolerance, as it has multiple and independent paths for power and cooling, and redundant and backup components for all systems. A Tier 4 data center has an uptime rating of 99.995%, which means it can only experience 0.4 hours of downtime per year. Therefore, the likelihood of a power, storage, or network failure in a Tier 4 data center is very low, and the impact of such a failure would be minimal, as the data center can quickly switch to alternative sources or routes.
However, a Tier 4 data center cannot prevent or mitigate application failures, which are caused by software bugs, configuration errors, or malicious attacks. Application failures can affect the functionality, performance, or security of the IT services, and cause data loss, corruption, or breach. Therefore, the IT manager should be most concerned with application failures when preparing a BCP, and ensure that the applications are properly designed, tested, updated, and monitored.
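The 0.4-hour figure follows directly from the uptime rating:

```python
HOURS_PER_YEAR = 24 * 365                 # 8760 hours
availability = 0.99995                    # Tier 4 uptime rating
downtime_hours = HOURS_PER_YEAR * (1 - availability)
print(f"{downtime_hours:.2f} hours of downtime per year")  # ~0.44, i.e. about 0.4
```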
An important principle of defense in depth is that achieving information security requires a balanced focus on which PRIMARY elements?
Development, testing, and deployment
Prevention, detection, and remediation
People, technology, and operations
Certification, accreditation, and monitoring
An important principle of defense in depth is that achieving information security requires a balanced focus on the primary elements of people, technology, and operations. People are the users, administrators, managers, and other stakeholders who are involved in the security process. They need to be aware, trained, motivated, and accountable for their security roles and responsibilities. Technology is the hardware, software, network, and other tools that are used to implement the security controls and measures. They need to be selected, configured, updated, and monitored according to the security standards and best practices. Operations are the policies, procedures, processes, and activities that are performed to achieve the security objectives and requirements. They need to be documented, reviewed, audited, and improved continuously to ensure their effectiveness and efficiency.
The other options are not the primary elements of defense in depth, but rather the phases, functions, or outcomes of the security process. Development, testing, and deployment are the phases of the security life cycle, which describes how security is integrated into the system development process. Prevention, detection, and remediation are the functions of the security management, which describes how security is maintained and improved over time. Certification, accreditation, and monitoring are the outcomes of the security evaluation, which describes how security is assessed and verified against the criteria and standards.
When implementing controls in a heterogeneous end-point network for an organization, it is critical that
hosts are able to establish network communications.
users can make modifications to their security software configurations.
common software security components be implemented across all hosts.
firewalls running on each host are fully customizable by the user.
A heterogeneous end-point network is a network that consists of different types of devices, such as computers, tablets, smartphones, printers, etc., that connect to the network and communicate with each other. Each device, or host, may have different operating systems, applications, configurations, and security requirements. When implementing controls in a heterogeneous end-point network, it is critical that common software security components be implemented across all hosts. Common software security components are software programs or features that provide security functions, such as antivirus, firewall, encryption, authentication, etc. Implementing common software security components across all hosts ensures that the hosts have a consistent and minimum level of security protection, and that the hosts can interoperate securely with each other and with the network. Implementing common software security components across all hosts does not mean that the hosts have to be identical or have the same security settings. The hosts can still have different hardware, software, and security configurations, as long as they meet the security requirements and standards of the organization and the network. Implementing common software security components across all hosts is not the same as ensuring that hosts are able to establish network communications, allowing users to make modifications to their security software configurations, or making firewalls running on each host fully customizable by the user. These are other aspects of security management that may or may not be relevant or desirable for a heterogeneous end-point network, depending on the organization’s policies and objectives.
An internal Service Level Agreement (SLA) covering security is signed by senior managers and is in place. When should compliance to the SLA be reviewed to ensure that a good security posture is being delivered?
As part of the SLA renewal process
Prior to a planned security audit
Immediately after a security breach
At regularly scheduled meetings
Compliance to the SLA should be reviewed at regularly scheduled meetings, such as monthly or quarterly, to ensure that the security posture is being delivered as agreed. This allows both parties to monitor the performance, identify any issues or gaps, and take corrective actions if needed. Reviewing the SLA only as part of the renewal process, prior to a planned security audit, or immediately after a security breach is not sufficient, as it may result in missing or delaying the detection and resolution of security problems. References: How to measure your SLA: 5 Metrics you should be Monitoring and Reporting; Run your security awareness program like a marketer with these campaign kits
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the
confidentiality of the traffic is protected.
opportunity to sniff network traffic exists.
opportunity for device identity spoofing is eliminated.
storage devices are protected against availability attacks.
By allowing storage communications to run on top of Transmission Control Protocol/Internet Protocol (TCP/IP) with a Storage Area Network (SAN), the opportunity to sniff network traffic exists. A SAN is a dedicated network that connects storage devices, such as disk arrays, tape libraries, or servers, to provide high-speed data access and transfer. A SAN may use different protocols or technologies to communicate with storage devices, such as Fibre Channel, iSCSI, or NFS. By allowing storage communications to run on top of TCP/IP, a common network protocol that supports internet and intranet communications, a SAN may leverage the existing network infrastructure and reduce costs and complexity. However, this also exposes the storage communications to the same risks and threats that affect the network communications, such as sniffing, spoofing, or denial-of-service attacks. Sniffing is the act of capturing or monitoring network traffic, which may reveal sensitive or confidential information, such as passwords, encryption keys, or data. By allowing storage communications to run on top of TCP/IP with a SAN, the confidentiality of the traffic is not protected, unless encryption or other security measures are applied. The opportunity for device identity spoofing is not eliminated, as an attacker may still impersonate a legitimate storage device or server by using a forged or stolen IP address or MAC address. The storage devices are not protected against availability attacks, as an attacker may still disrupt or overload the network or the storage devices by sending malicious or excessive packets or requests.
Which one of the following is a threat related to the use of web-based client side input validation?
Users would be able to alter the input after validation has occurred
The web server would not be able to validate the input after transmission
The client system could receive invalid input from the web server
The web server would not be able to receive invalid input from the client
A threat related to the use of web-based client side input validation is that users would be able to alter the input after validation has occurred. Client side input validation is performed on the user’s browser using JavaScript or other scripting languages. It can provide a faster and more user-friendly feedback to the user, but it can also be easily bypassed or manipulated by an attacker who disables JavaScript, uses a web proxy, or modifies the source code of the web page. Therefore, client side input validation should not be relied upon as the sole or primary method of preventing malicious or malformed input from reaching the web server. Server side input validation is also necessary to ensure the security and integrity of the web application. References: Input Validation - OWASP Cheat Sheet Series; Input Validation vulnerabilities and how to fix them
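A minimal sketch of the server-side validation the text calls for, combining an allow-list check with a parameterized query; the table and column names are hypothetical:

```python
import re
import sqlite3

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")  # allow-list pattern

def lookup_user(conn: sqlite3.Connection, username: str):
    # Server-side validation: never trust that the browser validated anything.
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    # Parameterized query: the input is bound as data, never spliced into SQL.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Even if an attacker bypasses every client-side check, the server re-validates the input and the database driver treats it strictly as data.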
Which of the following is ensured when hashing files during chain of custody handling?
Availability
Accountability
Integrity
Non-repudiation
Hashing files during chain of custody handling ensures integrity, which means that the files have not been altered or tampered with during the collection, preservation, or analysis of digital evidence. Hashing is a process of applying a mathematical function to a file to generate a unique value, called a hash or a digest, that represents the file’s content. By comparing the hash values of the original and the copied files, the integrity of the files can be verified. Availability, accountability, and non-repudiation are not ensured by hashing files during chain of custody handling, as they are related to different aspects of information security. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 633.
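A minimal sketch of the hash-and-compare step (file names are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large evidence files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of("evidence.img")       # recorded at collection time
working = sha256_of("evidence_copy.img")   # recomputed before analysis
assert original == working, "integrity check failed: file was altered"
```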
Which of the following MUST be part of a contract to support electronic discovery of data stored in a cloud environment?
Integration with organizational directory services for authentication
Tokenization of data
Accommodation of hybrid deployment models
Identification of data location
Identification of data location is a must-have clause in a contract to support electronic discovery of data stored in a cloud environment. Electronic discovery, or e-discovery, is the process of identifying, preserving, collecting, processing, reviewing, and producing electronically stored information (ESI) that is relevant to a legal case or investigation. In a cloud environment, where data may be stored in multiple locations, jurisdictions, or servers, it is essential to have a clear and contractual agreement on how and where the data can be accessed, retrieved, and produced for e-discovery purposes. Identification of data location can help ensure the availability, integrity, and admissibility of the data as evidence. Integration with organizational directory services for authentication, tokenization of data, and accommodation of hybrid deployment models are not mandatory clauses for e-discovery support, as they are more related to the security, privacy, and flexibility of the cloud service, rather than the legal aspects of data discovery. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 647.
Which security action should be taken FIRST when computer personnel are terminated from their jobs?
Remove their computer access
Require them to turn in their badge
Conduct an exit interview
Reduce their physical access level to the facility
The first security action that should be taken when computer personnel are terminated from their jobs is to remove their computer access. Computer access is the ability to log in, use, or modify the computer systems, networks, or data of the organization. Removing computer access can prevent the terminated personnel from accessing or harming the organization’s information assets, or from stealing or leaking sensitive or confidential data. Removing computer access can also reduce the risk of insider threats, such as sabotage, fraud, or espionage. Requiring them to turn in their badge, conducting an exit interview, and reducing their physical access level to the facility are also important security actions that should be taken when computer personnel are terminated from their jobs, but they are not as urgent or critical as removing their computer access. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 5, page 249.
Which one of the following describes granularity?
Maximum number of entries available in an Access Control List (ACL)
Fineness to which a trusted system can authenticate users
Number of violations divided by the number of total accesses
Fineness to which an access control system can be adjusted
Granularity is the degree of detail or precision that an access control system can provide. A granular access control system can specify different levels of access for different users, groups, resources, or conditions. For example, a granular firewall can allow or deny traffic based on the source, destination, port, protocol, time, or other criteria.
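A minimal sketch of what "fineness of adjustment" means in practice: each rule below discriminates on source, port, protocol, and time of day at once (all rule values are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src: str       # source network prefix, e.g. "10.0.1."
    dst_port: int  # destination port
    proto: str     # "tcp" or "udp"
    hours: range   # permitted hours of day
    action: str    # "allow" or "deny"

RULES = [
    Rule("10.0.1.", 443, "tcp", range(8, 18), "allow"),  # office HTTPS, work hours
    Rule("10.0.",   22,  "tcp", range(0, 24), "deny"),   # no SSH from user LANs
]

def decide(src_ip: str, dst_port: int, proto: str, hour: int) -> str:
    for r in RULES:
        if (src_ip.startswith(r.src) and dst_port == r.dst_port
                and proto == r.proto and hour in r.hours):
            return r.action
    return "deny"  # default deny when no rule matches

print(decide("10.0.1.25", 443, "tcp", 10))  # allow
print(decide("10.0.1.25", 443, "tcp", 22))  # deny (outside permitted hours)
```

A coarse system could only say "this network may or may not talk to that one"; the extra dimensions per rule are the granularity.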
Which of the following is a potential risk when a program runs in privileged mode?
It may serve to create unnecessary code complexity
It may not enforce job separation duties
It may create unnecessary application hardening
It may allow malicious code to be inserted
A potential risk when a program runs in privileged mode is that it may allow malicious code to be inserted. Privileged mode, also known as kernel mode or supervisor mode, is a mode of operation that grants the program full access and control over the hardware and software resources of the system, such as memory, disk, CPU, and devices. A program that runs in privileged mode can perform any action or instruction without any restriction or protection. This can be exploited by an attacker who can inject malicious code into the program, such as a rootkit, a backdoor, or a keylogger, and gain unauthorized access or control over the system. References: What is Privileged Mode?; Privilege Escalation - OWASP Cheat Sheet Series
During an audit of system management, auditors find that the system administrator has not been trained. What actions need to be taken at once to ensure the integrity of systems?
A review of hiring policies and methods of verification of new employees
A review of all departmental procedures
A review of all training procedures to be undertaken
A review of all systems by an experienced administrator
During an audit of system management, if auditors find that the system administrator has not been trained, the immediate action that needs to be taken to ensure the integrity of systems is a review of all systems by an experienced administrator. This is to verify that the systems are configured, maintained, and secured properly, and that there are no errors, vulnerabilities, or breaches that could compromise the system’s availability, confidentiality, or integrity. A review of hiring policies, departmental procedures, or training procedures are not urgent actions, as they are more related to the long-term improvement of the system management process, rather than the current state of the systems. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 829; CISSP For Dummies, 7th Edition, Chapter 8, page 267.
A disadvantage of an application filtering firewall is that it can lead to
a crash of the network as a result of user activities.
performance degradation due to the rules applied.
loss of packets on the network due to insufficient bandwidth.
Internet Protocol (IP) spoofing by hackers.
A disadvantage of an application filtering firewall is that it can lead to performance degradation due to the rules applied. An application filtering firewall is a type of firewall that inspects the content and context of the data packets at the application layer of the OSI model. It can block or allow traffic based on the application protocol, the source and destination addresses, the user identity, the time of day, and other criteria. An application filtering firewall provides a high level of security and control, but it also requires more processing power and memory than other types of firewalls. This can result in slower network performance and increased latency. References: Application Layer Filtering (ALF): What is it and How does it Fit into your Security Plan?; Different types of Firewalls: Their advantages and disadvantages
A Virtual Machine (VM) environment has five guest Operating Systems (OS) and provides strong isolation. What MUST an administrator review to audit a user’s access to data files?
Host VM monitor audit logs
Guest OS access controls
Host VM access controls
Guest OS audit logs
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation. A VM environment is a system that allows multiple virtual machines (VMs) to run on a single physical machine, each with its own OS and applications. A VM environment can provide several benefits, such as hardware consolidation, strong isolation between workloads, and flexibility in provisioning, scaling, and migration.
A guest OS is the OS that runs on a VM, which is different from the host OS that runs on the physical machine. A guest OS can have its own security controls and mechanisms, such as access controls, encryption, authentication, and audit logs. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the data files. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents.
Guest OS audit logs are what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, because they can provide the most accurate and relevant information about the user’s actions and interactions with the data files on the VM. Guest OS audit logs can also help the administrator to identify and report any unauthorized or suspicious access or disclosure of the data files, and to recommend or implement any corrective or preventive actions.
The other options are not what an administrator must review to audit a user’s access to data files in a VM environment that has five guest OS and provides strong isolation, but rather what an administrator might review for other purposes or aspects. Host VM monitor audit logs are records that capture and store the information about the events and activities that occur on the host VM monitor, which is the software or hardware component that manages and controls the VMs on the physical machine. Host VM monitor audit logs can provide information about the performance, status, and configuration of the VMs, but they cannot provide information about the user’s access to data files on the VMs.

Guest OS access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the resources and services on the guest OS. They can provide a proactive and preventive layer of security by enforcing the principles of least privilege, separation of duties, and need to know. However, guest OS access controls are not what an administrator must review to audit a user’s access to data files, but rather what an administrator must configure and implement to protect the data files.

Host VM access controls are rules and mechanisms that regulate and restrict the access and permissions of the users and processes to the VMs on the physical machine. They can provide a granular and dynamic layer of security by defining and assigning roles and permissions according to the organizational structure and policies. However, host VM access controls are likewise something an administrator configures and implements to protect the VMs, not something reviewed to audit a user’s access to data files.
Which of the following countermeasures is the MOST effective in defending against a social engineering attack?
Mandating security policy acceptance
Changing individual behavior
Evaluating security awareness training
Filtering malicious e-mail content
According to the CISSP CBK Official Study Guide, the most effective countermeasure against a social engineering attack is changing individual behavior. A social engineering attack exploits the human and psychological aspects of a system or network, such as the trust, curiosity, or greed of users and employees, rather than its technical or logical aspects, such as hardware, software, or firmware. It may use techniques such as phishing, baiting, or pretexting to persuade or deceive people into performing actions or disclosing information, such as passwords, usernames, or data, that compromise security. Changing individual behavior is the most effective defense because it addresses the root cause the attack targets: through education, training, and awareness, it reduces the susceptibility of users and employees and increases their knowledge, confidence, and readiness to recognize, resist, and respond to social engineering attempts. Mandating security policy acceptance is not the most effective countermeasure, although it may be a benefit or a consequence of changing individual behavior.
What is the PRIMARY difference between security policies and security procedures?
Policies are used to enforce violations, and procedures create penalties
Policies point to guidelines, and procedures are more contractual in nature
Policies are included in awareness training, and procedures give guidance
Policies are generic in nature, and procedures contain operational details
The primary difference between security policies and security procedures is that policies are generic in nature, and procedures contain operational details. Security policies are the high-level statements or rules that define the goals, objectives, and requirements of security for an organization. Security procedures are the low-level steps or actions that specify how to implement, enforce, and comply with the security policies.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 17; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 1, page 13
At which layer of the Open Systems Interconnection (OSI) model are the source and destination addresses for a datagram handled?
Transport Layer
Data-Link Layer
Network Layer
Application Layer
According to the CISSP Official (ISC)2 Practice Tests, the layer of the Open Systems Interconnection (OSI) model that handles the source and destination address for a datagram is the Network Layer. The OSI model is a conceptual framework that defines the functions, services, and protocols of a communication system in seven layers: Physical, Data-Link, Network, Transport, Session, Presentation, and Application.
The Network Layer is the third layer, responsible for routing and forwarding data across the network; its protocols include the Internet Protocol (IP) and the Internet Control Message Protocol (ICMP). A datagram is the Network Layer’s unit of data, and its source and destination addresses are logical identifiers, such as IP addresses, that specify the origin and destination of the datagram. The Network Layer uses these addresses to determine the best route for the datagram and to deliver it to the correct destination.
The Transport Layer, the fourth layer, does not handle the addresses of a datagram; it handles the source and destination ports of a segment. Its protocols, such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), use port numbers to identify the sending and receiving application or service, to establish, maintain, and terminate the session between sender and receiver, and to deliver each segment to the correct application.
The Data-Link Layer, the second layer, transfers data between adjacent nodes or devices, for example over Ethernet, Wi-Fi, or Bluetooth. It handles the source and destination addresses of a frame, which are physical (hardware) identifiers such as Media Access Control (MAC) addresses, and uses them to identify the sending and receiving devices and deliver the frame to the correct node. The Application Layer likewise does not handle the addresses of a datagram; it deals with addressing at the level of the messages exchanged by applications.
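To make the distinction concrete, the following minimal Python sketch parses a hand-built IPv4 header and extracts the Layer 3 source and destination addresses; the header bytes and addresses are illustrative values, not taken from the exam material. Ports (Layer 4) and MAC addresses (Layer 2) live in the TCP/UDP and Ethernet headers respectively, not in this structure.

import socket
import struct

def parse_ipv4_header(packet: bytes):
    """Extract the Layer 3 source and destination addresses from a raw
    IPv4 datagram. The addresses occupy bytes 12-19 of the IP header;
    ports (Layer 4) and MAC addresses (Layer 2) are carried elsewhere."""
    version_ihl = packet[0]
    ihl = (version_ihl & 0x0F) * 4          # header length in bytes
    src, dst = struct.unpack("!4s4s", packet[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), ihl

# A hand-built 20-byte IPv4 header for demonstration (10.0.0.1 -> 10.0.0.2)
header = bytes.fromhex("45000028000040004006") + b"\x00\x00" \
         + socket.inet_aton("10.0.0.1") + socket.inet_aton("10.0.0.2")
print(parse_ipv4_header(header))  # ('10.0.0.1', '10.0.0.2', 20)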
Which one of the following activities would present a significant security risk to organizations when employing a Virtual Private Network (VPN) solution?
VPN bandwidth
Simultaneous connection to other networks
Users with Internet Protocol (IP) addressing conflicts
Remote users with administrative rights
According to CISSP For Dummies, the activity that would present a significant security risk to organizations when employing a VPN solution is simultaneous connection to other networks. A VPN is a technology that creates a secure and encrypted tunnel over a public or untrusted network, such as the Internet, to connect remote users or sites to the organization’s private network, such as the intranet. A VPN provides security and privacy for the data and communication transmitted over the tunnel, as well as access to the network resources and services available on the private network. However, a VPN also introduces security risks and challenges, such as configuration errors, authentication issues, malware infections, or data leakage. Simultaneous connection to other networks occurs when a VPN user connects to the organization’s private network and another network at the same time, such as a home network, a public Wi-Fi network, or a malicious network (a situation often called split tunneling). This creates a potential vulnerability or backdoor through which attackers can access or compromise the organization’s private network by exploiting the weaker security or lower trust of the other network. The organization should therefore implement and enforce policies and controls to prevent or restrict simultaneous connections to other networks when using a VPN solution. VPN bandwidth is not such an activity, although it affects the performance and availability of the VPN solution: it is the amount of data that can be transmitted or received over the tunnel per unit of time, which depends on the speed and capacity of the network connection, the encryption and compression methods, the traffic load, and network congestion. Limited bandwidth degrades the quality and efficiency of communication over the tunnel, but it does not directly expose the private network. Users with IP addressing conflicts are likewise not a significant security risk, although conflicts cause errors and disruptions in the VPN solution: an IP addressing conflict occurs when two or more devices or hosts on the same network are assigned the same IP address, the unique identifier each host needs to communicate over the network. Conflicts can break VPN connectivity and routing, but they do not by themselves give an attacker access to the private network.
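As an illustration of how an administrator might spot simultaneous connections, the sketch below checks a Linux host for more than one default route using the standard ip route command; the output format and the interpretation (multiple defaults as a split-tunneling indicator) are simplifying assumptions, and a production control would inspect interfaces and policies far more thoroughly.

import subprocess

def default_routes() -> list[str]:
    """Crude split-tunnel check (Linux): more than one default route
    suggests the host is simultaneously attached to another network
    in addition to the VPN tunnel."""
    out = subprocess.run(["ip", "route", "show", "default"],
                         capture_output=True, text=True, check=True).stdout
    return [line for line in out.splitlines() if line.startswith("default")]

routes = default_routes()
if len(routes) > 1:
    print("WARNING: multiple default routes (possible split tunneling):")
    for r in routes:
        print(" ", r)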
An application developer is deciding on the amount of idle session time that the application allows before a timeout. The BEST reason for determining the session timeout requirement is
organization policy.
industry best practices.
industry laws and regulations.
management feedback.
The session timeout requirement is the maximum amount of time that a user can be inactive on an application before the session is terminated and the user is required to re-authenticate. The best reason for determining the session timeout requirement is the organization policy, as it reflects the organization’s risk appetite, security objectives, and compliance obligations. The organization policy should specify the appropriate session timeout value for different types of applications and data, based on their sensitivity and criticality.
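As a minimal illustration of enforcing a policy-driven idle timeout in code, the Flask sketch below uses the framework’s PERMANENT_SESSION_LIFETIME setting; the 15-minute value is a hypothetical policy figure, and the secret key is a placeholder.

from datetime import timedelta
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; load from secure config in practice

# Idle-session timeout drawn from organization policy (15 minutes is a
# hypothetical value; the policy document is the authoritative source).
app.config["PERMANENT_SESSION_LIFETIME"] = timedelta(minutes=15)

@app.before_request
def mark_session_permanent():
    # Permanent sessions honor PERMANENT_SESSION_LIFETIME; Flask refreshes
    # the expiry on each request, so inactivity beyond 15 minutes ends it.
    session.permanent = True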
Which of the following command line tools can be used in the reconnaissance phase of a network vulnerability assessment?
dig
ifconfig
ipconfig
nbtstat
Dig is a command line tool that can be used in the reconnaissance phase of a network vulnerability assessment. Dig stands for domain information groper, and it is used to query Domain Name System (DNS) servers and obtain information about domains, hosts, and records. Dig can help discover the network topology, the IP addresses, and the services running on the target network.
References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 411; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 365
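To illustrate the kind of dig queries described above, the sketch below wraps a few typical lookups in Python for repeatable reconnaissance; example.com and the reverse-lookup address are placeholders for an authorized target.

import subprocess

# Illustrative dig queries for DNS reconnaissance; "+short" trims the
# output to the records themselves.
for args in (
    ["dig", "+short", "example.com", "A"],      # host addresses
    ["dig", "+short", "example.com", "MX"],     # mail servers
    ["dig", "+short", "example.com", "NS"],     # authoritative name servers
    ["dig", "+short", "-x", "93.184.216.34"],   # reverse (PTR) lookup
):
    result = subprocess.run(args, capture_output=True, text=True)
    print(" ".join(args), "->", result.stdout.strip())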
What is the GREATEST challenge to identifying data leaks?
Available technical tools that enable user activity monitoring.
Documented asset classification policy and clear labeling of assets.
Senior management cooperation in investigating suspicious behavior.
Law enforcement participation to apprehend and interrogate suspects.
The greatest challenge to identifying data leaks is establishing a documented asset classification policy and clear labeling of assets. Data leaks are the unauthorized or accidental disclosure or exposure of sensitive or confidential data, such as personal information, trade secrets, or intellectual property, and they can cause serious harm to the data owner, such as reputation loss, legal liability, or competitive disadvantage. Identifying a leak presupposes that the organization knows which data are sensitive and can recognize them when they move: that requires defined rules for categorizing data by sensitivity, value, or criticality, and consistent markings on the assets themselves. Producing and maintaining such a policy, and applying labels across all assets, is difficult in practice; where it is absent, monitoring tools cannot distinguish a leak of sensitive data from ordinary traffic. The other options are not challenges but benefits or enablers of identifying data leaks. Available technical tools that enable user activity monitoring provide the means for collecting, analyzing, and auditing the data actions and behaviors of users and devices. Senior management cooperation in investigating suspicious behavior provides the support and authority for conducting a data leak investigation and taking appropriate action. Law enforcement participation to apprehend and interrogate suspects provides assistance in pursuing and prosecuting the perpetrators. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, p. 29; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, p. 287.
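To illustrate why classification and labeling matter for detection, here is a deliberately simplistic content-inspection sketch; the regular expressions and the CONFIDENTIAL label are assumptions, and real data loss prevention tools use far more robust patterns, fingerprints, and context analysis.

import re

# Simplistic illustration of content inspection for leak detection.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "label":       re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("CONFIDENTIAL: payroll SSN 123-45-6789"))  # ['ssn', 'label']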
Which of the following types of business continuity tests includes assessment of resilience to internal and external risks without endangering live operations?
Walkthrough
Simulation
Parallel
White box
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations. Business continuity is the ability of an organization to maintain or resume its critical functions and operations in the event of a disruption or disaster, and business continuity testing is the process of evaluating and validating the effectiveness and readiness of the business continuity plan (BCP) and the disaster recovery plan (DRP) through various methods and scenarios.
There are different types of business continuity tests, depending on the scope, purpose, and complexity of the test; common types include the walkthrough, the simulation, the parallel test, and the full-interruption test.
Simulation is the type of business continuity test that includes assessment of resilience to internal and external risks without endangering live operations, because it can simulate various types of risks, such as natural, human, or technical, and assess how the organization and its systems can cope and recover from them, without actually causing any harm or disruption to the live operations. Simulation can also help to identify and mitigate any potential risks that might affect the live operations, and to improve the resilience and preparedness of the organization and its systems.
The other options are not business continuity tests that assess resilience to internal and external risks without endangering live operations. A walkthrough is a review and discussion of the BCP and DRP with the relevant stakeholders, without any actual testing or practice, so it does not assess resilience. A parallel test activates and operates the alternate site or system while live operations continue at the primary site, so it maintains rather than exercises live operations under risk. White box is not a business continuity test type at all; it is a software testing approach in which the tester has knowledge of the internal structure of the code. (By contrast, a full-interruption test does endanger live operations, shutting them down and transferring them to the alternate site or system.)
A Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide which of the following?
Guaranteed recovery of all business functions
Minimization of the need for decision making during a crisis
Insurance against litigation following a disaster
Protection from loss of organization resources
Minimization of the need for decision making during a crisis is the main benefit that a Business Continuity Plan/Disaster Recovery Plan (BCP/DRP) will provide. A BCP/DRP is a set of policies, procedures, and resources that enable an organization to continue or resume its critical functions and operations in the event of a disruption or disaster.
Minimization of the need for decision making during a crisis is the main benefit that a BCP/DRP will provide, because it ensures that the organization and its staff have clear and consistent guidance and direction on how to respond and act during a disruption or disaster, avoiding the confusion, uncertainty, or inconsistency that might worsen the situation or its impact. A BCP/DRP can also reduce the stress and pressure on the organization and its staff during a crisis, and increase their confidence and competence in executing the plans.
The other options are not the benefits that a BCP/DRP will provide, but rather unrealistic or incorrect expectations or outcomes of a BCP/DRP. Guaranteed recovery of all business functions is not a benefit that a BCP/DRP will provide, because it is not possible or feasible to recover all business functions after a disruption or disaster, especially if the disruption or disaster is severe or prolonged. A BCP/DRP can only prioritize and recover the most critical or essential business functions, and may have to suspend or terminate the less critical or non-essential business functions. Insurance against litigation following a disaster is not a benefit that a BCP/DRP will provide, because it is not a guarantee or protection that the organization will not face any legal or regulatory consequences or liabilities after a disruption or disaster, especially if the disruption or disaster is caused by the organization’s negligence or misconduct. A BCP/DRP can only help to mitigate or reduce the legal or regulatory risks, and may have to comply with or report to the relevant authorities or parties. Protection from loss of organization resources is not a benefit that a BCP/DRP will provide, because it is not a prevention or avoidance of any damage or destruction of the organization’s assets or resources during a disruption or disaster, especially if the disruption or disaster is physical or natural. A BCP/DRP can only help to restore or replace the lost or damaged assets or resources, and may have to incur some costs or losses.
A continuous information security-monitoring program can BEST reduce risk through which of the following?
Collecting security events and correlating them to identify anomalies
Facilitating system-wide visibility into the activities of critical user accounts
Encompassing people, process, and technology
Logging both scheduled and unscheduled system changes
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology. A continuous information security monitoring program maintains ongoing awareness of the security status, events, and activities of a system or network by collecting, analyzing, and reporting the security data and information, using various methods and tools.
A continuous information security monitoring program can best reduce risk through encompassing people, process, and technology, because this ensures that the program is holistic and comprehensive and covers all the aspects and elements of the system or network security. People, process, and technology are the three pillars of such a program: people are the staff who operate the program, interpret its output, and act on its findings; process is the set of procedures, workflows, and governance that direct what is monitored, how often, and how results are handled; and technology is the set of tools and systems that collect, analyze, and report the security data and information.
The other options are not the best ways to reduce risk through a continuous information security monitoring program, but rather specific or partial ways that can contribute to the risk reduction. Collecting security events and correlating them to identify anomalies is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one aspect of the security data and information, and it does not address the other aspects, such as the security objectives and requirements, the security controls and measures, and the security feedback and improvement. Facilitating system-wide visibility into the activities of critical user accounts is a partial way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only covers one element of the system or network security, and it does not cover the other elements, such as the security threats and vulnerabilities, the security incidents and impacts, and the security response and remediation. Logging both scheduled and unscheduled system changes is a specific way to reduce risk through a continuous information security monitoring program, but it is not the best way, because it only focuses on one type of the security events and activities, and it does not focus on the other types, such as the security alerts and notifications, the security analysis and correlation, and the security reporting and documentation.
When is a Business Continuity Plan (BCP) considered to be valid?
When it has been validated by the Business Continuity (BC) manager
When it has been validated by the board of directors
When it has been validated by all threat scenarios
When it has been validated by realistic exercises
A Business Continuity Plan (BCP) is considered to be valid when it has been validated by realistic exercises. A BCP is the part of a BCP/DRP that focuses on ensuring the continuous operation of the organization’s critical business functions and processes during and after a disruption or disaster; its components typically include the business impact analysis, the recovery strategies, the BCP document itself, and a program of testing, training, exercises, maintenance, and review.
A BCP is considered to be valid when it has been validated by realistic exercises, because only realistic exercises provide evidence that the BCP is practical and applicable and can achieve the desired outcomes and objectives in a real-life scenario. Realistic exercises are a form of testing, training, and exercising that involves performing and practicing the BCP with the relevant stakeholders, using simulated or hypothetical scenarios such as a fire drill, a power outage, or a cyberattack; they expose gaps and weaknesses in the plan and build the staff’s familiarity and competence in executing it.
The other options are not the criteria for considering a BCP to be valid, but rather the steps or parties that are involved in developing or approving a BCP. When it has been validated by the Business Continuity (BC) manager is not a criterion for considering a BCP to be valid, but rather a step that is involved in developing a BCP. The BC manager is the person who is responsible for overseeing and coordinating the BCP activities and processes, such as the business impact analysis, the recovery strategies, the BCP document, the testing, training, and exercises, and the maintenance and review. The BC manager can validate the BCP by reviewing and verifying the BCP components and outcomes, and ensuring that they meet the BCP standards and objectives. However, the validation by the BC manager is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by the board of directors is not a criterion for considering a BCP to be valid, but rather a party that is involved in approving a BCP. The board of directors is the group of people who are elected by the shareholders to represent their interests and to oversee the strategic direction and governance of the organization. The board of directors can approve the BCP by endorsing and supporting the BCP components and outcomes, and allocating the necessary resources and funds for the BCP. However, the approval by the board of directors is not enough to consider the BCP to be valid, as it does not test or demonstrate the BCP in a realistic scenario. When it has been validated by all threat scenarios is not a criterion for considering a BCP to be valid, but rather an unrealistic or impossible expectation for validating a BCP. A threat scenario is a description or a simulation of a possible or potential disruption or disaster that might affect the organization’s critical business functions and processes, such as a natural hazard, a human error, or a technical failure. A threat scenario can be used to test and validate the BCP by measuring and evaluating the BCP’s performance and effectiveness in responding and recovering from the disruption or disaster. However, it is not possible or feasible to validate the BCP by all threat scenarios, as there are too many or unknown threat scenarios that might occur, and some threat scenarios might be too severe or complex to simulate or test. Therefore, the BCP should be validated by the most likely or relevant threat scenarios, and not by all threat scenarios.
What should be the FIRST action to protect the chain of evidence when a desktop computer is involved?
Take the computer to a forensic lab
Make a copy of the hard drive
Start documenting
Turn off the computer
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved. A chain of evidence, also known as a chain of custody, is a process that documents and preserves the integrity and authenticity of the evidence collected from a crime scene, such as a desktop computer; it records what was collected, who collected and handled it, and when, where, and how it was collected, transferred, and stored.
Making a copy of the hard drive should be the first action to protect the chain of evidence when a desktop computer is involved, because it can ensure that the original hard drive is not altered, damaged, or destroyed during the forensic analysis, and that the copy can be used as a reliable and admissible source of evidence. Making a copy of the hard drive should also involve using a write blocker, which is a device or a software that prevents any modification or deletion of the data on the hard drive, and generating a hash value, which is a unique and fixed identifier that can verify the integrity and consistency of the data on the hard drive.
The other options are not the first actions to protect the chain of evidence when a desktop computer is involved, but rather actions that should be done after or along with making a copy of the hard drive. Taking the computer to a forensic lab is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is transported and stored in a secure and controlled environment, and that the forensic analysis is conducted by qualified and authorized personnel. Starting documenting is an action that should be done along with making a copy of the hard drive, because it can ensure that the chain of evidence is maintained and recorded throughout the forensic process, and that the evidence can be traced and verified. Turning off the computer is an action that should be done after making a copy of the hard drive, because it can ensure that the computer is powered down and disconnected from any network or device, and that the computer is protected from any further damage or tampering.
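The hash verification described above can be illustrated with a short Python sketch; the file name evidence.dd is a placeholder for the write-blocked image, and SHA-256 is one common choice of algorithm.

import hashlib

def image_hash(path: str, algo: str = "sha256", chunk: int = 1 << 20) -> str:
    """Hash a disk image in chunks so arbitrarily large images fit in memory.
    Matching hashes taken before and after analysis demonstrate that the
    copy was not altered while in custody."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# "evidence.dd" is a placeholder for the write-blocked image file.
print(image_hash("evidence.dd"))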
What would be the MOST cost-effective solution for a Disaster Recovery (DR) site, given that the organization’s systems cannot be unavailable for more than 24 hours?
Warm site
Hot site
Mirror site
Cold site
A warm site is the most cost-effective solution for a Disaster Recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours. A DR site is a backup facility that can be used to restore the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster. DR sites differ in their level of readiness and functionality, and therefore in cost and recovery speed: the main types are the mirror site, the hot site, the warm site, and the cold site. A warm site is partially equipped with hardware, software, and network connectivity and holds recent, though not necessarily real-time, copies of data, so it can typically be brought into operation within hours to about a day.
A warm site is the most cost effective solution for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it can provide a balance between the recovery time and the recovery cost. A warm site can enable the organization to resume its critical functions and operations within a reasonable time frame, without spending too much on the DR site maintenance and operation. A warm site can also provide some flexibility and scalability for the organization to adjust its recovery strategies and resources according to its needs and priorities.
The other options are not the most cost effective solutions for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, but rather solutions that are either too costly or too slow for the organization’s recovery objectives and budget. A hot site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to invest a lot of money on the DR site equipment, software, and services, and to pay for the ongoing operational and maintenance costs. A hot site may be more suitable for the organization’s systems that cannot be unavailable for more than a few hours or minutes, or that have very high availability and performance requirements. A mirror site is a solution that is too costly for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to duplicate its entire primary site, with the same hardware, software, data, and applications, and to keep them online and synchronized at all times. A mirror site may be more suitable for the organization’s systems that cannot afford any downtime or data loss, or that have very strict compliance and regulatory requirements. A cold site is a solution that is too slow for a disaster recovery (DR) site given that the organization’s systems cannot be unavailable for more than 24 hours, because it requires the organization to spend a lot of time and effort on the DR site installation, configuration, and restoration, and to rely on other sources of backup data and applications. A cold site may be more suitable for the organization’s systems that can be unavailable for more than a few days or weeks, or that have very low criticality and priority.
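The trade-off can be sketched in a few lines of Python; the recovery-time figures and cost ranks below are rough illustrative assumptions, not authoritative values, since actual numbers vary widely by organization and vendor.

# Rough, illustrative figures only.
SITES = [
    # (name, typical recovery time in hours, relative cost rank: 1 = cheapest)
    ("cold",   96, 1),
    ("warm",   12, 2),
    ("hot",     1, 3),
    ("mirror",  0, 4),
]

def cheapest_site(rto_hours: int) -> str:
    """Pick the lowest-cost site type whose typical recovery time meets the RTO."""
    candidates = [s for s in SITES if s[1] <= rto_hours]
    return min(candidates, key=lambda s: s[2])[0]

print(cheapest_site(24))  # 'warm' for a 24-hour maximum outage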
Which of the following is the FIRST step in the incident response process?
Determine the cause of the incident
Disconnect the system involved from the network
Isolate and contain the system involved
Investigate all symptoms to confirm the incident
Investigating all symptoms to confirm the incident is the first step in the incident response process. An incident is an event that violates or threatens the security, availability, integrity, or confidentiality of IT systems or data. Incident response is the process of detecting, analyzing, containing, eradicating, recovering from, and learning from an incident, using various methods and tools.
Investigating all symptoms to confirm the incident comes first because it verifies and validates that an incident has actually occurred, and how serious and urgent it is, before the response is initiated and escalated. A symptom is a sign or indication that an incident may have occurred or is occurring, such as an alert, a log entry, or a report. Investigating all symptoms involves collecting and analyzing the relevant data and information from various sources, such as the IT systems, the network, the users, or external parties; this also prevents the team from expending response resources on false positives.
The other options are not the first steps in the incident response process, but rather steps that should be done after or along with investigating all symptoms to confirm the incident. Determining the cause of the incident comes after confirmation, because it identifies and analyzes the root cause and source of the incident so that the response can be directed and focused; it involves examining and testing the affected IT systems and data and tracing the origin and path of the incident using techniques and tools such as forensics, malware analysis, or reverse engineering.
Disconnecting the system involved from the network is a step that should be done along with, not before, confirming the incident; it isolates and protects the system from external or internal influences or interferences, so that the response is conducted in a safe and controlled environment.
Isolating and containing the system involved comes after confirmation, because it confines and restricts the incident by applying and enforcing security measures and controls, such as firewall rules, access policies, or encryption keys, to limit or stop the incident’s activity and impact on the IT systems and data.
What is the PRIMARY reason for implementing change management?
Certify and approve releases to the environment
Provide version rollbacks for system changes
Ensure that all applications are approved
Ensure accountability for changes to the environment
Ensuring accountability for changes to the environment is the primary reason for implementing change management. Change management is a process that ensures that any changes to the system or network environment, such as the hardware, software, configuration, or documentation, are planned, approved, implemented, and documented in a controlled and consistent manner.
Ensuring accountability for changes to the environment is the primary reason for implementing change management, because it can ensure that the changes are authorized, justified, and traceable, and that the parties involved in the changes are responsible and accountable for their actions and results. Accountability can also help to deter or detect any unauthorized or malicious changes that might compromise the system or network environment.
The other options are not the primary reasons for implementing change management, but rather secondary or specific reasons for different aspects or phases of change management. Certifying and approving releases to the environment is a reason for implementing change management, but it is more relevant for the approval phase of change management, which is the phase that involves reviewing and validating the changes and their impacts, and granting or denying the permission to proceed with the changes. Providing version rollbacks for system changes is a reason for implementing change management, but it is more relevant for the implementation phase of change management, which is the phase that involves executing and monitoring the changes and their effects, and providing the backup and recovery options for the changes. Ensuring that all applications are approved is a reason for implementing change management, but it is more relevant for the application changes, which are the changes that affect the software components or services that provide the functionality or logic of the system or network environment.
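To make the accountability idea concrete, here is a minimal sketch of a change record that names who requested, approved, and implemented each change, along with a rollback plan; the field names and values are illustrative, not drawn from any specific change-management product.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChangeRecord:
    """Minimal change-management record; capturing who requested, approved,
    and implemented a change is what makes it traceable and its parties
    accountable."""
    change_id: str
    description: str
    requested_by: str
    approved_by: str
    implemented_by: str
    rollback_plan: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

rec = ChangeRecord("CHG-0042", "Open TCP/443 on web tier firewall",
                   requested_by="app-team", approved_by="cab",
                   implemented_by="netops", rollback_plan="Remove rule CHG-0042")
print(rec.change_id, rec.approved_by)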
Which of the following is a PRIMARY advantage of using a third-party identity service?
Consolidation of multiple providers
Directory synchronization
Web based logon
Automated account management
Consolidation of multiple providers is the primary advantage of using a third-party identity service. A third-party identity service provides identity and access management (IAM) functions, such as authentication, authorization, and federation, for multiple applications or systems using a single identity provider (IdP).
Consolidation of multiple providers is the primary advantage of using a third-party identity service, because it can simplify and streamline the IAM architecture and processes, by reducing the number of IdPs and IAM systems that are involved in managing the identities and access for multiple applications or systems. Consolidation of multiple providers can also help to avoid the issues or risks that might arise from having multiple IdPs and IAM systems, such as the inconsistency, redundancy, or conflict of the IAM policies and controls, or the inefficiency, vulnerability, or disruption of the IAM functions.
The other options are not the primary advantages of using a third-party identity service, but rather secondary or specific advantages for different aspects or scenarios of using a third-party identity service. Directory synchronization is an advantage of using a third-party identity service, but it is more relevant for the scenario where the organization has an existing directory service, such as LDAP or Active Directory, that stores and manages the user accounts and attributes, and wants to synchronize them with the third-party identity service, to enable the SSO or federation for the users. Web based logon is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service uses a web-based protocol, such as SAML or OAuth, to facilitate the SSO or federation for the users, by redirecting them to a web-based logon page, where they can enter their credentials or consent. Automated account management is an advantage of using a third-party identity service, but it is more relevant for the aspect where the third-party identity service provides the IAM functions, such as provisioning, deprovisioning, or updating, for the user accounts and access rights, using an automated or self-service mechanism, such as SCIM or JIT.
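A minimal sketch of the relying-party side of such a consolidation, using the PyJWT library to validate a token issued by the single IdP; the key, audience, and claim names are placeholders, and a real deployment would also fetch and rotate the IdP’s published signing keys.

import jwt  # PyJWT

def validate_idp_token(token: str, idp_public_key: str, audience: str) -> dict:
    """Validate a signed token issued by the (single) third-party IdP.
    Each application trusts one issuer instead of running its own IAM stack."""
    return jwt.decode(
        token,
        idp_public_key,            # the IdP's published signing key
        algorithms=["RS256"],      # reject unsigned / none-algorithm tokens
        audience=audience,         # this application's identifier at the IdP
    )

# Usage sketch; key, token, and audience values are placeholders.
# claims = validate_idp_token(raw_token, idp_key_pem, "https://app.example.com")
# user_id = claims["sub"]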
An organization is found lacking the ability to properly establish performance indicators for its Web hosting solution during an audit. What would be the MOST probable cause?
Absence of a Business Intelligence (BI) solution
Inadequate cost modeling
Improper deployment of the Service-Oriented Architecture (SOA)
Insufficient Service Level Agreement (SLA)
Insufficient Service Level Agreement (SLA) would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit. A Web hosting solution is a service that provides the infrastructure, resources, and tools for hosting and maintaining a website or a web application on the Internet.
A Service Level Agreement (SLA) is a contract or an agreement that defines the expectations, responsibilities, and obligations of the parties involved in a service, such as the service provider and the service consumer; its components typically include the service level indicators and objectives to be met, the reporting that will demonstrate them, and the remedies or penalties that apply when they are missed.
Insufficient SLA would be the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it could mean that the SLA does not include or specify the appropriate service level indicators or objectives for the Web hosting solution, or that the SLA does not provide or enforce the adequate service level reporting or penalties for the Web hosting solution. This could affect the ability of the organization to measure and assess the Web hosting solution quality, performance, and availability, and to identify and address any issues or risks in the Web hosting solution.
The other options are not the most probable causes for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, but rather factors that affect the Web hosting solution in other ways. Absence of a Business Intelligence (BI) solution affects the organization’s ability to analyze and use the data produced by the Web hosting solution, such as web traffic, behavior, or conversion figures. A BI solution collects, integrates, processes, and presents data from sources such as the Web hosting solution to support decision making and planning; its absence, however, does not prevent the organization from defining performance indicators, only from analyzing and exploiting them. Inadequate cost modeling affects the organization’s ability to estimate and optimize the cost and value of the Web hosting solution, such as hosting fees, maintenance costs, or return on investment; a cost model is a tool or method for calculating and comparing the cost and value of alternative solutions, and it likewise does not affect the definition of performance indicators. Improper deployment of the Service-Oriented Architecture (SOA) affects the design and development of the Web hosting solution, such as its web services, components, and interfaces; a SOA is a software architecture that modularizes, standardizes, and integrates the software components or services that provide the functionality of the solution.
However, improper deployment of the SOA is not the most probable cause for an organization to lack the ability to properly establish performance indicators for its Web hosting solution during an audit, because it does not affect the definition or specification of the performance indicators for the Web hosting solution, but rather the design or development of the Web hosting solution.
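As a worked example of the kind of SLA performance indicator discussed above, the sketch below computes measured availability over a 30-day window and compares it to a hypothetical 99.9% target; all figures are illustrative.

def availability_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Availability as a percentage of the measurement window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# A 30-day month is 43,200 minutes; a 99.9% SLA allows ~43 minutes of downtime.
measured = availability_pct(43_200, 50)
target = 99.9
print(f"measured {measured:.3f}% vs target {target}% ->",
      "met" if measured >= target else "missed")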
With what frequency should monitoring of a control occur when implementing Information Security Continuous Monitoring (ISCM) solutions?
Continuously without exception for all security controls
Before and after each change of the control
At a rate concurrent with the volatility of the security control
Only during system implementation and decommissioning
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing Information Security Continuous Monitoring (ISCM) solutions. ISCM is a process that maintains ongoing awareness of the security status, events, and activities of a system or network by collecting, analyzing, and reporting the security data and information, using various methods and tools.
A security control is a measure or mechanism that is implemented to protect the system or network from the security threats or risks, by preventing, detecting, or correcting the security incidents or impacts. A security control can have various types, such as administrative, technical, or physical, and various attributes, such as preventive, detective, or corrective. A security control can also have different levels of volatility, which is the degree or frequency of change or variation of the security control, due to various factors, such as the security requirements, the threat landscape, or the system or network environment.
Monitoring of a control should occur at a rate concurrent with the volatility of the security control when implementing ISCM solutions, because it can ensure that the ISCM solutions can capture and reflect the current and accurate state and performance of the security control, and can identify and report any issues or risks that might affect the security control. Monitoring of a control at a rate concurrent with the volatility of the security control can also help to optimize the ISCM resources and efforts, by allocating them according to the priority and urgency of the security control.
The other options are not the correct frequencies for monitoring of a control when implementing ISCM solutions, but rather incorrect or unrealistic frequencies that might cause problems or inefficiencies for the ISCM solutions. Continuously without exception for all security controls is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not feasible or necessary to monitor all security controls at the same and constant rate, regardless of their volatility or importance. Continuously monitoring all security controls without exception might cause the ISCM solutions to consume excessive or wasteful resources and efforts, and might overwhelm or overload the ISCM solutions with too much or irrelevant data and information. Before and after each change of the control is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not sufficient or timely to monitor the security control only when there is a change of the security control, and not during the normal operation of the security control. Monitoring the security control only before and after each change might cause the ISCM solutions to miss or ignore the security status, events, and activities that occur between the changes of the security control, and might delay or hinder the ISCM solutions from detecting and responding to the security issues or incidents that affect the security control. Only during system implementation and decommissioning is an incorrect frequency for monitoring of a control when implementing ISCM solutions, because it is not appropriate or effective to monitor the security control only during the initial or final stages of the system or network lifecycle, and not during the operational or maintenance stages of the system or network lifecycle. Monitoring the security control only during system implementation and decommissioning might cause the ISCM solutions to neglect or overlook the security status, events, and activities that occur during the regular or ongoing operation of the system or network, and might prevent or limit the ISCM solutions from improving and optimizing the security control.
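A minimal sketch of volatility-driven scheduling follows; the volatility categories and intervals are illustrative assumptions, since real ISCM frequencies come from the organization’s risk assessment.

# Illustrative mapping only; real ISCM frequencies are risk-driven.
MONITORING_INTERVAL_HOURS = {
    "high":   1,       # volatile controls (e.g., firewall rule sets) checked often
    "medium": 24,      # moderately volatile controls checked daily
    "low":    24 * 7,  # stable controls (e.g., physical locks) checked weekly
}

def next_check_interval(volatility: str) -> int:
    """Hours to wait before the next check of a control at this volatility."""
    return MONITORING_INTERVAL_HOURS[volatility]

print(next_check_interval("high"))  # 1 hour between checks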
Recovery strategies of a Disaster Recovery Plan (DRP) MUST be aligned with which of the following?
Hardware and software compatibility issues
Applications’ criticality and downtime tolerance
Budget constraints and requirements
Cost/benefit analysis and business objectives
Recovery strategies of a Disaster Recovery Plan (DRP) must be aligned with the cost/benefit analysis and business objectives. A DRP is the part of a BCP/DRP that focuses on restoring the normal operation of the organization’s IT systems and infrastructure after a disruption or disaster.
Recovery strategies of a DRP must be aligned with the cost/benefit analysis and business objectives, because that alignment makes the DRP feasible and suitable and allows it to achieve the desired outcomes and objectives in a cost-effective and efficient manner. A cost/benefit analysis is a technique that compares the costs and benefits of different recovery strategies and determines the one that provides the best value for money. A business objective is a goal or target that the organization wants to achieve through its IT systems and infrastructure, such as increasing productivity, profitability, or customer satisfaction. A recovery strategy aligned with both ensures that recovery spending is proportionate to the value of what is being recovered and that recovery priorities track what the business actually needs.
The other options are not the factors that the recovery strategies of a DRP must be aligned with, but rather factors that should be considered or addressed when developing or implementing the recovery strategies of a DRP. Hardware and software compatibility issues are factors that should be considered when developing the recovery strategies of a DRP, because they can affect the functionality and interoperability of the IT systems and infrastructure, and may require additional resources or adjustments to resolve them. Applications’ criticality and downtime tolerance are factors that should be addressed when implementing the recovery strategies of a DRP, because they can determine the priority and urgency of the recovery for different applications, and may require different levels of recovery objectives and resources. Budget constraints and requirements are factors that should be considered when developing the recovery strategies of a DRP, because they can limit the availability and affordability of the IT resources and funds for the recovery, and may require trade-offs or compromises to balance them.
What is the MOST important step during forensic analysis when trying to learn the purpose of an unknown application?
Disable all unnecessary services
Ensure chain of custody
Prepare another backup of the system
Isolate the system from the network
Isolating the system from the network is the most important step during forensic analysis when trying to learn the purpose of an unknown application. An unknown application is an application that is not recognized or authorized by the system or network administrator and that may have been installed or executed without the user’s knowledge or consent; its purpose may be benign, but it may equally be malicious, for example exfiltrating data, opening a backdoor, or awaiting commands from a remote controller.
Forensic analysis is a process that examines and investigates the system or network for evidence or traces of the unknown application, such as its origin, nature, behavior, and impact.
Isolating the system from the network is the most important step because it ensures that the system is protected from any external or internal influences or interferences and that the forensic analysis is conducted in a safe and controlled environment. It also prevents the unknown application from communicating with a remote party, spreading to other systems, or receiving instructions that could alter or destroy the evidence.
The other options are not the most important steps during forensic analysis when trying to learn the purpose of an unknown application, but rather steps that should be done after or along with isolating the system from the network. Disabling all unnecessary services is a step that should be done after isolating the system from the network, because it can ensure that the system is optimized and simplified for the forensic analysis, and that the system resources and functions are not consumed or affected by any irrelevant or redundant services. Ensuring chain of custody is a step that should be done along with isolating the system from the network, because it can ensure that the integrity and authenticity of the evidence are maintained and documented throughout the forensic process, and that the evidence can be traced and verified. Preparing another backup of the system is a step that should be done after isolating the system from the network, because it can ensure that the system data and configuration are preserved and replicated for the forensic analysis, and that the system can be restored and recovered in case of any damage or loss.
The MAIN use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data
through a firewall at the Session layer
through a firewall at the Transport layer
in the Point-to-Point Protocol (PPP)
in the Payload Compression Protocol (PCP)
The main use of Layer 2 Tunneling Protocol (L2TP) is to tunnel data in the Point-to-Point Protocol (PPP). L2TP is a tunneling protocol that operates at the data link layer (Layer 2) of the OSI model, and is used to support virtual private networks (VPNs) or as part of the delivery of services by ISPs. L2TP does not provide encryption or authentication by itself, but it can be combined with IPsec to provide security and confidentiality for the tunneled data. L2TP is commonly used to tunnel PPP sessions over an IP network, such as the Internet. PPP is a protocol that establishes a direct connection between two nodes, and provides authentication, encryption, and compression for the data transmitted over the connection. PPP is often used to connect a remote client to a corporate network, or a user to an ISP. By using L2TP to encapsulate PPP packets, the connection can be extended over a public or shared network, creating a VPN. This way, the user can access the network resources and services securely and transparently, as if they were directly connected to the network. The other options are not the main use of L2TP, as they involve different protocols or layers. L2TP does not tunnel data through a firewall, but rather over an IP network. L2TP does not operate at the session layer or the transport layer, but at the data link layer. L2TP does not use the Payload Compression Protocol (PCP), but rather the Point-to-Point Protocol (PPP). References: Layer 2 Tunneling Protocol - Wikipedia; What is the Layer 2 Tunneling Protocol (L2TP)? - NordVPN; Understanding VPN protocols: OpenVPN, L2TP, WireGuard & more.
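The encapsulation order can be sketched with a toy example; the bracketed tags below stand in for the real binary headers defined in RFC 2661 (L2TP) and RFC 1661 (PPP), and only the layering, PPP inside L2TP inside UDP/IP, is the point.

# Toy illustration of the encapsulation order only; real L2TP, PPP, and
# IP/UDP headers are binary structures defined in the RFCs above.
def encapsulate(payload: bytes) -> bytes:
    ppp = b"[PPP]" + payload            # PPP frames the user's data
    l2tp = b"[L2TP]" + ppp              # L2TP tunnels the PPP frame
    return b"[IP][UDP:1701]" + l2tp     # carried over UDP port 1701 on IP

print(encapsulate(b"user data"))
# b'[IP][UDP:1701][L2TP][PPP]user data'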
What can happen when an Intrusion Detection System (IDS) is installed inside a firewall-protected internal network?
The IDS can detect failed administrator logon attempts from servers.
The IDS can increase the number of packets to analyze.
The firewall can increase the number of packets to analyze.
The firewall can detect failed administrator login attempts from servers.
An Intrusion Detection System (IDS) is a monitoring system that detects suspicious activities and generates alerts when they are detected. An IDS can be installed inside a firewall-protected internal network to monitor the traffic within the network and identify any potential threats or anomalies. One of the scenarios that an IDS can detect is failed administrator logon attempts from servers. This could indicate that an attacker has compromised a server and is trying to escalate privileges or access sensitive data. An IDS can alert the security team of such attempts and help them to investigate and respond to the incident. The other options are not valid consequences of installing an IDS inside a firewall-protected internal network. An IDS does not increase the number of packets to analyze, as it only passively observes the traffic that is already flowing in the network. An IDS does not affect the firewall’s functionality or performance, as it operates independently from the firewall. An IDS does not enable the firewall to detect failed administrator login attempts from servers, as the firewall is not designed to inspect the content or the behavior of the traffic, but only to filter it based on predefined rules. References: Intrusion Detection System (IDS) - GeeksforGeeks; Exploring Firewalls & Intrusion Detection Systems in Network Security ….
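A toy version of the detection logic described above: scan server authentication logs for failed administrator logons and count them per source. The log format and account names are simplified assumptions, not a real IDS rule language.

import re
from collections import Counter

FAILED = re.compile(r"FAILED LOGIN user=(?P<user>\S+) from=(?P<src>\S+)")

def count_admin_failures(lines, admin_users=frozenset({"root", "administrator"})):
    """Count failed administrator logons per source address."""
    hits = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m and m.group("user") in admin_users:
            hits[m.group("src")] += 1
    return hits

sample = [
    "2024-01-01T10:00:01 FAILED LOGIN user=root from=10.0.0.7",
    "2024-01-01T10:00:03 FAILED LOGIN user=root from=10.0.0.7",
    "2024-01-01T10:00:05 FAILED LOGIN user=alice from=10.0.0.8",
]
print(count_admin_failures(sample))  # Counter({'10.0.0.7': 2})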
As part of an application penetration testing process, session hijacking can BEST be achieved by which of the following?
Known-plaintext attack
Denial of Service (DoS)
Cookie manipulation
Structured Query Language (SQL) injection
Cookie manipulation is a technique that allows an attacker to intercept, modify, or forge a cookie, which is a piece of data that is used to maintain the state of a web session. By manipulating the cookie, the attacker can hijack the session and gain unauthorized access to the web application. Known-plaintext attack, DoS, and SQL injection are not directly related to session hijacking, although they can be used for other purposes, such as breaking encryption, disrupting availability, or executing malicious commands. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 729; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 522.
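A minimal sketch of cookie manipulation from a tester’s point of view, using the Python requests library; the URL, cookie name, and forged value are placeholders, and such testing is legitimate only against systems you are authorized to assess.

import requests

# Pentest illustration only; TARGET and the cookie name are placeholders.
TARGET = "https://app.example.com/account"

# A predictable or stolen session identifier is replayed in the Cookie
# header; if the server accepts it, the session has been hijacked.
forged = {"SESSIONID": "AAAA-1234-GUESSED"}
resp = requests.get(TARGET, cookies=forged, timeout=10)
print(resp.status_code, "authenticated" if "Logout" in resp.text else "rejected")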
An organization’s security policy delegates to the data owner the ability to assign which user roles have access to a particular resource. What type of authorization mechanism is being used?
Discretionary Access Control (DAC)
Role Based Access Control (RBAC)
Media Access Control (MAC)
Mandatory Access Control (MAC)
Discretionary Access Control (DAC) is a type of authorization mechanism that grants or denies access to resources based on the identity of the user and the permissions assigned by the owner of the resource. The owner of the resource has the discretion to decide who can access the resource and what level of access they can have. For example, the owner of a file can assign read, write, or execute permissions to different users or groups. DAC is flexible and easy to implement, but it also poses security risks, such as unauthorized access, data leakage, or privilege escalation, if the owner is not careful or knowledgeable about the security implications of their decisions.
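A minimal sketch of the DAC idea: an access control list that only the resource’s owner may modify. The resource, users, and rights below are illustrative.

# Minimal DAC sketch: the resource owner edits the ACL at their discretion.
acl = {
    "report.docx": {"owner": "alice", "read": {"alice", "bob"}, "write": {"alice"}},
}

def grant(resource: str, actor: str, user: str, right: str):
    """Only the owner may change who holds a given right."""
    entry = acl[resource]
    if actor != entry["owner"]:
        raise PermissionError("only the owner may change permissions")
    entry[right].add(user)

def can(resource: str, user: str, right: str) -> bool:
    return user in acl[resource][right]

grant("report.docx", "alice", "carol", "read")  # owner's discretion
print(can("report.docx", "carol", "read"))       # True
print(can("report.docx", "carol", "write"))      # False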
Which of the following could be considered the MOST significant security challenge when adopting DevOps practices compared to a more traditional control framework?
Achieving Service Level Agreements (SLA) on how quickly patches will be released when a security flaw is found.
Maintaining segregation of duties.
Standardized configurations for logging, alerting, and security metrics.
Availability of security teams at the end of design process to perform last-minute manual audits and reviews.
The most significant security challenge when adopting DevOps practices compared to a more traditional control framework is maintaining segregation of duties. DevOps integrates and automates the development and operations of a system through practices such as continuous integration, continuous delivery, continuous testing, continuous monitoring, and continuous feedback, in order to improve the quality and speed of delivery and deployment. A traditional control framework, by contrast, establishes and enforces security and governance through controls such as risk assessment, change management, configuration management, access control, and audit trails, in order to protect the confidentiality, integrity, and availability of the system. Segregation of duties is the principle that different roles or functions are assigned to different parties, so that no single party can perform all the steps of a process, such as development, testing, deployment, and maintenance. The principle improves the accuracy and reliability of the process, helps prevent or detect fraud and errors, and supports audit and compliance activities. In a DevOps environment, where the same team, and often the same automated pipeline, develops, tests, and deploys code, maintaining that separation is difficult and costly, which makes it the most significant security challenge relative to a traditional control framework.
Who has the PRIMARY responsibility to ensure that security objectives are aligned with organization goals?
Senior management
Information security department
Audit committee
All users
Senior management has the primary responsibility to ensure that security objectives are aligned with organizational goals. Senior management is the highest level of authority and decision-making in an organization, and it sets the vision, mission, strategy, and objectives for the organization. Senior management is also responsible for establishing the security governance framework, which defines the roles, responsibilities, policies, standards, and procedures for security management. Senior management should ensure that the security function supports and enables the organizational goals, and that the security objectives are consistent, measurable, and achievable. Senior management should also provide adequate resources, guidance, and oversight for the security function, and communicate the security expectations and requirements to all stakeholders. The information security department, the audit committee, and all users have some roles and responsibilities in ensuring that security objectives are aligned with organizational goals, but they are not the primary ones. The information security department is responsible for implementing, maintaining, and monitoring the security controls and processes, and reporting on the security performance and incidents. The audit committee is responsible for reviewing and verifying the effectiveness and compliance of the security controls and processes, and providing recommendations for improvement. All users are responsible for following the security policies and procedures, and reporting any security issues or violations.
Which of the following is the GREATEST benefit of implementing a Role Based Access Control (RBAC)
system?
Integration using Lightweight Directory Access Protocol (LDAP)
Form-based user registration process
Integration with the organization's Human Resources (HR) system
A considerably simpler provisioning process
The greatest benefit of implementing a Role Based Access Control (RBAC) system is a considerably simpler provisioning process. Provisioning is the process of creating, modifying, or deleting user accounts and access rights on a system or network, and it can be complex and tedious in large or dynamic organizations with many users, systems, and resources. RBAC assigns permissions to users based on their roles or functions within the organization, rather than on their individual identities or attributes. This simplifies provisioning by reducing administrative overhead and ensuring the consistency and accuracy of user accounts and access rights; it also enforces the principle of least privilege, facilitates separation of duties, and supports audit and compliance activities. Integration using Lightweight Directory Access Protocol (LDAP), a form-based user registration process, and integration with the organization's Human Resources (HR) system are related and useful capabilities: LDAP centralizes and standardizes accounts and supports authentication and authorization across systems; form-based registration simplifies and automates account creation and supports self-service; and HR integration streamlines provisioning and keeps accounts synchronized with employee records, job titles, and organizational units. However, none of these is a benefit of RBAC itself, because none is a feature or requirement of RBAC, and each can be used equally well with other access control models such as discretionary access control (DAC) or mandatory access control (MAC).
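The following Python sketch illustrates why provisioning becomes simpler under RBAC: permissions are bound to roles once, so provisioning a user is a single role assignment rather than a series of per-resource grants. The role and permission names are made up for illustration.

```python
# Minimal RBAC sketch: provisioning a user is one role assignment.
ROLE_PERMISSIONS = {
    "accounts_clerk": {"invoices:read", "invoices:create"},
    "auditor": {"invoices:read", "reports:read"},
}

user_roles: dict[str, set[str]] = {}

def provision(user: str, role: str) -> None:
    """Provisioning is one step: bind the user to a role."""
    user_roles.setdefault(user, set()).add(role)

def is_authorized(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS.get(r, set())
               for r in user_roles.get(user, ()))

provision("alice", "auditor")
print(is_authorized("alice", "reports:read"))     # True
print(is_authorized("alice", "invoices:create"))  # False
```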
From a security perspective, which of the following assumptions MUST be made about input to an
application?
It is tested
It is logged
It is verified
It is untrusted
From a security perspective, the assumption that must be made about input to an application is that it is untrusted. Untrusted input is any data provided by an external or unknown source, such as a user, a client, a network, or a file, that the application has not validated or verified before processing it. Untrusted input poses a serious security risk because it can carry malicious content or commands, such as malware or SQL injection payloads, that can compromise the confidentiality, integrity, or availability of the application and of the data and systems connected to it. Input must therefore be treated with caution and suspicion, and subjected to security controls before it is processed or used.

The main controls are these. Input validation checks that the input meets the expected format, type, length, range, or value, and that it contains no invalid or illegal characters, symbols, or commands. Input sanitization removes or replaces invalid or illegal characters, symbols, or commands to prevent or mitigate attacks. Input filtering allows or blocks input based on a predefined or configurable set of rules, such as a whitelist or a blacklist. Input encoding transforms the input into a different or standard representation, such as HTML, URL, or Base64 encoding, so that the application or system does not interpret or execute it. A minimal sketch of validation and output encoding follows this explanation.

"It is tested", "it is logged", and "it is verified" are possible properties or outcomes of input, not assumptions that must be made from a security perspective. Input may have been tested through unit, integration, or penetration testing; logged with metadata such as source, destination, timestamp, and status to support audit and compliance activities; or verified through digital signatures, certificates, or tokens to ensure its integrity and authenticity. None of these, however, is a precautionary assumption that protects the application from untrusted input, and none can be assumed true for all input to an application.
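The Python sketch below shows two of these controls, whitelist validation on input and encoding on output; the display-name format is an assumption made for the example.

```python
# Minimal sketch: treat input as untrusted, validate it, and encode on output.
import html
import re

NAME_PATTERN = re.compile(r"[A-Za-z][A-Za-z '\-]{0,49}")  # whitelist format

def accept_display_name(raw: str) -> str:
    if not NAME_PATTERN.fullmatch(raw):
        raise ValueError("invalid display name")   # reject, do not repair
    return raw

def render_greeting(name: str) -> str:
    # Output encoding: neutralize characters HTML would interpret.
    return f"<p>Hello, {html.escape(name)}</p>"

print(render_greeting(accept_display_name("Ada Lovelace")))
# accept_display_name("<script>alert(1)</script>")  # would raise ValueError
```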
Even though a particular digital watermark is difficult to detect, which of the following represents a way it might still be inadvertently removed?
Truncating parts of the data
Applying Access Control Lists (ACL) to the data
Appending non-watermarked data to watermarked data
Storing the data in a database
A digital watermark is a hidden signal embedded in a data file that can be used to identify the owner, source, or authenticity of the data. A watermark is difficult to detect and remove without degrading the quality of the data. However, one way that a watermark might still be inadvertently removed is by truncating parts of the data, such as cropping an image or cutting a video. This might affect the location or size of the watermark and make it unreadable or invalid. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, page 507; CISSP For Dummies, 7th Edition, page 344.
Which of the following is BEST achieved through the use of eXtensible Access Markup Language (XACML)?
Minimize malicious attacks from third parties
Manage resource privileges
Share digital identities in hybrid cloud
Define a standard protocol
XACML is an XML-based language for specifying access control policies. It defines a declarative, fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies. XACML is best suited for managing resource privileges, as it allows for flexible and dynamic authorization decisions based on various attributes of the subject, resource, action, and environment. XACML is not designed to minimize malicious attacks, share digital identities, or define a standard protocol, although it can interoperate with other standards such as SAML and OAuth. References: XACML - Wikipedia; OASIS eXtensible Access Control Markup Language (XACML) TC; A beginner’s guide to XACML.
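Actual XACML policies are XML documents evaluated by a Policy Decision Point, but the attribute-based decision model is easy to see in a few lines of Python; the rule, attribute names, and values below are invented for illustration and are not XACML syntax.

```python
# Conceptual sketch of an attribute-based (XACML-style) authorization decision.
def evaluate(request: dict) -> str:
    # Rule: doctors may read records of their own department during business hours.
    if (request["subject.role"] == "doctor"
            and request["action"] == "read"
            and request["resource.department"] == request["subject.department"]
            and 8 <= request["environment.hour"] < 18):
        return "Permit"
    return "Deny"

print(evaluate({
    "subject.role": "doctor", "subject.department": "cardiology",
    "action": "read", "resource.department": "cardiology",
    "environment.hour": 10,
}))  # Permit
```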
What is the FIRST step in establishing an information security program?
Establish an information security policy.
Identify factors affecting information security.
Establish baseline security controls.
Identify critical security infrastructure.
The first step in establishing an information security program is to establish an information security policy. An information security policy is a document that defines the objectives, scope, principles, and responsibilities of the information security program. An information security policy provides the foundation and direction for the information security program, as well as the basis for the development and implementation of the information security standards, procedures, and guidelines. An information security policy should be approved and supported by the senior management, and communicated and enforced across the organization. Identifying factors affecting information security, establishing baseline security controls, and identifying critical security infrastructure are not the first steps in establishing an information security program, but they may be part of the subsequent steps, such as the risk assessment, risk mitigation, or risk monitoring. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 22; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 14.
Attack trees are MOST useful for which of the following?
Determining system security scopes
Generating attack libraries
Enumerating threats
Evaluating Denial of Service (DoS) attacks
Attack trees are most useful for enumerating threats. Attack trees are graphical models that represent the possible ways that an attacker can exploit a system or achieve a goal. Attack trees consist of nodes that represent the attacker’s actions or conditions, and branches that represent the logical relationships between the nodes. Attack trees can help to enumerate the threats that the system faces, as well as to analyze the likelihood, impact, and countermeasures of each threat. Attack trees are not useful for determining system security scopes, generating attack libraries, or evaluating DoS attacks, although they may be used as inputs or outputs for these tasks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Security Operations, page 499; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 552.
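As a sketch of the idea, the Python fragment below models a small attack tree with OR and AND nodes and enumerates every distinct path to the attacker's goal; the goal and leaf actions are invented for illustration.

```python
# Minimal attack-tree sketch: OR nodes need any child, AND nodes need all.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "OR"                       # "OR" or "AND"
    children: list["Node"] = field(default_factory=list)

def enumerate_paths(node: Node) -> list[list[str]]:
    """Return every distinct combination of leaf actions reaching the goal."""
    if not node.children:
        return [[node.name]]
    child_paths = [enumerate_paths(c) for c in node.children]
    if node.gate == "OR":
        return [p for paths in child_paths for p in paths]
    combined = [[]]                        # AND: one path from each child
    for paths in child_paths:
        combined = [acc + p for acc in combined for p in paths]
    return combined

goal = Node("read customer database", "OR", [
    Node("steal DBA credentials", "OR", [Node("phish DBA"), Node("keylog DBA")]),
    Node("exploit application", "AND", [Node("find SQL injection"), Node("bypass WAF")]),
])

for path in enumerate_paths(goal):
    print(" AND ".join(path))   # each line is one enumerated threat
```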
What is the foundation of cryptographic functions?
Encryption
Cipher
Hash
Entropy
The foundation of cryptographic functions is entropy. Entropy is a measure of the randomness or unpredictability of a system or a process. Entropy is essential for cryptographic functions, such as encryption, decryption, hashing, or key generation, as it provides the security and the strength of the cryptographic algorithms and keys. Entropy can be derived from various sources, such as physical phenomena, user input, or software applications. Entropy can also be quantified in terms of bits, where higher entropy means higher randomness and higher security. Encryption, cipher, and hash are not the foundation of cryptographic functions, although they are related or important concepts or techniques. Encryption is the process of transforming plaintext or cleartext into ciphertext or cryptogram, using a cryptographic algorithm and a key, to protect the confidentiality and the integrity of the data. Encryption can be symmetric or asymmetric, depending on whether the same or different keys are used for encryption and decryption. Cipher is another term for a cryptographic algorithm, which is a mathematical function that performs encryption or decryption. Cipher can be classified into various types, such as substitution, transposition, stream, or block, depending on how they operate on the data. Hash is the process of generating a fixed-length and unique output, called a hash or a digest, from a variable-length and arbitrary input, using a one-way function, to verify the integrity and the authenticity of the data. Hash can be used for various purposes, such as digital signatures, message authentication codes, or password storage.
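A short Python sketch makes the point: cryptographic material must be drawn from a high-entropy source such as the operating system's CSPRNG, not from a predictable pseudo-random generator.

```python
# Key material from a high-entropy source (the OS CSPRNG via `secrets`).
import secrets

key = secrets.token_bytes(32)   # 256 bits of entropy for a symmetric key
nonce = secrets.token_hex(12)   # unpredictable per-message value
print(len(key), nonce)

# By contrast, random.random() has a small, guessable internal state and
# is suitable for simulations, never for cryptography.
```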
A security compliance manager of a large enterprise wants to reduce the time it takes to perform network,
system, and application security compliance audits while increasing quality and effectiveness of the results.
What should be implemented to BEST achieve the desired results?
Configuration Management Database (CMDB)
Source code repository
Configuration Management Plan (CMP)
System performance monitoring application
A Configuration Management Database (CMDB) is a database that stores information about configuration items (CIs) for use in change, release, incident, service request, problem, and configuration management processes. A CI is any component or resource that is part of a system or a network, such as hardware, software, documentation, or personnel. A CMDB can provide some benefits for security compliance audits, such as a single authoritative inventory of the CIs in scope, a map of the relationships and dependencies among them, and current configuration data that auditors can query directly, which together reduce audit time while improving the quality and effectiveness of the results.
A source code repository, a configuration management plan (CMP), and a system performance monitoring application are not the best options to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, although they may be related or useful tools or techniques. A source code repository is a database or a system that stores and manages the source code of a software application, and that supports version control, collaboration, and documentation of the code. A source code repository can provide some benefits for security compliance audits, such as a complete version history of the application code, traceability of who changed what and when, and documentation that supports code review.
However, a source code repository is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the application layer, and it does not provide information about the other CIs that are part of the system or the network, such as hardware, documentation, or personnel. A configuration management plan (CMP) is a document or a policy that defines and describes the objectives, scope, roles, responsibilities, processes, and procedures of configuration management, which is the process of identifying, controlling, tracking, and auditing the changes to the CIs. A CMP can provide some benefits for security compliance audits, such as documented processes and responsibilities for controlling changes to the CIs, and an auditable record of how changes are requested, approved, and implemented.
However, a CMP is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is not a database or a system that stores and provides information about the CIs, but rather a document or a policy that defines and describes the configuration management process. A system performance monitoring application is a software tool that collects and analyzes data and metrics about the performance and behavior of a system or a network, such as availability, reliability, throughput, response time, or resource utilization. A system performance monitoring application can provide some benefits for security compliance audits, such as data and metrics that corroborate the availability, reliability, and behavior of the systems under review.
However, a system performance monitoring application is not the best option to achieve the desired results of reducing the time and increasing the quality and effectiveness of network, system, and application security compliance audits, as it is only applicable to the network and system layers, and it does not provide information about the other CIs that are part of the system or the network, such as software, documentation, or personnel.
An organization adopts a new firewall hardening standard. How can the security professional verify that the technical staff correctly implemented the new standard?
Perform a compliance review
Perform a penetration test
Train the technical staff
Survey the technical staff
A compliance review is a process of checking whether the systems and processes meet the established standards, policies, and regulations. A compliance review can help to verify that the technical staff has correctly implemented the new firewall hardening standard, as well as to identify and correct any deviations or violations. A penetration test, a training session, or a survey are not as effective as a compliance review, as they may not cover all the aspects of the firewall hardening standard or provide sufficient evidence of compliance. References: CISSP Exam Outline
Proven application security principles include which of the following?
Minimizing attack surface area
Hardening the network perimeter
Accepting infrastructure security controls
Developing independent modules
Minimizing attack surface area is a proven application security principle. It reduces an application's exposure to attack by limiting or eliminating unnecessary or unused features, functions, and services, and by restricting the application's access to and interaction with other applications, systems, and networks. Hardening the network perimeter, accepting infrastructure security controls, and developing independent modules are related or useful concepts, but they are not application security principles. Hardening the network perimeter strengthens the security controls at the network boundary, such as firewalls, routers, and gateways; it protects the network from external or unauthorized attacks, but it is not specific to the application layer and does not address the application's inherent security. Accepting infrastructure security controls is a risk management decision to rely on the security mechanisms provided by the underlying hardware, software, network, or cloud; it can reduce the cost and complexity of security implementation and leverage the expertise of infrastructure providers, but it is not a proactive measure to improve the application's own security, and it can increase the application's dependency on, and exposure to, the infrastructure. Developing independent modules is a software engineering technique of composing an application from discrete components with well-defined interfaces; it improves usability, maintainability, and error isolation, and it supports testing and verification, but it is not a deliberate security measure and does not by itself prevent attacks on the application as a whole or on the interaction between the modules.
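The hypothetical Python service below illustrates the principle of minimizing attack surface: it serves only the two endpoints the application needs and binds to the loopback interface, so everything else is simply absent from the attack surface. The endpoint names and address are assumptions for the example.

```python
# Minimal sketch of a reduced attack surface in a toy HTTP service.
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED = {"/health", "/orders"}   # everything else is simply not served

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in ALLOWED:
            self.send_error(404)   # no debug, admin, or legacy endpoints exist
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    # Bind to loopback only: the service is unreachable from the network.
    HTTPServer(("127.0.0.1", 8080), Handler).serve_forever()
```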
Which of the following would MINIMIZE the ability of an attacker to exploit a buffer overflow?
Memory review
Code review
Message division
Buffer division
Code review is the technique that would minimize the ability of an attacker to exploit a buffer overflow. A buffer overflow is a vulnerability that occurs when a program writes more data to a buffer than it can hold, overwriting adjacent memory such as the return address or the stack pointer; an attacker can exploit it by injecting malicious code or data into the buffer and redirecting the program's execution flow. Code review minimizes this risk by examining the source code to identify and fix the errors and weaknesses that lead to overflow vulnerabilities. In particular, it can detect the use of unsafe functions that perform no bounds checking, such as gets, strcpy, or sprintf, and replace them with safer alternatives that limit how much data can be written to the buffer, such as fgets, strncpy, or snprintf. Code review also enforces and verifies secure coding practices, such as input validation, output encoding, error handling, and careful memory management, that reduce the likelihood or impact of buffer overflows. Memory review, message division, and buffer division would not minimize the ability of an attacker to exploit a buffer overflow. Analyzing a program's memory layout (its stack, heap, or registers) can help investigate an overflow after the fact, but does not prevent one. Splitting a message into smaller segments, as in cryptography or networking, improves transmission security or efficiency, not memory safety. Dividing a buffer into smaller buffers, as in buffering or caching, optimizes memory usage but likewise does not prevent or mitigate an overflow.
Which of the following is MOST effective in detecting information hiding in Transmission Control Protocol/internet Protocol (TCP/IP) traffic?
Stateful inspection firewall
Application-level firewall
Content-filtering proxy
Packet-filter firewall
An application-level firewall is the most effective in detecting information hiding in TCP/IP traffic. Information hiding is a technique that conceals data or messages within other data or messages, such as using steganography, covert channels, or encryption. An application-level firewall is a type of firewall that operates at the application layer of the OSI model, and inspects the content and context of the network packets, such as the headers, payloads, or protocols. An application-level firewall can help to detect information hiding in TCP/IP traffic, as it can analyze the data for any anomalies, inconsistencies, or violations of the expected format or behavior. A stateful inspection firewall, a content-filtering proxy, and a packet-filter firewall are not as effective in detecting information hiding in TCP/IP traffic, as they operate at lower layers of the OSI model, and only inspect the state, content, or header of the network packets, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Communication and Network Security, page 731; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 4: Communication and Network Security, page 511.
As part of the security assessment plan, the security professional has been asked to use a negative testing strategy on a new website. Which of the following actions would be performed?
Use a web scanner to scan for vulnerabilities within the website.
Perform a code review to ensure that the database references are properly addressed.
Establish a secure connection to the web server to validate that only the approved ports are open.
Enter only numbers in the web form and verify that the website prompts the user to enter a valid input.
A negative testing strategy is a type of software testing that aims to verify how the system handles invalid or unexpected inputs, errors, or conditions. A negative testing strategy can help identify potential bugs, vulnerabilities, or failures that could compromise the functionality, security, or usability of the system. One example of a negative testing strategy is to enter only numbers in a web form that expects a text input, such as a name or an email address, and verify that the website prompts the user to enter a valid input. This can help ensure that the website has proper input validation and error handling mechanisms, and that it does not accept or process any malicious or malformed data. A web scanner, a code review, and a secure connection are not examples of a negative testing strategy, as they do not involve providing invalid or unexpected inputs to the system.
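A minimal sketch of such a negative test is shown below in Python; the validator is a hypothetical stand-in for the website's server-side form handling, and the test asserts that purely numeric input is rejected.

```python
# Negative test sketch: invalid input must be rejected with a helpful message.
import re

def validate_name_field(value: str) -> tuple[bool, str]:
    """Hypothetical server-side validator for a 'name' form field."""
    if re.fullmatch(r"[A-Za-z][A-Za-z \-']{0,49}", value):
        return True, ""
    return False, "Please enter a valid name."

def test_numeric_input_is_rejected():
    ok, message = validate_name_field("12345")
    assert not ok and message == "Please enter a valid name."

test_numeric_input_is_rejected()
print("negative test passed")
```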
When conducting a security assessment of access controls, which activity is part of the data analysis phase?
Present solutions to address audit exceptions.
Conduct statistical sampling of data transactions.
Categorize and identify evidence gathered during the audit.
Collect logs and reports.
The activity that is part of the data analysis phase when conducting a security assessment of access controls is to categorize and identify evidence gathered during the audit. A security assessment of access controls is a process that evaluates the effectiveness and compliance of the access controls implemented in a system or an organization. A security assessment of access controls typically consists of four phases: planning, data collection, data analysis, and reporting. The data analysis phase is the phase where the collected data is processed, interpreted, and evaluated, based on the audit objectives, criteria, and standards. The data analysis phase involves activities such as categorizing and identifying evidence gathered during the audit, which means sorting and labeling the data according to their type, source, and relevance, and verifying their validity, reliability, and sufficiency. Presenting solutions to address audit exceptions, conducting statistical sampling of data transactions, and collecting logs and reports are not activities that are part of the data analysis phase, but of the reporting, data collection, and data collection phases, respectively. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 75; Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 1: Security and Risk Management, page 67.
Which type of test would an organization perform in order to locate and target exploitable defects?
Penetration
System
Performance
Vulnerability
Penetration testing is a type of test that an organization performs in order to locate and target exploitable defects in its information systems and networks. Penetration testing simulates a real-world attack scenario, where a tester, also known as a penetration tester or ethical hacker, tries to find and exploit the vulnerabilities in the system or network, using the same tools and techniques as a malicious attacker. The goal of penetration testing is to identify the weaknesses and gaps in the security posture of the organization, and to provide recommendations and solutions to mitigate or eliminate them. Penetration testing can help the organization improve its security awareness, compliance, and resilience, and prevent potential breaches or incidents.
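One early step in most penetration tests is reconnaissance of the target's exposed services. The Python sketch below shows a bare-bones TCP connect scan; the host (drawn from the documentation address range) and port list are illustrative, and such scanning must only be run against systems the tester is authorized to probe.

```python
# Minimal reconnaissance sketch: TCP connect scan of a few common ports.
import socket

HOST = "192.0.2.10"   # illustrative address from the documentation range
for port in (22, 80, 443, 3389):
    with socket.socket() as s:
        s.settimeout(0.5)
        is_open = s.connect_ex((HOST, port)) == 0
        print(f"{port}: {'open' if is_open else 'closed/filtered'}")
```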
Which of the following is the BEST reason for writing an information security policy?
To support information security governance
To reduce the number of audit findings
To deter attackers
To implement effective information security controls
The best reason for writing an information security policy is to support information security governance. Information security governance is the framework for establishing and enforcing the policies and standards that protect and manage an organization's information and systems, and for overseeing and evaluating the performance and effectiveness of the information security program and its controls. It enhances the visibility and accountability of the security program, helps prevent or detect unauthorized or improper activities or changes, and supports audit and compliance activities. Information security governance involves elements and roles such as the information security policy and standards, a security steering committee, senior management sponsorship and oversight, and defined responsibilities for the security program.
Supporting information security governance is the best reason for writing an information security policy, because the policy is the foundation and core of the governance framework: it provides the guidance and direction for the information security program, its controls, and its stakeholders. Writing the policy involves tasks such as defining the objectives and scope of the security program, assigning roles and responsibilities, and setting the requirements that the supporting standards, procedures, and guidelines must satisfy.
To reduce the number of audit findings, to deter attackers, and to implement effective information security controls are not the best reasons for writing an information security policy, although they may be related or possible outcomes or benefits of writing an information security policy. To reduce the number of audit findings is an outcome or a benefit of writing an information security policy, as it implies that the information security policy has helped to improve the performance and the effectiveness of the information security program and the information security controls, as well as to comply with the industry regulations or the best practices, and that the information security policy has supported the audit and the compliance activities, by providing the evidence or the data that can validate or verify the information security program and the information security controls. However, to reduce the number of audit findings is not the best reason for writing an information security policy, as it is not the primary or the most important purpose or objective of writing an information security policy, and it may not be true or applicable for all information security policies.
Which of the following is a PRIMARY benefit of using a formalized security testing report format and structure?
Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken
Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability
Management teams will understand the testing objectives and reputational risk to the organization
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure. Security testing is a process that involves evaluating and verifying the security posture, vulnerabilities, and threats of a system or a network, using various methods and techniques, such as vulnerability assessment, penetration testing, code review, and compliance checks. Security testing can provide several benefits, such as identifying vulnerabilities before they can be exploited, verifying that security controls operate as intended, and demonstrating compliance with security requirements and standards.
A security testing report is a document that summarizes and communicates the findings and recommendations of the security testing process to the relevant stakeholders, such as the technical and management teams. A security testing report can have various formats and structures, depending on the scope, purpose, and audience of the report. However, a formalized security testing report format and structure is one that follows a standard and consistent template, such as the one proposed by the National Institute of Standards and Technology (NIST) in Special Publication 800-115, Technical Guide to Information Security Testing and Assessment. A formalized security testing report format and structure can have several components, such as an executive summary, an introduction, the testing methodology, the results of each test phase, the potential impact and risk levels of the findings, and a conclusion with recommendations for corrective actions.
Technical and management teams will better understand the testing objectives, results of each test phase, and potential impact levels is the primary benefit of using a formalized security testing report format and structure, because it can ensure that the security testing report is clear, comprehensive, and consistent, and that it provides the relevant and useful information for the technical and management teams to make informed and effective decisions and actions regarding the system or network security.
The other options are not the primary benefits of using a formalized security testing report format and structure, but rather secondary or specific benefits for different audiences or purposes. Executive audiences will understand the outcomes of testing and most appropriate next steps for corrective actions to be taken is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the executive summary component of the report, which is a brief and high-level overview of the report, rather than the entire report. Technical teams will understand the testing objectives, testing strategies applied, and business risk associated with each vulnerability is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the methodology and results components of the report, which are more technical and detailed parts of the report, rather than the entire report. Management teams will understand the testing objectives and reputational risk to the organization is a benefit of using a formalized security testing report format and structure, but it is not the primary benefit, because it is more relevant for the introduction and conclusion components of the report, which are more contextual and strategic parts of the report, rather than the entire report.
In which of the following programs is it MOST important to include the collection of security process data?
Quarterly access reviews
Security continuous monitoring
Business continuity testing
Annual security training
Security continuous monitoring is the program in which it is most important to include the collection of security process data. Security process data is the data that reflects the performance, effectiveness, and compliance of the security processes, such as the security policies, standards, procedures, and guidelines. Security process data can include metrics, indicators, logs, reports, and assessments. Security process data can provide several benefits, such as measuring the performance and effectiveness of the security processes, demonstrating compliance with security requirements and standards, and informing decisions about adjustments and improvements to the security controls.
Security continuous monitoring is the program in which it is most important to include the collection of security process data, because it is the program that involves maintaining the ongoing awareness of the security status, events, and activities of the system. Security continuous monitoring can enable the system to detect and respond to any security issues or incidents in a timely and effective manner, and to adjust and improve the security controls and processes accordingly. Security continuous monitoring can also help the system to comply with the security requirements and standards from the internal or external authorities or frameworks.
The other options are not the programs in which it is most important to include the collection of security process data, but rather programs that have other objectives or scopes. Quarterly access reviews are programs that involve reviewing and verifying the user accounts and access rights on a quarterly basis. Quarterly access reviews can ensure that the user accounts and access rights are valid, authorized, and up to date, and that any inactive, expired, or unauthorized accounts or rights are removed or revoked. However, quarterly access reviews are not the programs in which it is most important to include the collection of security process data, because they are not focused on the security status, events, and activities of the system, but rather on the user accounts and access rights. Business continuity testing is a program that involves testing and validating the business continuity plan (BCP) and the disaster recovery plan (DRP) of the system. Business continuity testing can ensure that the system can continue or resume its critical functions and operations in case of a disruption or disaster, and that the system can meet the recovery objectives and requirements. However, business continuity testing is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the continuity and recovery of the system. Annual security training is a program that involves providing and updating the security knowledge and skills of the system users and staff on an annual basis. Annual security training can increase the security awareness and competence of the system users and staff, and reduce the human errors or risks that might compromise the system security. However, annual security training is not the program in which it is most important to include the collection of security process data, because it is not focused on the security status, events, and activities of the system, but rather on the security education and training of the system users and staff.
Which of the following is of GREATEST assistance to auditors when reviewing system configurations?
Change management processes
User administration procedures
Operating System (OS) baselines
System backup documentation
Operating System (OS) baselines are of greatest assistance to auditors when reviewing system configurations. OS baselines are standard or reference configurations that define the desired and secure state of an OS, including the settings, parameters, patches, and updates. OS baselines can provide several benefits, such as a consistent and repeatable standard for configuring the OS, a clear reference point for detecting deviations or unauthorized changes, and documented criteria that support audit and compliance activities.
OS baselines are of greatest assistance to auditors when reviewing system configurations, because they can enable the auditors to evaluate and verify the current and actual state of the OS against the desired and secure state of the OS. OS baselines can also help the auditors to identify and report any gaps, issues, or risks in the OS configurations, and to recommend or implement any corrective or preventive actions.
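In practice, much of that comparison can be automated. The Python sketch below checks observed settings against a baseline and reports deviations; the setting names and values are invented for illustration.

```python
# Minimal sketch of auditing observed OS settings against a baseline.
BASELINE = {"PasswordMaxAge": 90, "FirewallEnabled": True, "TelnetService": "disabled"}
observed = {"PasswordMaxAge": 365, "FirewallEnabled": True, "TelnetService": "disabled"}

for setting, expected in BASELINE.items():
    actual = observed.get(setting)
    if actual != expected:
        print(f"DEVIATION: {setting} = {actual!r}, baseline requires {expected!r}")
```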
The other options are not of greatest assistance to auditors when reviewing system configurations, but rather of assistance for other purposes or aspects. Change management processes are processes that ensure that any changes to the system configurations are planned, approved, implemented, and documented in a controlled and consistent manner. Change management processes can improve the security and reliability of the system configurations by preventing or reducing the errors, conflicts, or disruptions that might occur due to the changes. However, change management processes are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the procedures and controls for managing the changes. User administration procedures are procedures that define the roles, responsibilities, and activities for creating, modifying, deleting, and managing the user accounts and access rights. User administration procedures can enhance the security and accountability of the user accounts and access rights by enforcing the principles of least privilege, separation of duties, and need to know. However, user administration procedures are not of greatest assistance to auditors when reviewing system configurations, because they do not define the desired and secure state of the system configurations, but rather the rules and tasks for administering the users. System backup documentation is documentation that records the information and details about the system backup processes, such as the backup frequency, type, location, retention, and recovery. System backup documentation can increase the availability and resilience of the system by ensuring that the system data and configurations can be restored in case of a loss or damage. However, system backup documentation is not of greatest assistance to auditors when reviewing system configurations, because it does not define the desired and secure state of the system configurations, but rather the backup and recovery of the system configurations.
Which of the following could cause a Denial of Service (DoS) against an authentication system?
Encryption of audit logs
No archiving of audit logs
Hashing of audit logs
Remote access audit logs
Remote access audit logs could cause a Denial of Service (DoS) against an authentication system. A DoS attack is a type of attack that aims to disrupt or degrade the availability or performance of a system or a network by overwhelming it with excessive or malicious traffic or requests. An authentication system is a system that verifies the identity and credentials of the users or entities that want to access the system or network resources or services. An authentication system can use various methods or factors to authenticate the users or entities, such as passwords, tokens, certificates, biometrics, or behavioral patterns.
Remote access audit logs are records that capture and store the information about the events and activities that occur when the users or entities access the system or network remotely, such as via the internet, VPN, or dial-up. Remote access audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the remote access behavior, and facilitating the investigation and response of the incidents.
Remote access audit logs could cause a DoS against an authentication system, because they could consume a large amount of disk space, memory, or bandwidth on the authentication system, especially if the remote access is frequent, intensive, or malicious. This could affect the performance or functionality of the authentication system, and prevent or delay the legitimate users or entities from accessing the system or network resources or services. For example, an attacker could launch a DoS attack against an authentication system by sending a large number of fake or invalid remote access requests, and generating a large amount of remote access audit logs that fill up the disk space or memory of the authentication system, and cause it to crash or slow down.
The other options are not the factors that could cause a DoS against an authentication system, but rather the factors that could improve or protect the authentication system. Encryption of audit logs is a technique that involves using a cryptographic algorithm and a key to transform the audit logs into an unreadable or unintelligible format, that can only be reversed or decrypted by authorized parties. Encryption of audit logs can enhance the security and confidentiality of the audit logs by preventing unauthorized access or disclosure of the sensitive information in the audit logs. However, encryption of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or privacy of the audit logs. No archiving of audit logs is a practice that involves not storing or transferring the audit logs to a separate or external storage device or location, such as a tape, disk, or cloud. No archiving of audit logs can reduce the security and availability of the audit logs by increasing the risk of loss or damage of the audit logs, and limiting the access or retrieval of the audit logs. However, no archiving of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the availability or preservation of the audit logs. Hashing of audit logs is a technique that involves using a hash function, such as MD5 or SHA, to generate a fixed-length and unique value, called a hash or a digest, that represents the audit logs. Hashing of audit logs can improve the security and integrity of the audit logs by verifying the authenticity or consistency of the audit logs, and detecting any modification or tampering of the audit logs. However, hashing of audit logs could not cause a DoS against an authentication system, because it does not affect the availability or performance of the authentication system, but rather the integrity or verification of the audit logs.
At a MINIMUM, a formal review of any Disaster Recovery Plan (DRP) should be conducted
monthly.
quarterly.
annually.
bi-annually.
A formal review of any Disaster Recovery Plan (DRP) should be conducted at a minimum annually, or more frequently if there are significant changes in the business environment, the IT infrastructure, the security threats, or the regulatory requirements. A formal review involves evaluating the DRP against the current business needs, objectives, and risks, and ensuring that the DRP is updated, accurate, complete, and consistent. A formal review also involves testing the DRP to verify its effectiveness and feasibility, and identifying any gaps or weaknesses that need to be addressed. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 10, page 1035; CISSP For Dummies, 7th Edition, Chapter 10, page 351.
Which of the following is considered best practice for preventing e-mail spoofing?
Spam filtering
Cryptographic signature
Uniform Resource Locator (URL) filtering
Reverse Domain Name Service (DNS) lookup
The best practice for preventing e-mail spoofing is to use cryptographic signatures. E-mail spoofing is a technique that involves forging the sender's address or identity in an e-mail message, usually to trick the recipient into opening a malicious attachment, clicking on a phishing link, or disclosing sensitive information. Cryptographic signatures are digital signatures that are created by encrypting the e-mail message, or a part of it, with the sender's private key and attaching the result to the message. Cryptographic signatures can be used to verify the authenticity and integrity of the sender and the message, and thus to prevent e-mail spoofing. References: What is Email Spoofing?; How to Prevent Email Spoofing.
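Real e-mail signing is normally done through S/MIME or DKIM, but the underlying sign-and-verify primitive can be sketched with the Python cryptography package as below; the message content is illustrative, and key distribution is out of scope for the sketch.

```python
# Sign a message with a private key; anyone with the public key can verify it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"From: alice@example.com\r\nSubject: Invoice\r\n\r\nPlease pay."

signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# verify() raises InvalidSignature if the message or signature was forged.
private_key.public_key().verify(signature, message,
                                padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```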
The Structured Query Language (SQL) implements Discretionary Access Controls (DAC) using
INSERT and DELETE.
GRANT and REVOKE.
PUBLIC and PRIVATE.
ROLLBACK and TERMINATE.
The Structured Query Language (SQL) implements Discretionary Access Controls (DAC) using the GRANT and REVOKE commands. DAC is a type of access control that allows the owner or creator of an object, such as a table, view, or procedure, to grant or revoke permissions to other users or roles. For example, a user can grant SELECT, INSERT, UPDATE, or DELETE privileges to another user on a specific table, or revoke them if needed. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, page 413; CISSP For Dummies, 7th Edition, Chapter 4, page 123.
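As a sketch, the snippet below issues GRANT and REVOKE from Python against a server RDBMS; the connection parameters, table, and role are hypothetical, and the statement syntax follows PostgreSQL (embedded databases such as SQLite do not implement GRANT).

```python
# DAC in SQL: the table owner grants, then partially revokes, privileges.
import psycopg2   # assumed available; any DB-API driver for a server RDBMS works

conn = psycopg2.connect("dbname=hr user=table_owner")   # hypothetical DSN
with conn, conn.cursor() as cur:
    cur.execute("GRANT SELECT, INSERT ON employees TO analyst;")
    cur.execute("REVOKE INSERT ON employees FROM analyst;")
```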
Which of the following is the FIRST action that a system administrator should take when it is revealed during a penetration test that everyone in an organization has unauthorized access to a server holding sensitive data?
Immediately document the finding and report to senior management.
Use system privileges to alter the permissions to secure the server
Continue the testing to its completion and then inform IT management
Terminate the penetration test and pass the finding to the server management team
If a system administrator discovers a serious security breach during a penetration test, such as unauthorized access to a server holding sensitive data, the first action that he or she should take is to immediately document the finding and report it to senior management. This is because senior management is ultimately responsible for the security of the organization and its assets, and they need to be aware of the situation and take appropriate actions to mitigate the risk and prevent further damage. Documenting the finding is also important to provide evidence and support for the report, and to comply with any legal or regulatory requirements. Using system privileges to alter the permissions to secure the server, continuing the testing to its completion, or terminating the penetration test and passing the finding to the server management team are not the first actions that a system administrator should take, as they may not address the root cause of the problem, may interfere with the ongoing testing, or may delay the notification of senior management.
Which of the following is a method used to prevent Structured Query Language (SQL) injection attacks?
Data compression
Data classification
Data warehousing
Data validation
Data validation is a method used to prevent Structured Query Language (SQL) injection attacks, which are a type of web application attack that exploit the input fields of a web form to inject malicious SQL commands into the underlying database. Data validation involves checking the input data for any illegal or unexpected characters, such as quotes, semicolons, or keywords, and rejecting or sanitizing them before passing them to the database. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 660; CISSP For Dummies, 7th Edition, Chapter 6, page 199.
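The standard complement to validation is parameter binding, sketched below with Python's built-in sqlite3 module; the table and data are illustrative.

```python
# Parameterized queries: user input is bound as a value, never parsed as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # classic injection payload

rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)   # [] -- the payload is treated as a literal name and matches nothing

rows = conn.execute("SELECT role FROM users WHERE name = ?", ("alice",)).fetchall()
print(rows)   # [('admin',)]
```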
Why must all users be positively identified prior to using multi-user computers?
To provide access to system privileges
To provide access to the operating system
To ensure that unauthorized persons cannot access the computers
To ensure that management knows what users are currently logged on
The main reason why all users must be positively identified prior to using multi-user computers is to ensure that unauthorized persons cannot access the computers. Positive identification is the process of verifying the identity of a user or a device before granting access to a system or a resource2. Positive identification can be achieved by using one or more factors of authentication, such as something the user knows, has, or is. Positive identification can enhance the security and accountability of the system, and prevent unauthorized or malicious access. Providing access to system privileges, providing access to the operating system, and ensuring that management knows what users are currently logged on are not the primary reasons why all users must be positively identified prior to using multi-user computers, as they are more related to the functionality or administration of the system, rather than the security. References: 2: CISSP For Dummies, 7th Edition, Chapter 4, page 89.
Which of the following methods protects Personally Identifiable Information (PII) by use of a full replacement of the data element?
Transparent Database Encryption (TDE)
Column level database encryption
Volume encryption
Data tokenization
Data tokenization is a method of protecting PII by replacing the sensitive data element with a non-sensitive equivalent, called a token, that has no extrinsic or exploitable meaning or value. The token is then mapped back to the original data element in a secure database. This way, the PII is not exposed in the data processing or storage, and only authorized parties can access the original data element. Data tokenization is different from encryption, which transforms the data element into a ciphertext that can be decrypted with a key. Data tokenization does not require a key, and the token cannot be mathematically reversed to reveal the original data element; it can only be mapped back through the secure token vault. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 281; CISSP For Dummies, 7th Edition, Chapter 10, page 289.
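A toy sketch of the vault idea in Python; the tok_ prefix, token length, and in-memory dictionary are assumptions for illustration, whereas production tokenization uses a hardened, access-controlled vault service:

    import secrets

    vault = {}  # token -> original value; lives only in the secure zone

    def tokenize(pii: str) -> str:
        token = "tok_" + secrets.token_hex(8)  # random: no mathematical
        vault[token] = pii                     # relationship to the PII
        return token

    def detokenize(token: str) -> str:
        return vault[token]  # permitted only for authorized callers

    card_number = "4111 1111 1111 1111"
    token = tokenize(card_number)
    print(token)              # e.g. tok_4f9c2b7d1a6e0358 -- safe to store
    print(detokenize(token))  # original value, via the vault only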
How can a forensic specialist exclude from examination a large percentage of operating system files residing on a copy of the target system?
Take another backup of the media in question then delete all irrelevant operating system files.
Create a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level.
Generate a message digest (MD) or secure hash on the drive image to detect tampering of the media being examined.
Discard harmless files for the operating system, and known installed programs.
A forensic specialist can exclude from examination a large percentage of operating system files residing on a copy of the target system by creating a comparison database of cryptographic hashes of the files from a system with the same operating system and patch level. This method is also known as known file filtering or file signature analysis. It allows the forensic specialist to quickly identify and eliminate the files that are part of the standard operating system installation and focus on the files that are unique or relevant to the investigation. This makes the process of exclusion much faster and more accurate than manually deleting or discarding files. References: Computer Forensics: Forensic Techniques, Part 1 [Updated 2019]; Point Checklist: CISSP book.
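A minimal Python sketch of the technique; the /mnt/reference and /mnt/evidence mount points are hypothetical, and in practice examiners typically rely on published hash sets such as the NIST National Software Reference Library (NSRL) rather than building their own:

    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Build the comparison database from a clean system with the same
    # operating system and patch level ...
    known = {sha256_of(p) for p in Path("/mnt/reference").rglob("*") if p.is_file()}

    # ... then keep only the evidence files whose hashes are NOT known.
    suspect = [p for p in Path("/mnt/evidence").rglob("*")
               if p.is_file() and sha256_of(p) not in known]
    print(f"{len(suspect)} file(s) remain for examination")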
Which of the following is the best practice for testing a Business Continuity Plan (BCP)?
Test before the IT Audit
Test when environment changes
Test after installation of security patches
Test after implementation of system patches
The best practice for testing a Business Continuity Plan (BCP) is to test it when the environment changes, such as when there are new business processes, technologies, threats, or regulations. This ensures that the BCP is updated, relevant, and effective for the current situation. Testing the BCP before the IT audit, after installation of security patches, or after implementation of system patches are not the best practices, as they may not reflect the actual changes in the business environment or the potential disruptions that may occur. References: Comprehensive Guide to Business Continuity Testing; Maximizing Your BCP Testing Efforts: Best Practices.
While impersonating an Information Security Officer (ISO), an attacker obtains information from company employees about their User IDs and passwords. Which method of information gathering has the attacker used?
Trusted path
Malicious logic
Social engineering
Passive misuse
Social engineering is the method of information gathering that the attacker has used while impersonating an ISO and obtaining information from company employees about their User IDs and passwords. Social engineering is a technique of manipulating or deceiving people into revealing confidential or sensitive information, or performing actions that compromise the security of an organization or a system1. Social engineering can exploit the human factors, such as trust, curiosity, fear, or greed, to influence the behavior or judgment of the target. Social engineering can take various forms, such as phishing, baiting, pretexting, or impersonation. Trusted path, malicious logic, and passive misuse are not methods of information gathering that the attacker has used, as they are related to different aspects of security or attack. References: 1: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 19.
The FIRST step in building a firewall is to
assign the roles and responsibilities of the firewall administrators.
define the intended audience who will read the firewall policy.
identify mechanisms to encourage compliance with the policy.
perform a risk analysis to identify issues to be addressed.
The first step in building a firewall is to perform a risk analysis to identify the assets and resources that need to be protected, the threats and vulnerabilities that exist, and the potential impact of a security breach. Based on the risk analysis, the firewall design can be tailored to meet the specific security requirements and objectives of the organization.
The birthday attack is MOST effective against which one of the following cipher technologies?
Chaining block encryption
Asymmetric cryptography
Cryptographic hash
Streaming cryptography
The birthday attack is most effective against cryptographic hash, which is one of the cipher technologies. A cryptographic hash is a function that takes an input of any size and produces an output of a fixed size, called a hash or a digest, that represents the input. A cryptographic hash has several properties, such as being one-way, collision-resistant, and deterministic. A birthday attack is a type of brute-force attack that exploits the mathematical phenomenon known as the birthday paradox: when values are drawn at random from a set of N possibilities, a matching pair becomes likely after only about the square root of N draws, so an n-bit hash can be expected to yield a collision after roughly 2^(n/2) attempts rather than 2^n. A birthday attack can be used to find collisions in a cryptographic hash, which means finding two different inputs that produce the same hash. Finding collisions can compromise the integrity or the security of the hash, as it can allow an attacker to forge or modify the input without changing the hash. Chaining block encryption, asymmetric cryptography, and streaming cryptography are not as vulnerable to the birthday attack, as they are different types of encryption algorithms that use keys and ciphers to transform the input into an output. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 3, page 133; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, page 143.
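A back-of-the-envelope check of that bound in Python; the 64-bit digest size and the sample count are illustrative assumptions:

    import math

    def collision_probability(k: int, n_bits: int) -> float:
        # P(at least one collision among k random n-bit digests)
        # ~= 1 - exp(-k * (k - 1) / (2 * 2**n_bits))
        return 1.0 - math.exp(-k * (k - 1) / (2.0 * 2**n_bits))

    # A 64-bit hash has 2**64 possible outputs, yet about 2**32
    # (~5.1 billion) random inputs already make a collision more
    # likely than not.
    print(collision_probability(5_100_000_000, 64))  # ~0.50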
Which layer of the Open Systems Interconnections (OSI) model implementation adds information concerning the logical connection between the sender and receiver?
Physical
Session
Transport
Data-Link
The Transport layer of the Open Systems Interconnection (OSI) model implementation adds information concerning the logical connection between the sender and receiver. The Transport layer is responsible for establishing, maintaining, and terminating the end-to-end communication between two hosts, as well as ensuring the reliability, integrity, and flow control of the data. The Transport layer uses protocols such as TCP and UDP to provide connection-oriented or connectionless services, and adds headers that contain information such as source and destination ports, sequence and acknowledgment numbers, and checksums. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 499; CISSP For Dummies, 7th Edition, Chapter 5, page 145.
Alternate encoding such as hexadecimal representations is MOST often observed in which of the following forms of attack?
Smurf
Rootkit exploit
Denial of Service (DoS)
Cross site scripting (XSS)
Alternate encoding such as hexadecimal representations is most often observed in cross site scripting (XSS) attacks. XSS is a type of web application attack that involves injecting malicious code or scripts into a web page or a web application, usually through user input fields or parameters. The malicious code or script is then executed by the victim’s browser, and can perform various actions, such as stealing cookies, session tokens, or credentials, redirecting to malicious sites, or displaying fake content. Alternate encoding is a technique that is used by attackers to bypass input validation or filtering mechanisms, and to conceal or obfuscate the malicious code or script. Alternate encoding can use hexadecimal, decimal, octal, binary, or Unicode representations of the characters or symbols in the code or script. References: What is Cross-Site Scripting (XSS)?; XSS Filter Evasion Cheat Sheet.
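A small Python sketch of why inputs must be canonicalized (decoded) before filtering and escaped on output; the payload string is an illustrative assumption:

    import html
    import urllib.parse

    payload = "%3Cscript%3Ealert(1)%3C%2Fscript%3E"  # hex (URL) encoding

    # A naive filter that scans the raw input for "<script" sees nothing:
    print("<script" in payload)  # False

    # Canonicalize (decode) first, and the payload becomes visible ...
    decoded = urllib.parse.unquote(payload)
    print(decoded)  # <script>alert(1)</script>

    # ... then escape on output so the browser renders it as plain text.
    print(html.escape(decoded))  # &lt;script&gt;alert(1)&lt;/script&gt;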
A vulnerability test on an Information System (IS) is conducted to
exploit security weaknesses in the IS.
measure system performance on systems with weak security controls.
evaluate the effectiveness of security controls.
prepare for Disaster Recovery (DR) planning.
A vulnerability test is a type of security assessment that identifies and analyzes the security weaknesses in an information system. The purpose of a vulnerability test is to evaluate the effectiveness of security controls and to provide recommendations for improvement. A vulnerability test does not exploit the security weaknesses or measure the system performance. A vulnerability test is also different from a penetration test, which is a type of security assessment that attempts to exploit the security weaknesses and gain unauthorized access to the system. A vulnerability test is also not related to disaster recovery planning, which is a process of preparing for and recovering from a disruptive event that affects the availability of an information system.
Internet Protocol (IP) source address spoofing is used to defeat
address-based authentication.
Address Resolution Protocol (ARP).
Reverse Address Resolution Protocol (RARP).
Transmission Control Protocol (TCP) hijacking.
Internet Protocol (IP) source address spoofing is used to defeat address-based authentication, which is a method of verifying the identity of a user or a system based on their IP address. IP source address spoofing involves forging the IP header of a packet to make it appear as if it came from a trusted or authorized source, and bypassing the authentication check. IP source address spoofing can be used for various malicious purposes, such as denial-of-service attacks, man-in-the-middle attacks, or session hijacking. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 527; CISSP For Dummies, 7th Edition, Chapter 5, page 153.
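To see why address-based authentication fails against spoofing, consider this deliberately naive Python sketch; the trusted address, set name, and function are hypothetical:

    TRUSTED_HOSTS = {"10.0.0.5"}  # hosts "authenticated" by address alone

    def address_based_auth(claimed_src_ip: str) -> bool:
        # Trusts whatever source address appears in the IP header.
        return claimed_src_ip in TRUSTED_HOSTS

    # An attacker who forges the source field of the IP header passes the
    # check without presenting any credential at all:
    print(address_based_auth("10.0.0.5"))  # True, even when forged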
A software scanner identifies a region within a binary image having high entropy. What does this MOST likely indicate?
Encryption routines
Random number generator
Obfuscated code
Botnet command and control
Obfuscated code is a type of code that is deliberately written or modified to make it difficult to understand or reverse engineer. Obfuscation techniques can include changing variable names, removing comments, adding irrelevant code, or encrypting parts of the code. Obfuscated code can have high entropy, which means that it has a high degree of randomness or unpredictability. A software scanner can identify a region within a binary image having high entropy as a possible indication of obfuscated code. Encryption routines, random number generators, and botnet command and control are not necessarily related to obfuscated code, and may not have high entropy. References: Official (ISC)2 CISSP CBK Reference, 5th Edition, Chapter 8, page 467; CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8, page 508.
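A minimal sketch of the entropy measurement such a scanner might apply, here Shannon entropy in bits per byte; the sample buffers are illustrative:

    import math
    import os
    from collections import Counter

    def shannon_entropy(data: bytes) -> float:
        # Entropy in bits per byte: 0.0 (constant) up to 8.0 (uniform random).
        counts = Counter(data)
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    print(shannon_entropy(b"A" * 1024))       # 0.0  -- fully predictable
    print(shannon_entropy(os.urandom(1024)))  # ~8.0 -- like packed, encrypted,
                                              #         or obfuscated regions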
A practice that permits the owner of a data object to grant other users access to that object would usually provide
Mandatory Access Control (MAC).
owner-administered control.
owner-dependent access control.
Discretionary Access Control (DAC).
A practice that permits the owner of a data object to grant other users access to that object would usually provide Discretionary Access Control (DAC). DAC is a type of access control that allows the data owner or creator to decide who can access or modify the data object, based on their identity or membership in a group. DAC is implemented using access control lists (ACLs), which specify the permissions or rights of each user or group for each data object. DAC is flexible and easy to implement, but it can also pose a security risk if the data owner grants excessive or inappropriate access to unauthorized or malicious users. Mandatory Access Control (MAC), owner-administered control, and owner-dependent access control are not types of access control that permit the owner of a data object to grant other users access to that object, as they are either based on predefined rules or policies, or not related to access control at all. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6, page 354.
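A toy in-memory sketch of DAC with per-object ACLs, where the owner hands out permissions at their discretion; the object, user names, and permission strings are assumptions:

    # object -> {user -> set of permissions}; alice owns report.doc
    acl = {"report.doc": {"alice": {"read", "write", "grant"}}}

    def grant(owner: str, obj: str, user: str, perm: str) -> None:
        # Discretionary: only a holder of the grant right may extend access.
        if "grant" not in acl.get(obj, {}).get(owner, set()):
            raise PermissionError(f"{owner} may not grant access to {obj}")
        acl.setdefault(obj, {}).setdefault(user, set()).add(perm)

    def check(user: str, obj: str, perm: str) -> bool:
        return perm in acl.get(obj, {}).get(user, set())

    grant("alice", "report.doc", "bob", "read")
    print(check("bob", "report.doc", "read"))   # True
    print(check("bob", "report.doc", "write"))  # False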
Which of the following defines the key exchange for Internet Protocol Security (IPSec)?
Secure Sockets Layer (SSL) key exchange
Internet Key Exchange (IKE)
Security Key Exchange (SKE)
Internet Control Message Protocol (ICMP)
Internet Key Exchange (IKE) is a protocol that defines the key exchange for Internet Protocol Security (IPSec). IPSec is a suite of protocols that provides security for IP-based communications, such as encryption, authentication, and integrity. IKE establishes a secure channel between two parties, negotiates the security parameters, and generates the cryptographic keys for IPSec. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, page 541; CISSP For Dummies, 7th Edition, Chapter 5, page 157.
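At the heart of IKE's key generation is a Diffie-Hellman exchange. The following Python sketch shows only that mathematical core with toy parameters; real IKE uses standardized groups of 2048 bits or more and layers authentication, nonces, and key derivation on top:

    import secrets

    # Toy parameters: 2**127 - 1 is prime, but these values are NOT for
    # real use.
    p = 2**127 - 1
    g = 3

    a = secrets.randbelow(p - 2) + 1  # initiator's private value
    b = secrets.randbelow(p - 2) + 1  # responder's private value

    A = pow(g, a, p)  # public values: only these cross the network
    B = pow(g, b, p)

    # Each side combines its own private value with the peer's public
    # value and arrives at the same shared secret, never transmitted.
    assert pow(B, a, p) == pow(A, b, p)
    print("shared keying material established")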
The configuration management and control task of the certification and accreditation process is incorporated in which phase of the System Development Life Cycle (SDLC)?
System acquisition and development
System operations and maintenance
System initiation
System implementation
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the System Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
The certification and accreditation process is a process that involves assessing and verifying the security and compliance of a system, and authorizing and approving the system operation and maintenance, using various standards and frameworks, such as NIST SP 800-37 or ISO/IEC 27001. The certification and accreditation process can be divided into several tasks, each with its own objectives and activities, such as security categorization, security planning, configuration management and control, security assessment, security authorization, and continuous security monitoring.
The configuration management and control task of the certification and accreditation process is incorporated in the system acquisition and development phase of the SDLC, because it can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system changes are controlled and documented. Configuration management and control is a process that involves establishing and maintaining the baseline and the inventory of the system components and resources, such as hardware, software, data, or documentation, and tracking and recording any modifications or updates to the system components and resources, using various techniques and tools, such as version control, change control, or configuration audits. Configuration management and control can provide several benefits, such as maintaining a consistent and documented baseline of the system, ensuring that proposed changes are evaluated for their security impact before they are applied, and supporting audits, troubleshooting, and rollback when problems occur.
The other options are not the phases of the SDLC that incorporate the configuration management and control task of the certification and accreditation process, but rather phases that involve other tasks of the certification and accreditation process. System operations and maintenance is a phase of the SDLC that incorporates the security monitoring task of the certification and accreditation process, because it can ensure that the system operation and maintenance are consistent and compliant with the security objectives and requirements, and that the system security is updated and improved. System initiation is a phase of the SDLC that incorporates the security categorization and security planning tasks of the certification and accreditation process, because it can ensure that the system scope and objectives are defined and aligned with the security objectives and requirements, and that the security plan and policy are developed and documented. System implementation is a phase of the SDLC that incorporates the security assessment and security authorization tasks of the certification and accreditation process, because it can ensure that the system deployment and installation are evaluated and verified for the security effectiveness and compliance, and that the system operation and maintenance are authorized and approved based on the risk and impact analysis and the security objectives and requirements.
When in the Software Development Life Cycle (SDLC) MUST software security functional requirements be defined?
After the system preliminary design has been developed and the data security categorization has been performed
After the vulnerability analysis has been performed and before the system detailed design begins
After the system preliminary design has been developed and before the data security categorization begins
After the business functional analysis and the data security categorization have been performed
Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed in the Software Development Life Cycle (SDLC). The SDLC is a process that involves planning, designing, developing, testing, deploying, operating, and maintaining a system, using various models and methodologies, such as waterfall, spiral, agile, or DevSecOps. The SDLC can be divided into several phases, each with its own objectives and activities, such as system initiation, system acquisition and development, system implementation, system operations and maintenance, and system disposal.
Software security functional requirements are the specific and measurable security features and capabilities that the system must provide to meet the security objectives and requirements. Software security functional requirements are derived from the business functional analysis and the data security categorization, which are two tasks that are performed in the system initiation phase of the SDLC. The business functional analysis is the process of identifying and documenting the business functions and processes that the system must support and enable, such as the inputs, outputs, workflows, and tasks. The data security categorization is the process of determining the security level and impact of the system and its data, based on the confidentiality, integrity, and availability criteria, and applying the appropriate security controls and measures. Software security functional requirements must be defined after the business functional analysis and the data security categorization have been performed, because they can ensure that the system design and development are consistent and compliant with the security objectives and requirements, and that the system security is aligned and integrated with the business functions and processes.
The other options are not the phases of the SDLC when the software security functional requirements must be defined, but rather phases that involve other tasks or activities related to the system design and development. After the system preliminary design has been developed and the data security categorization has been performed is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is verified and validated. After the vulnerability analysis has been performed and before the system detailed design begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system design and components are evaluated and tested for the security effectiveness and compliance, and the system detailed design is developed, based on the system architecture and components. After the system preliminary design has been developed and before the data security categorization begins is not the phase when the software security functional requirements must be defined, but rather the phase when the system architecture and components are designed, based on the system scope and objectives, and the data security categorization is initiated and planned.
A Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. The program is not working as expected. What is the MOST probable security feature of Java preventing the program from operating as intended?
Least privilege
Privilege escalation
Defense in depth
Privilege bracketing
The most probable security feature of Java preventing the program from operating as intended is least privilege. Least privilege is a principle that states that a subject (such as a user, a process, or a program) should only have the minimum amount of access or permissions that are necessary to perform its function or task. Least privilege can help to reduce the attack surface and the potential damage of a system or network, by limiting the exposure and impact of a subject in case of a compromise or misuse.
Java implements the principle of least privilege through its security model, which consists of several components, such as the class loader, the bytecode verifier, the security manager, and the access controller, which together enforce policy-based permissions that limit what code from a given source or signer is allowed to do.
In this question, the Java program is being developed to read a file from computer A and write it to computer B, using a third computer C. This means that the Java program needs to have the permissions to perform the file I/O and the network communication operations, which are considered as sensitive or risky actions by the Java security model. However, if the Java program is running on computer C with the default or the minimal security permissions, such as in the Java Security Sandbox, then it will not be able to perform these operations, and the program will not work as expected. Therefore, the most probable security feature of Java preventing the program from operating as intended is least privilege, which limits the access or permissions of the Java program based on its source, signer, or policy.
The other options are not the security features of Java preventing the program from operating as intended, but rather concepts or techniques that are related to security in general or in other contexts. Privilege escalation is a technique that allows a subject to gain higher or unauthorized access or permissions than what it is supposed to have, by exploiting a vulnerability or a flaw in a system or network. Privilege escalation can help an attacker to perform malicious actions or to access sensitive resources or data, by bypassing the security controls or restrictions. Defense in depth is a concept that states that a system or network should have multiple layers or levels of security, to provide redundancy and resilience in case of a breach or an attack. Defense in depth can help to protect a system or network from various threats and risks, by using different types of security measures and controls, such as the physical, the technical, or the administrative ones. Privilege bracketing is a technique that allows a subject to temporarily elevate or lower its access or permissions, to perform a specific function or task, and then return to its original or normal level. Privilege bracketing can help to reduce the exposure and impact of a subject, by minimizing the time and scope of its higher or lower access or permissions.
Which of the following is the PRIMARY risk with using open source software in a commercial software construction?
Lack of software documentation
License agreements requiring release of modified code
Expiration of the license agreement
Costs associated with support of the software
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code. Open source software is software that uses publicly available source code, which can be seen, modified, and distributed by anyone. Open source software has some advantages, such as being affordable and flexible, but it also has some disadvantages, such as being potentially insecure or unsupported.
One of the main disadvantages of using open source software in a commercial software construction is the license agreements that govern the use and distribution of the open source software. License agreements are legal contracts that specify the rights and obligations of the parties involved in the software, such as the original authors, the developers, and the users. License agreements can vary in terms of their terms and conditions, such as the scope, the duration, or the fees of the software.
Some of the common types of license agreements for open source software are permissive licenses, such as the MIT, BSD, or Apache licenses, which impose few restrictions on reuse, and copyleft licenses, such as the GNU General Public License (GPL), which require that modified or derivative works be released under the same license terms.
The primary risk with using open source software in a commercial software construction is license agreements requiring release of modified code, which are usually associated with copyleft licenses. This means that if a commercial software construction uses or incorporates open source software that is licensed under a copyleft license, then it must also release its own source code and any modifications or derivatives of it, under the same or compatible copyleft license. This can pose a significant risk for the commercial software construction, as it may lose its competitive advantage, intellectual property, or revenue, by disclosing its source code and allowing others to use, modify, or distribute it.
The other options are not the primary risks with using open source software in a commercial software construction, but rather secondary or minor risks that may or may not apply to the open source software. Lack of software documentation is a secondary risk with using open source software in a commercial software construction, as it may affect the quality, usability, or maintainability of the open source software, but it does not necessarily affect the rights or obligations of the commercial software construction. Expiration of the license agreement is a minor risk with using open source software in a commercial software construction, as it may affect the availability or continuity of the open source software, but it is unlikely to happen, as most open source software licenses are perpetual or indefinite. Costs associated with support of the software is a secondary risk with using open source software in a commercial software construction, as it may affect the reliability, security, or performance of the open source software, but it can be mitigated or avoided by choosing the open source software that has adequate or alternative support options.
Which of the following is a web application control that should be put into place to prevent exploitation of Operating System (OS) bugs?
Check arguments in function calls
Test for the security patch level of the environment
Include logging functions
Digitally sign each application module
Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of Operating System (OS) bugs. OS bugs are errors or defects in the code or logic of the OS that can cause the OS to malfunction or behave unexpectedly. OS bugs can be exploited by attackers to gain unauthorized access, disrupt business operations, or steal or leak sensitive data. Testing for the security patch level of the environment is the web application control that should be put into place to prevent exploitation of OS bugs, because it can provide several benefits, such as verifying that the underlying OS is current with vendor security patches, identifying missing or failed patches before attackers can exploit the corresponding OS bugs, and confirming that the web application is deployed on a hardened, supported environment.
The other options are not the web application controls that should be put into place to prevent exploitation of OS bugs, but rather web application controls that can prevent or mitigate other types of web application attacks or issues. Checking arguments in function calls is a web application control that can prevent or mitigate buffer overflow attacks, which are attacks that exploit the vulnerability of the web application code that does not properly check the size or length of the input data that is passed to a function or a variable, and overwrite the adjacent memory locations with malicious code or data. Including logging functions is a web application control that can prevent or mitigate unauthorized access or modification attacks, which are attacks that exploit the lack of or weak authentication or authorization mechanisms of the web applications, and access or modify the web application data or functionality without proper permission or verification. Digitally signing each application module is a web application control that can prevent or mitigate code injection or tampering attacks, which are attacks that exploit the vulnerability of the web application code that does not properly validate or sanitize the input data that is executed or interpreted by the web application, and inject or modify the web application code with malicious code or data.
What is the BEST approach to addressing security issues in legacy web applications?
Debug the security issues
Migrate to newer, supported applications where possible
Conduct a security assessment
Protect the legacy application with a web application firewall
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications. Legacy web applications are web applications that are outdated, unsupported, or incompatible with the current technologies and standards. Legacy web applications may have various security issues, such as unpatched or unsupported components with publicly known vulnerabilities, weak or outdated encryption and authentication mechanisms, and incompatibility with modern security controls and standards.
Migrating to newer, supported applications where possible is the best approach to addressing security issues in legacy web applications, because it can provide several benefits, such as eliminating the known vulnerabilities and design limitations of the legacy code, restoring access to vendor support and regular security patches, and enabling the use of current security features and standards.
The other options are not the best approaches to addressing security issues in legacy web applications, but rather approaches that can mitigate or remediate the security issues, but not eliminate or prevent them. Debugging the security issues is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves identifying and fixing the errors or defects in the code or logic of the web applications, which may be difficult or impossible to do for the legacy web applications that are outdated or unsupported. Conducting a security assessment is an approach that can remediate the security issues in legacy web applications, but not the best approach, because it involves evaluating and testing the security effectiveness and compliance of the web applications, using various techniques and tools, such as audits, reviews, scans, or penetration tests, and identifying and reporting any security weaknesses or gaps, which may not be sufficient or feasible to do for the legacy web applications that are incompatible or obsolete. Protecting the legacy application with a web application firewall is an approach that can mitigate the security issues in legacy web applications, but not the best approach, because it involves deploying and configuring a web application firewall, which is a security device or software that monitors and filters the web traffic between the web applications and the users or clients, and blocks or allows the web requests or responses based on the predefined rules or policies, which may not be effective or efficient to do for the legacy web applications that have weak or outdated encryption or authentication mechanisms.
Which of the following is the BEST method to prevent malware from being introduced into a production environment?
Purchase software from a limited list of retailers
Verify the hash key or certificate key of all updates
Do not permit programs, patches, or updates from the Internet
Test all new software in a segregated environment
Testing all new software in a segregated environment is the best method to prevent malware from being introduced into a production environment. Malware is any malicious software that can harm or compromise the security, availability, integrity, or confidentiality of a system or data. Malware can be introduced into a production environment through various sources, such as software downloads, updates, patches, or installations. Testing all new software in a segregated environment involves verifying and validating the functionality and security of the software before deploying it to the production environment, using a separate system or network that is isolated and protected from the production environment. Testing all new software in a segregated environment can provide several benefits, such as detecting malware or malicious behavior before the software ever reaches production, verifying the functionality and compatibility of the software without risking production systems and data, and containing any infection within the isolated test environment.
The other options are not the best methods to prevent malware from being introduced into a production environment, but rather methods that can reduce or mitigate the risk of malware, but not eliminate it. Purchasing software from a limited list of retailers is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves obtaining software only from trusted and reputable sources, such as official vendors or distributors, that can provide some assurance of the quality and security of the software. However, this method does not guarantee that the software is free of malware, as it may still contain hidden or embedded malware, or it may be tampered with or compromised during the delivery or installation process. Verifying the hash key or certificate key of all updates is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves checking the authenticity and integrity of the software updates, patches, or installations, by comparing the hash key or certificate key of the software with the expected or published value, using cryptographic techniques and tools. However, this method does not guarantee that the software is free of malware, as it may still contain malware that is not detected or altered by the hash key or certificate key, or it may be subject to a man-in-the-middle attack or a replay attack that can intercept or modify the software or the key. Not permitting programs, patches, or updates from the Internet is a method that can reduce the risk of malware from being introduced into a production environment, but not prevent it. This method involves restricting or blocking the access or download of software from the Internet, which is a common and convenient source of malware, by applying and enforcing the appropriate security policies and controls, such as firewall rules, antivirus software, or web filters. However, this method does not guarantee that the software is free of malware, as it may still be obtained or infected from other sources, such as removable media, email attachments, or network shares.
Which one of the following affects the classification of data?
Assigned security label
Multilevel Security (MLS) architecture
Minimum query size
Passage of time
The passage of time is one of the factors that affects the classification of data. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not static, but dynamic, meaning that it can change over time depending on various factors. One of these factors is the passage of time, which can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as confidential or secret at one point in time may become obsolete, outdated, or declassified at a later point in time, and thus require a lower level of protection. Conversely, data that is classified as public or unclassified at one point in time may become more valuable, sensitive, or regulated at a later point in time, and thus require a higher level of protection. Therefore, data classification should be reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or components of data classification. Assigned security label is the result of data classification, which indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a system that supports data classification, which allows different levels of access to data based on the clearance and need-to-know of the users. Minimum query size is a parameter that can be used to enforce data classification, which limits the amount of data that can be retrieved or displayed at a time.
Which of the following BEST describes the responsibilities of a data owner?
Ensuring quality and validation through periodic audits for ongoing data integrity
Maintaining fundamental data availability, including data storage and archiving
Ensuring accessibility to appropriate users, maintaining appropriate levels of data security
Determining the impact the information has on the mission of the organization
The best description of the responsibilities of a data owner is determining the impact the information has on the mission of the organization. A data owner is a person or entity that has the authority and accountability for the creation, collection, processing, and disposal of a set of data. A data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. A data owner should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the responsibilities of other roles or functions related to data management. Ensuring quality and validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining fundamental data availability, including data storage and archiving is a responsibility of a data custodian, who is a person or entity that implements and maintains the technical and physical security of the data. Ensuring accessibility to appropriate users, maintaining appropriate levels of data security is a responsibility of a data controller, who is a person or entity that determines the purposes and means of processing the data.
In a data classification scheme, the data is owned by the
system security managers
business managers
Information Technology (IT) managers
end users
In a data classification scheme, the data is owned by the business managers. Business managers are the persons or entities that have the authority and accountability for the creation, collection, processing, and disposal of a set of data. Business managers are also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. Business managers should be able to determine the impact the information has on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the information on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles or functions related to data management. System security managers are the persons or entities that oversee the security of the information systems and networks that store, process, and transmit the data. They are responsible for implementing and maintaining the technical and physical security of the data, as well as monitoring and auditing the security performance and incidents. Information Technology (IT) managers are the persons or entities that manage the IT resources and services that support the business processes and functions that use the data. They are responsible for ensuring the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing technical support and guidance to the users and stakeholders. End users are the persons or entities that access and use the data for their legitimate purposes and needs. They are responsible for complying with the security policies and procedures for the data, as well as reporting any security issues or violations.
An organization has doubled in size due to a rapid market share increase. The size of the Information Technology (IT) staff has maintained pace with this growth. The organization hires several contractors whose onsite time is limited. The IT department has pushed its limits building servers and rolling out workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
Platform as a Service (PaaS)
Identity as a Service (IDaaS)
Desktop as a Service (DaaS)
Software as a Service (SaaS)
Identity as a Service (IDaaS) is the best contract in offloading the task of account management from the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM) functions, such as user authentication, authorization, provisioning, deprovisioning, password management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the organization to streamline and automate the account management process, reduce the workload and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can also support the contractors who have limited onsite time, as they can access the organization’s resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based service that provides a platform for developing, testing, and deploying applications, but it does not manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service that provides virtual desktops for users to access applications and data, but it does not manage the user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that provides software applications for users to use, but it does not manage the user accounts for the software applications.
Which of the following is MOST important when assigning ownership of an asset to a department?
The department should report to the business owner
Ownership of the asset should be periodically reviewed
Individual accountability should be ensured
All members should be trained on their responsibilities
When assigning ownership of an asset to a department, the most important factor is to ensure individual accountability for the asset. Individual accountability means that each person who has access to or uses the asset is responsible for its protection and proper handling. Individual accountability also implies that each person who causes or contributes to a security breach or incident involving the asset can be identified and held liable. Individual accountability can be achieved by implementing security controls such as authentication, authorization, auditing, and logging.
The other options are not as important as ensuring individual accountability, as they do not directly address the security risks associated with the asset. The department should report to the business owner is a management issue, not a security issue. Ownership of the asset should be periodically reviewed is a good practice, but it does not prevent misuse or abuse of the asset. All members should be trained on their responsibilities is a preventive measure, but it does not guarantee compliance or enforcement of the responsibilities.
When implementing a data classification program, why is it important to avoid too much granularity?
The process will require too many resources
It will be difficult to apply to both hardware and software
It will be difficult to assign ownership to the data
The process will be perceived as having value
When implementing a data classification program, it is important to avoid too much granularity, because the process will require too many resources. Data classification is the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data classification helps to determine the appropriate security controls and handling procedures for the data. However, data classification is not a simple or straightforward process, as it involves many factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the standards. If the data classification program has too many levels or categories of data, it will increase the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data protection. Therefore, data classification should be done with a balance between granularity and simplicity, and follow the principle of proportionality, which means that the level of protection should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but rather the potential challenges or benefits of data classification. It will be difficult to apply to both hardware and software is a challenge of data classification, as it requires consistent and compatible methods and tools for labeling and protecting data across different types of media and devices. It will be difficult to assign ownership to the data is a challenge of data classification, as it requires clear and accountable roles and responsibilities for the creation, collection, processing, and disposal of data. The process will be perceived as having value is a benefit of data classification, as it demonstrates the commitment and awareness of the organization to protect its data assets and comply with its obligations.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency Identification (RFID) based access cards?
Personal Identity Verification (PIV)
Cardholder Unique Identifier (CHUID) authentication
Physical Access Control System (PACS) repeated attempt detection
Asymmetric Card Authentication Key (CAK) challenge-response
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control in preventing electronic cloning of RFID based access cards. RFID based access cards are contactless cards that use radio frequency identification (RFID) technology to communicate with a reader and grant access to a physical or logical resource. RFID based access cards are vulnerable to electronic cloning, which is the process of copying the data and identity of a legitimate card to a counterfeit card, and using it to impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-response is a cryptographic technique that prevents electronic cloning by using public key cryptography and digital signatures to verify the authenticity and integrity of the card and the reader. Asymmetric CAK challenge-response works as follows: the reader sends a fresh random challenge (nonce) to the card; the card signs the challenge with its private Card Authentication Key, which never leaves the card, and returns the signature together with its certificate; the reader then verifies the signature using the card’s public key from the certificate, proving that the card holds the genuine private key. A minimal sketch of this flow appears below.
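A sketch of the nonce-sign-verify flow in Python using the third-party cryptography package with ECDSA; the curve choice, variable names, and the use of a single key object for both the card and reader sides are assumptions made for brevity:

    import secrets
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    card_cak = ec.generate_private_key(ec.SECP256R1())  # never leaves the card

    # Reader: issue a fresh, unpredictable challenge for every transaction.
    challenge = secrets.token_bytes(16)

    # Card: sign the challenge with the private CAK.
    signature = card_cak.sign(challenge, ec.ECDSA(hashes.SHA256()))

    # Reader: verify with the card's public key (taken from its certificate).
    try:
        card_cak.public_key().verify(signature, challenge,
                                     ec.ECDSA(hashes.SHA256()))
        print("card holds the genuine private key")
    except InvalidSignature:
        print("possible clone: signature did not verify")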
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the private key of the original card, and a rogue reader cannot impersonate a legitimate reader without knowing its private key.
The other options are not as effective as asymmetric CAK challenge-response in preventing electronic cloning of RFID based access cards. Personal Identity Verification (PIV) is a standard for federal employees and contractors to use smart cards for physical and logical access, but it does not specify the cryptographic technique for RFID based access cards. Cardholder Unique Identifier (CHUID) authentication is a technique that uses a unique number and a digital certificate to identify the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity. Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the cloning of the card or the impersonation of the reader.
Which of the following is an initial consideration when developing an information security management system?
Identify the contractual security obligations that apply to the organizations
Understand the value of the information assets
Identify the level of residual risk that is tolerable to management
Identify relevant legislative and regulatory compliance requirements
When developing an information security management system (ISMS), an initial consideration is to understand the value of the information assets that the organization owns or processes. An information asset is any data, information, or knowledge that has value to the organization and supports its mission, objectives, and operations. Understanding the value of the information assets helps to determine the appropriate level of protection and investment for them, as well as the potential impact and consequences of losing, compromising, or disclosing them. Understanding the value of the information assets also helps to identify the stakeholders, owners, and custodians of the information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations when developing an ISMS. Identifying the contractual security obligations that apply to the organizations is a consideration that depends on the nature, scope, and context of the information assets, as well as the relationships and agreements with the external parties. Identifying the level of residual risk that is tolerable to management is a consideration that depends on the risk appetite and tolerance of the organization, as well as the risk assessment and analysis of the information assets. Identifying relevant legislative and regulatory compliance requirements is a consideration that depends on the legal and ethical obligations and expectations of the organization, as well as the jurisdiction and industry of the information assets.
At what level of the Open System Interconnection (OSI) model is data at rest on a Storage Area Network (SAN) located?
Link layer
Physical layer
Session layer
Application layer
Data at rest on a Storage Area Network (SAN) is located at the physical layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The physical layer is the lowest layer of the OSI model, and it is responsible for the transmission and reception of raw bits over a physical medium, such as cables, wires, or optical fibers. The physical layer defines the physical characteristics of the medium, such as voltage, frequency, modulation, connectors, etc. The physical layer also deals with the physical topology of the network, such as bus, ring, star, mesh, etc.
A Storage Area Network (SAN) is a dedicated network that provides access to consolidated and block-level data storage. A SAN consists of storage devices, such as disks, tapes, or arrays, that are connected to servers or clients via a network infrastructure, such as switches, routers, or hubs. A SAN allows multiple servers or clients to share the same storage devices, and it provides high performance, availability, scalability, and security for data storage. Data at rest on a SAN is located at the physical layer of the OSI model, because it is stored as raw bits on the physical medium of the storage devices, and it is accessed by the servers or clients through the physical medium of the network infrastructure.
Which of the following factors contributes to the weakness of Wired Equivalent Privacy (WEP) protocol?
WEP uses a small range Initialization Vector (IV)
WEP uses Message Digest 5 (MD5)
WEP uses Diffie-Hellman
WEP does not use any Initialization Vector (IV)
The use of a small range Initialization Vector (IV) is the factor that contributes to the weakness of the Wired Equivalent Privacy (WEP) protocol. WEP is a security protocol that provides encryption and authentication for wireless networks, such as Wi-Fi. WEP uses the RC4 stream cipher to encrypt the data packets, and the CRC-32 checksum to verify the data integrity. WEP also uses a shared secret key, which is concatenated with a 24-bit Initialization Vector (IV), to generate the keystream for the RC4 encryption. WEP has several weaknesses and vulnerabilities, such as: the small 24-bit IV range, which forces IV values, and therefore RC4 keystreams, to repeat on a busy network and lets an attacker recover the keystream and ultimately the key; weaknesses in the RC4 key scheduling algorithm that leak key information through the first bytes of the keystream; and the linear CRC-32 checksum, which cannot detect deliberate bit-flipping of encrypted frames. The arithmetic behind the IV weakness is sketched below.
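A quick calculation of how soon IV reuse becomes likely, applying the same birthday bound discussed earlier for hash collisions; the interpretation of the frame count is illustrative:

    import math

    iv_space = 2**24  # WEP: only 16,777,216 possible IVs
    # A repeated IV becomes more likely than not after about
    # sqrt(2 * N * ln 2) frames.
    frames_to_likely_reuse = math.sqrt(2 * iv_space * math.log(2))
    print(iv_space, round(frames_to_likely_reuse))  # 16777216 4823
    # A busy access point can send a few thousand frames in seconds,
    # after which keystreams begin to repeat.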
WEP has been deprecated and replaced by more secure protocols, such as Wi-Fi Protected Access (WPA) or Wi-Fi Protected Access II (WPA2), which use stronger encryption and authentication methods, such as the Temporal Key Integrity Protocol (TKIP), the Advanced Encryption Standard (AES), or the Extensible Authentication Protocol (EAP).
The other options are not factors that contribute to the weakness of WEP, but rather factors that are irrelevant or incorrect. WEP does not use Message Digest 5 (MD5), which is a hash function that produces a 128-bit output from a variable-length input. WEP does not use Diffie-Hellman, which is a method for generating a shared secret key between two parties. WEP does use an Initialization Vector (IV), which is a 24-bit value that is concatenated with the secret key.
What is the purpose of an Internet Protocol (IP) spoofing attack?
To send excessive amounts of data to a process, making it unpredictable
To intercept network traffic without authorization
To disguise the destination address from a target’s IP filtering devices
To convince a system that it is communicating with a known entity
The purpose of an Internet Protocol (IP) spoofing attack is to convince a system that it is communicating with a known entity. IP spoofing is a technique that involves creating and sending IP packets with a forged source IP address, which is usually the IP address of a trusted or authorized host. IP spoofing can be used for various malicious purposes, such as bypassing address-based authentication or filtering, launching Denial of Service (DoS) or Distributed Denial of Service (DDoS) attacks while hiding the true origin of the traffic, and hijacking or intercepting established sessions.
The purpose of IP spoofing is to convince a system that it is communicating with a known entity, because it allows the attacker to evade detection, avoid responsibility, and exploit trust relationships.
The other options are not the main purposes of IP spoofing, but rather the possible consequences or methods of IP spoofing. To send excessive amounts of data to a process, making it unpredictable is a possible consequence of IP spoofing, as it can cause a DoS or DDoS attack. To intercept network traffic without authorization is a possible method of IP spoofing, as it can be used to hijack or intercept a TCP session. To disguise the destination address from a target’s IP filtering devices is not a valid option, as IP spoofing involves forging the source address, not the destination address.
Which of the following operates at the Network Layer of the Open System Interconnection (OSI) model?
Packet filtering
Port services filtering
Content filtering
Application access control
Packet filtering operates at the network layer of the Open System Interconnection (OSI) model. The OSI model is a conceptual framework that describes how data is transmitted and processed across different layers of a network. The OSI model consists of seven layers: application, presentation, session, transport, network, data link, and physical. The network layer is the third layer from the bottom of the OSI model, and it is responsible for routing and forwarding data packets between different networks or subnets. The network layer uses logical addresses, such as IP addresses, to identify the source and destination of the data packets, and it uses protocols, such as IP, ICMP, or ARP, to perform the routing and forwarding functions.
Packet filtering is a technique that controls the access to a network or a host by inspecting the incoming and outgoing data packets and applying a set of rules or policies to allow or deny them. Packet filtering can be performed by devices, such as routers, firewalls, or proxies, that operate at the network layer of the OSI model. Packet filtering typically examines the network layer header of the data packets, such as the source and destination IP addresses, the protocol type, or the fragmentation flags, and compares them with the predefined rules or policies. Packet filtering can also examine the transport layer header of the data packets, such as the source and destination port numbers, the TCP flags, or the sequence numbers, and compare them with the rules or policies. Packet filtering can provide a basic level of security and performance for a network or a host, but it also has some limitations, such as the inability to inspect the payload or the content of the data packets, the vulnerability to spoofing or fragmentation attacks, or the complexity and maintenance of the rules or policies.
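A simplified sketch of this rule-matching logic (the rule format is hypothetical, not any particular firewall's syntax):

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import List, Optional

@dataclass
class Rule:
    action: str              # "allow" or "deny"
    src: str                 # source network in CIDR notation
    dst: str                 # destination network in CIDR notation
    protocol: str            # "tcp", "udp", or "any"
    dst_port: Optional[int]  # None matches any destination port

def matches(rule: Rule, pkt: dict) -> bool:
    return (ip_address(pkt["src"]) in ip_network(rule.src)
            and ip_address(pkt["dst"]) in ip_network(rule.dst)
            and rule.protocol in ("any", pkt["protocol"])
            and rule.dst_port in (None, pkt.get("dst_port")))

def filter_packet(rules: List[Rule], pkt: dict) -> str:
    for rule in rules:       # first match wins
        if matches(rule, pkt):
            return rule.action
    return "deny"            # implicit default-deny

rules = [Rule("allow", "10.0.0.0/8", "192.168.1.10/32", "tcp", 443)]
pkt = {"src": "10.1.2.3", "dst": "192.168.1.10", "protocol": "tcp", "dst_port": 443}
print(filter_packet(rules, pkt))  # allow
```

Note that the function only ever looks at header fields, which is precisely the limitation discussed above: the payload is never inspected.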
The other options are not techniques that operate at the network layer of the OSI model, but rather at other layers. Port services filtering is a technique that controls the access to a network or a host by inspecting the transport layer header of the data packets and applying a set of rules or policies to allow or deny them based on the port numbers or the services. Port services filtering operates at the transport layer of the OSI model, which is the fourth layer from the bottom. Content filtering is a technique that controls the access to a network or a host by inspecting the application layer payload or the content of the data packets and applying a set of rules or policies to allow or deny them based on the keywords, URLs, file types, or other criteria. Content filtering operates at the application layer of the OSI model, which is the seventh and the topmost layer. Application access control is a technique that controls the access to a network or a host by inspecting the application layer identity or the credentials of the users or the processes and applying a set of rules or policies to allow or deny them based on the roles, permissions, or other attributes. Application access control operates at the application layer of the OSI model, which is the seventh and the topmost layer.
Which of the following is the BEST network defense against unknown types of attacks or stealth attacks in progress?
Intrusion Prevention Systems (IPS)
Intrusion Detection Systems (IDS)
Stateful firewalls
Network Behavior Analysis (NBA) tools
Network Behavior Analysis (NBA) tools are the best network defense against unknown types of attacks or stealth attacks in progress. NBA tools are devices or software that monitor and analyze the network traffic and activities, and detect any anomalies or deviations from the normal or expected behavior. NBA tools use various techniques, such as statistical analysis, machine learning, artificial intelligence, or heuristics, to establish a baseline of the network behavior, and to identify any outliers or indicators of compromise (a toy sketch of this baseline approach follows the list below). NBA tools can provide several benefits, such as:
- Detecting zero-day, novel, or stealthy attacks that have no known signature
- Providing visibility into traffic patterns, flows, and host behavior across the network
- Flagging slow, low-volume, or insider activity that signature-based tools miss
- Complementing firewalls, IDS, and IPS with a behavior-based layer of detection
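A toy sketch of the baseline-and-deviation approach (real NBA products use far richer models):

```python
import statistics

def build_baseline(samples):
    # e.g., bytes-per-minute observed during a known-good training window
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag any observation more than `threshold` standard deviations from baseline
    return abs(value - mean) > threshold * stdev

mean, stdev = build_baseline([980, 1010, 995, 1005, 990, 1002])
print(is_anomalous(5400, mean, stdev))  # True: spike flagged with no signature needed
```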
The other options are not the best network defense against unknown types of attacks or stealth attacks in progress, but rather network defenses that have other limitations or drawbacks. Intrusion Prevention Systems (IPS) are devices or software that monitor and block the network traffic and activities that match the predefined signatures or rules of known attacks. IPS can provide a proactive and preventive layer of security, but they cannot detect or stop unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IPS. Intrusion Detection Systems (IDS) are devices or software that monitor and alert the network traffic and activities that match the predefined signatures or rules of known attacks. IDS can provide a reactive and detective layer of security, but they cannot detect or alert unknown types of attacks or stealth attacks that do not match any signatures or rules, or that can evade or disable the IDS. Stateful firewalls are devices or software that filter and control the network traffic and activities based on the state and context of the network sessions, such as the source and destination IP addresses, port numbers, protocol types, and sequence numbers. Stateful firewalls can provide a granular and dynamic layer of security, but they cannot filter or control unknown types of attacks or stealth attacks that use valid or spoofed network sessions, or that can exploit or bypass the firewall rules.
An input validation and exception handling vulnerability has been discovered on a critical web-based system. Which of the following is MOST suited to quickly implement a control?
Add a new rule to the application layer firewall
Block access to the service
Install an Intrusion Detection System (IDS)
Patch the application source code
Adding a new rule to the application layer firewall is the option most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system. An input validation and exception handling vulnerability is a type of vulnerability that occurs when a web-based system does not properly check, filter, or sanitize the input data that is received from the users or other sources, or does not properly handle the errors or exceptions that are generated by the system. An input validation and exception handling vulnerability can lead to various attacks, such as:
- SQL injection, where crafted input is executed as database commands
- Cross-site scripting (XSS), where crafted input is executed as script in a victim's browser
- Buffer overflow or command injection, where oversized or malformed input corrupts or controls the application
- Information disclosure, where unhandled exceptions reveal stack traces, paths, or configuration details
An application layer firewall is a device or software that operates at the application layer of the OSI model and inspects the application layer payload or the content of the data packets. An application layer firewall can provide various functions, such as:
- Deep inspection of application protocols and payloads, such as HTTP requests and responses
- Filtering or blocking of requests that match malicious patterns or violate protocol rules
- Logging and alerting on suspicious application-level activity
Adding a new rule to the application layer firewall is the option most suited to quickly implement a control for this vulnerability, because it can prevent or reduce the impact of the attacks by filtering or blocking the malicious or invalid input data that exploits the vulnerability. For example, a new rule can be added to the application layer firewall to (see the sketch below):
- Block requests whose fields contain SQL metacharacters or script tags
- Enforce length, type, and format limits on input fields
- Drop malformed requests that are known to trigger unhandled exceptions
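A minimal sketch of such a rule (the patterns are hypothetical; production firewall signatures are far more extensive and tuned to avoid false positives):

```python
import re

SQLI = re.compile(r"('|--|;|\b(UNION|SELECT|DROP|INSERT)\b)", re.IGNORECASE)
XSS = re.compile(r"<\s*script", re.IGNORECASE)
MAX_FIELD_LENGTH = 256

def block_request(field_value: str) -> bool:
    """Return True if the inbound field should be dropped by the firewall rule."""
    if len(field_value) > MAX_FIELD_LENGTH:  # oversized input
        return True
    return bool(SQLI.search(field_value) or XSS.search(field_value))

print(block_request("alice"))        # False
print(block_request("' OR 1=1 --"))  # True
```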
Adding a new rule to the application layer firewall can be done quickly and easily, without requiring any changes or patches to the web-based system, which can be time-consuming and risky, especially for a critical system. Adding a new rule to the application layer firewall can also be done remotely and centrally, without requiring any physical access or installation on the web-based system, which can be inconvenient and costly, especially for a distributed system.
The other options are not the most suited to quickly implement a control for an input validation and exception handling vulnerability on a critical web-based system, but rather options that have other limitations or drawbacks. Blocking access to the service is not the most suited option, because it can cause disruption and unavailability of the service, which can affect the business operations and customer satisfaction, especially for a critical system. Blocking access to the service can also be a temporary and incomplete solution, as it does not address the root cause of the vulnerability or prevent the attacks from occurring again. Installing an Intrusion Detection System (IDS) is not the most suited option, because IDS only monitors and detects the attacks, and does not prevent or respond to them. IDS can also generate false positives or false negatives, which can affect the accuracy and reliability of the detection. IDS can also be overwhelmed or evaded by the attacks, which can affect the effectiveness and efficiency of the detection. Patching the application source code is not the most suited option, because it can take a long time and require a lot of resources and testing to identify, fix, and deploy the patch, especially for a complex and critical system. Patching the application source code can also introduce new errors or vulnerabilities, which can affect the functionality and security of the system. Patching the application source code can also be difficult or impossible, if the system is proprietary or legacy, which can affect the feasibility and compatibility of the patch.
In a Transmission Control Protocol/Internet Protocol (TCP/IP) stack, which layer is responsible for negotiating and establishing a connection with another node?
Transport layer
Application layer
Network layer
Session layer
The transport layer of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack is responsible for negotiating and establishing a connection with another node. The TCP/IP stack is a simplified version of the OSI model, and it consists of four layers: application, transport, internet, and link. The transport layer is the third layer of the TCP/IP stack, and it is responsible for providing reliable and efficient end-to-end data transfer between two nodes on a network. The transport layer uses protocols, such as Transmission Control Protocol (TCP) or User Datagram Protocol (UDP), to segment, sequence, acknowledge, and reassemble the data packets, and to handle error detection and correction, flow control, and congestion control. The transport layer also provides connection-oriented or connectionless services, depending on the protocol used.
TCP is a connection-oriented protocol, which means that it establishes a logical connection between two nodes before exchanging data, and it maintains the connection until the data transfer is complete. TCP uses a three-way handshake to negotiate and establish a connection with another node. The three-way handshake works as follows (a minimal sketch follows these steps):
1. The initiating node sends a segment with the SYN flag set and an initial sequence number.
2. The responding node replies with a segment with the SYN and ACK flags set, acknowledging the initiator's sequence number and supplying its own.
3. The initiating node sends a final ACK segment, completing the handshake; the connection is now established and data transfer can begin.
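For illustration, the operating system performs this exchange transparently when an application opens a TCP connection:

```python
import socket

# connect() triggers the kernel's SYN -> SYN-ACK -> ACK exchange and
# returns only after the handshake completes (or raises on failure)
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    print("connection established with", sock.getpeername())
```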
UDP is a connectionless protocol, which means that it does not establish or maintain a connection between two nodes, but rather sends data packets independently and without any guarantee of delivery, order, or integrity. UDP does not use a handshake or any other mechanism to negotiate and establish a connection with another node, but rather relies on the application layer to handle any connection-related issues.
Which of the following is used by the Point-to-Point Protocol (PPP) to determine packet formats?
Layer 2 Tunneling Protocol (L2TP)
Link Control Protocol (LCP)
Challenge Handshake Authentication Protocol (CHAP)
Packet Transfer Protocol (PTP)
Link Control Protocol (LCP) is used by the Point-to-Point Protocol (PPP) to determine packet formats. PPP is a data link layer protocol that provides a standard method for transporting network layer packets over point-to-point links, such as serial lines, modems, or dial-up connections. PPP supports various network layer protocols, such as IP, IPX, or AppleTalk, and it can encapsulate them in a common frame format. PPP also provides features such as authentication, compression, error detection, and multilink aggregation. LCP is a subprotocol of PPP that is responsible for establishing, configuring, maintaining, and terminating the point-to-point connection. LCP negotiates and agrees on various options and parameters for the PPP link, such as the maximum transmission unit (MTU), the authentication method, the compression method, the error detection method, and the packet format. LCP uses a series of messages, such as configure-request, configure-ack, configure-nak, configure-reject, terminate-request, terminate-ack, code-reject, protocol-reject, echo-request, echo-reply, and discard-request, to communicate and exchange information between the PPP peers.
The other options are not used by PPP to determine packet formats, but rather for other purposes. Layer 2 Tunneling Protocol (L2TP) is a tunneling protocol that allows the creation of virtual private networks (VPNs) over public networks, such as the Internet. L2TP encapsulates PPP frames in IP datagrams and sends them across the tunnel between two L2TP endpoints. L2TP does not determine the packet format of PPP, but rather uses it as a payload. Challenge Handshake Authentication Protocol (CHAP) is an authentication protocol that is used by PPP to verify the identity of the remote peer before allowing access to the network. CHAP uses a challenge-response mechanism that involves a random number (nonce) and a hash function to prevent replay attacks. CHAP does not determine the packet format of PPP, but rather uses it as a transport. Packet Transfer Protocol (PTP) is not a valid option, as there is no such protocol with this name. There is a Point-to-Point Protocol over Ethernet (PPPoE), which is a protocol that encapsulates PPP frames in Ethernet frames and allows the use of PPP over Ethernet networks. PPPoE does not determine the packet format of PPP, but rather uses it as a payload.
An external attacker has compromised an organization’s network security perimeter and installed a sniffer onto an inside computer. Which of the following is the MOST effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information?
Implement packet filtering on the network firewalls
Install Host Based Intrusion Detection Systems (HIDS)
Require strong authentication for administrators
Implement logical network segmentation at the switches
Implementing logical network segmentation at the switches is the most effective layer of security the organization could have implemented to mitigate the attacker’s ability to gain further information. Logical network segmentation is the process of dividing a network into smaller subnetworks or segments based on criteria such as function, location, or security level. Logical network segmentation can be implemented at the switches, which are devices that operate at the data link layer of the OSI model and forward data packets based on the MAC addresses. Logical network segmentation can provide several benefits, such as:
- Limiting the broadcast domain, so a sniffer on one segment cannot capture traffic from other segments
- Containing a compromise and restricting the attacker's lateral movement
- Isolating sensitive or critical systems from general-purpose segments
- Improving network performance and simplifying the enforcement of per-segment security policies
Logical network segmentation can mitigate the attacker’s ability to gain further information by limiting the visibility and access of the sniffer to the segment where it is installed. A sniffer is a tool that captures and analyzes the data packets that are transmitted over a network. A sniffer can be used for legitimate purposes, such as troubleshooting, testing, or monitoring the network, or for malicious purposes, such as eavesdropping, stealing, or modifying the data. A sniffer can only capture the data packets that are within its broadcast domain, which is the set of devices that can communicate with each other without a router. By implementing logical network segmentation at the switches, the organization can create multiple broadcast domains and isolate the sensitive or critical data from the compromised segment. This way, the attacker can only see the data packets that belong to the same segment as the sniffer, and not the data packets that belong to other segments. This can prevent the attacker from gaining further information or accessing other resources on the network.
The other options are not the most effective layers of security the organization could have implemented to mitigate the attacker’s ability to gain further information, but rather layers that have other limitations or drawbacks. Implementing packet filtering on the network firewalls is not the most effective layer of security, because packet filtering only examines the network layer header of the data packets, such as the source and destination IP addresses, and does not inspect the payload or the content of the data. Packet filtering can also be bypassed by using techniques such as IP spoofing or fragmentation. Installing Host Based Intrusion Detection Systems (HIDS) is not the most effective layer of security, because HIDS only monitors and detects the activities and events on a single host, and does not prevent or respond to the attacks. HIDS can also be disabled or evaded by the attacker if the host is compromised. Requiring strong authentication for administrators is not the most effective layer of security, because authentication only verifies the identity of the users or processes, and does not protect the data in transit or at rest. Authentication can also be defeated by using techniques such as phishing, keylogging, or credential theft.
What is the second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management?
Implementation Phase
Initialization Phase
Cancellation Phase
Issued Phase
The second phase of Public Key Infrastructure (PKI) key/certificate life-cycle management is the initialization phase. PKI is a system that uses public key cryptography and digital certificates to provide authentication, confidentiality, integrity, and non-repudiation for electronic transactions. PKI key/certificate life-cycle management is the process of managing the creation, distribution, usage, storage, revocation, and expiration of keys and certificates in a PKI system. The key/certificate life-cycle management consists of six phases: pre-certification, initialization, certification, operational, suspension, and termination. The initialization phase is the second phase, where the key pair and the certificate request are generated by the end entity or the registration authority (RA). The initialization phase involves the following steps (a sketch of these steps follows the list):
- The end entity registers with the RA and has its identity information recorded
- The key pair is generated, either by the end entity or on its behalf
- A certificate signing request (CSR) binding the identity to the public key is created and signed with the private key to prove possession
- The CSR is submitted to the RA or the certification authority (CA) for the next (certification) phase
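A minimal sketch of the key-pair and certificate-request steps using the Python `cryptography` package (the package choice and the subject name are illustrative assumptions):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# The end entity generates its key pair
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# It builds a certificate signing request (CSR) binding its identity to the
# public key, signed with the new private key to prove possession
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "host.example.com")]))
    .sign(private_key, hashes.SHA256())
)

# The PEM-encoded CSR is what gets submitted to the RA/CA
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```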
The other options are not the second phase of PKI key/certificate life-cycle management, but rather other phases. The implementation phase is not a phase of PKI key/certificate life-cycle management, but rather a phase of PKI system deployment, where the PKI components and policies are installed and configured. The cancellation phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the termination phase, where the key pair and the certificate are permanently revoked and deleted. The issued phase is not a phase of PKI key/certificate life-cycle management, but rather a possible outcome of the certification phase, where the CA verifies and approves the certificate request and issues the certificate to the end entity or the RA.
Which of the following mobile code security models relies only on trust?
Code signing
Class authentication
Sandboxing
Type safety
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of software that can be transferred from one system to another and executed without installation or compilation. Mobile code can be used for various purposes, such as web applications, applets, scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code, unauthorized access, data leakage, etc. Mobile code security models are the techniques that are used to protect the systems and users from the threats of mobile code. Code signing is a mobile code security model that relies only on trust, which means that the security of the mobile code depends on the reputation and credibility of the code provider. Code signing works as follows:
- The code provider computes a hash of the code and signs the hash with its private key
- The signature and the provider's digital certificate are attached to the code
- The code consumer verifies the certificate chain and the signature with the provider's public key
- If the signature is valid and the consumer trusts the provider, the code is accepted and executed
Code signing relies only on trust because it does not enforce any security restrictions or controls on the mobile code, but rather leaves the decision to the code consumer. Code signing also does not guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of the code provider. Code signing can be effective if the code consumer knows and trusts the code provider, and if the code provider follows the security standards and best practices. However, code signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if the code provider is compromised or malicious.
The other options are not mobile code security models that rely only on trust, but rather on other techniques that limit or isolate the mobile code. Class authentication is a mobile code security model that verifies the permissions and capabilities of the mobile code based on its class or type, and allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security model that executes the mobile code in a separate and restricted environment, and prevents the mobile code from accessing or affecting the system resources or data. Type safety is a mobile code security model that checks the validity and consistency of the mobile code, and prevents the mobile code from performing illegal or unsafe operations.
Which security service is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key?
Confidentiality
Integrity
Identification
Availability
The security service that is served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key is identification. Identification is the process of verifying the identity of a person or entity that claims to be who or what it is. Identification can be achieved by using public key cryptography and digital signatures, which are based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. This process works as follows (a sketch of these steps follows the list):
- The sender computes a hash of the message and encrypts the hash with its private key, producing a digital signature
- The sender transmits the message together with the signature
- The receiver decrypts the signature with the sender's public key, recovering the hash
- The receiver computes its own hash of the message; if the two hashes match, the message came from the holder of the private key and was not altered
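A minimal sketch of this sign-and-verify flow with the Python `cryptography` package (real libraries expose this as dedicated sign/verify operations rather than literal encryption of the hash):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer approved"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Sender: sign the message hash with the private key
signature = private_key.sign(message, pss, hashes.SHA256())

# Receiver: verify with the sender's public key; raises InvalidSignature on mismatch
public_key.verify(signature, message, pss, hashes.SHA256())
print("signature valid: sender identified")
```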
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key serves identification because it ensures that only the sender can produce a valid ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s identity by using the sender’s public key. This process also provides non-repudiation, which means that the sender cannot deny sending the message or the receiver cannot deny receiving the message, as the ciphertext serves as a proof of origin and delivery.
The other options are not the security services that are served by the process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality is the process of ensuring that the message is only readable by the intended parties, and it is achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the receiver’s private key. Integrity is the process of ensuring that the message is not modified or corrupted during transmission, and it is achieved by using hash functions and message authentication codes. Availability is the process of ensuring that the message is accessible and usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery mechanisms.
Which technique can be used to make an encryption scheme more resistant to a known plaintext attack?
Hashing the data before encryption
Hashing the data after encryption
Compressing the data after encryption
Compressing the data before encryption
Compressing the data before encryption is a technique that can be used to make an encryption scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key, and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess the key. Compressing the data before encryption can reduce the redundancy and increase the entropy of the plaintext, making it harder for the attacker to find any correlations or similarities between the plaintext and the ciphertext. Compressing the data before encryption can also reduce the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-ciphertext pairs for a successful attack.
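A quick sketch of the effect (illustrative only): compressing redundant plaintext drives its per-byte entropy toward the 8-bit maximum, leaving fewer exploitable patterns:

```python
import math
import zlib
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

plaintext = b"attack at dawn " * 200  # highly redundant
compressed = zlib.compress(plaintext)

print(f"plaintext:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"compressed: {shannon_entropy(compressed):.2f} bits/byte")
```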
The other options are not techniques that can be used to make an encryption scheme more resistant to a known plaintext attack, but rather techniques that can introduce other security issues or inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way function that cannot be reversed, and the encrypted hash cannot be decrypted to recover the original data. Hashing the data after encryption is also not a useful technique, as hashing does not add any security to the encryption, and the hash can be easily computed by anyone who has access to the ciphertext. Compressing the data after encryption is not a recommended technique, as compression algorithms usually work better on uncompressed data, and compressing the ciphertext can introduce errors or vulnerabilities that can compromise the encryption.
Which component of the Security Content Automation Protocol (SCAP) specification contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments?
Common Vulnerabilities and Exposures (CVE)
Common Vulnerability Scoring System (CVSS)
Asset Reporting Format (ARF)
Open Vulnerability and Assessment Language (OVAL)
The component of the Security Content Automation Protocol (SCAP) specification that contains the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments is the Common Vulnerability Scoring System (CVSS). CVSS is a framework that provides a standardized and objective way to measure and communicate the characteristics and impacts of vulnerabilities. CVSS consists of three metric groups: base, temporal, and environmental. The base metric group captures the intrinsic and fundamental properties of a vulnerability that are constant over time and across user environments. The temporal metric group captures the characteristics of a vulnerability that change over time, such as the availability and effectiveness of exploits, patches, and workarounds. The environmental metric group captures the characteristics of a vulnerability that are relevant and unique to a user’s environment, such as the configuration and importance of the affected system. Each metric group has a set of metrics that are assigned values based on the vulnerability’s attributes. The values are then combined using a formula to produce a numerical score that ranges from 0 to 10, where 0 means no impact and 10 means critical impact. The score can also be translated into a qualitative rating that ranges from none to low, medium, high, and critical. CVSS provides a consistent and comprehensive way to estimate the severity of vulnerabilities and prioritize their remediation.
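For reference, the CVSS v3.x qualitative severity bands reduce to a simple mapping:

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_rating(9.8))  # Critical
```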
The other options are not components of the SCAP specification that contain the data required to estimate the severity of vulnerabilities identified by automated vulnerability assessments, but rather components that serve other purposes. Common Vulnerabilities and Exposures (CVE) is a component that provides a standardized and unique identifier and description for each publicly known vulnerability. CVE facilitates the sharing and comparison of vulnerability information across different sources and tools. Asset Reporting Format (ARF) is a component that provides a standardized and extensible format for expressing the information about the assets and their characteristics, such as configuration, vulnerabilities, and compliance. ARF enables the aggregation and correlation of asset information from different sources and tools. Open Vulnerability and Assessment Language (OVAL) is a component that provides a standardized and expressive language for defining and testing the state of a system for the presence of vulnerabilities, configuration issues, patches, and other aspects. OVAL enables the automation and interoperability of vulnerability assessment and management.
Who in the organization is accountable for classification of data information assets?
Data owner
Data architect
Chief Information Security Officer (CISO)
Chief Information Officer (CIO)
The person in the organization who is accountable for the classification of data information assets is the data owner. The data owner is the person or entity that has the authority and responsibility for the creation, collection, processing, and disposal of a set of data. The data owner is also responsible for defining the purpose, value, and classification of the data, as well as the security requirements and controls for the data. The data owner should be able to determine the impact of the data on the mission of the organization, which means assessing the potential consequences of losing, compromising, or disclosing the data. The impact of the data on the mission of the organization is one of the main criteria for data classification, which helps to establish the appropriate level of protection and handling for the data. The data owner should also ensure that the data is properly labeled, stored, accessed, shared, and destroyed according to the data classification policy and procedures.
The other options are not the persons in the organization who are accountable for the classification of data information assets, but rather persons who have other roles or functions related to data management. The data architect is the person or entity that designs and models the structure, format, and relationships of the data, as well as the data standards, specifications, and lifecycle. The data architect supports the data owner by providing technical guidance and expertise on the data architecture and quality. The Chief Information Security Officer (CISO) is the person or entity that oversees the security strategy, policies, and programs of the organization, as well as the security performance and incidents. The CISO supports the data owner by providing security leadership and governance, as well as ensuring the compliance and alignment of the data security with the organizational objectives and regulations. The Chief Information Officer (CIO) is the person or entity that manages the information technology (IT) resources and services of the organization, as well as the IT strategy and innovation. The CIO supports the data owner by providing IT management and direction, as well as ensuring the availability, reliability, and scalability of the IT infrastructure and applications.
The use of private and public encryption keys is fundamental in the implementation of which of the following?
Diffie-Hellman algorithm
Secure Sockets Layer (SSL)
Advanced Encryption Standard (AES)
Message Digest 5 (MD5)
The use of private and public encryption keys is fundamental in the implementation of Secure Sockets Layer (SSL). SSL is a protocol that provides secure communication over the Internet by using public key cryptography and digital certificates. SSL works as follows:
- The client contacts the server and the two negotiate the protocol version and cipher suite
- The server presents its digital certificate, which contains its public key
- The client validates the certificate against trusted certificate authorities
- The client and server use the server's key pair to establish a shared secret (for example, by encrypting a pre-master secret with the server's public key)
- Both sides derive symmetric session keys from the shared secret and use them to encrypt and authenticate the application data
The use of private and public encryption keys is fundamental in the implementation of SSL because it enables the authentication of the parties, the establishment of the shared secret key, and the protection of the data from eavesdropping, tampering, and replay attacks.
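A minimal sketch using Python's standard `ssl` module, which carries out the certificate-based handshake described above (modern versions negotiate TLS, the successor to the deprecated SSL):

```python
import socket
import ssl

context = ssl.create_default_context()  # loads the trusted CA certificates

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        # The handshake authenticated the server via its certificate and
        # derived symmetric session keys for the connection
        print(tls.version(), tls.cipher())
```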
The other options are not protocols or algorithms that use private and public encryption keys in their implementation. Diffie-Hellman algorithm is a method for generating a shared secret key between two parties, but it does not use private and public encryption keys, but rather public and private parameters. Advanced Encryption Standard (AES) is a symmetric encryption algorithm that uses the same key for encryption and decryption, but it does not use private and public encryption keys, but rather a single secret key. Message Digest 5 (MD5) is a hash function that produces a fixed-length output from a variable-length input, but it does not use private and public encryption keys, but rather a one-way mathematical function.
Which of the following is the MOST beneficial to review when performing an IT audit?
Audit policy
Security log
Security policies
Configuration settings
The most beneficial item to review when performing an IT audit is the security log. The security log is a record of the events and activities that occur on a system or network, such as logins, logouts, file accesses, policy changes, or security incidents. The security log can provide valuable information for the auditor to assess the security posture, performance, and compliance of the system or network, and to identify any anomalies, vulnerabilities, or breaches that need to be addressed. The other options are not as beneficial as the security log: the audit policy and the security policies describe intent rather than providing enough evidence for the audit, and the configuration settings alone do not reflect the actual activity on the system or network. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7, page 405; Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7, page 465.
Given the various means to protect physical and logical assets, match the access management area to the technology.
In the context of protecting physical and logical assets, the access management areas and the technologies can be matched as follows:
- Facilities are the physical buildings or locations that house the organization’s assets, such as servers, computers, or documents. Facilities can be protected by using windows that are resistant to breakage, intrusion, or eavesdropping, and that can prevent the leakage of light or sound from inside the facilities.
- Devices are the hardware or software components that enable the communication or processing of data, such as routers, switches, firewalls, or applications. Devices can be protected by using firewalls that can filter, block, or allow the network traffic based on the predefined rules or policies, and that can prevent unauthorized or malicious access or attacks to the devices or the data.
- Information Systems are the systems that store, process, or transmit data, such as databases, servers, or applications. Information Systems can be protected by using authentication mechanisms that can verify the identity or the credentials of the users or the devices that request access to the information systems, and that can prevent impersonation or spoofing of the users or the devices.
- Encryption is a technology that can be applied in various areas, such as Devices or Information Systems, to protect the confidentiality or the integrity of the data. Encryption can transform the data into an unreadable or unrecognizable form, using a secret key or an algorithm, and can prevent the interception, disclosure, or modification of the data by unauthorized parties.
What is a common challenge when implementing Security Assertion Markup Language (SAML) for identity integration between an on-premise environment and an external identity provider service?
Some users are not provisioned into the service.
SAML tokens are provided by the on-premise identity provider.
Single users cannot be revoked from the service.
SAML tokens contain user information.
A common challenge when implementing SAML for identity integration between an on-premise environment and an external identity provider service is that some users are not provisioned into the service. Provisioning is a process of creating, updating, or deleting the user accounts or profiles in a service or an application, based on the user identity or credentials. When implementing SAML for identity integration, the on-premise environment acts as the identity provider, which authenticates the user and issues the SAML assertion, and the external service acts as the service provider, which receives the SAML assertion and grants access to the user. However, if the user account or profile is not provisioned or synchronized in the external service, the user may not be able to access the service, even if they have a valid SAML assertion. Therefore, a common challenge when implementing SAML for identity integration is to ensure that the user provisioning is consistent and accurate between the on-premise environment and the external service. The other options are not common challenges when implementing SAML for identity integration, as they relate to the functionality, granularity, or content of the SAML protocol, not to the provisioning of the user accounts or profiles. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5, Identity and Access Management, page 693. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5, Identity and Access Management, page 709.
What is the MAIN feature that onion routing networks offer?
Non-repudiation
Traceability
Anonymity
Resilience
The main feature that onion routing networks offer is anonymity. Anonymity is the state of being unknown or unidentifiable by hiding or masking the identity or the location of the sender or the receiver of a communication. Onion routing is a technique that enables anonymous communication over a network, such as the internet, by encrypting and routing the messages through multiple layers of intermediate nodes, called onion routers. Onion routing can protect the privacy and security of the users or the data, and can prevent censorship, surveillance, or tracking by third parties. Non-repudiation, traceability, and resilience are not the main features that onion routing networks offer, as they are related to the proof, tracking, or recovery of the communication, not the anonymity of the communication. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4, Communication and Network Security, page 467. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4, Communication and Network Security, page 483.
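A toy sketch of the layering idea using symmetric keys (real onion routing such as Tor negotiates per-hop keys; this only illustrates the nested encryption):

```python
from cryptography.fernet import Fernet

# One key per relay node on the chosen circuit (hypothetical three-hop path)
nodes = [Fernet(Fernet.generate_key()) for _ in range(3)]

message = b"request for the destination"

# The sender wraps the message: innermost layer for the exit node, outermost for entry
wrapped = message
for node in reversed(nodes):
    wrapped = node.encrypt(wrapped)

# Each relay peels exactly one layer; no single node sees both source and content
for node in nodes:
    wrapped = node.decrypt(wrapped)

assert wrapped == message
```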
The security architect has been mandated to assess the security of various brands of mobile devices. At what phase of the product lifecycle would this be MOST likely to occur?
Disposal
Implementation
Development
Operations and maintenance
The product lifecycle consists of four phases: development, implementation, operations and maintenance, and disposal. The security architect has been mandated to assess the security of various brands of mobile devices, which are products that have already been developed and are ready to be deployed. Therefore, the most likely phase of the product lifecycle for this task is the implementation phase, where the products are installed, configured, tested, and integrated into the existing environment. The security architect should evaluate the security features, controls, and risks of each brand of mobile device and compare them with the security requirements and standards of the organization. The security architect should also consider the usability, performance, and compatibility of the mobile devices with the existing infrastructure and applications. References: CISSP CBK Reference, 5th Edition, Chapter 3, page 139; CISSP All-in-One Exam Guide, 8th Edition, Chapter 3, page 107
Which of the following is the MOST effective strategy to prevent an attacker from disabling a network?
Test business continuity and disaster recovery (DR) plans.
Design networks with the ability to adapt, reconfigure, and fail over.
Implement network segmentation to achieve robustness.
Follow security guidelines to prevent unauthorized network access.
The most effective strategy to prevent an attacker from disabling a network is to design networks with the ability to adapt, reconfigure, and fail over. A network that can adapt, reconfigure, and fail over is a network that can dynamically adjust its topology, configuration, and routing, and switch to alternative or backup components, in response to any changes, disruptions, or attacks in the network environment. A network that can adapt, reconfigure, and fail over can prevent an attacker from disabling a network, as it can maintain the availability, resilience, and performance of the network, and mitigate the impact or damage of the attack12. References: CISSP CBK, Fifth Edition, Chapter 4, page 372; CISSP Practice Exam – FREE 20 Questions and Answers, Question 17.
At the destination host, which of the following OSI model layers will discard a segment with a bad checksum in the UDP header?
Network
Data link
Transport
Session
The transport layer is the OSI model layer that will discard a segment with a bad checksum in the UDP header. The transport layer is responsible for providing end-to-end data transmission and reliability between the source and destination hosts. The transport layer uses protocols such as the Transmission Control Protocol (TCP) or the User Datagram Protocol (UDP) to segment, encapsulate, and deliver the data. The transport layer also performs error detection using checksums, which are values calculated from the data and added to the header of each segment. The checksums are verified at the destination host to ensure the integrity of the data. If the checksum in the UDP header does not match the expected value, the transport layer will discard the segment as corrupted. The other options are not OSI model layers that will discard a segment with a bad checksum in the UDP header, as they either do not use this checksum, do not operate on segments, or do not handle UDP. References: CISSP Exam Outline, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.1.1 OSI and TCP/IP models.
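For illustration, the 16-bit one's-complement checksum that UDP uses (RFC 768) can be sketched as follows; the real UDP checksum also covers a pseudo-header of IP addresses, which is omitted in this simplified version:

```python
def internet_checksum(data: bytes) -> int:
    """Simplified 16-bit one's-complement checksum (pseudo-header omitted)."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

segment = b"\x04\xd2\x00\x35\x00\x0c\x00\x00" + b"payload"
print(hex(internet_checksum(segment)))
```

The receiver recomputes this value; a mismatch means the segment is discarded at the transport layer, exactly as described above.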
Which evidence collecting technique would be utilized when it is believed an attacker is employing a rootkit and a quick analysis is needed?
Memory collection
Forensic disk imaging
Malware analysis
Live response
Live response is an evidence collecting technique that involves analyzing a system while it is still running, without shutting it down or altering it. Live response can be useful when it is believed that an attacker is employing a rootkit and a quick analysis is needed. A rootkit is a type of malicious software that hides itself and other malware from detection and removal by modifying the system’s core components, such as the kernel, drivers, or libraries. A rootkit may also erase or alter the evidence of its presence or activities on the system, such as log files, registry entries, or processes. Therefore, live response can help capture the volatile data that may be lost or changed if the system is powered off or rebooted, such as memory contents, network connections, or running processes. Live response can also help identify and isolate the rootkit before it causes more damage or spreads to other systems. References: CISSP All-in-One Exam Guide, Chapter 10: Legal, Regulations, Investigations, and Compliance, Section: Forensics, pp. 1328-1329.
Which of the following is a process in the access provisioning lifecycle that will MOST likely identify access aggregation issues?
Test
Assessment
Review
Peer review
Review is the process in the access provisioning lifecycle that will most likely identify access aggregation issues. The access provisioning lifecycle is the set of activities and stages that govern the creation, modification, and deletion of user accounts and access privileges in an organization. The access provisioning lifecycle consists of six phases: request, approval, provision, test, review, and audit. Review is the process of verifying and validating that the user accounts and access privileges are correct, appropriate, and compliant with the organization’s policies and standards. Review can help identify access aggregation issues, which are the accumulation of excessive or unnecessary access privileges by a user or an account over time, due to changes in roles, responsibilities, or assignments. Access aggregation issues can pose a security risk, as they may violate the principle of least privilege and increase the attack surface or the potential for misuse. Review can help prevent or resolve access aggregation issues by ensuring that the user accounts and access privileges are updated and aligned with the current needs and duties of the user or the account. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 206. Free daily CISSP practice questions, Question 3.
Building blocks for software-defined networks (SDN) require which of the following?
The SDN is mostly composed of virtual machines (VM).
The SDN is composed entirely of client-server pairs.
Virtual memory is used in preference to random-access memory (RAM).
Random-access memory (RAM) is used in preference to virtual memory.
The required building block for software-defined networks (SDN) is that the SDN is composed entirely of client-server pairs. SDN is a network architecture that decouples the network control plane from the data plane, and that enables the network to be programmatically configured and managed by a centralized software controller. The control plane is the part of the network that makes the decisions about how to route and forward the network traffic, and that communicates with the network devices, such as the switches and routers. The data plane is the part of the network that carries the network traffic, and that executes the instructions from the control plane, such as the forwarding tables and rules. The client-server pair is the basic unit of the SDN, and it consists of a client device that requests a network service or resource, and a server device that provides the network service or resource. The client-server pair communicates with each other through the data plane, and with the software controller through the control plane. The software controller acts as the intermediary between the client-server pairs, and it dynamically configures and optimizes the network according to the policies and requirements of the client-server pairs. The other options are not required building blocks for SDN, as they either do not relate to the SDN architecture, or do not enable the network to be programmatically configured and managed by a centralized software controller. References: CISSP Exam Outline, Domain 4. Communication and Network Security, 4.1 Implement secure design principles in network architectures, 4.1.1.2 Software-defined networks.
Physical assets defined in an organization’s Business Impact Analysis (BIA) could include which of the following?
Personal belongings of organizational staff members
Supplies kept off-site at a remote facility
Cloud-based applications
Disaster Recovery (DR) line-item revenues
Supplies kept off-site at a remote facility are physical assets that could be defined in an organization’s Business Impact Analysis (BIA). A BIA is a process that involves identifying and evaluating the potential impacts of various disruptions or disasters on the organization’s critical business functions and processes, and determining the recovery priorities and objectives for the organization. A BIA can help the organization plan and prepare for the continuity and the resilience of its business operations in the event of a crisis. A physical asset is a tangible and valuable resource that is owned or controlled by the organization, and that supports its business activities and objectives. A physical asset could be hardware, software, a network, data, a facility, equipment, material, or personnel. Supplies kept off-site at a remote facility are physical assets that could be defined in a BIA, as they are resources that are essential for the organization’s business operations, and that could be affected by a disruption or a disaster. For example, the organization may need to access or use the supplies to resume or restore its business functions and processes, or to mitigate or recover from the impacts of the crisis. Therefore, the organization should include the supplies kept off-site at a remote facility in its BIA, and assess the potential impacts, risks, and dependencies of these assets on its business continuity and recovery. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 387. CISSP Practice Exam – FREE 20 Questions and Answers, Question 16.
Which of the following was developed to support multiple protocols as well as provide login, password, and error correction capabilities?
Challenge Handshake Authentication Protocol (CHAP)
Point-to-Point Protocol (PPP)
Password Authentication Protocol (PAP)
Post Office Protocol (POP)
Point-to-Point Protocol (PPP) is the protocol that was developed to support multiple protocols as well as provide login, password, and error correction capabilities. PPP is a data link layer protocol that is used to establish a direct connection between two nodes over a serial link, such as a phone line, cable, or fiber. PPP can support multiple network layer protocols, such as IP, IPX, or AppleTalk, by using the Network Control Protocol (NCP) for each protocol. PPP can also provide authentication, encryption, and compression features, by using the Link Control Protocol (LCP) and its extensions, such as Password Authentication Protocol (PAP), Challenge Handshake Authentication Protocol (CHAP), or Microsoft Challenge Handshake Authentication Protocol (MS-CHAP). PPP can also detect and correct errors on the link, by using the Frame Check Sequence (FCS) field in the PPP frame. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 177; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 251]
Which access control method is based on users issuing access requests on system resources, features assigned to those resources, the operational or situational context, and a set of policies specified in terms of those features and context?
Mandatory Access Control (MAC)
Role Based Access Control (RBAC)
Discretionary Access Control (DAC)
Attribute Based Access Control (ABAC)
Attribute Based Access Control (ABAC) is an access control method that is based on users issuing access requests on system resources, features assigned to those resources, the operational or situational context, and a set of policies specified in terms of those features and context. ABAC uses attributes, which are characteristics or properties of users, resources, actions, or environments, to define access rules and enforce access decisions. ABAC allows for fine-grained, dynamic, and flexible access control that can accommodate complex and changing scenarios and requirements. Mandatory Access Control (MAC) is an access control method that is based on security labels assigned to users and resources, and a set of rules that determine the access permissions based on the comparison of those labels. MAC is rigid, static, and centralized, and it enforces a strict need-to-know policy. Role Based Access Control (RBAC) is an access control method that is based on roles assigned to users and permissions assigned to roles, and a set of rules that determine the access permissions based on the user’s role membership. RBAC is simple, scalable, and decentralized, and it enforces the principle of least privilege. Discretionary Access Control (DAC) is an access control method that is based on the identity of users and the ownership of resources, and a set of rules that determine the access permissions based on the user’s identity or the owner’s discretion. DAC is flexible, user-controlled, and individualized, but it can also be inconsistent, insecure, and difficult to manage. References: CISSP CBK Reference, 5th Edition, Chapter 5, page 269; CISSP All-in-One Exam Guide, 8th Edition, Chapter 5, page 241
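A minimal sketch of an ABAC decision (the policy, attribute names, and values are all hypothetical):

```python
from datetime import datetime, time

def abac_permit(subject: dict, resource: dict, action: str, context: dict) -> bool:
    """Hypothetical policy: clinicians may read records in their own
    department, but only during business hours."""
    return (subject["role"] == "clinician"
            and action == "read"
            and resource["department"] == subject["department"]
            and time(8, 0) <= context["time"].time() <= time(18, 0))

decision = abac_permit(
    subject={"role": "clinician", "department": "cardiology"},
    resource={"department": "cardiology", "type": "patient_record"},
    action="read",
    context={"time": datetime(2024, 5, 1, 10, 30)},
)
print("Permit" if decision else "Deny")
```

Note how the decision combines subject attributes, resource features, the requested action, and the situational context, which is exactly the definition in the question.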
A large human resources organization wants to integrate their identity management with a trusted partner organization. The human resources organization wants to maintain the creation and management of the identities and may want to share with other partners in the future. Which of the following options BEST serves their needs?
Federated identity
Cloud Active Directory (AD)
Security Assertion Markup Language (SAML)
Single sign-on (SSO)
Federated identity is a mechanism that allows users to use a single identity across multiple systems or organizations, without requiring the creation or management of separate accounts for each system or organization. Federated identity relies on trust relationships between the identity providers (IdPs) and the service providers (SPs) that participate in the federation. The IdPs are responsible for authenticating the users and issuing security tokens that contain identity attributes or claims. The SPs are responsible for validating the security tokens and granting access to the users based on the identity attributes or claims. Federated identity enables users to have a seamless and consistent user experience, while reducing the administrative overhead and security risks associated with multiple accounts. Federated identity also supports the principle of data minimization, as the IdPs only share the necessary identity attributes or claims with the SPs, and the SPs do not store any user identity information. Federated identity is often implemented using standards such as Security Assertion Markup Language (SAML), OpenID Connect, or OAuth. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 295. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 5: Identity and Access Management (IAM), page 609.
Using Address Space Layout Randomization (ASLR) reduces the potential for which of the following attacks?
SQL injection (SQLi)
Man-in-the-middle (MITM)
Cross-Site Scripting (XSS)
Heap overflow
Address Space Layout Randomization (ASLR) is a security technique that randomizes the memory locations of the executable code, data, and libraries of a software application or system, making it harder for attackers to predict or manipulate the memory addresses of the target. ASLR reduces the potential for heap overflow attacks, which are a type of buffer overflow attack that exploit the memory allocation and deallocation functions of the heap, which is a dynamic memory area where variables and objects are stored during the execution of a program. Heap overflow attacks can result in arbitrary code execution, denial of service, or privilege escalation. ASLR makes heap overflow attacks more difficult by changing the base address of the heap each time the program runs, making it less likely for the attacker to find or overwrite the memory locations of the heap variables or objects. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 21: Software Development Security, pp. 2071-2072; [Official (ISC)2 CISSP CBK Reference, Fifth Edition], Domain 8: Software Development Security, pp. 1439-1440.
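One quick way to observe ASLR in practice (illustrative; run the script twice on a system with ASLR enabled and compare the output):

```python
import ctypes

# Allocate a native buffer and print its address; with ASLR enabled,
# the address changes from run to run
buf = ctypes.create_string_buffer(64)
print(hex(ctypes.addressof(buf)))
```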
An organization contracts with a consultant to perform a System and Organization Controls (SOC) 2 audit on their internal security controls. An auditor documents a finding related to an Application Programming Interface (API) performing an action that is not aligned with the scope or objective of the system. Which trust service principle would be MOST applicable in this situation?
Processing Integrity
Availability
Confidentiality
Security
Processing integrity is one of the five trust service principles that are used to evaluate the security controls of a service organization in a SOC 2 audit. Processing integrity refers to the completeness, validity, accuracy, timeliness, and authorization of the system’s processing of data and transactions. An API that performs an action that is not aligned with the scope or objective of the system violates the processing integrity principle, because it may compromise the quality, reliability, and consistency of the system’s output. The other trust service principles are availability, confidentiality, security, and privacy. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1, page 51; 2024 Pass4itsure CISSP Dumps, Question 9.
In fault-tolerant systems, what do rollback capabilities permit?
Restoring the system to a previous functional state
Identifying the error that caused the problem
Allowing the system to run in a reduced manner
Isolating the error that caused the problem
Fault-tolerant systems are systems that can continue to operate despite the occurrence of faults, errors, or failures in some of their components. Fault-tolerant systems use redundancy, diversity, and error detection and correction mechanisms to achieve high availability, reliability, and resilience. Rollback capabilities are one of the mechanisms that enable fault tolerance, which allow the system to restore itself to a previous functional state before the fault occurred. Rollback capabilities can be implemented using checkpoints, snapshots, backups, or logs that record the state of the system at regular intervals or before critical operations. If a fault is detected, the system can revert to the most recent or closest checkpoint, snapshot, backup, or log that represents a valid and consistent state of the system, and resume its normal operation from there. References: What Is Fault Tolerance? | Creating a Fault-tolerant System, What is Fault Tolerance? | Creating a Fault Tolerant System, Fault Tolerance, RAID - System Resilience and Fault Tolerance, System Resilience, High Availability, QoS, and Fault Tolerance
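A minimal sketch of checkpoint-based rollback follows; the in-memory checkpoint and the simulated fault condition are illustrative, and a real fault-tolerant system would persist checkpoints, snapshots, or logs to durable storage.

import copy

class CheckpointedSystem:
    """Toy fault-tolerant state machine with rollback to the last checkpoint."""

    def __init__(self):
        self.state = {"counter": 0}
        self._checkpoint = copy.deepcopy(self.state)

    def checkpoint(self):
        # Record a known-good state before a critical operation.
        self._checkpoint = copy.deepcopy(self.state)

    def rollback(self):
        # Restore the most recent valid state after a detected fault.
        self.state = copy.deepcopy(self._checkpoint)

    def risky_update(self, value):
        self.checkpoint()
        try:
            self.state["counter"] += value
            if self.state["counter"] < 0:          # simulated fault condition
                raise RuntimeError("invalid state detected")
        except RuntimeError:
            self.rollback()                        # revert and keep running

system = CheckpointedSystem()
system.risky_update(5)    # succeeds
system.risky_update(-99)  # fault detected, rolled back
print(system.state)       # {'counter': 5}: restored to the previous functional state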
Change management policies and procedures belong to which of the following types of controls?
Directive
Detective
Corrective
Preventative
Change management policies and procedures belong to the type of controls that are directive. Controls are the measures and the mechanisms that are used to protect and safeguard the organization’s information systems and assets, and to ensure that they comply with the organization’s security and business objectives. Controls can be classified into different types, based on their purpose, function, or nature, such as preventive, detective, corrective, deterrent, compensating, or recovery controls. Directive controls are the type of controls that guide and regulate the actions and the behaviors of the organization’s staff, processes, and systems, and that ensure that they follow the organization’s policies, standards, and regulations. Directive controls can include policies, procedures, guidelines, standards, rules, regulations, laws, or contracts. Change management policies and procedures belong to the type of controls that are directive, as they provide the instructions and the requirements for managing and controlling any changes to the organization’s information systems and assets, and for ensuring that the changes align with the organization’s security and business requirements. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 18. Free daily CISSP practice questions, Question 4.
After the INITIAL input of a user identification (ID) and password, what is an authentication system that prompts the user for a different response each time the user logs on?
Personal Identification Number (PIN)
Secondary password
Challenge response
Voice authentication
A challenge response is an authentication system that prompts the user for a different response each time the user logs on, based on a challenge that is generated by the system or the user. The challenge can be a random number, a question, a passphrase, or a biometric feature. The response can be a one-time password, a secret answer, a hash value, or a biometric verification. A challenge response system provides a higher level of security than a static password, as it prevents replay attacks and password guessing. A personal identification number (PIN) is a type of password that consists of a numeric code. A secondary password is another type of password that is used in addition to the primary password. A voice authentication is a type of biometric authentication that uses the voice characteristics of a user.
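For illustration, a minimal challenge-response sketch using an HMAC over a fresh random challenge; the shared secret and the message framing are assumptions of this sketch rather than any particular product's protocol. Because each login uses a new challenge, a captured response is useless for replay.

import hashlib
import hmac
import secrets

SHARED_SECRET = b"per-user secret provisioned at enrollment"  # illustrative

def make_challenge() -> bytes:
    # The server issues a fresh random challenge for every login attempt,
    # so a captured response cannot be replayed later.
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    # The client proves knowledge of the secret without transmitting it.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = respond(challenge, SHARED_SECRET)
print(verify(challenge, response, SHARED_SECRET))         # True
print(verify(make_challenge(), response, SHARED_SECRET))  # False: stale response rejected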
Which of the following is the FIRST step an organization's security professional performs when defining a cyber-security program based upon industry standards?
Map the organization's current security practices to industry standards and frameworks.
Define the organization's objectives regarding security and risk mitigation.
Select from a choice of security best practices.
Review the past security assessments.
The first step an organization’s security professional performs when defining a cybersecurity program based on industry standards is to define the organization’s objectives regarding security and risk mitigation. A cybersecurity program is a set of policies, procedures, and practices that aim to protect the organization’s information assets and systems from cyber threats and attacks. A cybersecurity program should be based on industry standards and frameworks, such as ISO/IEC 27001, NIST SP 800-53, or COBIT, that provide best practices and guidelines for establishing and maintaining an effective and efficient cybersecurity program. The first step in defining a cybersecurity program based on industry standards is to define the organization’s objectives regarding security and risk mitigation, which are the goals or outcomes that the organization wants to achieve or accomplish through the cybersecurity program. The objectives should be aligned with the organization’s mission, vision, values, and strategy, and they should reflect the organization’s risk appetite, risk tolerance, and risk management approach. The objectives should also be specific, measurable, achievable, relevant, and time-bound (SMART), and they should be communicated and agreed upon by the relevant stakeholders, such as the management, the staff, or the customers. The other options are not the first step in defining a cybersecurity program based on industry standards. Mapping the organization’s current security practices to industry standards and frameworks is a subsequent step in defining a cybersecurity program based on industry standards, and it involves assessing and evaluating the organization’s existing security posture and capabilities, and identifying the gaps, issues, or improvements that need to be addressed or implemented. Selecting from a choice of security best practices is not a specific step in defining a cybersecurity program based on industry standards, although it may be part of the process of designing and implementing the cybersecurity program, based on the organization’s objectives and the industry standards and frameworks. Reviewing the past security assessments is not a specific step in defining a cybersecurity program based on industry standards, although it may be part of the process of monitoring and improving the cybersecurity program, based on the organization’s objectives and the industry standards and frameworks. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 23. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter
Which of the following encryption technologies has the ability to function as a stream cipher?
Cipher Feedback (CFB)
Feistel cipher
Cipher Block Chaining (CBC) with error propagation
Electronic Code Book (ECB)
Cipher Feedback (CFB) is an encryption technology that has the ability to function as a stream cipher. A stream cipher is a type of symmetric encryption that encrypts or decrypts one bit or byte of plaintext or ciphertext at a time, using a keystream that is derived from a secret key and an initialization vector. CFB is a mode of operation that converts a block cipher, such as AES or DES, into a stream cipher, by feeding the output of the block cipher back into its input, and XORing it with the plaintext or ciphertext. CFB can provide the advantages of both block ciphers and stream ciphers, such as high security, low error propagation, and high efficiency. Feistel cipher, Cipher Block Chaining (CBC) with error propagation, and Electronic Code Book (ECB) are not encryption technologies that have the ability to function as stream ciphers. These are types of block ciphers or modes of operation that encrypt or decrypt a fixed-length block of plaintext or ciphertext at a time, using a secret key and a chaining mechanism. Block ciphers are different from stream ciphers in terms of their design, operation, and performance. References: Official (ISC)2 CISSP CBK Reference, Fifth Edition, Domain 3, Security Architecture and Engineering, page 254. CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3, Security Architecture and Engineering, page 217.
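For illustration, a minimal sketch using a recent version of the third-party cryptography package (assumed installed): AES in CFB mode encrypts a message whose length is not a multiple of the 16-byte block size, and the ciphertext comes out exactly as long as the plaintext with no padding, which is the stream-cipher behavior described above.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # AES-256 key
iv = os.urandom(16)    # per-message initialization vector

plaintext = b"odd-length message!"   # 19 bytes, not a multiple of the block size

encryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
ciphertext = encryptor.update(plaintext) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()

print(len(plaintext), len(ciphertext))  # equal lengths: no padding, stream behavior
print(recovered == plaintext)           # True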
A company hired an external vendor to perform a penetration test of a new payroll system. The company’s internal test team had already performed an in-depth application
and security test of the system and determined that it met security requirements. However, the external vendor uncovered significant security weaknesses where sensitive
personal data was being sent unencrypted to the tax processing systems. What is the MOST likely cause of the security issues?
Failure to perform interface testing
Failure to perform negative testing
Inadequate performance testing
Inadequate application level testing
The most likely cause of the security issues is the failure to perform interface testing. Interface testing is a type of testing that verifies the functionality and security of the interactions and communications between different components or systems. Interface testing can detect and prevent errors, defects, or vulnerabilities that may occur due to the integration or interoperability of the components or systems. In this scenario, the company’s internal test team had performed an in-depth application and security test of the system, but they had failed to test the interface between the payroll system and the tax processing systems. This resulted in the external vendor uncovering significant security weaknesses where sensitive personal data was being sent unencrypted to the tax processing systems. Failure to perform negative testing, inadequate performance testing, or inadequate application level testing are not the most likely causes of the security issues, as they are not directly related to the interface between the payroll system and the tax processing systems. Negative testing is a type of testing that verifies the behavior and security of the system when invalid or unexpected inputs or conditions are given. Performance testing is a type of testing that measures the speed, scalability, reliability, or availability of the system under different workloads or scenarios. Application level testing is a type of testing that verifies the functionality and security of the application as a whole, rather than its individual components or systems. References: Official (ISC)2 Guide to the CISSP CBK, Fifth Edition, Chapter 21: Software Development Security, page 2009.
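For illustration, a hedged sketch of what such an interface test might look like; the function, endpoint, and field names are hypothetical stand-ins for the payroll and tax systems in the scenario. The point is that the test exercises the seam between the two systems, which in-depth testing of each system on its own can miss.

import json

def build_tax_submission(employee: dict) -> dict:
    """Hypothetical interface function under test: returns the endpoint and
    payload that the payroll system would send to the tax processor."""
    return {
        "endpoint": "https://tax.example.com/submit",  # assumed endpoint
        "payload": json.dumps({
            "ssn_token": "tok_" + str(hash(employee["ssn"]) % 10**6),
            "wages": employee["wages"],
        }),
    }

def test_tax_interface_protects_sensitive_data():
    employee = {"ssn": "123-45-6789", "wages": 52000}
    msg = build_tax_submission(employee)
    # Interface testing checks what actually crosses the system boundary:
    assert msg["endpoint"].startswith("https://"), "transport must use TLS"
    assert employee["ssn"] not in msg["payload"], "raw SSN must never cross the interface"

test_tax_interface_protects_sensitive_data()
print("interface checks passed")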
In a dispersed network that lacks central control, which of the following is the PRIMARY course of action to mitigate exposure?
Implement management policies, audit control, and data backups
Implement security policies and standards, access controls, and access limitations
Implement security policies and standards, data backups, and audit controls
Implement remote access policies, shared workstations, and log management
In a dispersed network that lacks central control, the primary course of action to mitigate exposure is to implement security policies and standards, access controls, and access limitations. A dispersed network is a network that consists of multiple nodes or devices that are geographically distributed and connected by various communication channels, such as the internet, satellite, or cellular networks. A dispersed network may lack central control due to the diversity of the nodes, the autonomy of the users, or the absence of a central authority. This can pose security challenges, such as inconsistent configurations, unauthorized access, or data leakage. To mitigate these risks, the organization should implement security policies and standards that define the security objectives, requirements, and responsibilities for the dispersed network. The organization should also implement access controls and access limitations that restrict who, what, when, where, and how the dispersed network can be accessed and used. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 156; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 230]
Which of the following is the BEST definition of Cross-Site Request Forgery (CSRF)?
An attack which forces an end user to execute unwanted actions on a web application in which they are currently authenticated
An attack that injects a script into a web page to execute a privileged command
An attack that makes an illegal request across security zones and thereby forges itself into the security database of the system
An attack that forges a false Structured Query Language (SQL) command across systems
An attack which forces an end user to execute unwanted actions on a web application in which they are currently authenticated is the best definition of Cross-Site Request Forgery (CSRF). CSRF is a type of web-based attack that exploits the trust relationship between a web browser and a web server. CSRF occurs when an attacker tricks or coerces an end user to visit a malicious website or click on a malicious link, which then sends a forged request to a web application that the end user is already logged in to. The web application, assuming that the request is legitimate and authorized, executes the request and performs the action that the attacker intended, such as transferring funds, changing passwords, or deleting data. The end user may not be aware of the CSRF attack, as it happens in the background and does not require the user’s input or consent. CSRF can compromise the security and privacy of the end user and the web application, and cause financial or reputational damage to both parties. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Security Assessment and Testing, page 281. CISSP Practice Exam | Boson, Question 11.
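For illustration, a minimal, framework-agnostic sketch of the synchronizer-token defense against CSRF; the session handling is reduced to a dictionary and the handler name is illustrative. The defense works because a forged cross-site request cannot read the victim's session and therefore cannot supply the expected token.

import hmac
import secrets

session = {}  # stand-in for real per-user server-side session storage

def issue_csrf_token() -> str:
    # Stored server-side and embedded in the legitimate form as a hidden field.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def handle_transfer(form: dict) -> str:
    submitted = form.get("csrf_token", "")
    if not hmac.compare_digest(submitted, session.get("csrf_token", "")):
        return "403 rejected: missing or wrong CSRF token"
    return "200 transfer executed"

token = issue_csrf_token()
print(handle_transfer({"csrf_token": token, "amount": 100}))  # legitimate request
print(handle_transfer({"amount": 100}))                       # forged request fails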
The security team has been tasked with performing an interface test against a frontend external facing application and needs to verify that all input fields protect against
invalid input. Which of the following BEST assists this process?
Application fuzzing
Instruction set simulation
Regression testing
Sanity testing
The technique that BEST assists this process is application fuzzing. Application fuzzing is a testing technique that generates and submits random, malformed, or unexpected input to an application in order to observe its behavior and detect errors, crashes, or vulnerabilities. Fuzzing can verify that all input fields protect against invalid input by feeding each field varied types and formats of data, such as overly long strings, out-of-range numbers, special symbols, or embedded commands, and then observing the results, such as crashes, unhandled exceptions, or other anomalies. In this way, fuzzing exercises and validates the input validation, sanitization, and filtering mechanisms of the application. Instruction set simulation, regression testing, and sanity testing are not suited to this task: instruction set simulation emulates a processor so that code built for another architecture can run; regression testing verifies that existing functionality still works after changes; and sanity testing is a quick check that a build is stable enough for further testing. None of these exercises the application's response to invalid input. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Software Development Security, page 552; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 8: Software Development Security, Question 8.14, page 306.
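For illustration, a minimal sketch of field-level fuzzing against a hypothetical validation routine; the seeded bug and the boundary cases are illustrative. Graceful rejection of bad input is the desired behavior, while any other exception is a finding for the report.

import random
import string

def validate_age(field: str) -> int:
    """Hypothetical input-validation routine under test."""
    if field[0] == "-":                  # seeded bug: crashes on empty input
        raise ValueError("age must be non-negative")
    value = int(field)
    if value > 130:
        raise ValueError("age out of range")
    return value

def fuzz_cases(n: int = 200):
    """Boundary cases plus random printable junk for one input field."""
    cases = ["", "-1", "999999999999", "NaN", "<script>", "0x41", " 42 "]
    cases += ["".join(random.choices(string.printable, k=random.randint(1, 20)))
              for _ in range(n)]
    return cases

findings = []
for case in fuzz_cases():
    try:
        validate_age(case)
    except ValueError:
        pass                             # graceful rejection is acceptable
    except Exception as exc:             # anything else is a finding
        findings.append((repr(case), type(exc).__name__))

print(f"{len(findings)} inputs caused unexpected failures")  # "" triggers IndexError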
An organization wants to share data securely with their partners via the Internet. Which standard port is typically used to meet this requirement?
Set up a server on User Datagram Protocol (UDP) port 69
Set up a server on Transmission Control Protocol (TCP) port 21
Set up a server on Transmission Control Protocol (TCP) port 22
Set up a server on Transmission Control Protocol (TCP) port 80
The standard port that is typically used to share data securely with partners via the Internet is Transmission Control Protocol (TCP) port 22. TCP port 22 is the default port for Secure Shell (SSH), a protocol that provides encrypted and authenticated communication between systems over an unsecured network. SSH can be used to securely transfer files, execute commands, or tunnel other protocols. SSH uses public key cryptography to authenticate the systems and users, and symmetric cryptography to encrypt the data. SSH can also compress the data to reduce the bandwidth usage and improve the performance. SSH is widely used for remote administration, file transfer, and network management. References: CISSP All-in-One Exam Guide, Chapter 4: Communication and Network Security, Section: Secure Communications, pp. 249-250.
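For illustration, a hedged sketch of a secure file transfer over SFTP (which runs inside SSH on TCP port 22) using the third-party paramiko library (assumed available); the host name, account, and key path are placeholders, not real endpoints.

import paramiko

# Push a file to a partner over SFTP; everything is encrypted in transit.
client = paramiko.SSHClient()
client.load_system_host_keys()            # verify the server's host key
client.connect("sftp.partner.example", port=22,
               username="exchange", key_filename="/path/to/id_ed25519")

sftp = client.open_sftp()
sftp.put("report.csv", "/incoming/report.csv")   # upload over the SSH channel
sftp.close()
client.close()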
An enterprise is developing a baseline cybersecurity standard its suppliers must meet before being awarded a contract. Which of the following statements is TRUE about
the baseline cybersecurity standard?
It should be expressed as general requirements.
It should be expressed in legal terminology.
It should be expressed in business terminology.
It should be expressed as technical requirements.
The statement that is true about the baseline cybersecurity standard that an enterprise is developing for its suppliers is that it should be expressed in business terminology. A baseline cybersecurity standard is a standard that defines the minimum level and type of security controls that are required to protect the information assets and systems of an organization, or its suppliers, from the security risks and threats that they may face. A baseline cybersecurity standard should be expressed in business terminology, which means using the language and concepts that are relevant and understandable for the business stakeholders, such as the management, the customers, or the suppliers. Expressing the baseline cybersecurity standard in business terminology can help to communicate and convey the security objectives and criteria, and to ensure the alignment and integration of the security controls with the business needs and goals of the organization, or its suppliers. References: [CISSP CBK, Fifth Edition, Chapter 2, page 113]; [100 CISSP Questions, Answers and Explanations, Question 18].
The quality assurance (QA) department is short-staffed and is unable to test all modules before the anticipated release date of an application. What security control is MOST likely to be violated?
Separation of environments
Program management
Mobile code controls
Change management
Change management is the process of controlling and documenting any modifications to the hardware, software, firmware, or documentation of an information system. Change management ensures that changes are authorized, tested, approved, implemented, and reviewed in a systematic and consistent manner. Change management also reduces the risks of introducing errors, vulnerabilities, or disruptions to the system or the business operations. The quality assurance (QA) department is responsible for testing the changes before they are released to the production environment, and verifying that they meet the functional and security requirements. If the QA department is short-staffed and is unable to test all modules before the anticipated release date of an application, the security control of change management is most likely to be violated, as the changes may not be properly tested, validated, or documented, and may introduce unforeseen issues or risks to the system or the organization. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 517. Official (ISC)² CISSP CBK Reference, Fifth Edition, Domain 7: Security Operations, page 881.
Which of the following is the BEST way to mitigate circumvention of access controls?
Multi-layer access controls working in isolation
Multi-vendor approach to technology implementation
Multi-layer firewall architecture with Internet Protocol (IP) filtering enabled
Multi-layer access controls with diversification of technologies
The best way to mitigate circumvention of access controls is to use multi-layer access controls with diversification of technologies. Access controls are security mechanisms that regulate or restrict access to a system, network, or resource by users, devices, or processes, based on a set of rules, policies, or criteria, and they help to ensure confidentiality, integrity, availability, and accountability. However, access controls can be circumvented or bypassed by techniques such as spoofing, credential cracking, or exploiting implementation flaws, allowing unauthorized or malicious access. Multi-layer access controls apply multiple levels of security mechanisms, such as physical, logical, and administrative controls, to provide defense in depth; diversification of technologies uses different types or vendors of hardware, software, and protocols so that a single flaw or bypass technique does not defeat every layer. Together, they increase the complexity and difficulty of circumventing the controls and remove single points of dependency and failure. Multi-layer access controls working in isolation lack the coordination needed for defense in depth; a multi-vendor approach to technology implementation does not by itself add layers of control; and a multi-layer firewall architecture with Internet Protocol (IP) filtering enabled addresses only network-level access. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Identity and Access Management, page 281; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 5: Identity and Access Management, Question 5.11, page 221.
A criminal organization is planning an attack on a government network. Which of the following is the MOST severe attack to the network availability?
Network management communications is disrupted by attacker
Operator loses control of network devices to attacker
Sensitive information is gathered on the network topology by attacker
Network is flooded with communication traffic by attacker
A network availability attack is an attack that aims to disrupt or deny the normal functioning of a network or its resources. The most severe attack to the network availability is when the network is flooded with communication traffic by the attacker, which is also known as a denial-of-service (DoS) attack. A DoS attack can overwhelm the network bandwidth, consume the processing power of the network devices, or exhaust the memory or disk space of the servers, resulting in degraded performance, slow response, or complete shutdown of the network services. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Communication and Network Security, page 202; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 4: Communication and Network Security, page 276]
An organization is considering outsourcing applications and data to a Cloud Service
Provider (CSP). Which of the following is the MOST important concern regarding
privacy?
The CSP determines data criticality.
The CSP provides end-to-end encryption services.
The CSP’s privacy policy may be developed by the organization.
The CSP may not be subject to the organization’s country legislation.
Privacy is the right or ability of individuals or groups to control or limit the collection, use, disclosure, or retention of their personal or sensitive data by others. Privacy is an important concern for organizations that are considering outsourcing applications and data to a Cloud Service Provider (CSP). The most important concern regarding privacy is that the CSP may not be subject to the organization’s country legislation. The organization’s country legislation may have specific laws, regulations, or standards that govern the privacy of data, such as the General Data Protection Regulation (GDPR) in the European Union, or the Health Insurance Portability and Accountability Act (HIPAA) in the United States. However, the CSP may operate in a different country or jurisdiction that has different or less stringent privacy laws, regulations, or standards. This may create a conflict or a gap between the organization’s privacy obligations and the CSP’s privacy practices, and expose the organization to legal, regulatory, or reputational risks. Therefore, the organization should carefully review the CSP’s privacy policy and contract, and ensure that the CSP complies with the organization’s country legislation and the organization’s privacy requirements and expectations. The CSP determining data criticality, providing end-to-end encryption services, or allowing the organization to develop its privacy policy are not the most important concerns regarding privacy, as they are more related to data security, data protection, or data governance. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Data Security, page 186; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 2: Asset Security, Question 2.10, page 79.
What is the MAIN reason to ensure the appropriate retention periods are enforced for data stored on electronic media?
To reduce the carbon footprint by eliminating paper
To create an inventory of data assets stored on disk for backup and recovery
To declassify information that has been improperly classified
To reduce the risk of loss, unauthorized access, use, modification, and disclosure
Data stored on electronic media, such as hard disks, flash drives, or optical disks, are subject to various security risks, such as loss, unauthorized access, use, modification, or disclosure. These risks can compromise the confidentiality, integrity, or availability of the data, as well as the reputation, compliance, or liability of the organization or the data owner. Therefore, the main reason to ensure the appropriate retention periods are enforced for data stored on electronic media is to reduce these risks. Retention periods are the duration of time that the data must be kept or preserved on the electronic media, based on the value, sensitivity, or legal requirements of the data. Enforcing the appropriate retention periods can help to minimize the exposure or vulnerability of the data to the security risks, as well as to optimize the storage capacity and performance of the electronic media. Reducing the carbon footprint by eliminating paper, creating an inventory of data assets stored on disk for backup and recovery, or declassifying information that has been improperly classified are not the main reasons to ensure the appropriate retention periods are enforced for data stored on electronic media, as they are more related to environmental, operational, or compliance objectives. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 4: Data Security, page 179; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 2: Asset Security, Question 2.14, page 80.
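For illustration, a hedged sketch that flags files past an assumed retention period; the path and the seven-year period are placeholders, and real disposal would follow the organization's classification-driven retention schedule and an approved, documented destruction procedure.

import time
from pathlib import Path

RETENTION_DAYS = 365 * 7   # illustrative: a seven-year retention period

def files_past_retention(root: str):
    """Yield files whose last-modified time exceeds the retention period."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

for stale in files_past_retention("/data/records"):   # placeholder path
    print(f"eligible for secure disposal: {stale}")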
Computer forensics requires which of the following MAIN steps?
Announce the incident to responsible sections, analyze the data, and assimilate the data for correlation
Take action to contain the damage, announce the incident to responsible sections, and analyze the data
Acquire the data without altering, authenticate the recovered data, and analyze the data
Access the data before destruction, assimilate the data for correlation, and take action to contain the damage
The main steps that computer forensics requires are to acquire the data without altering it, authenticate the recovered data, and analyze the data. Computer forensics is the process of collecting, preserving, and examining digital evidence from computers or other electronic devices, such as smartphones, tablets, or cameras. Computer forensics follows a standard methodology that consists of the following steps: acquiring the data without altering it, typically by using write blockers and creating bit-level forensic images of the original media; authenticating the recovered data, typically by computing cryptographic hashes of the images and maintaining a documented chain of custody; and analyzing the data by examining the images for evidence relevant to the investigation.
The other options are not the main steps that computer forensics requires. Announce the incident to responsible sections, take action to contain the damage, and assimilate the data for correlation are steps that are more related to incident response or security operations, not computer forensics. Access the data before destruction is not a step that computer forensics requires, as it implies that the data is already compromised or lost, which may prevent the acquisition or the authentication of the data. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 8: Security Operations, page 1070. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Security Operations, page 1071.
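As an illustration of the authentication step, here is a short sketch that re-hashes an acquired disk image and compares it with the hash recorded at acquisition time; the image path and the recorded hash are placeholders. A mismatch means the evidence can no longer be shown to be unaltered.

import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so arbitrarily large disk images can be hashed."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

acquisition_hash = "..."                   # value recorded in the chain of custody
working_hash = sha256_of("evidence.img")   # placeholder image path
print("intact" if working_hash == acquisition_hash else "ALTERED: do not analyze")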
Which of the following is considered the last line of defense in regard to a Governance, Risk Management, and Compliance (GRC) program?
Internal audit
Internal controls
Board review
Risk management
Internal audit is considered the last line of defense in regard to a governance, risk management, and compliance (GRC) program. Internal audit is an independent and objective function that provides assurance and consulting services to the organization. Internal audit evaluates the effectiveness and efficiency of the GRC program, identifies gaps and weaknesses, and recommends improvements. Internal audit also reports to the senior management and the board of directors on the status and results of the GRC program. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 1: Security and Risk Management, page 17. CISSP Practice Exam – FREE 20 Questions and Answers, Question 18.
When determining data and information asset handling, regardless of the specific toolset being used, which of the following is one of the common components of big data?
Consolidated data collection
Distributed storage locations
Distributed data collection
Centralized processing location
Distributed data collection is one of the common components of big data. Big data is a term that describes the large volume, variety, and velocity of data that is generated, collected, stored, processed, and analyzed by various sources and applications. Distributed data collection refers to the process of collecting data from multiple and diverse sources, such as sensors, devices, social media, web logs, or transactions, and transferring the data to a centralized or distributed storage location. Distributed data collection enables the capture and aggregation of different types of data, such as structured, unstructured, or semi-structured data, and it can improve the scalability, performance, and reliability of the data collection process. The other options are not correct. Consolidated data collection is not a common component of big data, as it implies that the data is collected from a single or homogeneous source, which may limit the volume, variety, and velocity of the data. Distributed storage locations and centralized processing location are not components of big data, but rather possible architectures or designs for big data systems. Distributed storage locations refer to the use of multiple and geographically dispersed servers or nodes to store the data, which can improve the availability, redundancy, and fault tolerance of the data storage. Centralized processing location refers to the use of a single or clustered server or node to process the data, which can improve the efficiency, consistency, and security of the data processing. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 3: Asset Security, page 263. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 3: Asset Security, page 264.
Which of the following initiates the systems recovery phase of a disaster recovery plan?
Issuing a formal disaster declaration
Activating the organization's hot site
Evacuating the disaster site
Assessing the extent of damage following the disaster
The systems recovery phase of a disaster recovery plan is the phase that involves restoring the critical systems and operations of the organization after a disaster. The systems recovery phase is initiated by activating the organization’s hot site. A hot site is a fully equipped and operational alternative site that can be used to resume the business functions within a short time after a disaster. A hot site typically has the same hardware, software, network, and data as the original site, and can be switched to quickly and seamlessly. A hot site can ensure the continuity and availability of the organization’s systems and services during a disaster recovery situation. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Business Continuity and Disaster Recovery Planning, page 365; [Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 7: Business Continuity Planning, page 499]
Who is essential for developing effective test scenarios for disaster recovery (DR) test plans?
Business line management and IT staff members
Chief Information Officer (CIO) and DR manager
DR manager and IT staff members
IT staff members and project managers
Business line management and IT staff members are essential for developing effective test scenarios for DR test plans. Business line management can provide the business requirements, priorities, and impact analysis for the critical processes and functions that need to be recovered in the event of a disaster. IT staff members can provide the technical expertise, resources, and support for the recovery of the IT infrastructure and systems. Together, they can design realistic and comprehensive test scenarios that can validate the effectiveness and readiness of the DR plan. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 7: Security Operations, page 411. CISSP Practice Exam – FREE 20 Questions and Answers, Question 15.
Which of the following is included in the Global System for Mobile Communications (GSM) security framework?
Public-Key Infrastructure (PKI)
Symmetric key cryptography
Digital signatures
Biometric authentication
The component that is included in the Global System for Mobile Communications (GSM) security framework is symmetric key cryptography. GSM is a widely used standard for mobile communication that provides services such as voice, data, text, multimedia, roaming, and emergency calls, along with security functions for authentication, encryption, and integrity. The GSM security framework is the set of specifications that define the security architecture, components, and procedures of the GSM system, and it includes components such as the Subscriber Identity Module (SIM), the Authentication Center (AuC), the Equipment Identity Register (EIR), and the ciphering algorithms. Symmetric key cryptography, in which the same secret key is used for both encryption and decryption, underpins this framework: a subscriber key (Ki) is shared between the SIM and the AuC, the A3 algorithm uses Ki and a random challenge (RAND) to authenticate the subscriber, the A8 algorithm derives the session ciphering key (Kc), and the A5 algorithm encrypts traffic on the air interface. This protects the confidentiality, integrity, and authenticity of GSM communications against threats such as eavesdropping, interception, and modification. Public-Key Infrastructure (PKI) and digital signatures belong to asymmetric cryptography, which GSM does not use for its core air-interface security, and biometric authentication is an identity-verification method that is not part of the GSM framework. A toy sketch of the challenge-and-derive pattern appears below. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 6: Secure Network Architecture and Securing Network Components, page 388; CISSP Official (ISC)2 Practice Tests, Third Edition, Domain 4: Communication and Network Security, Question 4.11, page 188.
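As a toy illustration (not the real A3/A8 algorithms, which are typically COMP128 variants), the sketch below uses HMAC-SHA256 to show how a single shared subscriber key yields both an authentication response and a session cipher key from one challenge.

import hashlib
import hmac
import os

Ki = os.urandom(16)        # subscriber key, shared by the SIM and the AuC
RAND = os.urandom(16)      # the network's random challenge

def a3_toy(ki: bytes, rand: bytes) -> bytes:
    # Toy stand-in for A3: authentication response (SRES).
    return hmac.new(ki, b"auth" + rand, hashlib.sha256).digest()[:4]

def a8_toy(ki: bytes, rand: bytes) -> bytes:
    # Toy stand-in for A8: session ciphering key (Kc) for A5.
    return hmac.new(ki, b"cipher" + rand, hashlib.sha256).digest()[:8]

sres_sim = a3_toy(Ki, RAND)     # computed on the SIM
sres_auc = a3_toy(Ki, RAND)     # computed independently by the AuC
print(sres_sim == sres_auc)     # True: the subscriber is authenticated
print(a8_toy(Ki, RAND).hex())   # session key for air-interface encryption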
Assuming an individual has taken all of the steps to keep their internet connection private, which of the following is the BEST way to browse the web privately?
Prevent information about browsing activities from being stored in the cloud.
Store browsing activities in the cloud.
Prevent information about browsing activities from being stored on the personal device.
Store information about browsing activities on the personal device.
Assuming an individual has taken all of the steps to keep their internet connection private, such as using encryption, VPN, and secure protocols, the best option to browse the web privately is to prevent information about browsing activities from being stored on the personal device. This can be achieved by using the private or incognito mode of the web browser, which does not save the browsing history, cookies, cache, or other temporary files on the device. This can help protect the individual’s privacy from other users who may have access to the device, or from malware that may compromise the device. The other options are not as good as preventing information from being stored on the device. Preventing information from being stored in the cloud may not be possible or effective, as some web services or applications may still collect or store the user’s data on their servers, regardless of the user’s preferences. Storing information in the cloud or on the device may expose the user’s browsing activities to unauthorized access or disclosure, unless the data is encrypted and protected by strong authentication and authorization mechanisms. References: CISSP All-in-One Exam Guide, Eighth Edition, Chapter 5: Communication and Network Security, page 608. Official (ISC)2 CISSP CBK Reference, Fifth Edition, Chapter 5: Communication and Network Security, page 609.
A manufacturing organization wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. Which of the following is the BEST solution for the manufacturing organization?
Trusted third-party certification
Lightweight Directory Access Protocol (LDAP)
Security Assertion Markup language (SAML)
Cross-certification
Security Assertion Markup Language (SAML) is the best solution for the manufacturing organization that wants to establish a Federated Identity Management (FIM) system with its 20 different supplier companies. FIM is a process that allows the sharing and recognition of identities across different organizations that have a trust relationship. FIM enables the users of one organization to access the resources or services of another organization without having to create or maintain multiple accounts or credentials. FIM can provide several benefits, such as reduced administrative overhead for account creation and maintenance, a consistent single sign-on experience for users across organizational boundaries, and fewer stored credentials that could be lost, stolen, or duplicated.
SAML is a standard protocol that supports FIM by allowing the exchange of authentication and authorization information between different parties. SAML uses XML-based messages, called assertions, to convey the identity, attributes, and entitlements of a user to a service provider. SAML defines three roles for the parties involved in FIM: the principal (the user who requests access to a resource or service), the identity provider (IdP, which authenticates the principal and issues assertions), and the service provider (SP, which consumes the assertions and grants or denies access).
SAML works as follows: the user (principal) requests a resource or service from the SP; the SP redirects the user to the IdP with an authentication request; the IdP authenticates the user and returns a digitally signed assertion; the user’s browser delivers the assertion to the SP; and the SP validates the assertion’s signature against the IdP’s public key and grants or denies access based on the identity and attributes the assertion contains. A toy sketch of this issue-and-verify trust model follows.
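The following is a toy sketch of the issue-and-verify trust model using Ed25519 signatures from the third-party cryptography package (assumed installed); real SAML exchanges digitally signed XML assertions with audience restrictions and validity windows, so this is an analogy, not a SAML implementation.

import json
from cryptography.hazmat.primitives.asymmetric import ed25519

# The IdP signs an assertion about the user; the SP verifies it with the
# IdP's public key, which it obtained out of band (in SAML, via metadata).
idp_key = ed25519.Ed25519PrivateKey.generate()
idp_public = idp_key.public_key()        # distributed to the supplier SPs

assertion = json.dumps({"subject": "jdoe", "role": "buyer",
                        "issuer": "idp.manufacturer.example"}).encode()
signature = idp_key.sign(assertion)

# At a supplier's service provider: verification passes silently, or
# raises cryptography.exceptions.InvalidSignature if the assertion was forged.
idp_public.verify(signature, assertion)
print("assertion accepted:", json.loads(assertion)["subject"])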
SAML is the best solution for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, because it can enable the seamless and secure access to the resources or services across the different organizations, without requiring the users to create or maintain multiple accounts or credentials. SAML can also provide interoperability and compatibility between different platforms and technologies, as it is based on a standard and open protocol.
The other options are not the best solutions for the manufacturing organization that wants to establish a FIM system with its 20 different supplier companies, but rather solutions that have other limitations or drawbacks. Trusted third-party certification is a process that involves a third party, such as a certificate authority (CA), that issues and verifies digital certificates that contain the public key and identity information of a user or an entity. Trusted third-party certification can provide authentication and encryption for the communication between different parties, but it does not provide authorization or entitlement information for the access to the resources or services. Lightweight Directory Access Protocol (LDAP) is a protocol that allows the access and management of directory services, such as Active Directory, that store the identity and attribute information of users and entities. LDAP can provide a centralized and standardized way to store and retrieve identity and attribute information, but it does not provide a mechanism to exchange or federate the information across different organizations. Cross-certification is a process that involves two or more CAs that establish a trust relationship and recognize each other’s certificates. Cross-certification can extend the trust and validity of the certificates across different domains or organizations, but it does not provide a mechanism to exchange or federate the identity, attribute, or entitlement information.
Which of the following BEST describes an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices?
Derived credential
Temporary security credential
Mobile device credentialing service
Digest authentication
Derived credential is the best description of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices. A smart card is a device that contains a microchip that stores a private key and a digital certificate that are used for authentication and encryption. A smart card is typically inserted into a reader that is attached to a computer or a terminal, and the user enters a personal identification number (PIN) to unlock the smart card and access the private key and the certificate. A smart card can provide a high level of security and convenience for the user, as it implements a two-factor authentication method that combines something the user has (the smart card) and something the user knows (the PIN).
However, a smart card may not be compatible or convenient for mobile devices, such as smartphones or tablets, that do not have a smart card reader or a USB port. To address this issue, a derived credential is a solution that allows the user to use a mobile device as an alternative to a smart card for authentication and encryption. A derived credential is a cryptographic key and a certificate that are derived from the smart card private key and certificate, and that are stored on the mobile device. A derived credential works as follows (in outline, following the model of NIST Special Publication 800-157 for derived Personal Identity Verification credentials): the user first authenticates to an issuing service using the original smart card; a new key pair is generated on the mobile device, ideally inside a hardware-backed keystore or secure element; the issuer signs a certificate that binds the new public key to the same identity as the smart card certificate; and the user thereafter unlocks the derived private key on the device with a PIN or a biometric feature in order to authenticate or encrypt. A sketch of the device-side key-generation step appears below.
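For illustration, a hedged sketch of the device-side key-generation step using the third-party cryptography package; the subject names are placeholders, and a real deployment would generate the key inside a hardware-backed keystore and submit the certificate signing request (CSR) only after the user has authenticated with the original smart card.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Generate a fresh key pair on the device and build a CSR that carries
# the same subject identity as the smart card certificate.
device_key = ec.generate_private_key(ec.SECP256R1())

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "jdoe"),           # placeholder
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example"),  # placeholder
    ]))
    .sign(device_key, hashes.SHA256())
)

# The CSR goes to the derived-credential issuer for certification.
print(csr.public_bytes(serialization.Encoding.PEM).decode()[:60], "...")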
A derived credential can provide a secure and convenient way to use a mobile device as an alternative to a smart card for authentication and encryption, as it implements a two-factor authentication method that combines something the user has (the mobile device) and something the user is (the biometric feature). A derived credential can also comply with the standards and policies for the use of smart cards, such as the Personal Identity Verification (PIV) or the Common Access Card (CAC) programs.
The other options are not the best descriptions of an access control method utilizing cryptographic keys derived from a smart card private key that is embedded within mobile devices, but rather descriptions of other methods or concepts. Temporary security credential is a method that involves issuing a short-lived credential, such as a token or a password, that can be used for a limited time or a specific purpose. Temporary security credential can provide a flexible and dynamic way to grant access to the users or entities, but it does not involve deriving a cryptographic key from a smart card private key. Mobile device credentialing service is a concept that involves providing a service that can issue, manage, or revoke credentials for mobile devices, such as certificates, tokens, or passwords. Mobile device credentialing service can provide a centralized and standardized way to control the access of mobile devices, but it does not involve deriving a cryptographic key from a smart card private key. Digest authentication is a method that involves using a hash function, such as MD5, to generate a digest or a fingerprint of the user’s credentials, such as the username and password, and sending it to the server for verification. Digest authentication can provide a more secure way to authenticate the user than the basic authentication, which sends the credentials in plain text, but it does not involve deriving a cryptographic key from a smart card private key.
What is the BEST approach for controlling access to highly sensitive information when employees have the same level of security clearance?
Audit logs
Role-Based Access Control (RBAC)
Two-factor authentication
Application of least privilege
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance. The principle of least privilege is a security concept that states that every user or process should have the minimum amount of access rights and permissions that are necessary to perform their tasks or functions, and nothing more. The principle of least privilege can provide several benefits, such as a reduced attack surface, a smaller impact when an account is compromised or misused, and simpler auditing of who can access what.
Applying the principle of least privilege is the best approach for controlling access to highly sensitive information when employees have the same level of security clearance, because it can ensure that the employees can only access the information that is relevant and necessary for their tasks or functions, and that they cannot access or manipulate the information that is beyond their scope or authority. For example, if the highly sensitive information is related to a specific project or department, then only the employees who are involved in that project or department should have access to that information, and not the employees who have the same level of security clearance but are not involved in that project or department.
The other options are not the best approaches for controlling access to highly sensitive information when employees have the same level of security clearance, but rather approaches that have other purposes or effects. Audit logs are records that capture and store the information about the events and activities that occur within a system or a network, such as the access and usage of the sensitive data. Audit logs can provide a reactive and detective layer of security by enabling the monitoring and analysis of the system or network behavior, and facilitating the investigation and response of the incidents. However, audit logs cannot prevent or reduce the access or disclosure of the sensitive information, but rather provide evidence or clues after the fact. Role-Based Access Control (RBAC) is a method that enforces the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. RBAC can provide a granular and dynamic layer of security by defining and assigning the roles and permissions according to the organizational structure and policies. However, RBAC cannot control the access to highly sensitive information when employees have the same level of security clearance and the same role or function within the organization, but rather rely on other criteria or mechanisms. Two-factor authentication is a technique that verifies the identity of the users by requiring them to provide two pieces of evidence or factors, such as something they know (e.g., password, PIN), something they have (e.g., token, smart card), or something they are (e.g., fingerprint, face). Two-factor authentication can provide a strong and preventive layer of security by preventing unauthorized access to the system or network by the users who do not have both factors. However, two-factor authentication cannot control the access to highly sensitive information when employees have the same level of security clearance and the same two factors, but rather rely on other criteria or mechanisms.
Users require access rights that allow them to view the average salary of groups of employees. Which control would prevent the users from obtaining an individual employee’s salary?
Limit access to predefined queries
Segregate the database into a small number of partitions each with a separate security level
Implement Role Based Access Control (RBAC)
Reduce the number of people who have access to the system for statistical purposes
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees. A query is a request for information from a database, which can be expressed in a structured query language (SQL) or a graphical user interface (GUI). A query can specify the criteria, conditions, and operations for selecting, filtering, sorting, grouping, and aggregating the data from the database. A predefined query is a query that has been created and stored in advance by the database administrator or the data owner, and that can be executed by the authorized users without any modification. A predefined query can provide several benefits, such as consistent and validated results, simpler operation for non-technical users, and, most importantly here, the prevention of ad hoc queries that could be crafted to isolate and reveal an individual record.
Limiting access to predefined queries is the control that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, because it can ensure that the users can only access the data that is relevant and necessary for their tasks, and that they cannot access or manipulate the data that is beyond their scope or authority. For example, a predefined query can be created and stored that calculates and displays the average salary of groups of employees based on certain criteria, such as department, position, or experience. The users who need to view this information can execute this predefined query, but they cannot modify it or create their own queries that might reveal the individual employee’s salary or other sensitive data.
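To make this concrete, here is a minimal sketch, with illustrative table and column names, of a predefined aggregate query in Python's built-in sqlite3 module; the HAVING clause adds a minimum group size, a common inference-control refinement that stops an "average" over one person from exposing that person's salary.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)", [
    ("a", "eng", 90000), ("b", "eng", 110000), ("c", "eng", 100000),
    ("d", "hr", 70000),
])

# The only query users may run: averages over groups, with a minimum
# group size so a group of one cannot reveal an individual's salary.
PREDEFINED_QUERY = """
    SELECT dept, AVG(salary)
    FROM employees
    GROUP BY dept
    HAVING COUNT(*) >= ?
"""

for dept, avg in conn.execute(PREDEFINED_QUERY, (3,)):
    print(dept, avg)   # only 'eng' prints; 'hr' has too few members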
The other options are not the controls that would prevent the users from obtaining an individual employee’s salary, if they only require access rights that allow them to view the average salary of groups of employees, but rather controls that have other purposes or effects. Segregating the database into a small number of partitions each with a separate security level is a control that would improve the performance and security of the database by dividing it into smaller and manageable segments that can be accessed and processed independently and concurrently. However, this control would not prevent the users from obtaining an individual employee’s salary, if they have access to the partition that contains the salary data, and if they can create or modify their own queries. Implementing Role Based Access Control (RBAC) is a control that would enforce the access rights and permissions of the users based on their roles or functions within the organization, rather than their identities or attributes. However, this control would not prevent the users from obtaining an individual employee’s salary, if their roles or functions require them to access the salary data, and if they can create or modify their own queries. Reducing the number of people who have access to the system for statistical purposes is a control that would reduce the risk and impact of unauthorized access or disclosure of the sensitive data by minimizing the exposure and distribution of the data. However, this control would not prevent the users from obtaining an individual employee’s salary, if they are among the people who have access to the system, and if they can create or modify their own queries.