Which of the following are WELL KNOWN PORTS assigned by the IANA?
Ports 0 to 255
Ports 0 to 1024
Ports 0 to 1023
Ports 0 to 127
The port numbers are divided into three ranges: the Well Known Ports, the Registered Ports, and the Dynamic and/or Private Ports. The range for assigned "Well Known" ports managed by the IANA (Internet Assigned Numbers Authority) is 0-1023.
Source: iana.org: port assignments.
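The three IANA ranges can be sketched as a small helper (the function name and error handling here are illustrative, not part of any IANA specification):

```python
# Classify a TCP/UDP port number into the three IANA ranges.
# Boundary values follow the IANA port-number registry.
def classify_port(port: int) -> str:
    """Return the IANA range name for a port number (0-65535)."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be between 0 and 65535")
    if port <= 1023:
        return "Well Known"
    if port <= 49151:
        return "Registered"
    return "Dynamic/Private"
```

For example, `classify_port(80)` falls in the Well Known range, while an ephemeral client port such as 50000 falls in the Dynamic/Private range.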
A business continuity plan should list and prioritize the services that need to be brought back after a disaster strikes. Which of the following services is more likely to be of primary concern in the context of what your Disaster Recovery Plan would include?
Marketing/Public relations
Data/Telecomm/IS facilities
IS Operations
Facilities security
The main concern when recovering after a disaster is data, telecomm and IS facilities. Other services, in descending priority order are: IS operations, IS support services, market structure, marketing/public relations, customer service & systems support, market regulation/surveillance, listing, application development, accounting services, facilities, human resources, facilities security, legal and Office of the Secretary, national sales.
Source: BARNES, James C. & ROTHSTEIN, Philip J., A Guide to Business Continuity Planning, John Wiley & Sons, 2001 (page 129).
Which device acting as a translator is used to connect two networks or applications from layer 4 up to layer 7 of the ISO/OSI Model?
Bridge
Repeater
Router
Gateway
A gateway is used to connect two networks using dissimilar protocols at the lower layers, or it could also operate at the highest level of the protocol stack.
Important Note:
For the purpose of the exam, you have to remember that a gateway is not synonymous with the term firewall.
The second thing you must remember is that a gateway acts as a translation device.
It could be used to translate from IPX to TCP/IP, for example, or to convert different types of application protocols and allow them to communicate together. A gateway could be at any of the OSI layers but usually tends to be higher up in the stack.
For your exam you should know the information below:
Repeaters
A repeater provides the simplest type of connectivity, because it only repeats electrical signals between cable segments, which enables it to extend a network. Repeaters work at the physical layer and are add-on devices for extending a network connection over a greater distance. The device amplifies signals because signals attenuate the farther they have to travel.
Repeaters can also work as line conditioners by actually cleaning up the signals. This works much better when amplifying digital signals than when amplifying analog signals, because digital signals are discrete units, which makes extraction of background noise from them much easier for the amplifier. If the device is amplifying analog signals, any accompanying noise often is amplified as well, which may further distort the signal.
A hub is a multi-port repeater. A hub is often referred to as a concentrator because it is the physical communication device that allows several computers and devices to communicate with each other. A hub does not understand or work with IP or MAC addresses. When one system sends a signal to go to another system connected to it, the signal is broadcast to all the ports, and thus to all the systems connected to the concentrator.
Repeater
Image Reference- http://www.erg.abdn.ac.uk/~gorry/course/images/repeater.gif
Bridges
A bridge is a LAN device used to connect LAN segments. It works at the data link layer and therefore works with MAC addresses. A repeater does not work with addresses; it just forwards all signals it receives. When a frame arrives at a bridge, the bridge determines whether or not the MAC address is on the local network segment. If the MAC address is not on the local network segment, the bridge forwards the frame to the necessary network segment.
Bridge
Image Reference- http://www.oreillynet.com/network/2001/01/30/graphics/bridge.jpg
Routers
Routers are layer 3, or network layer, devices that are used to connect similar or different networks. (For example, they can connect two Ethernet LANs or an Ethernet LAN to a Token Ring LAN.) A router is a device that has two or more interfaces and a routing table so it knows how to get packets to their destinations. It can filter traffic based on access control lists (ACLs), and it fragments packets when necessary. Because routers have more network-level knowledge, they can perform higher-level functions, such as calculating the shortest and most economical path between the sending and receiving hosts.
Router and Switch
Image Reference- http://www.computer-networking-success.com/images/router-switch.jpg
Switches
Switches combine the functionality of a repeater and the functionality of a bridge. A switch amplifies the electrical signal, like a repeater, and has the built-in circuitry and intelligence of a bridge. It is a multi-port connection device that provides connections for individual computers or other hubs and switches.
Gateways
Gateway is a general term for software running on a device that connects two different environments and that many times acts as a translator for them or somehow restricts their interactions. Usually a gateway is needed when one environment speaks a different language, meaning it uses a certain protocol that the other environment does not understand. The gateway can translate Internetwork Packet Exchange (IPX) protocol
packets to IP packets, accept mail from one type of mail server and format it so another type of mail server can accept and understand it, or connect and translate different data link technologies such as FDDI to Ethernet.
Gateway Server
Image Reference- http://static.howtoforge.com/images/screenshots/556af08d5e43aa768260f9e589dc547f-3024.jpg
The following answers are incorrect:
Repeater - A repeater provides the simplest type of connectivity, because it only repeats electrical signals between cable segments, which enables it to extend a network. Repeaters work at the physical layer and are add-on devices for extending a network connection over a greater distance. The device amplifies signals because signals attenuate the farther they have to travel.
Bridges - A bridge is a LAN device used to connect LAN segments. It works at the data link layer and therefore works with MAC addresses. A repeater does not work with addresses; it just forwards all signals it receives. When a frame arrives at a bridge, the bridge determines whether or not the MAC address is on the local network segment. If the MAC address is not on the local network segment, the bridge forwards the frame to the necessary network segment.
Routers - Routers are layer 3, or network layer, devices that are used to connect similar or different networks. (For example, they can connect two Ethernet LANs or an Ethernet LAN to a Token Ring LAN.) A router is a device that has two or more interfaces and a routing table so it knows how to get packets to their destinations. It can filter traffic based on access control lists (ACLs), and it fragments packets when necessary.
Following reference(s) were/was used to create this question:
CISA review manual 2014 Page number 263
Official ISC2 guide to CISSP CBK 3rd Edition Page number 229 and 230
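The device-to-layer relationships described above can be summarized in a small lookup table (the dictionary name and helper function are illustrative):

```python
# Typical OSI layer(s) of operation for each connectivity device
# discussed above.  A gateway can sit at any layer, but it usually
# operates at the upper layers (4 through 7).
DEVICE_OSI_LAYERS = {
    "repeater": [1],
    "hub": [1],          # a hub is a multi-port repeater
    "bridge": [2],
    "switch": [2],       # repeater amplification plus bridge intelligence
    "router": [3],
    "gateway": [4, 5, 6, 7],
}

def operates_at(device: str, layer: int) -> bool:
    """True if the device conventionally operates at the given OSI layer."""
    return layer in DEVICE_OSI_LAYERS[device]
```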
What is called an attack in which an attacker floods a system with connection requests but does not respond when the target system replies to those requests?
Ping of death attack
SYN attack
Smurf attack
Buffer overflow attack
A SYN attack occurs when an attacker floods the target system's small "in-process" queue with connection requests, but it does not respond when the target system replies to those requests. This causes the target system to "time out" while waiting for the proper response, which makes the system crash or become unusable. A buffer overflow attack occurs when a process receives much more data than expected. One common buffer overflow attack is the ping of death, where an attacker sends IP packets that exceed the maximum legal length (65535 octets). A smurf attack is an attack where the attacker spoofs the source IP address in an ICMP ECHO broadcast packet so it seems to have originated at the victim's system, in order to flood it with REPLY packets.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 76).
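The exhaustion mechanism behind a SYN attack can be illustrated with a toy model of the listen backlog; the queue size and return value here are illustrative, not actual operating-system defaults:

```python
from collections import deque

# Toy model of a TCP listen backlog during a SYN flood: each spoofed
# SYN occupies a half-open slot that is never completed, so the queue
# fills and later connection attempts are refused.
BACKLOG_SIZE = 128   # illustrative value

def flood(backlog: deque, syn_count: int) -> int:
    """Queue half-open connections; return how many SYNs were dropped."""
    dropped = 0
    for _ in range(syn_count):
        if len(backlog) >= BACKLOG_SIZE:
            dropped += 1   # queue full: new clients time out or are refused
        else:
            backlog.append("half-open")
    return dropped
```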
Which of the following LAN topologies offers the highest availability?
Bus topology
Tree topology
Full mesh topology
Partial mesh topology
In a full mesh topology, all network nodes are individually connected with each other, providing the highest availability. A partial mesh topology can sometimes be used to offer some redundancy.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 106).
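The cost of that availability is link count: a full mesh of n nodes needs a link from every node to every other node, which is n*(n-1)/2 links. A minimal sketch:

```python
# Number of point-to-point links required for a full mesh of n nodes:
# each of the n nodes connects to the other n-1, and each link is
# shared by two nodes, giving n*(n-1)/2 links.
def full_mesh_links(n: int) -> int:
    return n * (n - 1) // 2
```

This is why full mesh is usually reserved for small core networks: 4 nodes need only 6 links, but 10 nodes already need 45.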
Which of the following is a token-passing scheme like token ring that also has a second ring that remains dormant until an error condition is detected on the primary ring?
Fiber Distributed Data Interface (FDDI).
Ethernet
Fast Ethernet
Broadband
FDDI is a token-passing ring scheme like a token ring, yet it also has a second ring that remains dormant until an error condition is detected on the primary ring.
Fiber Distributed Data Interface (FDDI) provides a 100 Mbit/s optical standard for data transmission in a local area network that can extend in range up to 200 kilometers (124 miles). Although the FDDI logical topology is a ring-based token network, it does not use the IEEE 802.5 token ring protocol as its basis; instead, its protocol is derived from the IEEE 802.4 token bus timed token protocol. In addition to covering large geographical areas, FDDI local area networks can support thousands of users. As a standard underlying medium it uses optical fiber, although it can use copper cable, in which case it may be referred to as CDDI (Copper Distributed Data Interface). FDDI offers both a Dual-Attached Station (DAS), counter-rotating token ring topology and a Single-Attached Station (SAS), token bus passing ring topology.
Ethernet is a family of frame-based computer networking technologies for local area networks (LANs). The name came from the physical concept of the ether. It defines a number of wiring and signaling standards for the Physical Layer of the OSI networking model as well as a common addressing format and Media Access Control at the Data Link Layer.
In computer networking, Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the fast Ethernet standards 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by gigabit Ethernet.
Broadband in data can refer to broadband networks or broadband Internet and may have the same meaning as above, so that data transmission over a fiber optic cable would be referred to as broadband as compared to a telephone modem operating at 56,000 bits per second. However, a worldwide standard for what level of bandwidth and network speeds actually constitutes broadband has not been determined.
Broadband in data communications is frequently used in a more technical sense to refer to data transmission where multiple pieces of data are sent simultaneously to increase the effective rate of transmission, regardless of data signaling rate. In network engineering this term is used for methods where two or more signals share a medium. Broadband Internet access, often shortened to just broadband, is high data rate Internet access, typically contrasted with dial-up access using a 56k modem.
Dial-up modems are limited to a bitrate of less than 56 kbit/s (kilobits per second) and require the full use of a telephone line—whereas broadband technologies supply more than double this rate and generally without disrupting telephone use.
Source:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 72.
also see:
http://en.wikipedia.org/
How many bits of a MAC address uniquely identify a vendor, as provided by the IEEE?
6 bits
12 bits
16 bits
24 bits
The MAC address is 48 bits long, 24 of which identify the vendor, as provided by the IEEE. The other 24 bits are provided by the vendor.
A media access control address (MAC address) is a unique identifier assigned to network interfaces for communications on the physical network segment. MAC addresses are used for numerous network technologies and most IEEE 802 network technologies, including Ethernet. Logically, MAC addresses are used in the media access control protocol sublayer of the OSI reference model.
MAC addresses are most often assigned by the manufacturer of a network interface card (NIC) and are stored in its hardware, such as the card's read-only memory or some other firmware mechanism. If assigned by the manufacturer, a MAC address usually encodes the manufacturer's registered identification number and may be referred to as the burned-in address. It may also be known as an Ethernet hardware address (EHA), hardware address, or physical address. This can be contrasted to a programmed address, where the host device issues commands to the NIC to use an arbitrary address. An example is many SOHO routers, where the ISP grants access to only one MAC address (the one in use before the router was inserted), so the router must use that MAC address on its Internet-facing NIC. Therefore, the router administrator configures a MAC address to override the burned-in one.
A network node may have multiple NICs and each must have one unique MAC address per NIC.
See diagram below from Wikipedia showing the format of a MAC address. :
MAC Address format
Reference(s) used for this question:
http://en.wikipedia.org/wiki/MAC_address
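The 24/24 split described above can be demonstrated by separating the vendor (OUI) octets from the device octets of a MAC address (the function name is illustrative):

```python
# Split a 48-bit MAC address into its vendor and device halves.
# The first 24 bits (three octets) are the IEEE-assigned vendor
# identifier (OUI); the remaining 24 bits are assigned by the vendor.
def split_mac(mac: str) -> tuple:
    octets = mac.replace("-", ":").lower().split(":")
    if len(octets) != 6:
        raise ValueError("expected six octets")
    return ":".join(octets[:3]), ":".join(octets[3:])
```

For example, in `00:1b:44:11:3a:b7`, the OUI `00:1b:44` identifies the vendor and `11:3a:b7` is the vendor-assigned device portion.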
Proxies work by transferring a copy of each accepted data packet from one network to another, thereby masking the:
data's payload
data's details
data's owner
data's origin
The application firewall (proxy) relays the traffic from a trusted host running a specific application to an untrusted server. It will appear to the untrusted server as if the request originated from the proxy server.
"Data's payload" is incorrect. Only the origin is changed.
"Data's details" is incorrect. Only the origin is changed.
"Data's owner" is incorrect. Only the origin is changed.
References:
CBK, p. 467
AIO3, pp. 486 - 490
FTP, TFTP, SNMP, and SMTP are provided at what level of the Open Systems Interconnect (OSI) Reference Model?
Application
Network
Presentation
Transport
The Answer: Application. The Application Layer (Layer 7) of the Open Systems Interconnect (OSI) Reference Model provides services for applications and operating-system data transmission; FTP, TFTP, SNMP, and SMTP, for example, operate at this layer.
The following answers are incorrect:
Network. The Network layer moves information between hosts that are not physically connected. It deals with routing of information. IP is a protocol that is used in Network Layer. FTP, TFTP, SNMP, and SMTP do not reside at the Layer 3 Network Layer in the OSI Reference Model.
Presentation. The Presentation Layer is concerned with the formatting of data into a standard presentation such as
ASCII. FTP, TFTP, SNMP, and SMTP do not reside at the Layer 6 Presentation Layer in the OSI Reference Model.
Transport. The Transport Layer creates an end-to-end transportation between peer hosts. The transmission can be connectionless and unreliable such as UDP, or connection-oriented and ensure error-free delivery such as TCP. FTP, TFTP, SNMP, and SMTP do not reside at the Layer 4 Transportation Layer in the OSI Reference Model.
The following reference(s) were/was used to create this question: Reference: OSI/ISO.
Shon Harris AIO v.3 p. 420-421
ISC2 OIG, 2997 p.412-413
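The layer placements argued above can be collected into one lookup table (the dictionary name is illustrative):

```python
# OSI layer at which some common protocols operate, per the
# explanation above (values are OSI layer numbers).
PROTOCOL_LAYER = {
    "FTP": 7, "TFTP": 7, "SNMP": 7, "SMTP": 7,  # application layer
    "TCP": 4, "UDP": 4,                          # transport layer
    "IP": 3,                                     # network layer
}
```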
What is the greatest danger from DHCP?
An intruder on the network impersonating a DHCP server and thereby misconfiguring the DHCP clients.
Having multiple clients on the same LAN having the same IP address.
Having the wrong router used as the default gateway.
Having the organization's mail server unreachable.
The greatest danger from BootP or DHCP (Dynamic Host Configuration Protocol) is from an intruder on the network impersonating a DHCP server and thereby misconfiguring the DHCP clients. The other choices are possible consequences of DHCP impersonation.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 4: Sockets and Services from a Security Viewpoint.
The standard server port number for HTTP is which of the following?
81
80
8080
8180
The standard server port for HTTP is port 80.
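For comparison, a few standard server ports from the IANA registry (8080 and 8180 are common unofficial alternates for HTTP, not the assigned port):

```python
# Standard server ports for a few common application protocols,
# per the IANA port-number registry.
WELL_KNOWN = {"ftp": 21, "ssh": 22, "smtp": 25, "http": 80, "https": 443}
```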
What is the main characteristic of a multi-homed host?
It is placed between two routers or firewalls.
It allows IP routing.
It has multiple network interfaces, each connected to separate networks.
It operates at multiple layers.
The main characteristic of a multi-homed host is that it has multiple network interfaces, each connected to logically and physically separate networks. IP routing should be disabled to prevent the firewall from routing packets directly from one interface to the other.
Source: FERREL, Robert G, Questions and Answers for the CISSP Exam, domain 2 (derived from the Information Security Management Handbook, 4th Ed., by Tipton & Krause).
Which layer of the DoD TCP/IP model controls the communication flow between hosts?
Internet layer
Host-to-host transport layer
Application layer
Network access layer
The host-to-host layer (equivalent to the OSI transport layer) provides end-to-end data delivery service and flow control to the application layer, and thus controls the communication flow between hosts.
The four layers in the DoD model, from top to bottom, are:
The Application Layer contains protocols that implement user-level functions, such as mail delivery, file transfer and remote login.
The Host-to-Host Layer handles connection rendezvous, flow control, retransmission of lost data, and other generic data flow management between hosts. The mutually exclusive TCP and UDP protocols are this layer's most important members.
The Internet Layer is responsible for delivering data across a series of different physical networks that interconnect a source and destination machine. Routing protocols are most closely associated with this layer, as is the IP Protocol, the Internet's fundamental protocol.
The Network Access Layer is responsible for delivering data over the particular hardware media in use. Different protocols are selected from this layer, depending on the type of physical network
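The rough correspondence between these four DoD layers and the seven OSI layers can be written as a mapping (the dictionary name is illustrative):

```python
# Conventional mapping from the four DoD TCP/IP layers to the
# seven OSI layers they roughly correspond to.
DOD_TO_OSI = {
    "Application":    ["Application", "Presentation", "Session"],
    "Host-to-Host":   ["Transport"],
    "Internet":       ["Network"],
    "Network Access": ["Data Link", "Physical"],
}
```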
The OSI model organizes communication services into seven groups called layers. The layers are as follows:
Layer 7, The Application Layer: The application layer serves as a window for users and application processes to access network services. It handles issues such as network transparency, resource allocation, etc. This layer is not an application in itself, although some applications may perform application layer functions.
Layer 6, The Presentation Layer: The presentation layer serves as the data translator for a network. It is usually a part of an operating system and converts incoming and outgoing data from one presentation format to another. This layer is also known as syntax layer.
Layer 5, The Session Layer: The session layer establishes a communication session between processes running on different communication entities in a network and can support a message-mode data transfer. It deals with session and connection coordination.
Layer 4, The Transport Layer: The transport layer ensures that messages are delivered in the order in which they are sent and that there is no loss or duplication. It ensures complete data transfer. This layer provides an additional connection below the Session layer and assists with managing some data flow control between hosts. Data is divided into packets on the sending node, and the receiving node's Transport layer reassembles the message from packets. This layer is also responsible for error checking to guarantee error-free data delivery, and requests a retransmission if necessary. It is also responsible for sending acknowledgments of successful transmissions back to the sending host. A number of protocols run at the Transport layer, including TCP, UDP, Sequenced Packet Exchange (SPX), and NWLink.
Layer 3, The Network Layer: The network layer controls the operation of the subnet. It determines the physical path that data takes on the basis of network conditions, priority of service, and other factors. The network layer is responsible for routing and forwarding data packets.
Layer 2, The Data-Link Layer: The data-link layer is responsible for error free transfer of data frames. This layer provides synchronization for the physical layer. ARP and RARP would be found at this layer.
Layer 1, The Physical Layer: The physical layer is responsible for packaging and transmitting data on the physical media. This layer conveys the bit stream through a network at the electrical and mechanical level.
See a great flash animation on the subject at:
http://www.maris.com/content/applets/flash/comp/fa0301.swf
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 85).
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 7: Telecommunications and Network Security (page 344).
Which of the following countermeasures would be the most appropriate to prevent possible intrusion or damage from wardialing attacks?
Monitoring and auditing for such activity
Require user authentication
Making sure only necessary phone numbers are made public
Using completely different numbers for voice and data accesses
Knowledge of modem numbers is a poor access control method, as an attacker can discover modem numbers by dialing all numbers in a range. Requiring user authentication before remote access is granted will help in avoiding unauthorized access over a modem line.
"Monitoring and auditing for such activity" is incorrect. While monitoring and auditing can assist in detecting a wardialing attack, they do not defend against a successful wardialing attack.
"Making sure that only necessary phone numbers are made public" is incorrect. Since a wardialing attack blindly calls all numbers in a range, whether certain numbers in the range are public or not is irrelevant.
"Using completely different numbers for voice and data accesses" is incorrect. Using different number ranges for voice and data access might help prevent an attacker from stumbling across the data lines while wardialing the public voice number range, but this is not an adequate countermeasure.
References:
CBK, p. 214
AIO3, p. 534-535
Which OSI/ISO layer does a SOCKS server operate at?
Session layer
Transport layer
Network layer
Data link layer
A SOCKS based server operates at the Session layer of the OSI model.
SOCKS is an Internet protocol that allows client-server applications to transparently use the services of a network firewall. SOCKS is an abbreviation for "SOCKetS". As of version 5 of SOCKS, both UDP and TCP are supported.
One of the best known circuit-level proxies is the SOCKS proxy server. The basic purpose of the protocol is to enable hosts on one side of a SOCKS server to gain access to hosts on the other side of a SOCKS server, without requiring direct "IP-reachability".
The protocol was originally developed by David Koblas, a system administrator of MIPS Computer Systems. After MIPS was taken over by Silicon Graphics in 1992, Koblas presented a paper on SOCKS at that year's Usenix Security Symposium and SOCKS became publicly available. The protocol was extended to version 4 by Ying-Da Lee of NEC.
SOCKS includes two components, the SOCKS server and the SOCKS client.
The SOCKS protocol performs four functions:
Making connection requests
Setting up proxy circuits
Relaying application data
Performing user authentication (optional)
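The first two of those functions have a very simple wire format in SOCKS version 5 (RFC 1928). A minimal sketch of the client's method-negotiation greeting and a CONNECT request for a domain-name target (shown only to illustrate the framing, not a full client):

```python
import struct

# SOCKS5 (RFC 1928) client messages.
def greeting(methods: bytes = b"\x00") -> bytes:
    """VER=5, NMETHODS, METHODS (0x00 = no authentication required)."""
    return bytes([5, len(methods)]) + methods

def connect_request(host: str, port: int) -> bytes:
    """VER=5, CMD=CONNECT(1), RSV=0, ATYP=domain(3), name length, name, port."""
    name = host.encode()
    return bytes([5, 1, 0, 3, len(name)]) + name + struct.pack(">H", port)
```

Sending `greeting()` followed by `connect_request("example.com", 80)` is how a client asks the SOCKS server to set up a proxy circuit to a destination it may not be able to reach directly.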
Source:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 96).
and
http://en.wikipedia.org/wiki/SOCKS
and
http://www.faqs.org/rfcs/rfc1928.html
and
The ISC2 OIG on page 619
Which of the following would be used to detect and correct errors so that integrity and confidentiality of transactions over networks may be maintained while preventing unauthorized interception of the traffic?
Information security
Server security
Client security
Communications security
Communications security is the discipline of preventing unauthorized interceptors from accessing telecommunications in an intelligible form, while still delivering content to the intended recipients. In the United States Department of Defense culture, it is often referred to by the abbreviation COMSEC. The field includes cryptosecurity, transmission security, emission security, traffic-flow security and physical security of COMSEC equipment.
All of the other answers are incorrect answers:
Information security
Information security would be the overall program but communications security is the more specific and better answer. Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction.
The terms information security, computer security and information assurance are frequently incorrectly used interchangeably. These fields are interrelated often and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them.
These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms. Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer.
Server security
While server security plays a part in the overall information security program, communications security is a better answer when talking about data over the network and preventing interception. See publication 800-123 listed in the reference below to learn more.
Client security
While client security plays a part in the overall information security program, communications security is a better answer. Securing the client would not prevent interception of data or capture of data over the network. Today people referred to this as endpoint security.
References:
http://csrc.nist.gov/publications/nistpubs/800-123/SP800-123.pdf
and
https://en.wikipedia.org/wiki/Information_security
and
https://en.wikipedia.org/wiki/Communications_security
What is a decrease in amplitude as a signal propagates along a transmission medium best known as?
Crosstalk
Noise
Delay distortion
Attenuation
Attenuation is the loss of signal strength as it travels. The longer a cable, the more attenuation occurs, which causes the signal carrying the data to deteriorate. This is why standards include suggested cable-run lengths. If a networking cable is too long, attenuation may occur. Basically, the data are in the form of electrons, and these electrons have to "swim" through a copper wire. However, this is more like swimming upstream, because there is a lot of resistance on the electrons working in this media. After a certain distance, the electrons start to slow down and their encoding format loses form. If the form gets too degraded, the receiving system cannot interpret them any longer. If a network administrator needs to run a cable longer than its recommended segment length, she needs to insert a repeater or some type of device that will amplify the signal and ensure it gets to its destination in the right encoding format.
Attenuation can also be caused by cable breaks and malfunctions. This is why cables should be tested. If a cable is suspected of attenuation problems, cable testers can inject signals into the cable and read the results at the end of the cable.
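Attenuation is conventionally expressed in decibels as a ratio of input to output signal power, which is what a cable tester reports. A minimal sketch:

```python
import math

# Attenuation in decibels from input and output signal power.
# The 10*log10 form applies to power ratios (20*log10 is used for
# voltage ratios).
def attenuation_db(p_in: float, p_out: float) -> float:
    return 10 * math.log10(p_in / p_out)
```

For example, a signal that loses half its power along a run has attenuated by about 3 dB, and one reduced to a hundredth of its power by 20 dB.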
The following answers are incorrect:
Crosstalk - Crosstalk is one example of noise where unwanted electrical coupling between adjacent lines causes the signal in one wire to be picked up by the signal in an adjacent wire.
Noise - Noise is also a signal degradation but it refers to a large amount of electrical fluctuation that can interfere with the interpretation of the signal by the receiver.
Delay distortion - Delay distortion can result in a misinterpretation of a signal that results from transmitting a digital signal with varying frequency components. The various components arrive at the receiver with varying delays.
Following reference(s) were/was used to create this question:
CISA review manual 2014 Page number 265
Official ISC2 guide to CISSP CBK 3rd Edition Page number 229 &
CISSP All-In-One Exam guide 6th Edition Page Number 561
Which of the following media is MOST resistant to EMI interference?
microwave
fiber optic
twisted pair
coaxial cable
A fiber optic cable is a physical medium that is capable of conducting modulated light transmission. Fiber optic cable carries signals as light waves, thus allowing higher transmission speeds and greater distances due to less attenuation. This type of cabling is more difficult to tap than other cabling and is most resistant to interference, especially EMI.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 103).
What is the MOST critical piece to disaster recovery and continuity planning?
Security policy
Management support
Availability of backup information processing facilities
Staff training
The keyword is 'MOST CRITICAL' and the correct answer is 'Management support', as management must be convinced of the plan's necessity, and that is why a business case must be made. The decision of how a company should recover from any disaster is purely a business decision and should be treated as such.
The other answers are incorrect because :
Security policy is incorrect as it is not the MOST CRITICAL piece.
Availability of backup information processing facilities is incorrect, as this comes once the organization has BCP plans in place, and for a BCP plan, management support must be there.
Staff training comes after the plans are in place with the support from management.
Reference: Shon Harris, AIO v3, Chapter 9: Business Continuity Planning, page 697.
Secure Shell (SSH) is a strong method of performing:
client authentication
server authentication
host authentication
guest authentication
Secure shell (SSH) was designed as an alternative to some of the insecure protocols and allows users to securely access resources on remote computers over an encrypted tunnel. The Secure Shell Protocol (SSH) is a protocol for secure remote login and other secure network services over an insecure network. The SSH authentication protocol runs on top of the SSH transport layer protocol and provides a single authenticated tunnel for the SSH connection protocol.
SSH’s services include remote log-on, file transfer, and command execution. It also supports port forwarding, which redirects other protocols through an encrypted SSH tunnel. Many users protect less secure traffic of protocols, such as X Windows and VNC (virtual network computing), by forwarding them through a SSH tunnel.
The SSH tunnel protects the integrity of communication, preventing session hijacking and other man-in-the-middle attacks. Another advantage of SSH over its predecessors is that it supports strong authentication. There are several alternatives for SSH clients to authenticate to a SSH server, including passwords and digital certificates.
Keep in mind that authenticating with a password is still a significant improvement over the other protocols because the password is transmitted encrypted.
There are two incompatible versions of the protocol, SSH-1 and SSH-2, though many servers support both. SSH-2 has improved integrity checks (SSH-1 is vulnerable to an insertion attack due to weak CRC-32 integrity checking) and supports local extensions and additional types of digital certificates such as Open PGP. SSH was originally designed for UNIX, but there are now implementations for other operating systems, including Windows, Macintosh, and OpenVMS.
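The port forwarding mentioned above is typically set up with OpenSSH's -L (local forward) and -N (no remote command) options. A minimal sketch of assembling such a command follows; the helper function, user, host names, and ports are illustrative assumptions, not from the text:

```python
# Hypothetical helper that builds an OpenSSH local port-forwarding command.
# -L listens on local_port and tunnels connections to dest_host:dest_port
# through the encrypted SSH session to the gateway; -N skips the remote shell.
def ssh_forward_cmd(local_port, dest_host, dest_port, user, gateway):
    return [
        "ssh",
        "-N",
        "-L", f"{local_port}:{dest_host}:{dest_port}",
        f"{user}@{gateway}",
    ]

# e.g. tunnel a VNC session (port 5901) through an SSH gateway:
cmd = ssh_forward_cmd(5901, "localhost", 5901, "alice", "gw.example.com")
print(" ".join(cmd))
```

Running the resulting command would let a VNC client connect to localhost:5901 and have its traffic carried inside the SSH tunnel, as described above for X Windows and VNC.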
Is SSH 3.0 the same as SSH3?
The short answer is: no. SSH 3.0 refers to version 3 of SSH Communications' implementation of the SSH2 protocol; it could also refer to version 3.0 of OpenSSH's SSH2 software. The "3" refers to the software release version, not the protocol version. As of this writing (July 2013), there is no SSH3 protocol.
"Server authentication" is incorrect. Though many SSH clients allow pre-caching of server/host keys, this is a minimal form of server/host authentication.
"Host authentication" is incorrect. Though many SSH clients allow pre-caching of server/host keys, this is a minimal form of server/host authentication.
"Guest authentication" is incorrect. The general idea of "guest" is that it is unauthenticated access.
Reference(s) used for this question:
http://www.ietf.org/rfc/rfc4252.txt
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 7080-7088). Auerbach Publications. Kindle Edition.
Which backup method only copies files that have been recently added or changed and also leaves the archive bit unchanged?
Full backup method
Incremental backup method
Fast backup method
Differential backup method
A differential backup is a partial backup that copies a selected file to tape only if the archive bit for that file is turned on, indicating that it has changed since the last full backup. A differential backup leaves the archive bits unchanged on the files it copies.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 69).
Also see: http://e-articles.info/e/a/title/Backup-Types/
Backup software can use or ignore the archive bit in determining which files to back up, and can either turn the archive bit off or leave it unchanged when the backup is complete. How the archive bit is used and manipulated determines what type of backup is done, as follows:
Full backup
A full backup, which Microsoft calls a normal backup, backs up every selected file, regardless of the status of the archive bit. When the backup completes, the backup software turns off the archive bit for every file that was backed up. Note that "full" is a misnomer because a full backup backs up only the files you have selected, which may be as little as one directory or even a single file, so in that sense Microsoft's terminology is actually more accurate. Given the choice, full backup is the method to use because all files are on one tape, which makes it much easier to retrieve files from tape when necessary. Relative to partial backups, full backups also increase redundancy because all files are on all tapes. That means that if one tape fails, you may still be able to retrieve a given file from another tape.
Differential backup
A differential backup is a partial backup that copies a selected file to tape only if the archive bit for that file is turned on, indicating that it has changed since the last full backup. A differential backup leaves the archive bits unchanged on the files it copies. Accordingly, any differential backup set contains all files that have changed since the last full backup. A differential backup set run soon after a full backup will contain relatively few files. One run soon before the next full backup is due will contain many files, including those contained on all previous differential backup sets since the last full backup. When you use differential backup, a complete backup set comprises only two tapes or tape sets: the tape that contains the last full backup and the tape that contains the most recent differential backup.
Incremental backup
An incremental backup is another form of partial backup. Like differential backups, Incremental Backups copy a selected file to tape only if the archive bit for that file is turned on. Unlike the differential backup, however, the incremental backup clears the archive bits for the files it backs up. An incremental backup set therefore contains only files that have changed since the last full backup or the last incremental backup. If you run an incremental backup daily, files changed on Monday are on the Monday tape, files changed on Tuesday are on the Tuesday tape, and so forth. When you use an incremental backup scheme, a complete backup set comprises the tape that contains the last full backup and all of the tapes that contain every incremental backup done since the last normal backup. The only advantages of incremental backups are that they minimize backup time and keep multiple versions of files that change frequently. The disadvantages are that backed-up files are scattered across multiple tapes, making it difficult to locate any particular file you need to restore, and that there is no redundancy. That is, each file is stored only on one tape.
Full copy backup
A full copy backup (which Microsoft calls a copy backup) is identical to a full backup except for the last step. The full backup finishes by turning off the archive bit on all files that have been backed up. The full copy backup instead leaves the archive bits unchanged. The full copy backup is useful only if you are using a combination of full backups and incremental or differential partial backups. The full copy backup allows you to make a duplicate "full" backup—e.g., for storage offsite, without altering the state of the hard drive you are backing up, which would destroy the integrity of the partial backup rotation.
Some Microsoft backup software provides a bizarre backup method Microsoft calls a daily copy backup. This method ignores the archive bit entirely and instead depends on the date- and timestamp of files to determine which files should be backed up. The problem is, it's quite possible for software to change a file without changing the date- and timestamp, or to change the date- and timestamp without changing the contents of the file. For this reason, we regard the daily copy backup as entirely unreliable and recommend you avoid using it.
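The archive-bit semantics of the full, incremental, and differential methods described above can be sketched with a toy model. The File class and function names here are hypothetical, for illustration only:

```python
# Toy model of the archive bit ("flag up" = changed since last backup).
class File:
    def __init__(self, name):
        self.name = name
        self.archive_bit = True   # new files start with the flag up

def full_backup(files):
    copied = [f.name for f in files]   # copies everything, regardless of flag
    for f in files:
        f.archive_bit = False          # and resets every flag
    return copied

def incremental_backup(files):
    copied = []
    for f in files:
        if f.archive_bit:              # only flagged (changed) files
            copied.append(f.name)
            f.archive_bit = False      # clears the flag on files it copies
    return copied

def differential_backup(files):
    # only flagged files, but the flags are left unchanged
    return [f.name for f in files if f.archive_bit]
```

With this model, two differential backups in a row copy the same changed file twice (the flag stays up), whereas a second incremental backup copies nothing, matching the behavior described above.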
Which layer of the TCP/IP protocol stack corresponds to the ISO/OSI Network layer (layer 3)?
Host-to-host layer
Internet layer
Network access layer
Session layer
The Internet layer in the TCP/IP protocol stack corresponds to the network layer (layer 3) in the OSI/ISO model. The host-to-host layer corresponds to the transport layer (layer 4) in the OSI/ISO model. The Network access layer corresponds to the data link and physical layers (layers 2 and 1) in the OSI/ISO model. The session layer is not defined in the TCP/IP protocol stack.
Source: WALLHOFF, John, CBK#2 Telecommunications and Network Security (CISSP Study Guide), April 2002 (page 1).
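The correspondence described above can be captured in a small lookup table. This is a sketch; the layer names follow the explanation (the TCP/IP application layer entry is included for completeness):

```python
# TCP/IP stack layer -> corresponding ISO/OSI layer(s), per the mapping above.
TCPIP_TO_OSI = {
    "Application":    ["Application", "Presentation", "Session"],  # OSI 7-5
    "Host-to-host":   ["Transport"],                               # OSI 4
    "Internet":       ["Network"],                                 # OSI 3
    "Network access": ["Data Link", "Physical"],                   # OSI 2-1
}
print(TCPIP_TO_OSI["Internet"])  # ['Network']
```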
Which of the following is an advantage that UDP has over TCP?
UDP is connection-oriented whereas TCP is not.
UDP is more reliable than TCP.
UDP is faster than TCP.
UDP makes a better effort to deliver packets.
UDP is a scaled-down version of TCP. It is used like TCP, but offers only "best effort" delivery. It is connectionless, does not offer error correction, does not sequence the packet segments, and is less reliable than TCP, but because of its lower overhead it provides faster transmission than TCP.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 86).
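UDP's connectionless nature can be seen with Python's standard socket module: a datagram is sent immediately, with no connect() call and no handshake, whereas TCP would first have to complete a three-way handshake. A minimal loopback sketch (addresses are illustrative):

```python
import socket

# Two UDP sockets on the loopback interface.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))       # let the OS pick a free port
receiver.settimeout(5)
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best effort", addr)   # no connect(), no ACK expected
data, _ = receiver.recvfrom(1024)
print(data)                           # b'best effort'

sender.close()
receiver.close()
```

Note that nothing in this exchange confirms delivery to the sender; if the datagram were lost, UDP itself would neither detect nor retransmit it.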
Which of the following mechanisms was created to overcome the problem of collisions that occur on wired networks when traffic is simultaneously transmitted from different nodes?
Carrier sense multiple access with collision avoidance (CSMA/CA)
Carrier sense multiple access with collision detection (CSMA/CD)
Polling
Token-passing
Which type of attack involves the alteration of a packet at the IP level to convince a system that it is communicating with a known entity in order to gain access to a system?
TCP sequence number attack
IP spoofing attack
Piggybacking attack
Teardrop attack
An IP spoofing attack is used to convince a system that it is communicating with a known entity, which gives an intruder access. It involves modifying the source address of a packet to a trusted source's address. A TCP sequence number attack involves hijacking a session between a host and a target by predicting the target's choice of an initial TCP sequence number. Piggybacking refers to an attacker gaining unauthorized access to a system by using a legitimate user's connection. A teardrop attack consists of modifying the length and fragmentation offset fields in sequential IP packets so the target system becomes confused and crashes after it receives contradictory instructions on how the fragments are offset on these packets.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 77).
The deliberate planting of apparent flaws in a system for the purpose of detecting attempted penetrations or confusing an intruder about which flaws to exploit is called:
alteration
investigation
entrapment
enticement.
Enticement deals with someone who is already breaking the law. Entrapment encourages someone to commit a crime that the individual may have had no intention of committing. Enticement is not necessarily illegal, but it does raise ethical arguments and may not be admissible in court. Enticement lures someone toward some evidence (a honeypot would be a great example) after that individual has already committed a crime.
Entrapment is when you persuade someone to commit a crime when the person otherwise had no intention to commit one. Entrapment is committed by a law enforcement agent who tricks you into committing a crime for which you would later be arrested, without your knowing you were committing such a crime. It is illegal and unethical as well.
All other choices are not applicable and are only distractors.
References:
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
and
CISSP Study Guide (Conrad, Misenar, Feldman). Elsevier. 2010. p. 428
and
http://www.dummies.com/how-to/content/security-certification-computer-forensics-and-inci.html
After a company is out of an emergency state, what should be moved back to the original site first?
Executives
Least critical components
IT support staff
Most critical components
This will expose any weaknesses in the plan and ensure the primary site has been properly repaired before moving back. Moving critical assets first may induce a second disaster if the primary site has not been repaired properly.
The first group to go back would test items such as connectivity, HVAC, power, water, improper procedures, and/or steps that have been overlooked or not done properly. By moving these first, and fixing any problems identified, the critical operations of the company are not negatively affected.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (page 621).
Which one of the following represents an ALE calculation?
single loss expectancy x annualized rate of occurrence.
gross loss expectancy x loss frequency.
actual replacement cost - proceeds of salvage.
asset value x loss expectancy.
Single Loss Expectancy (SLE) is the dollar amount that would be lost if there were a loss of an asset. Annualized Rate of Occurrence (ARO) is the estimated probability of a threat to an asset taking place in one year (for example, if there is a chance of a flood occurring once in 10 years, the ARO would be .1, and if there was a chance of a flood occurring once in 100 years, the ARO would be .01).
The following answers are incorrect:
Gross loss expectancy x loss frequency is incorrect because it is a distractor.
Actual replacement cost - proceeds of salvage is incorrect because it is a distractor.
Asset value x loss expectancy is incorrect because it is a distractor.
Controls are implemented to:
eliminate risk and reduce the potential for loss
mitigate risk and eliminate the potential for loss
mitigate risk and reduce the potential for loss
eliminate risk and eliminate the potential for loss
Controls are implemented to mitigate risk and reduce the potential for loss. Preventive controls are put in place to inhibit harmful occurrences; detective controls are established to discover harmful occurrences; corrective controls are used to restore systems that are victims of harmful attacks.
It is not feasible or possible to eliminate all risks and the potential for loss, as risks and threats are constantly changing.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 32.
What is called an event or activity that has the potential to cause harm to the information systems or networks?
Vulnerability
Threat agent
Weakness
Threat
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Pages 16, 32.
What can be defined as a batch process dumping backup data through communications lines to a server at an alternate location?
Remote journaling
Electronic vaulting
Data clustering
Database shadowing
Electronic vaulting refers to the transfer of backup data to an off-site location. This is primarily a batch process of dumping backup data through communications lines to a server at an alternate location.
Electronic vaulting is accomplished by backing up system data over a network. The backup location is usually at a separate geographical location known as the vault site. Vaulting can be used as a mirror or a backup mechanism using the standard incremental or differential backup cycle. Changes to the host system are sent to the vault server in real-time when the backup method is implemented as a mirror. If vaulting updates are recorded in real-time, then it will be necessary to perform regular backups at the off-site location to provide recovery services due to inadvertent or malicious alterations to user or system data.
The following are incorrect answers:
Remote journaling refers to the parallel processing of transactions to an alternate site (as opposed to a batch dump process). Journaling is a technique used by database management systems to provide redundancy for their transactions. When a transaction is completed, the database management system duplicates the journal entry at a remote location. The journal provides sufficient detail for the transaction to be replayed on the remote system. This provides for database recovery in the event that the database becomes corrupted or unavailable.
Database shadowing uses the live processing of remote journaling, but creates even more redundancy by duplicating the database sets to multiple servers. There are also additional redundancy options available within application and database software platforms. For example, database shadowing may be used where a database management system updates records in multiple locations. This technique updates an entire copy of the database at a remote location.
Data clustering refers to the classification of data into groups (clusters). Clustering may also be used, although it should not be confused with redundancy. In clustering, two or more “partners” are joined into the cluster and may all provide service at the same time. For example, in an active–active pair, both systems may provide services at any time. In the case of a failure, the remaining partners may continue to provide service but at a decreased capacity.
The following resource(s) were used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20403-20407 and 20411-20414 and 20375-20377 and 20280-20283). Auerbach Publications. Kindle Edition.
A momentary power outage is a:
spike
blackout
surge
fault
A momentary power outage is a fault.
Power Excess
Spike --> Too much voltage for a short period of time.
Surge --> Too much voltage for a long period of time.
Power Loss
Fault --> A momentary power outage.
Blackout --> A long power interruption.
Power Degradation
Sag or Dip --> A momentary low voltage.
Brownout --> A prolonged power supply that is below normal voltage.
Reference(s) used for this question:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 3rd. Edition McGraw-Hill/Osborne, 2005, page 368.
and
https://en.wikipedia.org/wiki/Power_quality
During the testing of the business continuity plan (BCP), which of the following methods of results analysis provides the BEST assurance that the plan is workable?
Measurement of accuracy
Elapsed time for completion of critical tasks
Quantitatively measuring the results of the test
Evaluation of the observed test results
It is important to have ways to measure the success of the plan and tests against the stated objectives. Therefore, results must be quantitatively gauged as opposed to an evaluation based only on observation. Quantitatively measuring the results of the test involves a generic statement measuring all the activities performed during BCP, which gives the best assurance of an effective plan. Although choices A and B are also quantitative, they relate to specific areas, or an analysis of results from one viewpoint, namely the accuracy of the results and the elapsed time.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 5: Disaster Recovery and Business Continuity (page 269).
Which backup method is additive because the time and tape space required for each night's backup grows during the week as it copies the day's changed files and the previous days' changed files up to the last full backup?
differential backup method
full backup method
incremental backup method
tape backup method.
The Differential Backup Method is additive because the time and tape space required for each night's backup grows during the week as it copies the day's changed files and the previous days' changed files up to the last full backup.
Archive Bits
Unless you've done a lot of backups in your time you've probably never heard of an Archive Bit. An archive bit is, essentially, a tag that is attached to every file. In actuality, it is a binary digit that is set on or off in the file, but that's crummy technical jargon that doesn't really tell us anything. For the sake of our discussion, just think of it as the flag on a mail box. If the flag is up, it means the file has been changed. If it's down, then the file is unchanged.
Archive bits let the backup software know what needs to be backed up. The differential and incremental backup types rely on the archive bit to direct them.
Backup Types
Full or Normal
The "Full" or "normal" backup type is the most standard. This is the backup type that you would use if you wanted to backup every file in a given folder or drive. It backs up everything you direct it to regardless of what the archive bit says. It also resets all archive bits (puts the flags down). Most backup software, including the built-in Windows backup software, lets you select down to the individual file that you want backed up. You can also choose to backup things like the "system state".
Incremental
When you schedule an incremental backup, you are in essence instructing the software to back up only files that have been changed, or files that have their flag up. After the incremental backup of a file has occurred, that flag goes back down. If you perform a normal backup on Monday, then an incremental backup on Wednesday, the only files that will be backed up are those that have changed since Monday. If on Thursday someone deletes a file by accident, in order to get it back you will have to restore the full backup from Monday, followed by the incremental backup from Wednesday.
Differential
Differential backups are similar to incremental backups in that they only back up files with their archive bit, or flag, up. However, when a differential backup occurs it does not reset those archive bits, which means that if another differential backup occurs the following day, it will back up that file again, whether or not it has changed since the previous differential backup.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 69.
And: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (pages 617-619).
And: http://www.brighthub.com/computing/windows-platform/articles/24531.aspx
What is the highest amount a company should spend annually on countermeasures for protecting an asset valued at $1,000,000 from a threat that has an annualized rate of occurrence (ARO) of once every five years and an exposure factor (EF) of 30%?
$300,000
$150,000
$60,000
$1,500
The cost of a countermeasure should not be greater than the annualized loss expectancy (ALE) of the risk it mitigates. For a quantitative risk assessment, the equation is ALE = ARO x SLE, where the SLE is calculated as the product of asset value x exposure factor. An event that happens once every five years has an ARO of .2 (1 divided by 5).
SLE = Asset Value (AV) x Exposure Factor (EF)
SLE = 1,000,000 x .30 = 300,000
ALE = SLE x Annualized Rate of Occurrence (ARO)
ALE = 300,000 x .2 = 60,000
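The same calculation can be expressed in a few lines of integer arithmetic, as a sketch of the formulas above:

```python
# Quantitative risk assessment: ALE = SLE x ARO, SLE = asset value x EF.
asset_value = 1_000_000
ef_percent = 30               # exposure factor: 30% of asset value per event
years_between_events = 5      # once every five years -> ARO = 1/5 = .2

sle = asset_value * ef_percent // 100      # Single Loss Expectancy: 300,000
ale = sle // years_between_events          # Annualized Loss Expectancy: 60,000
print(sle, ale)                            # 300000 60000
```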
Know your acronyms:
ALE -- Annualized loss expectancy
ARO -- Annualized rate of occurrence
SLE -- Single loss expectancy
The following are incorrect answers:
$300,000 is incorrect. See the explanation of the correct answer for the correct calculation.
$150,000 is incorrect. See the explanation of the correct answer for the correct calculation.
$1,500 is incorrect. See the explanation of the correct answer for the correct calculation.
Reference(s) used for this question:
Mc Graw Hill, Shon Harris, CISSP All In One (AIO) book, Sixth Edition , Pages 87-88
and
Official ISC2 Guide to the CISSP Exam, (OIG), Pages 60-61
This type of supporting evidence is used to help prove an idea or a point; however, it cannot stand on its own and is used as a supplementary tool to help prove a primary piece of evidence. What is the name of this type of evidence?
Circumstantial evidence
Corroborative evidence
Opinion evidence
Secondary evidence
This type of supporting evidence is used to help prove an idea or a point; however, it cannot stand on its own and is used as a supplementary tool to help prove a primary piece of evidence. Corroborative evidence takes many forms.
In a rape case, for example, this could consist of torn clothing, soiled bed sheets, 911 emergency call tapes, and prompt-complaint witnesses.
There are many types of evidence that exist. Below you have explanations of some of the most common types:
Physical Evidence
Physical evidence is any evidence introduced in a trial in the form of a physical object, intended to prove a fact in issue based on its demonstrable physical characteristics. Physical evidence can conceivably include all or part of any object.
In a murder trial for example (or a civil trial for assault), the physical evidence might include DNA left by the attacker on the victim's body, the body itself, the weapon used, pieces of carpet spattered with blood, or casts of footprints or tire prints found at the scene of the crime.
Real Evidence
Real evidence is a type of physical evidence and consists of objects that were involved in a case or actually played a part in the incident or transaction in question.
Examples include the written contract, the defective part or defective product, the murder weapon, the gloves used by an alleged murderer. Trace evidence, such as fingerprints and firearm residue, is a species of real evidence. Real evidence is usually reported upon by an expert witness with appropriate qualifications to give an opinion. This normally means a forensic scientist or one qualified in forensic engineering.
Admission of real evidence requires authentication, a showing of relevance, and a showing that the object is in “the same or substantially the same condition” now as it was on the relevant date. An object of real evidence is authenticated through the senses of witnesses or by circumstantial evidence called chain of custody.
Documentary
Documentary evidence is any evidence introduced at a trial in the form of documents. Although this term is most widely understood to mean writings on paper (such as an invoice, a contract or a will), the term actually includes any media by which information can be preserved. Photographs, tape recordings, films, and printed emails are all forms of documentary evidence.
Documentary versus physical evidence
A piece of evidence is not documentary evidence if it is presented for some purpose other than the examination of the contents of the document. For example, if a blood-spattered letter is introduced solely to show that the defendant stabbed the author of the letter from behind as it was being written, then the evidence is physical evidence, not documentary evidence. However, a film of the murder taking place would be documentary evidence (just as a written description of the event from an eyewitness). If the content of that same letter is then introduced to show the motive for the murder, then the evidence would be both physical and documentary.
Documentary Evidence Authentication
Documentary evidence is subject to specific forms of authentication, usually through the testimony of an eyewitness to the execution of the document, or to the testimony of a witness able to identify the handwriting of the purported author. Documentary evidence is also subject to the best evidence rule, which requires that the original document be produced unless there is a good reason not to do so.
The role of the expert witness
Where physical evidence is of a complexity that makes it difficult for the average person to understand its significance, an expert witness may be called to explain to the jury the proper interpretation of the evidence at hand.
Digital Evidence or Electronic Evidence
Digital evidence or electronic evidence is any probative information stored or transmitted in digital form that a party to a court case may use at trial.
The use of digital evidence has increased in the past few decades as courts have allowed the use of e-mails, digital photographs, ATM transaction logs, word processing documents, instant message histories, files saved from accounting programs, spreadsheets, internet browser histories, databases, the contents of computer memory, computer backups, computer printouts, Global Positioning System tracks, logs from a hotel’s electronic door locks, and digital video or audio files.
While many courts in the United States have applied the Federal Rules of Evidence to digital evidence in the same way as more traditional documents, courts have noted very important differences. As compared to the more traditional evidence, courts have noted that digital evidence tends to be more voluminous, more difficult to destroy, easily modified, easily duplicated, potentially more expressive, and more readily available. As such, some courts have sometimes treated digital evidence differently for purposes of authentication, hearsay, the best evidence rule, and privilege. In December 2006, strict new rules were enacted within the Federal Rules of Civil Procedure requiring the preservation and disclosure of electronically stored evidence.
Demonstrative Evidence
Demonstrative evidence is evidence in the form of a representation of an object. This is, as opposed to, real evidence, testimony, or other forms of evidence used at trial.
Examples of demonstrative evidence include photos, x-rays, videotapes, movies, sound recordings, diagrams, forensic animation, maps, drawings, graphs, animation, simulations, and models. It is useful for assisting a finder of fact (fact-finder) in establishing context among the facts presented in a case. To be admissible, a demonstrative exhibit must “fairly and accurately” represent the real object at the relevant time.
Chain of custody
Chain of custody refers to the chronological documentation, and/or paper trail, showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic. Because evidence can be used in court to convict persons of crimes, it must be handled in a scrupulously careful manner to avoid later allegations of tampering or misconduct which can compromise the case of the prosecution toward acquittal or to overturning a guilty verdict upon appeal.
The idea behind recording the chain of custody is to establish that the alleged evidence is in fact related to the alleged crime, rather than, for example, having been planted fraudulently to make someone appear guilty.
Establishing the chain of custody is especially important when the evidence consists of fungible goods. In practice, this most often applies to illegal drugs which have been seized by law enforcement personnel. In such cases, the defendant at times disclaims any knowledge of possession of the controlled substance in question.
Accordingly, the chain of custody documentation and testimony is presented by the prosecution to establish that the substance in evidence was in fact in the possession of the defendant.
An identifiable person must always have the physical custody of a piece of evidence. In practice, this means that a police officer or detective will take charge of a piece of evidence, document its collection, and hand it over to an evidence clerk for storage in a secure place. These transactions, and every succeeding transaction between the collection of the evidence and its appearance in court, should be completely documented chronologically in order to withstand legal challenges to the authenticity of the evidence. Documentation should include the conditions under which the evidence is gathered, the identity of all evidence handlers, duration of evidence custody, security conditions while handling or storing the evidence, and the manner in which evidence is transferred to subsequent custodians each time such a transfer occurs (along with the signatures of persons involved at each step).
Example
An example of "Chain of Custody" would be the recovery of a bloody knife at a murder scene:
Officer Andrew collects the knife and places it into a container, then gives it to forensics technician Bill. Forensics technician Bill takes the knife to the lab and collects fingerprints and other evidence from the knife. Bill then gives the knife and all evidence gathered from the knife to evidence clerk Charlene. Charlene then stores the evidence until it is needed, documenting everyone who has accessed the original evidence (the knife, and original copies of the lifted fingerprints).
The Chain of Custody requires that from the moment the evidence is collected, every transfer of evidence from person to person be documented and that it be provable that nobody else could have accessed that evidence. It is best to keep the number of transfers as low as possible.
In the courtroom, if the defendant questions the Chain of Custody of the evidence it can be proven that the knife in the evidence room is the same knife found at the crime scene. However, if there are discrepancies and it cannot be proven who had the knife at a particular point in time, then the Chain of Custody is broken and the defendant can ask to have the resulting evidence declared inadmissible.
"Chain of custody" is also used in most chemical sampling situations to maintain the integrity of the sample by providing documentation of the control, transfer, and analysis of samples. Chain of custody is especially important in environmental work where sampling can identify the existence of contamination and can be used to identify the responsible party.
REFERENCES:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 23173-23185). Auerbach Publications. Kindle Edition.
http://en.wikipedia.org/wiki/Documentary_evidence
http://en.wikipedia.org/wiki/Physical_evidence
http://en.wikipedia.org/wiki/Digital_evidence
http://en.wikipedia.org/wiki/Demonstrative_evidence
http://en.wikipedia.org/wiki/Real_evidence
http://en.wikipedia.org/wiki/Chain_of_custody
The absence of a safeguard, or a weakness in a system that may possibly be exploited is called a(n)?
Threat
Exposure
Vulnerability
Risk
A vulnerability is a weakness in a system that can be exploited by a threat.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 237.
Which of the following virus types changes some of its characteristics as it spreads?
Boot Sector
Parasitic
Stealth
Polymorphic
A Polymorphic virus produces varied but operational copies of itself in hopes of evading anti-virus software.
The following answers are incorrect:
boot sector. Is incorrect because it is not the best answer. A boot sector virus attacks the boot sector of a drive. It describes the type of attack of the virus and not the characteristics of its composition.
parasitic. Is incorrect because it is not the best answer. A parasitic virus attaches itself to other files but does not change its characteristics.
stealth. Is incorrect because it is not the best answer. A stealth virus attempts to hide the changes it has made to infected files, but it does not change its own characteristics as it spreads.
What best describes a scenario when an employee has been shaving off pennies from multiple accounts and depositing the funds into his own bank account?
Data fiddling
Data diddling
Salami techniques
Trojan horses
The salami technique involves committing many small crimes, each "slice" too small to be noticed on its own, in the hope that the overall scheme goes undetected; shaving fractions of a cent from many accounts and diverting them to one's own account is the classic example.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2001, Page 644.
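The arithmetic behind the scheme can be sketched in a few lines; the balances and interest rate below are invented purely for illustration.

```python
# Toy illustration of the salami technique: interest is truncated to
# whole cents, and the shaved fractions are accumulated elsewhere.
balances = [1234.56, 987.65, 5432.10, 250.00]  # hypothetical accounts
rate = 0.037                                   # hypothetical interest rate

shaved = 0.0
for b in balances:
    exact = b * rate
    credited = int(exact * 100) / 100  # truncate to whole cents
    shaved += exact - credited         # the "slice" diverted per account

print(f"total shaved: ${shaved:.4f}")
```

Each slice is under a cent, so no single account holder notices; repeated across thousands of accounts and interest periods, the diverted total becomes significant.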
When you update records in multiple locations or you make a copy of the whole database at a remote location as a way to achieve the proper level of fault-tolerance and redundancy, it is known as?
Shadowing
Data mirroring
Backup
Archiving
Updating records in multiple locations or copying an entire database to a remote location as a means to ensure the appropriate levels of fault-tolerance and redundancy is known as Database shadowing. Shadowing is the technique in which updates are shadowed in multiple locations. It is like copying the entire database on to a remote location.
Shadow files are an exact live copy of the original active database, allowing you to maintain live duplicates of your production database, which can be brought into production in the event of a hardware failure. They are used for security reasons: should the original database be damaged or incapacitated by hardware problems, the shadow can immediately take over as the primary database. It is therefore important that shadow files do not run on the same server or at least on the same drive as the primary database files.
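A minimal sketch of the shadowing idea is shown below. The class and key names are invented; in a real deployment the shadow would live on a different server or at least a different drive, as noted above.

```python
class ShadowedStore:
    """Minimal sketch of database shadowing: every update is applied
    to the primary and immediately replayed to a live shadow copy,
    so the shadow can take over if the primary fails."""

    def __init__(self):
        self.primary = {}
        self.shadow = {}   # stands in for the remote live copy

    def update(self, key, value):
        self.primary[key] = value
        self.shadow[key] = value   # shadowed write, kept in lock step

    def failover(self):
        """Promote the shadow to primary after a hardware failure."""
        self.primary = dict(self.shadow)

store = ShadowedStore()
store.update("acct:1001", {"balance": 250})
store.primary.clear()              # simulate loss of the primary database
store.failover()
print(store.primary["acct:1001"])  # {'balance': 250}
```

The key property is that the shadow is an exact live copy, not a periodic snapshot, which is what distinguishes shadowing from a conventional backup.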
The following are incorrect answers:
Data mirroring In data storage, disk mirroring is the replication of logical disk volumes onto separate physical hard disks in real time to ensure continuous availability. It is most commonly used in RAID 1. A mirrored volume is a complete logical representation of separate volume copies.
Backups In computing the phrase backup means to copy files to a second medium (a disk or tape) as a precaution in case the first medium fails. One of the cardinal rules in using computers is to back up your files regularly. Backups are useful in recovering information or a system in the event of a disaster.
Archiving is the storage of data that is not in continual use for historical purposes. It is the process of copying files to a long-term storage medium for backup.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 27614-27626). Auerbach Publications. Kindle Edition.
http://en.wikipedia.org/wiki/Disk_mirroring
http://www.webopedia.com/TERM/A/archive.html
http://ibexpert.net/ibe/index.php?n=Doc.DatabaseShadow
The high availability of multiple all-inclusive, easy-to-use hacking tools that do NOT require much technical knowledge has brought a growth in the number of which type of attackers?
Black hats
White hats
Script kiddies
Phreakers
Script kiddies are low to moderately skilled attackers who use readily available scripts and tools to easily launch attacks against victims.
The other answers are incorrect because :
Black hats is incorrect as they are malicious, skilled hackers.
White hats is incorrect as they are security professionals.
Phreakers is incorrect as they are telephone/PBX (private branch exchange) hackers.
Reference: Shon Harris AIO v3, Chapter 12: Operations Security, Page 830
Which virus category has the capability of changing its own code, making it harder to detect by anti-virus software?
Stealth viruses
Polymorphic viruses
Trojan horses
Logic bombs
A polymorphic virus has the capability of changing its own code, enabling it to have many different variants, making it harder to detect by anti-virus software. The particularity of a stealth virus is that it tries to hide its presence after infecting a system. A Trojan horse is a set of unauthorized instructions that are added to or replacing a legitimate program. A logic bomb is a set of instructions that is initiated when a specific event occurs.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 11: Application and System Development (page 786).
What do the ILOVEYOU and Melissa virus attacks have in common?
They are both denial-of-service (DOS) attacks.
They have nothing in common.
They are both masquerading attacks.
They are both social engineering attacks.
While a masquerading attack can be considered a type of social engineering, the Melissa and ILOVEYOU viruses are examples of masquerading attacks, even though they may also cause some kind of denial of service due to mail servers being flooded with messages. In this case, the receiver confidently opens a message coming from a trusted individual, only to find that the message was sent using the trusted party's identity.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 10: Law, Investigation, and Ethics (page 650).
In computing, what is the name of a non-self-replicating type of malware program that appears to have some useful purpose but contains malicious or harmful code imbedded in it which, when executed, carries out actions unknown to the person installing it, typically causing loss or theft of data and possible system harm?
virus
worm
Trojan horse.
trapdoor
A trojan horse is any code that appears to have some useful purpose but also contains code that has a malicious or harmful purpose imbedded in it. A Trojan often also includes a trapdoor as a means to gain access to a computer system bypassing security controls.
Wikipedia defines it as:
A Trojan horse, or Trojan, in computing is a non-self-replicating type of malware program containing malicious code that, when executed, carries out actions determined by the nature of the Trojan, typically causing loss or theft of data, and possible system harm. The term is derived from the story of the wooden horse used to trick defenders of Troy into taking concealed warriors into their city in ancient Greece, because computer Trojans often employ a form of social engineering, presenting themselves as routine, useful, or interesting in order to persuade victims to install them on their computers.
The following answers are incorrect:
virus. Is incorrect because a virus does not appear to have a useful purpose; its sole purpose is malicious intent, often doing damage to a system. A computer virus is a type of malware that, when executed, replicates by inserting copies of itself (possibly modified) into other computer programs, data files, or the boot sector of the hard drive; when this replication succeeds, the affected areas are then said to be "infected".
worm. Is incorrect because a worm is similar to a virus but does not require user intervention to execute. Rather than doing damage to the system, worms tend to self-propagate and devour the resources of a system. A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
trapdoor. Is incorrect because a trapdoor is a means to bypass security by hiding an entry point into a system. Trojan Horses often have a trapdoor imbedded in them.
References:
http://en.wikipedia.org/wiki/Trojan_horse_%28computing%29
and
http://en.wikipedia.org/wiki/Computer_virus
and
http://en.wikipedia.org/wiki/Computer_worm
and
http://en.wikipedia.org/wiki/Backdoor_%28computing%29
Java is not:
Object-oriented.
Distributed.
Architecture Specific.
Multithreaded.
Java was developed so that the same program could be executed on multiple hardware and operating system platforms; it is not architecture specific.
The following answers are incorrect:
Object-oriented. Is not correct because JAVA is object-oriented. It should use the object-oriented programming methodology.
Distributed. Is incorrect because Java was developed to be distributed, that is, to run on multiple computer systems over a network.
Multithreaded. Is incorrect because Java is multithreaded, meaning it can execute several threads of control concurrently.
Which of the following technologies is a target of XSS or CSS (Cross-Site Scripting) attacks?
Web Applications
Intrusion Detection Systems
Firewalls
DNS Servers
XSS or Cross-Site Scripting is a threat to web applications where malicious code is placed on a website and attacks users via their existing authenticated session.
Cross-Site Scripting attacks are a type of injection problem, in which malicious scripts are injected into the otherwise benign and trusted web sites. Cross-site scripting (XSS) attacks occur when an attacker uses a web application to send malicious code, generally in the form of a browser side script, to a different end user. Flaws that allow these attacks to succeed are quite widespread and occur anywhere a web application uses input from a user in the output it generates without validating or encoding it.
An attacker can use XSS to send a malicious script to an unsuspecting user. The end user’s browser has no way to know that the script should not be trusted, and will execute the script. Because it thinks the script came from a trusted source, the malicious script can access any cookies, session tokens, or other sensitive information retained by your browser and used with that site. These scripts can even rewrite the content of the HTML page.
Mitigation:
Configure your IPS - Intrusion Prevention System to detect and suppress this traffic.
Input Validation on the web application to normalize inputted data.
Set web apps to bind session cookies to the IP Address of the legitimate user and only permit that IP Address to use that cookie.
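The output-encoding part of the mitigation can be sketched in a few lines. The function and variable names below are invented for illustration; real applications would use their framework's templating auto-escaping plus the contextual rules in the OWASP cheat sheets referenced below.

```python
import html

def render_comment(user_input: str) -> str:
    """Encode user-supplied text before embedding it in an HTML page,
    so an injected <script> payload is rendered as inert text."""
    return f"<p>{html.escape(user_input)}</p>"

# Classic cookie-theft payload (illustrative only).
malicious = '<script>document.location="http://evil.example/?c="+document.cookie</script>'
print(render_comment(malicious))
# The angle brackets come out as &lt; and &gt;, so the browser
# displays the payload instead of executing it.
```

Encoding on output complements, but does not replace, the input validation and cookie-binding measures listed above.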
See the XSS (Cross Site Scripting) Prevention Cheat Sheet
See the Abridged XSS Prevention Cheat Sheet
See the DOM based XSS Prevention Cheat Sheet
See the OWASP Development Guide article on Phishing.
See the OWASP Development Guide article on Data Validation.
The following answers are incorrect:
Intrusion Detection Systems: Sorry. IDS systems aren't usually the target of XSS attacks, but a properly-configured IDS/IPS can detect and report on malicious strings and suppress the TCP connection in an attempt to mitigate the threat.
Firewalls: Sorry. Firewalls aren't usually the target of XSS attacks.
DNS Servers: Same as above, DNS Servers aren't usually targeted in XSS attacks but they play a key role in the domain name resolution in the XSS attack process.
The following reference(s) was used to create this question:
CCCure Holistic Security+ CBT and Curriculum
and
https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
Which of the following is most likely to be useful in detecting intrusions?
Access control lists
Security labels
Audit trails
Information security policies
If audit trails have been properly defined and implemented, they will record information that can assist in detecting intrusions.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 4: Access Control (page 186).
In order to enable users to perform tasks and duties without having to go through extra steps it is important that the security controls and mechanisms that are in place have a degree of?
Complexity
Non-transparency
Transparency
Simplicity
The security controls and mechanisms that are in place must have a degree of transparency.
This enables the user to perform tasks and duties without having to go through extra steps because of the presence of the security controls. Transparency also does not let the user know too much about the controls, which helps prevent him from figuring out how to circumvent them. If the controls are too obvious, an attacker can figure out how to compromise them more easily.
Security (more specifically, the implementation of most security controls) has long been a sore point with users who are subject to security controls. Historically, security controls have been very intrusive to users, forcing them to interrupt their work flow and remember arcane codes or processes (like long passwords or access codes), and have generally been seen as an obstacle to getting work done. In recent years, much work has been done to remove that stigma of security controls as a detractor from the work process adding nothing but time and money. When developing access control, the system must be as transparent as possible to the end user. The users should be required to interact with the system as little as possible, and the process around using the control should be engineered so as to involve little effort on the part of the user.
For example, requiring a user to swipe an access card through a reader is an effective way to ensure a person is authorized to enter a room. However, implementing a technology (such as RFID) that will automatically scan the badge as the user approaches the door is more transparent to the user and will do less to impede the movement of personnel in a busy area.
In another example, asking a user to understand what applications and data sets will be required when requesting a system ID and then specifically requesting access to those resources may allow for a great deal of granularity when provisioning access, but it can hardly be seen as transparent. A more transparent process would be for the access provisioning system to have a role-based structure, where the user would simply specify the role he or she has in the organization and the system would know the specific resources that user needs to access based on that role. This requires less work and interaction on the part of the user and will lead to more accurate and secure access control decisions because access will be based on predefined need, not user preference.
When developing and implementing an access control system special care should be taken to ensure that the control is as transparent to the end user as possible and interrupts his work flow as little as possible.
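The role-based provisioning process described above can be sketched as a simple mapping. The role names and resource names below are hypothetical; a real system would draw them from a directory service or identity-management product.

```python
# Hypothetical role-to-resource mapping: the user states a role, and
# the system derives the concrete access grants from predefined need,
# not user preference.
ROLE_RESOURCES = {
    "accounts_payable": {"erp_invoices", "vendor_db", "payment_gateway"},
    "help_desk": {"ticketing_system", "knowledge_base"},
}

def provision(user: str, role: str) -> set:
    """Return the resources a user should be granted based on role."""
    try:
        return ROLE_RESOURCES[role]
    except KeyError:
        raise ValueError(f"unknown role: {role}")

print(provision("alice", "help_desk"))
```

The user only has to name a role, which keeps the control transparent; the granularity lives in the predefined mapping rather than in per-user requests.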
The following answers were incorrect:
All of the other detractors were incorrect.
Reference(s) used for this question:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 6th edition. Operations Security, Page 1239-1240
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 25278-25281). McGraw-Hill. Kindle Edition.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 713-729). Auerbach Publications. Kindle Edition.
Why would anomaly detection IDSs often generate a large number of false positives?
Because they can only identify correctly attacks they already know about.
Because they are application-based are more subject to attacks.
Because they can't identify abnormal behavior.
Because normal patterns of user and system behavior can vary wildly.
Unfortunately, anomaly detectors and the Intrusion Detection Systems (IDS) based on them often produce a large number of false alarms, as normal patterns of user and system behavior can vary wildly. Being only able to identify correctly attacks they already know about is a characteristic of misuse detection (signature-based) IDSs. Application-based IDSs are a special subset of host-based IDSs that analyze the events transpiring within a software application. They are more vulnerable to attacks than host-based IDSs. Not being able to identify abnormal behavior would not cause false positives, since they are not identified.
Source: DUPUIS, Clément, Access Control Systems and Methodology CISSP Open Study Guide, version 1.0, March 2002 (page 92).
Attributes that characterize an attack are stored for reference using which of the following Intrusion Detection System (IDS) ?
signature-based IDS
statistical anomaly-based IDS
event-based IDS
inferent-based IDS
A signature-based IDS stores attributes that characterize known attacks (signatures) and compares observed activity against them.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Controls provide accountability for individuals who are accessing sensitive information. This accountability is accomplished:
through access control mechanisms that require identification and authentication and through the audit function.
through logical or technical controls involving the restriction of access to systems and the protection of information.
through logical or technical controls but not involving the restriction of access to systems and the protection of information.
through access control mechanisms that do not require identification and authentication and do not operate through the audit function.
Controls provide accountability for individuals who are accessing sensitive information. This accountability is accomplished through access control mechanisms that require identification and authentication and through the audit function. These controls must be in accordance with and accurately represent the organization's security policy. Assurance procedures ensure that the control mechanisms correctly implement the security policy for the entire life cycle of an information system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
Several analysis methods can be employed by an IDS, each with its own strengths and weaknesses, and their applicability to any given situation should be carefully considered. There are two basic IDS analysis methods. Which of these basic methods is more prone to false positives?
Pattern Matching (also called signature analysis)
Anomaly Detection
Host-based intrusion detection
Network-based intrusion detection
Several analysis methods can be employed by an IDS, each with its own strengths and weaknesses, and their applicability to any given situation should be carefully considered.
There are two basic IDS analysis methods:
1. Pattern Matching (also called signature analysis), and
2. Anomaly detection
PATTERN MATCHING
Some of the first IDS products used signature analysis as their detection method and simply looked for known characteristics of an attack (such as specific packet sequences or text in the data stream) to produce an alert if that pattern was detected. If a new or different attack vector is used, it will not match a known signature and, thus, slip past the IDS.
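The limitation described above can be made concrete with a toy matcher. The signatures and payloads below are invented examples, not real IDS rules.

```python
# Hypothetical signature database: known attack byte patterns.
SIGNATURES = {
    b"/etc/passwd": "path traversal attempt",
    b"' OR '1'='1": "SQL injection probe",
}

def match_signatures(payload: bytes):
    """Flag any payload containing a known attack pattern.  Anything
    that matches no signature slips past unreported, which is exactly
    the weakness of pure pattern matching."""
    return [name for sig, name in SIGNATURES.items() if sig in payload]

print(match_signatures(b"GET /../../etc/passwd HTTP/1.1"))
print(match_signatures(b"GET /totally-new-attack HTTP/1.1"))  # novel vector: no match
```

A new or obfuscated attack vector produces an empty match list, which is why signature databases must be updated constantly.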
ANOMALY DETECTION
Alternately, anomaly detection uses behavioral characteristics of a system’s operation or network traffic to draw conclusions on whether the traffic represents a risk to the network or host. Anomalies may include but are not limited to:
Multiple failed log-on attempts
Users logging in at strange hours
Unexplained changes to system clocks
Unusual error messages
Unexplained system shutdowns or restarts
Attempts to access restricted files
An anomaly-based IDS tends to produce more data because anything outside of the expected behavior is reported. Thus, they tend to report more false positives as expected behavior patterns change. An advantage to anomaly-based IDS is that, because they are based on behavior identification and not specific patterns of traffic, they are often able to detect new attacks that may be overlooked by a signature-based system. Often information from an anomaly-based IDS may be used to create a pattern for a signature-based IDS.
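A minimal statistical version of the anomaly approach is sketched below; the baseline numbers are invented, and a real system would profile many metrics, not just failed log-ons.

```python
import statistics

# Hypothetical baseline: failed log-ons per hour during normal operation.
baseline = [2, 3, 1, 4, 2, 3, 2, 5, 3, 2]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag an hour whose failed-log-on count deviates from the
    baseline by more than `threshold` standard deviations."""
    return abs(count - mean) > threshold * stdev

print(is_anomalous(3))    # within normal variation
print(is_anomalous(40))   # well outside the baseline, likely brute force
```

Note that if normal behavior drifts (say, a new shift of users logging in at odd hours), values that are in fact benign will exceed the threshold, which is the false-positive problem discussed above.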
Host Based Intrusion Detection (HIDS)
HIDS is the implementation of IDS capabilities at the host level. Its most significant difference from NIDS is that related processes are limited to the boundaries of a single-host system. However, this presents advantages in effectively detecting objectionable activities because the IDS process is running directly on the host system, not just observing it from the network. This offers unfettered access to system logs, processes, system information, and device information, and virtually eliminates limits associated with encryption. The level of integration represented by HIDS increases the level of visibility and control at the disposal of the HIDS application.
Network Based Intrusion Detection (NIDS)
NIDS are usually incorporated into the network in a passive architecture, taking advantage of promiscuous mode access to the network. This means that it has visibility into every packet traversing the network segment. This allows the system to inspect packets and monitor sessions without impacting the network or the systems and applications utilizing the network.
Below are other ways that intrusion detection can be performed:
Stateful Matching Intrusion Detection
Stateful matching takes pattern matching to the next level. It scans for attack signatures in the context of a stream of traffic or overall system behavior rather than the individual packets or discrete system activities. For example, an attacker may use a tool that sends a volley of valid packets to a targeted system. Because all the packets are valid, pattern matching is nearly useless. However, the fact that a large volume of the packets was seen may, itself, represent a known or potential attack pattern. To evade attack, then, the attacker may send the packets from multiple locations with long wait periods between each transmission to either confuse the signature detection system or exhaust its session timing window. If the IDS service is tuned to record and analyze traffic over a long period of time it may detect such an attack. Because stateful matching also uses signatures, it too must be updated regularly and, thus, has some of the same limitations as pattern matching.
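The volume-over-time idea can be sketched with a sliding window. The class name and thresholds below are invented for illustration; production systems track many such windows per source and signature.

```python
from collections import deque

class RateDetector:
    """Sketch of stateful matching: individually valid events are
    tracked over a sliding time window, and an alert fires only when
    the aggregate volume looks like an attack."""

    def __init__(self, max_events=100, window_seconds=60.0):
        self.max_events = max_events
        self.window = window_seconds
        self.times = deque()

    def observe(self, timestamp: float) -> bool:
        self.times.append(timestamp)
        # Drop events that have aged out of the window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.max_events  # True => alert

det = RateDetector(max_events=5, window_seconds=10.0)
alerts = [det.observe(t) for t in [0, 1, 2, 3, 4, 5, 6]]
print(alerts)  # the first five observations pass; later ones alert
```

An attacker spacing packets beyond the window, as described above, would keep the count below `max_events`, which is why the window size and retention period matter.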
Statistical Anomaly-Based Intrusion Detection
The statistical anomaly-based IDS analyzes event data by comparing it to typical, known, or predicted traffic profiles in an effort to find potential security breaches. It attempts to identify suspicious behavior by analyzing event data and identifying patterns of entries that deviate from a predicted norm. This type of detection method can be very effective and, at a very high level, begins to take on characteristics seen in IPS by establishing an expected baseline of behavior and acting on divergence from that baseline. However, there are some potential issues that may surface with a statistical IDS. Tuning the IDS can be challenging and, if not performed regularly, the system will be prone to false positives. Also, the definition of normal traffic can be open to interpretation and does not preclude an attacker from using normal activities to penetrate systems. Additionally, in a large, complex, dynamic corporate environment, it can be difficult, if not impossible, to clearly define “normal” traffic. The value of statistical analysis is that the system has the potential to detect previously unknown attacks. This is a huge departure from the limitation of matching previously known signatures. Therefore, when combined with signature matching technology, the statistical anomaly-based IDS can be very effective.
Protocol Anomaly-Based Intrusion Detection
A protocol anomaly-based IDS identifies any unacceptable deviation from expected behavior based on known network protocols. For example, if the IDS is monitoring an HTTP session and the traffic contains attributes that deviate from established HTTP session protocol standards, the IDS may view that as a malicious attempt to manipulate the protocol, penetrate a firewall, or exploit a vulnerability. The value of this method is directly related to the use of well-known or well-defined protocols within an environment. If an organization primarily uses well-known protocols (such as HTTP, FTP, or telnet) this can be an effective method of performing intrusion detection. In the face of custom or nonstandard protocols, however, the system will have more difficulty or be completely unable to determine the proper packet format. Interestingly, this type of method is prone to the same challenges faced by signature-based IDSs. For example, specific protocol analysis modules may have to be added or customized to deal with unique or new protocols or unusual use of standard protocols. Nevertheless, having an IDS that is intimately aware of valid protocol use can be very powerful when an organization employs standard implementations of common protocols.
Traffic Anomaly-Based Intrusion Detection
A traffic anomaly-based IDS identifies any unacceptable deviation from expected behavior based on actual traffic structure. When a session is established between systems, there is typically an expected pattern and behavior to the traffic transmitted in that session. That traffic can be compared to expected traffic conduct based on the understandings of traditional system interaction for that type of connection. Like the other types of anomaly-based IDS, traffic anomaly-based IDS relies on the ability to establish “normal” patterns of traffic and expected modes of behavior in systems, networks, and applications. In a highly dynamic environment it may be difficult, if not impossible, to clearly define these parameters.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3664-3686). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3711-3734). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3694-3711). Auerbach Publications. Kindle Edition.
What setup should an administrator use for regularly testing the strength of user passwords?
A networked workstation so that the live password database can easily be accessed by the cracking program.
A networked workstation so the password database can easily be copied locally and processed by the cracking program.
A standalone workstation on which the password database is copied and processed by the cracking program.
A password-cracking program is unethical; therefore it should not be used.
Poor password selection is frequently a major security problem for any system's security. Administrators should obtain and use password-guessing programs frequently to identify those users having easily guessed passwords.
Because password-cracking programs are very CPU intensive and can slow the system on which it is running, it is a good idea to transfer the encrypted passwords to a standalone (not networked) workstation. Also, by doing the work on a non-networked machine, any results found will not be accessible by anyone unless they have physical access to that system.
Out of the four choices presented above this is the best choice.
However, in real life you would have strong password policies that enforce complexity requirements and do not let the user choose a simple or short password that can be easily cracked or guessed. That would be the best choice if it were one of the choices presented.
Another issue with password cracking is one of privacy. Many password-cracking tools can address this by only reporting that a password was cracked, not what the password actually is, masking the password from the person doing the cracking.
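The standalone audit workflow with result masking can be sketched as follows. The usernames, passwords, and hash scheme below are invented; real systems use salted, slow hashes (e.g. bcrypt), so this is only a sketch of the workflow, not of real credential storage.

```python
import hashlib

# Hypothetical copied password file: username -> unsalted SHA-256 hash.
# (Copied to a standalone, non-networked workstation before auditing.)
password_file = {
    "alice": hashlib.sha256(b"Str0ng&Long!Passphrase").hexdigest(),
    "bob": hashlib.sha256(b"password").hexdigest(),
}
wordlist = ["123456", "password", "letmein"]

def audit(entries, words):
    """Report WHICH accounts have guessable passwords without ever
    revealing the password itself (privacy masking)."""
    guess_hashes = {hashlib.sha256(w.encode()).hexdigest() for w in words}
    return [user for user, digest in entries.items() if digest in guess_hashes]

print(audit(password_file, wordlist))  # ['bob']
```

The audit output names only the weak accounts, so the administrator can force a password change without ever learning the cracked passwords.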
Source: National Security Agency, Systems and Network Attack Center (SNAC), The 60 Minute Network Security Guide, February 2002, page 8.
Which of the following is an IDS that acquires data and defines a "normal" usage profile for the network or host?
Statistical Anomaly-Based ID
Signature-Based ID
dynamical anomaly-based ID
inferential anomaly-based ID
Statistical Anomaly-Based ID - With this method, an IDS acquires data and defines a "normal" usage profile for the network or host that is being monitored.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Network-based Intrusion Detection systems:
Commonly reside on a discrete network segment and monitor the traffic on that network segment.
Commonly will not reside on a discrete network segment and monitor the traffic on that network segment.
Commonly reside on a discrete network segment and does not monitor the traffic on that network segment.
Commonly reside on a host and monitor the traffic on that specific host.
Network-based ID systems:
- Commonly reside on a discrete network segment and monitor the traffic on that network segment
- Usually consist of a network appliance with a Network Interface Card (NIC) that is operating in promiscuous mode and is intercepting and analyzing the network packets in real time
"A passive NIDS takes advantage of promiscuous mode access to the network, allowing it to gain visibility into every packet traversing the network segment. This allows the system to inspect packets and monitor sessions without impacting the network, performance, or the systems and applications utilizing the network."
NOTE FROM CLEMENT:
A discrete network is a synonym for a SINGLE network. Usually the sensor will monitor a single network segment; however, there are IDS today that allow you to monitor multiple LANs at the same time.
References used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 62.
and
Official (ISC)2 Guide to the CISSP CBK, Hal Tipton and Kevin Henry, Page 196
and
Additional information on IDS systems can be found here: http://en.wikipedia.org/wiki/Intrusion_detection_system
Which of the following would be LESS likely to prevent an employee from reporting an incident?
They are afraid of being pulled into something they don't want to be involved with.
The process of reporting incidents is centralized.
They are afraid of being accused of something they didn't do.
They are unaware of the company's security policies and procedures.
A centralized reporting process makes it easy and convenient for employees to report incidents, so it is the factor least likely to discourage reporting.
The other answers are incorrect because:
They are afraid of being pulled into something they don't want to be involved with is incorrect because many employees do share this fear, and it would prevent them from reporting an incident.
They are afraid of being accused of something they didn't do is also incorrect, as this too prevents them from reporting an incident.
They are unaware of the company's security policies and procedures is also incorrect, as mentioned above.
Reference: Shon Harris, AIO v3, Chapter 10: Laws, Investigation & Ethics, Page 675.
Which of the following would assist the most in Host Based intrusion detection?
audit trails.
access control lists.
security clearances
host-based authentication
To assist in Intrusion Detection you would review audit logs for access violations.
The following answers are incorrect:
access control lists. This is incorrect because access control lists determine who has access to what but do not detect intrusions.
security clearances. This is incorrect because security clearances determine who has access to what but do not detect intrusions.
host-based authentication. This is incorrect because host-based authentication determines who has been authenticated to the system but does not detect intrusions.
Knowledge-based Intrusion Detection Systems (IDS) are more common than:
Network-based IDS
Host-based IDS
Behavior-based IDS
Application-Based IDS
Knowledge-based IDS are more common than behavior-based ID systems.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 63.
Application-Based IDS - "a subset of HIDS that analyze what's going on in an application using the transaction log files of the application." Source: Official ISC2 CISSP CBK Review Seminar Student Manual Version 7.0 p. 87
Host-Based IDS - "an implementation of IDS capabilities at the host level. Its most significant difference from NIDS is intrusion detection analysis, and related processes are limited to the boundaries of the host." Source: Official ISC2 Guide to the CISSP CBK - p. 197
Network-Based IDS - "a network device, or dedicated system attached to the network, that monitors traffic traversing the network segment for which it is integrated." Source: Official ISC2 Guide to the CISSP CBK - p. 196
CISSP For Dummies, a book that we recommend for a quick overview of the 10 domains, has nice and concise coverage of the subject:
Intrusion detection is defined as real-time monitoring and analysis of network activity and data for potential vulnerabilities and attacks in progress. One major limitation of current intrusion detection system (IDS) technologies is the requirement to filter false alarms lest the operator (system or security administrator) be overwhelmed with data. IDSes are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based:
Active and passive IDS
An active IDS (now more commonly known as an intrusion prevention system — IPS) is a system that's configured to automatically block suspected attacks in progress without any intervention required by an operator. IPS has the advantage of providing real-time corrective action in response to an attack but has many disadvantages as well. An IPS must be placed in-line along a network boundary; thus, the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven't been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack by intentionally flooding the system with alarms that cause it to block connections until no connections or bandwidth are available.
A passive IDS is a system that's configured only to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It isn't capable of performing any protective or corrective functions on its own. The major advantages of passive IDSes are that these systems can be easily and rapidly deployed and are not normally susceptible to attack themselves.
Network-based and host-based IDS
A network-based IDS usually consists of a network appliance (or sensor) with a Network Interface Card (NIC) operating in promiscuous mode and a separate management interface. The IDS is placed along a network segment or boundary and monitors all traffic on that segment.
A host-based IDS requires small programs (or agents) to be installed on individual systems to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms. A host-based IDS can only monitor the individual host systems on which the agents are installed; it doesn't monitor the entire network.
Knowledge-based and behavior-based IDS
A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDS is currently more common than behavior-based IDS.
Advantages of knowledge-based systems include the following:
It has lower false alarm rates than behavior-based IDS.
Alarms are more standardized and more easily understood than behavior-based IDS.
Disadvantages of knowledge-based systems include these:
Signature database must be continually updated and maintained.
New, unique, or original attacks may not be detected or may be improperly classified.
A behavior-based (or statistical anomaly–based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts. Deviations from this baseline or pattern cause an alarm to be triggered.
Advantages of behavior-based systems include that they
Dynamically adapt to new, unique, or original attacks.
Are less dependent on identifying specific operating system vulnerabilities.
Disadvantages of behavior-based systems include
Higher false alarm rates than knowledge-based IDSes.
Usage patterns that may change often and may not be static enough to implement an effective behavior-based IDS.
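The contrast between the two approaches can be sketched in a few lines of Python; the signature list, the metric, and the 3-standard-deviation threshold are illustrative assumptions, not any vendor's actual logic:

```python
import statistics

# Hypothetical attack signatures for the knowledge-based approach.
SIGNATURES = [b"/etc/passwd", b"' OR 1=1"]

def knowledge_based_alert(packet: bytes) -> bool:
    # Alert when the packet matches any known attack signature.
    return any(sig in packet for sig in SIGNATURES)

def behavior_based_alert(baseline, observed: float, k: float = 3.0) -> bool:
    # Alert when the observed metric (e.g. log-ons per hour) deviates
    # more than k standard deviations from the learned baseline.
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(observed - mean) > k * stdev
```

Note that the knowledge-based function can never fire on a brand-new attack (no signature exists for it), while the behavior-based function will fire on any sufficiently large deviation, legitimate or not, which is exactly the false-alarm trade-off described above.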
What is the essential difference between a self-audit and an independent audit?
Tools used
Results
Objectivity
Competence
To maintain operational assurance, organizations use two basic methods: system audits and monitoring. Monitoring refers to an ongoing activity whereas audits are one-time or periodic events and can be either internal or external. The essential difference between a self-audit and an independent audit is objectivity, thus indirectly affecting the results of the audit. Internal and external auditors should have the same level of competence and can use the same tools.
Source: SWANSON, Marianne & GUTTMAN, Barbara, National Institute of Standards and Technology (NIST), NIST Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, September 1996 (page 25).
If an organization were to monitor their employees' e-mail, it should not:
Monitor only a limited number of employees.
Inform all employees that e-mail is being monitored.
Explain who can read the e-mail and how long it is backed up.
Explain what is considered an acceptable use of the e-mail system.
Monitoring has to be conducted in a lawful manner and applied in a consistent fashion; thus it should be applied uniformly to all employees, not only to a small number.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 9: Law, Investigation, and Ethics (page 304).
Which conceptual approach to intrusion detection system is the most common?
Behavior-based intrusion detection
Knowledge-based intrusion detection
Statistical anomaly-based intrusion detection
Host-based intrusion detection
There are two conceptual approaches to intrusion detection. Knowledge-based intrusion detection uses a database of known vulnerabilities to look for current attempts to exploit them on a system and trigger an alarm if an attempt is found. The other approach, not as common, is called behaviour-based or statistical analysis-based. A host-based intrusion detection system is a common implementation of intrusion detection, not a conceptual approach.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 63).
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 4: Access Control (pages 193-194).
What ensures that the control mechanisms correctly implement the security policy for the entire life cycle of an information system?
Accountability controls
Mandatory access controls
Assurance procedures
Administrative controls
Controls provide accountability for individuals accessing information. Assurance procedures ensure that access control mechanisms correctly implement the security policy for the entire life cycle of an information system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 33).
In the process of gathering evidence from a computer attack, a system administrator took a series of actions which are listed below. Can you identify which one of these actions has compromised the whole evidence collection process?
Using a write blocker
Made a full-disk image
Created a message digest for log files
Displayed the contents of a folder
Displaying the directory contents of a folder can alter the last access time on each listed file.
Using a write blocker is wrong because a write blocker ensures that you cannot modify the data on the host; it prevents the host from writing to its hard drives.
Made a full-disk image is wrong because making a full-disk image preserves all data on a hard disk, including deleted files and file fragments.
Created a message digest for log files is wrong because a message digest is a cryptographic checksum that can demonstrate that the integrity of a file has not been compromised (e.g. changes to the content of a log file).
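As a sketch of why a message digest preserves rather than compromises evidence, the following Python snippet computes a SHA-256 checksum that can be recorded at collection time and recomputed later to prove a log file was not altered; the helper names and chunk size are arbitrary choices:

```python
import hashlib

def digest_bytes(data: bytes) -> str:
    # SHA-256 hex digest of an in-memory byte string.
    return hashlib.sha256(data).hexdigest()

def digest_file(path: str) -> str:
    # SHA-256 hex digest of a file, read in chunks so large log
    # files do not need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Any change to even a single byte of the file produces a completely different digest, which is what makes the recorded value usable as an integrity check.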
Domain: LEGAL, REGULATIONS, COMPLIANCE AND INVESTIGATIONS
References:
AIO 3rd Edition, page 783-784
NIST 800-61 Computer Security Incident Handling guide page 3-18 to 3-20
Most access violations are:
Accidental
Caused by internal hackers
Caused by external hackers
Related to Internet
The most likely source of exposure is from the uninformed, accidental or unknowing person, although the greatest impact may be from those with malicious or fraudulent intent.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 4: Protection of Information Assets (page 192).
The IP header contains a protocol field. If this field contains the value of 51, what type of data is contained within the IP datagram?
Transmission Control Protocol (TCP)
Authentication Header (AH)
User datagram protocol (UDP)
Internet Control Message Protocol (ICMP)
AH (Authentication Header) has the value of 51.
TCP has the value of 6.
UDP has the value of 17.
ICMP has the value of 1.
Which of the following is defined as the most recent point in time to which data must be synchronized without adversely affecting the organization (financial or operational impacts)?
Recovery Point Objective
Recovery Time Objective
Point of Time Objective
Critical Time Objective
The recovery point objective (RPO) is the maximum acceptable level of data loss following an unplanned “event”, like a disaster (natural or man-made), act of crime or terrorism, or any other business or technical disruption that could cause such data loss. The RPO represents the point in time, prior to such an event or incident, to which lost data can be recovered (given the most recent backup copy of the data).
The recovery time objective (RTO) is a period of time within which business and / or technology capabilities must be restored following an unplanned event or disaster. The RTO is a function of the extent to which the interruption disrupts normal operations and the amount of revenue lost per unit of time as a result of the disaster.
These factors in turn depend on the affected equipment and application(s). Both of these numbers represent key targets that are set by the business during business continuity and disaster recovery planning; these targets in turn drive the technology and implementation choices for business resumption services, backup/recovery/archival services, and recovery facilities and procedures.
Many organizations put the cart before the horse in selecting and deploying technologies before understanding the business needs as expressed in RPO and RTO; IT departments later bear the brunt of user complaints that their service expectations are not being met. Defining the RPO and RTO can avoid that pitfall, and in doing so can also make for a compelling business case for recovery technology spending and staffing.
For the CISSP candidate studying for the exam, there are no such objectives as "point of time" and "critical time." Those two answers are simply distractors.
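A tiny Python sketch of how the RPO constrains a backup schedule; the function names and the simplifying assumption that worst-case loss equals the backup interval are illustrative only:

```python
def worst_case_data_loss(backup_interval_hours: float) -> float:
    # If backups run every N hours, a failure just before the next
    # backup loses up to N hours of transactions.
    return backup_interval_hours

def meets_rpo(backup_interval_hours: float, rpo_hours: float) -> bool:
    # The schedule satisfies the RPO only if the worst-case loss does
    # not exceed the maximum tolerable data loss.
    return worst_case_data_loss(backup_interval_hours) <= rpo_hours
```

For example, a 4-hour RPO can be met by backups every 4 hours or less, but not by a 6-hour cycle; this is the kind of business requirement that should be fixed before the technology is chosen.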
All of the following can be considered essential business functions that should be identified when creating a Business Impact Analysis (BIA) except one. Which of the following would not be considered an essential element of the BIA but an important TOPIC to include within the BCP plan:
IT Network Support
Accounting
Public Relations
Purchasing
Public Relations, although important to a company, is not listed as an essential business function that should be identified and have loss criteria developed for.
All other entries are considered essential and should be identified and have loss criteria developed.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 9: Disaster Recovery and Business continuity (page 598).
What would be the Annualized Rate of Occurrence (ARO) of the threat "user input error", in the case where a company employs 100 data entry clerks and every one of them makes one input error each month?
100
120
1
1200
If every one of the 100 clerks makes 1 error 12 times per year, that makes a total of 1200 errors. The Annualized Rate of Occurrence (ARO) is a value that represents the estimated frequency with which a threat is expected to occur. The range can be from 0.0 to a large number. Having an average of 1200 errors per year means an ARO of 1200.
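The arithmetic behind the answer can be expressed as a one-line Python calculation; the function name is ours, not standard risk-analysis tooling:

```python
def annualized_rate_of_occurrence(events_per_period: float,
                                  periods_per_year: float) -> float:
    # ARO: estimated number of times a threat occurs in one year.
    return events_per_period * periods_per_year

# 100 clerks, each making 1 input error per month, over 12 months:
errors_per_month = 100 * 1
aro = annualized_rate_of_occurrence(errors_per_month, 12)  # 1200
```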
The RSA Algorithm uses which mathematical concept as the basis of its encryption?
Geometry
16-round ciphers
PI (3.14159...)
Two large prime numbers
Source: TIPTON, et. al, Official (ISC)2 Guide to the CISSP CBK, 2007 edition, page 254.
And from the RSA web site, http://www.rsa.com/rsalabs/node.asp?id=2214 :
The RSA cryptosystem is a public-key cryptosystem that offers both encryption and digital signatures (authentication). Ronald Rivest, Adi Shamir, and Leonard Adleman developed the RSA system in 1977 [RSA78]; RSA stands for the first letter in each of its inventors' last names.
The RSA algorithm works as follows: take two large primes, p and q, and compute their product n = pq; n is called the modulus. Choose a number, e, less than n and relatively prime to (p-1)(q-1), which means e and (p-1)(q-1) have no common factors except 1. Find another number d such that (ed - 1) is divisible by (p-1)(q-1). The values e and d are called the public and private exponents, respectively. The public key is the pair (n, e); the private key is (n, d). The factors p and q may be destroyed or kept with the private key.
It is currently difficult to obtain the private key d from the public key (n, e). However if one could factor n into p and q, then one could obtain the private key d. Thus the security of the RSA system is based on the assumption that factoring is difficult. The discovery of an easy method of factoring would "break" RSA (see Question 3.1.3 and Question 2.3.3).
Here is how the RSA system can be used for encryption and digital signatures (in practice, the actual use is slightly different; see Questions 3.1.7 and 3.1.8):
Encryption
Suppose Alice wants to send a message m to Bob. Alice creates the ciphertext c by exponentiating: c = m^e mod n, where e and n are Bob's public key. She sends c to Bob. To decrypt, Bob also exponentiates: m = c^d mod n; the relationship between e and d ensures that Bob correctly recovers m. Since only Bob knows d, only Bob can decrypt this message.
Digital Signature
Suppose Alice wants to send a message m to Bob in such a way that Bob is assured the message is authentic, has not been tampered with, and comes from Alice. Alice creates a digital signature s by exponentiating: s = m^d mod n, where d and n are Alice's private key. She sends m and s to Bob. To verify the signature, Bob exponentiates and checks that the message m is recovered: m = s^e mod n, where e and n are Alice's public key.
Thus encryption and authentication take place without any sharing of private keys: each person uses only another's public key or their own private key. Anyone can send an encrypted message or verify a signed message, but only someone in possession of the correct private key can decrypt or sign a message.
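The textbook scheme above can be demonstrated with a toy Python implementation using the small primes p = 61 and q = 53. Real RSA uses primes hundreds of digits long plus a padding scheme (e.g. OAEP), so this sketch is for illustration only:

```python
# Toy RSA -- NEVER use key sizes like this in practice.
p, q = 61, 53
n = p * q                    # modulus n = pq = 3233
phi = (p - 1) * (q - 1)      # (p-1)(q-1) = 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e

def encrypt(m: int) -> int:  # c = m^e mod n, with Bob's public key
    return pow(m, e, n)

def decrypt(c: int) -> int:  # m = c^d mod n, with Bob's private key
    return pow(c, d, n)

def sign(m: int) -> int:     # s = m^d mod n, with Alice's private key
    return pow(m, d, n)

def verify(m: int, s: int) -> bool:  # check m == s^e mod n
    return pow(s, e, n) == m
```

The same modular-exponentiation primitive serves both purposes: which key (public or private) goes into the exponent determines whether the operation is encryption, decryption, signing, or verification.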
Which of the following algorithms is a stream cipher?
RC2
RC4
RC5
RC6
RC2, RC4, RC5 and RC6 were developed by Ronald Rivest of RSA Security.
In the RC family, only RC4 is a stream cipher.
RC4 allows a variable key length.
RC2 works with 64-bit blocks and variable key lengths.
RC5 has variable block sizes, key lengths and numbers of processing rounds.
RC6 was designed to fix a flaw in RC5.
Source: ANDRESS, Mandy, Exam Cram CISSP, Coriolis, 2001, Chapter 6: Cryptography (page 103).
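To illustrate what makes RC4 a stream cipher, here is a short Python sketch of its keystream generator; encryption is simply an XOR of the plaintext with that keystream. RC4 is cryptographically broken and shown here for study purposes only:

```python
def rc4_keystream(key: bytes, length: int) -> bytes:
    # Key-scheduling algorithm (KSA): permute the state with the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit keystream bytes.
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def rc4_crypt(key: bytes, data: bytes) -> bytes:
    # A stream cipher XORs data with the keystream; applying it twice
    # with the same key recovers the plaintext.
    ks = rc4_keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Because encryption and decryption are the same XOR operation, this structure is byte-oriented rather than block-oriented, which is the defining trait of a stream cipher and what separates RC4 from the block ciphers RC2, RC5 and RC6.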
Who should measure the effectiveness of Information System security related controls in an organization?
The local security specialist
The business manager
The systems auditor
The central security manager
It is the systems auditor that should lead the effort to ensure that the security controls are in place and effective. The audit would verify that the controls comply with policies, procedures, laws, and regulations where applicable. The auditor would then provide the findings to senior management.
The following answers are incorrect:
the local security specialist. Is incorrect because an independent review should take place by a third party. The security specialist might offer mitigation strategies but it is the auditor that would ensure the effectiveness of the controls
the business manager. Is incorrect because the business manager would be responsible that the controls are in place, but it is the auditor that would ensure the effectiveness of the controls.
the central security manager. Is incorrect because the central security manager would be responsible for implementing the controls, but it is the auditor that is responsible for ensuring their effectiveness.
Which of the following types of Intrusion Detection Systems uses behavioral characteristics of a system’s operation or network traffic to draw conclusions on whether the traffic represents a risk to the network or host?
Network-based ID systems.
Anomaly Detection.
Host-based ID systems.
Signature Analysis.
There are two basic IDS analysis methods: pattern matching (also called signature analysis) and anomaly detection.
Anomaly detection uses behavioral characteristics of a system’s operation or network traffic to draw conclusions on whether the traffic represents a risk to the network or host. Anomalies may include but are not limited to:
Multiple failed log-on attempts
Users logging in at strange hours
Unexplained changes to system clocks
Unusual error messages
The following are incorrect answers:
Network-based ID Systems (NIDS) are usually incorporated into the network in a passive architecture, taking advantage of promiscuous mode access to the network. This means that it has visibility into every packet traversing the network segment. This allows the system to inspect packets and monitor sessions without impacting the network or the systems and applications utilizing the network.
Host-based ID Systems (HIDS) is the implementation of IDS capabilities at the host level. Its most significant difference from NIDS is that related processes are limited to the boundaries of a single-host system. However, this presents advantages in effectively detecting objectionable activities because the IDS process is running directly on the host system, not just observing it from the network. This offers unfettered access to system logs, processes, system information, and device information, and virtually eliminates limits associated with encryption. The level of integration represented by HIDS increases the level of visibility and control at the disposal of the HIDS application.
Signature Analysis Some of the first IDS products used signature analysis as their detection method and simply looked for known characteristics of an attack (such as specific packet sequences or text in the data stream) to produce an alert if that pattern was detected. For example, an attacker manipulating an FTP server may use a tool that sends a specially constructed packet. If that particular packet pattern is known, it can be represented in the form of a signature that IDS can then compare to incoming packets. Pattern-based IDS will have a database of hundreds, if not thousands, of signatures that are compared to traffic streams. As new attack signatures are produced, the system is updated, much like antivirus solutions. There are drawbacks to pattern-based IDS. Most importantly, signatures can only exist for known attacks. If a new or different attack vector is used, it will not match a known signature and, thus, slip past the IDS. Additionally, if an attacker knows that the IDS is present, he or she can alter his or her methods to avoid detection. Changing packets and data streams, even slightly, from known signatures can cause an IDS to miss the attack. As with some antivirus systems, the IDS is only as good as the latest signature database on the system.
For additional information on Intrusion Detection Systems - http://en.wikipedia.org/wiki/Intrusion_detection_system
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3623-3625, 3649-3654, 3666-3686). Auerbach Publications. Kindle Edition.
Which one of the following statements about the advantages and disadvantages of network-based Intrusion Detection Systems is true?
Network-based IDSs are not vulnerable to attacks.
Network-based IDSs are well suited for modern switch-based networks.
Most network-based IDSs can automatically indicate whether or not an attack was successful.
The deployment of network-based IDSs has little impact upon an existing network.
Network-based IDSs are usually passive devices that listen on a network wire without interfering with the normal operation of a network. Thus, it is usually easy to retrofit a network to include network-based IDSs with minimal effort.
Network-based IDSs are not vulnerable to attacks is not true. Even though network-based IDSs can be made very secure against attack, and even invisible to many attackers, they still have to read the packets, and sometimes a well-crafted packet might exploit or kill your capture engine.
Network-based IDSs are well suited for modern switch-based networks is not true as most switches do not provide universal monitoring ports and this limits the monitoring range of a network-based IDS sensor to a single host. Even when switches provide such monitoring ports, often the single port cannot mirror all traffic traversing the switch.
Most network-based IDSs can automatically indicate whether or not an attack was successful is not true as most network-based IDSs cannot tell whether or not an attack was successful; they can only discern that an attack was initiated. This means that after a network-based IDS detects an attack, administrators must manually investigate each attacked host to determine whether it was indeed penetrated.
A timely review of system access audit records would be an example of which of the basic security functions?
avoidance
deterrence
prevention
detection
By reviewing system logs you can detect events that have occured.
The following answers are incorrect:
avoidance. This is incorrect, avoidance is a distractor. By reviewing system logs you have not avoided anything.
deterrence. This is incorrect because system logs are a history of past events. You cannot deter something that has already occurred.
prevention. This is incorrect because system logs are a history of past events. You cannot prevent something that has already occurred.
How often should a Business Continuity Plan be reviewed?
At least once a month
At least every six months
At least once a year
At least Quarterly
As stated in SP 800-34 Rev. 1:
To be effective, the plan must be maintained in a ready state that accurately reflects system requirements, procedures, organizational structure, and policies. During the Operation/Maintenance phase of the SDLC, information systems undergo frequent changes because of shifting business needs, technology upgrades, or new internal or external policies.
As a general rule, the plan should be reviewed for accuracy and completeness at an organization-defined frequency (at least once a year for the purpose of the exam) or whenever significant changes occur to any element of the plan. Certain elements, such as contact lists, will require more frequent reviews.
Remember, there could be two good answers as specified above: either once a year or whenever significant changes occur to the plan. You will of course get only one of the two presented within your exam.
Reference(s) used for this question:
NIST SP 800-34 Revision 1
Which of the following is needed for System Accountability?
Audit mechanisms.
Documented design as laid out in the Common Criteria.
Authorization.
Formal verification of system design.
Audit mechanisms are a means of being able to track user actions. Through the use of audit logs and other tools, user actions are recorded and can be used at a later date to verify what actions were performed.
Accountability is the ability to identify users and to be able to track user actions.
The following answers are incorrect:
Documented design as laid out in the Common Criteria. Is incorrect because the Common Criteria is an international standard to evaluate trust and would not be a factor in System Accountability.
Authorization. Is incorrect because Authorization is granting access to subjects, just because you have authorization does not hold the subject accountable for their actions.
Formal verification of system design. Is incorrect because all you have done is to verify the system design and have not taken any steps toward system accountability.
References:
OIG CBK Glossary (page 778)
Which of the following Intrusion Detection Systems (IDS) uses a database of attacks, known system vulnerabilities, monitoring current attempts to exploit those vulnerabilities, and then triggers an alarm if an attempt is found?
Knowledge-Based ID System
Application-Based ID System
Host-Based ID System
Network-Based ID System
Knowledge-based Intrusion Detection Systems use a database of previous attacks and known system vulnerabilities to look for current attempts to exploit those vulnerabilities, and trigger an alarm if an attempt is found.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 87.
Application-Based ID System - "a subset of HIDS that analyze what's going on in an application using the transaction log files of the application." Source: Official ISC2 CISSP CBK Review Seminar Student Manual Version 7.0 p. 87
Host-Based ID System - "an implementation of IDS capabilities at the host level. Its most significant difference from NIDS is intrusion detection analysis, and related processes are limited to the boundaries of the host." Source: Official ISC2 Guide to the CISSP CBK - p. 197
Network-Based ID System - "a network device, or dedicated system attached to the network, that monitors traffic traversing the network segment for which it is integrated." Source: Official ISC2 Guide to the CISSP CBK - p. 196
Which of the following is a disadvantage of a statistical anomaly-based intrusion detection system?
it may truly detect a non-attack event that had caused a momentary anomaly in the system.
it may falsely detect a non-attack event that had caused a momentary anomaly in the system.
it may correctly detect a non-attack event that had caused a momentary anomaly in the system.
it may loosely detect a non-attack event that had caused a momentary anomaly in the system.
Some disadvantages of a statistical anomaly-based ID are that it will not detect an attack that does not significantly change the system operating characteristics, or it may falsely detect a non-attack event that had caused a momentary anomaly in the system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Which of the following are the two MOST common implementations of Intrusion Detection Systems?
Server-based and Host-based.
Network-based and Guest-based.
Network-based and Client-based.
Network-based and Host-based.
The two most common implementations of Intrusion Detection are Network-based and Host-based.
IDS can be implemented as a network device, such as a router, switch, firewall, or dedicated device monitoring traffic, typically referred to as network IDS (NIDS).
The IDS "technology can also be incorporated into a host system (HIDS) to monitor a single system for undesirable activities."
"A network intrusion detection system (NIDS) is a network device .... that monitors traffic traversing the network segment for which it is integrated." Remember that NIDS are usually passive in nature.
HIDS is the implementation of IDS capabilities at the host level. Its most significant difference from NIDS is that related processes are limited to the boundaries of a single-host system. However, this presents advantages in effectively detecting objectionable activities because the IDS process is running directly on the host system, not just observing it from the network.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 3649-3652). Auerbach Publications. Kindle Edition.
Which of the following is not a preventive operational control?
Protecting laptops, personal computers and workstations.
Controlling software viruses.
Controlling data media access and disposal.
Conducting security awareness and technical training.
Conducting security awareness and technical training to ensure that end users and system users are aware of the rules of behaviour and their responsibilities in protecting the organization's mission is an example of a preventive management control, therefore not an operational control.
Source: STONEBURNER, Gary et al., NIST Special publication 800-30, Risk management Guide for Information Technology Systems, 2001 (page 37).
Which of the following is used to monitor network traffic or to monitor host audit logs in real time to determine violations of system security policy that have taken place?
Intrusion Detection System
Compliance Validation System
Intrusion Management System (IMS)
Compliance Monitoring System
An Intrusion Detection System (IDS) is a system that is used to monitor network traffic or to monitor host audit logs in order to determine if any violations of an organization's system security policy have taken place.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Who is responsible for providing reports to the senior management on the effectiveness of the security controls?
Information systems security professionals
Data owners
Data custodians
Information systems auditors
IT auditors determine whether systems are in compliance with the security policies, procedures, standards, baselines, designs, architectures, management direction, and other requirements, and they provide top company management with an independent view of the controls that have been designed and of their effectiveness.
"Information systems security professionals" is incorrect. Security professionals develop the security policies and supporting baselines, etc.
"Data owners" is incorrect. Data owners have overall responsibility for information assets and assign the appropriate classification for the asset as well as ensure that the asset is protected with the proper controls.
"Data custodians" is incorrect. Data custodians care for an information asset on behalf of the data owner.
References:
CBK, pp. 38 - 42.
AIO3, pp. 99-104.
Which of the following questions are least likely to help in assessing controls covering audit trails?
Does the audit trail provide a trace of user actions?
Are incidents monitored and tracked until resolved?
Is access to online logs strictly controlled?
Is there separation of duties between security personnel who administer the access control function and those who administer the audit trail?
Audit trails maintain a record of system activity by system or application processes and by user activity. In conjunction with appropriate tools and procedures, audit trails can provide individual accountability, a means to reconstruct events, detect intrusions, and identify problems. Audit trail controls are considered technical controls. Monitoring and tracking of incidents is more an operational control related to incident response capability.
Reference(s) used for this question:
SWANSON, Marianne, NIST Special Publication 800-26, Security Self-Assessment Guide for Information Technology Systems, November 2001 (Pages A-50 to A-51).
NOTE: NIST SP 800-26 has been superseded by FIPS 200, SP 800-53, and SP 800-53A.
You can find the new replacement at: http://csrc.nist.gov/publications/PubsSPs.html
However, if you really wish to see the old standard, it is listed as an archived document at:
http://csrc.nist.gov/publications/PubsSPArch.html
Which of the following would NOT violate the Due Diligence concept?
Security policy being outdated
Data owners not laying out the foundation of data protection
Network administrator not taking mandatory two-week vacation as planned
Latest security patches for servers being installed as per the Patch Management process
To be effective, a patch management program must be in place (due diligence), and detailed procedures must specify how and when the patches are applied properly (due care). Remember, the question asked for what is NOT a violation of due diligence; in this case, applying the patches demonstrates due care, and the patch management process being in place demonstrates due diligence.
Due diligence is the act of investigating and understanding the risks the company faces. A company practices due diligence by developing and implementing security policies, procedures, and standards. Detecting risks would be based on standards such as the ISO 27000 series, best practices, and other published standards such as NIST standards, for example.
Due diligence means understanding the current threats and risks. It is practiced through activities that make sure the protection mechanisms are continually maintained and operational, and that risks are constantly being evaluated and reviewed. The security policy being outdated would be an example of violating the due diligence concept.
Due care is implementing countermeasures to provide protection from those threats. Due care means taking the necessary steps to help protect the company and its resources from risks that have been identified. If the information owner does not lay out the foundation of data protection (doing something about it) and ensure that the directives are being enforced (actually being done and kept at an acceptable level), this would violate the due care concept.
If a company does not practice due care and due diligence pertaining to the security of its assets, it can be legally charged with negligence and held accountable for any ramifications of that negligence. Liability is usually established based on Due Diligence and Due Care or the lack of either.
A good way to remember this is using the first letter of both words within Due Diligence (DD) and Due Care (DC).
Due Diligence = Due Detect
Steps you take to identify risks based on best practices and standards.
Due Care = Due Correct.
Action you take to bring the risk level down to an acceptable level and maintaining that level over time.
The following answers were incorrect:
Security policy being outdated:
While having and enforcing a security policy is the right thing to do (due care), if it is outdated you are not doing it the right way (due diligence). This choice violates due diligence, not due care.
Data owners not laying out the foundation for data protection:
Data owners are not recognizing the "right thing" to do. They don't have a security policy.
Network administrator not taking the mandatory two-week vacation:
The two-week vacation is the "right thing" to do, but not taking the vacation violates due diligence (not doing the right thing the right way).
Reference(s) used for this question
Shon Harris, CISSP All In One, Version 5, Chapter 3, pg 110
PGP uses which of the following to encrypt data?
An asymmetric encryption algorithm
A symmetric encryption algorithm
A symmetric key distribution system
An X.509 digital certificate
Notice that the question specifically asks what PGP uses to encrypt the data. For this, PGP uses a symmetric key algorithm. PGP then uses an asymmetric key algorithm to encrypt the session key and send it securely to the receiver. It is a hybrid system where both types of ciphers are used for different purposes.
Whenever a question talks about encrypting the bulk of the data to be sent, a symmetric cipher is always the best choice because of its inherent speed. Asymmetric ciphers are 100 to 1000 times slower than symmetric ciphers.
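The hybrid pattern can be sketched in a few lines. This is a toy illustration of the key-wrapping idea only, not PGP's actual ciphers or packet format: the SHA-256 XOR stream cipher and the tiny textbook RSA key below are deliberately insecure stand-ins chosen so the example runs with the standard library.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric stream cipher: XOR data with a SHA-256-derived
    keystream. (Illustrative only; PGP uses real ciphers such as AES.)"""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

# Toy RSA key pair with tiny primes (illustrative only, NOT secure).
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)  # receiver's private exponent

# 1. The bulk of the data is encrypted with a fresh symmetric session key (fast).
session_key = secrets.token_bytes(16)
ciphertext = keystream_xor(session_key, b"the bulk of the message")

# 2. Only the small session key is encrypted asymmetrically (slow);
#    here byte by byte, because the toy modulus is tiny.
wrapped_key = [pow(b, e, n) for b in session_key]

# Receiver: unwrap the session key with the private key, then decrypt the data.
recovered_key = bytes(pow(c, d, n) for c in wrapped_key)
plaintext = keystream_xor(recovered_key, ciphertext)
print(plaintext)  # b'the bulk of the message'
```

The design point the exam cares about is visible in step 1 versus step 2: the slow asymmetric operation touches only the 16-byte session key, never the bulk data.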
The other answers are not correct because:
"An asymmetric encryption algorithm" is incorrect because PGP uses a symmetric algorithm to encrypt data.
"A symmetric key distribution system" is incorrect because PGP uses an asymmetric algorithm for the distribution of the session keys used for the bulk of the data.
"An X.509 digital certificate" is incorrect because PGP does not use X.509 digital certificates to encrypt the data, it uses a session key to encrypt the data.
References:
Official ISC2 Guide page: 275
All in One Third Edition page: 664 - 665
Which of the following teams should NOT be included in an organization's contingency plan?
Damage assessment team
Hardware salvage team
Tiger team
Legal affairs team
According to NIST Special Publication 800-34, a capable recovery strategy will require some or all of the following functional groups: senior management official, management team, damage assessment team, operating system administration team, systems software team, server recovery team, LAN/WAN recovery team, database recovery team, network operations recovery team, telecommunications team, hardware salvage team, alternate site recovery coordination team, original site restoration/salvage coordination team, test team, administrative support team, transportation and relocation team, media relations team, legal affairs team, physical/personal security team, and procurement team. Ideally, these teams would be staffed with the personnel responsible for the same or similar operations under normal conditions. A tiger team, originally a U.S. military jargon term, refers to a team (of "sneakers") whose purpose is to penetrate security and thus test security measures; the term is used today for teams performing ethical hacking.
Source: SWANSON, Marianne, & al., National Institute of Standards and Technology (NIST), NIST Special Publication 800-34, Contingency Planning Guide for Information Technology Systems, December 2001 (page 23).
Which of the following steps should be one of the first performed in a Business Impact Analysis (BIA)?
Identify all CRITICAL business units within the organization.
Evaluate the impact of disruptive events.
Estimate the Recovery Time Objectives (RTO).
Identify and Prioritize Critical Organization Functions
Project Initiation and Management
The first step in building the Business Continuity program is project initiation and management. During this phase, the following activities will occur:
Obtain senior management support to go forward with the project
Define a project scope, the objectives to be achieved, and the planning assumptions
Estimate the project resources needed to be successful, both human resources and financial resources
Define a timeline and major deliverables of the project.
In this phase, the program will be managed like a project, and a project manager should be assigned to the BC and DR domain.
The next step in the planning process is to have the planning team perform a BIA. The BIA will help the company decide what needs to be recovered, and how quickly. Mission functions are typically designated with terms such as critical, essential, supporting and nonessential to help determine the appropriate prioritization.
One of the first steps of a BIA is to identify and prioritize critical organization functions. All organizational functions and the technology that supports them need to be classified based on their recovery priority. Recovery time frames for organization operations are driven by the consequences of not performing the function. The consequences may include revenue lost during the down period, contractual commitments not met resulting in fines or lawsuits, and lost goodwill with customers.
All other answers are incorrect.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 21073-21075). Auerbach Publications. Kindle Edition.
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 20697-20710). Auerbach Publications. Kindle Edition.
Computer security should be first and foremost which of the following:
Cover all identified risks
Be cost-effective.
Be examined in both monetary and non-monetary terms.
Be proportionate to the value of IT systems.
Computer security should be first and foremost cost-effective.
As with any activity, there is a need to measure its cost-effectiveness, to justify budget usage, and to provide supportive arguments for the next budget claim. But organizations often have difficulty accurately measuring the effectiveness and the cost of their information security activities.
The classical financial approach for ROI calculation is not particularly appropriate for measuring security-related initiatives: Security is not generally an investment that results in a profit. Security is more about loss prevention. In other terms, when you invest in security, you don’t expect benefits; you expect to reduce the risks threatening your assets.
The concept of the ROI calculation applies to every investment, and security is no exception. Executive decision-makers want to know the impact security is having on the bottom line. In order to know how much they should spend on security, they need to know how much the lack of security is costing the business and what the most cost-effective solutions are.
Applied to security, a Return On Security Investment (ROSI) calculation can provide quantitative answers to essential financial questions:
Is an organization paying too much for its security?
What financial impact could a lack of security have on productivity?
When is the security investment enough?
Is this security product/organization beneficial?
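One common formulation, used in the ENISA introduction cited below, computes ROSI from the Annualized Loss Expectancy (ALE), the fraction of that loss the control is expected to mitigate, and the control's cost. The sketch below uses made-up figures purely for illustration:

```python
def rosi(ale, mitigation_ratio, solution_cost):
    """ROSI = (ALE * mitigation ratio - cost of solution) / cost of solution.

    ale: Annualized Loss Expectancy without the control
    mitigation_ratio: fraction of the ALE the control is expected to remove (0..1)
    solution_cost: annual cost of the control
    """
    monetary_loss_reduction = ale * mitigation_ratio
    return (monetary_loss_reduction - solution_cost) / solution_cost

# Hypothetical figures: a control costing 25,000 per year that is expected
# to remove 80% of an annualized loss expectancy of 100,000.
print(rosi(100_000, 0.8, 25_000))  # 2.2, i.e. a 220% return
```

A positive result suggests the control saves more than it costs; a negative result means the organization is paying more for the control than the risk it removes.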
The following are other concerns about computer security but not the first and foremost:
The costs and benefits of security should be carefully examined in both monetary and non-monetary terms to ensure that the cost of controls does not exceed expected benefits.
Security should be appropriate and proportionate to the value of and degree of reliance on the IT systems and to the severity, probability, and extent of potential harm.
Requirements for security vary, depending upon the particular IT system. Therefore it does not make sense for computer security to cover all identified risks when the cost of the measures exceeds the value of the systems they are protecting.
Reference(s) used for this question:
SWANSON, Marianne & GUTTMAN, Barbara, National Institute of Standards and Technology (NIST), NIST Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, September 1996 (page 6).
and
http://www.enisa.europa.eu/activities/cert/other-work/introduction-to-return-on-security-investment
To be admissible in court, computer evidence must be which of the following?
Relevant
Decrypted
Edited
Incriminating
Before any evidence can be admissible in court, the evidence has to be relevant, material to the issue, and it must be presented in compliance with the rules of evidence. This holds true for computer evidence as well.
While there are no absolute means to ensure that evidence will be allowed and helpful in a court of law, information security professionals should understand the basic rules of evidence. Evidence should be relevant, authentic, accurate, complete, and convincing. Evidence gathering should emphasize these criteria.
As stated in CISSP for Dummies:
Because computer-generated evidence can sometimes be easily manipulated, altered, or tampered with, and because it's not easily and commonly understood, this type of evidence is usually considered suspect in a court of law. In order to be admissible, evidence must be:
Relevant: It must tend to prove or disprove facts that are relevant and material to the case.
Reliable: It must be reasonably proven that what is presented as evidence is what was originally collected and that the evidence itself is reliable. This is accomplished, in part, through proper evidence handling and the chain of custody. (We discuss this in the upcoming section
“Chain of custody and the evidence life cycle.”)
Legally permissible: It must be obtained through legal means. Evidence that’s not legally permissible may include evidence obtained through the following means:
Illegal search and seizure: Law enforcement personnel must obtain a prior court order; however, non-law enforcement personnel, such as a supervisor or system administrator, may be able to conduct an authorized search under some circumstances.
Illegal wiretaps or phone taps: Anyone conducting wiretaps or phone taps must obtain a prior court order.
Entrapment or enticement: Entrapment encourages someone to commit a crime that the individual may have had no intention of committing. Conversely, enticement lures someone toward certain evidence (a honey pot, if you will) after that individual has already committed a crime. Enticement is not necessarily illegal but does raise certain ethical arguments and may not be admissible in court.
Coercion: Coerced testimony or confessions are not legally permissible.
Unauthorized or improper monitoring: Active monitoring must be properly authorized and conducted in a standard manner; users must be notified that they may be subject to monitoring.
The following answers are incorrect:
decrypted. Is incorrect because evidence has to be relevant, material to the issue, and it must be presented in compliance with the rules of evidence.
edited. Is incorrect because evidence has to be relevant, material to the issue, and it must be presented in compliance with the rules of evidence. Edited evidence violates the rules of evidence.
incriminating. Is incorrect because evidence has to be relevant, material to the issue, and it must be presented in compliance with the rules of evidence.
Reference(s) used for this question:
CISSP Study Guide (Conrad, Misenar, Feldman), Elsevier, 2012, page 423
and
Shon Harris, CISSP All-in-One Exam Guide (AIO), 6th Edition, McGraw-Hill, pages 1051-1056
and
CISSP For Dummies, Peter Gregory
Which of the following statements pertaining to ethical hacking is incorrect?
An organization should use ethical hackers who do not sell auditing, hardware, software, firewall, hosting, and/or networking services.
Testing should be done remotely to simulate external threats.
Ethical hacking should not involve writing to or modifying the target systems negatively.
Ethical hackers never use tools that have the potential of affecting servers or services.
Many of the tools used for ethical hacking have the potential of exploiting vulnerabilities and causing disruption to IT systems. It is up to the individuals performing the tests to be familiar with their use and to make sure that no such disruption happens, or at least that it is avoided as much as possible.
The first step, before sending even one single packet to the target, is to have a signed agreement with clear rules of engagement and a signed contract. The signed contract explains the associated risks to the client, and the client must agree to them before you send one packet to the target range. This way the client understands that some of the tests could lead to interruption of service or even crash a server. The client signs to acknowledge awareness of such risks and willingness to accept them.
The following are incorrect answers:
An organization should use ethical hackers who do not sell auditing, hardware, software, firewall, hosting, and/or networking services. An ethical hacking firm's independence can be questioned if it sells security solutions at the same time as doing testing for the same client. There has to be independence between the judge (the tester) and the accused (the client).
Testing should be done remotely to simulate external threats. Testing that simulates a cracker coming from the Internet is often one of the first tests performed, to validate perimeter security. By performing tests remotely, the ethical hacking firm emulates the hacker's approach more realistically.
Ethical hacking should not involve writing to or modifying the target systems negatively. Even though ethical hacking should not involve negligence in writing to or modifying the target systems or reducing their response time, comprehensive penetration testing has to be performed using the most complete tools available, just as a real cracker would.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Appendix F: The Case for Ethical Hacking (page 520).
What is the PRIMARY goal of incident handling?
Successfully retrieve all evidence that can be used to prosecute
Improve the company's ability to be prepared for threats and disasters
Improve the company's disaster recovery plan
Contain and repair any damage caused by an event.
This is the PRIMARY goal of an incident handling process.
The other answers are incorrect because :
Successfully retrieve all evidence that can be used to prosecute is incorrect because incident handling is more often aimed at identifying weaknesses than at prosecuting.
Improve the company's ability to be prepared for threats and disasters is more appropriate for a disaster recovery plan.
Improve the company's disaster recovery plan is also more appropriate for disaster recovery plan.
Reference: Shon Harris, AIO v3, Chapter 10: Law, Investigation, and Ethics, pages 727-728.
For which areas of the enterprise are business continuity plans required?
All areas of the enterprise.
The financial and information processing areas of the enterprise.
The operating areas of the enterprise.
The marketing, finance, and information processing areas.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
A momentary low voltage, from 1 cycle to a few seconds, is a:
spike
blackout
sag
fault
A momentary low voltage is a sag. A synonym would be a dip.
Risks to electrical power supply:
POWER FAILURE
Blackout: complete loss of electrical power
Fault: momentary power outage
POWER DEGRADATION
Brownout: an intentional reduction of voltage by the power company.
Sag/dip: a short period of low voltage
POWER EXCESS
Surge: Prolonged rise in voltage
Spike: Momentary High Voltage
In-rush current: the initial surge of current required by a load before it reaches normal operation.
Transient: line noise or a disturbance superimposed on the supply circuit that can cause fluctuations in electrical power.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 462). McGraw-Hill. Kindle Edition.
Which of the following computer recovery sites is only partially equipped with processing equipment?
hot site
rolling hot site
warm site
cold site
A warm site has some basic equipment, or in some cases almost all of the equipment, but it is not sufficient to be operational without bringing in the latest backups and, in some cases, more computers and other equipment.
The following answers are incorrect:
hot site. Is incorrect because a hot-site is fully configured with all the required hardware. The only thing missing is the last backup and you are up and running.
Rolling hot site. Is incorrect because a rolling hot-site is fully configured with all the required hardware.
cold site. Is incorrect because a cold site basically has power, HVAC, and basic cabling, but little or no processing equipment. All other equipment must be brought to the site. It might take a week or two to reconstruct.
References:
OIG CBK Business Continuity and Disaster Recovery Planning (pages 368 - 369)
What can be described as a measure of the magnitude of loss or impact on the value of an asset?
Probability
Exposure factor
Vulnerability
Threat
The exposure factor is a measure of the magnitude of loss or impact on the value of an asset.
The probability is the chance or likelihood, in a finite sample, that an event will occur or that a specific loss value may be attained should the event occur.
A vulnerability is the absence or weakness of a risk-reducing safeguard.
A threat is an event whose occurrence could have an undesired impact.
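In quantitative risk analysis these terms combine into the standard loss formulas: the Single Loss Expectancy (SLE) is the asset value multiplied by the exposure factor, and the Annualized Loss Expectancy (ALE) multiplies the SLE by the expected yearly frequency of the event. A minimal sketch, with made-up figures for illustration:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE = Asset Value x Exposure Factor (EF is a fraction, 0..1)."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE = SLE x Annualized Rate of Occurrence (ARO)."""
    return sle * aro

# Hypothetical asset worth 200,000 where one event destroys 25% of its
# value, expected to occur once every two years (ARO = 0.5).
sle = single_loss_expectancy(200_000, 0.25)
print(sle)                                   # 50000.0
print(annualized_loss_expectancy(sle, 0.5))  # 25000.0
```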
Source: ROTHKE, Ben, CISSP CBK Review presentation on domain 3, August 1999.
Which disaster recovery plan test involves functional representatives meeting to review the plan in detail?
Simulation test
Checklist test
Parallel test
Structured walk-through test
The structured walk-through test occurs when the functional representatives meet to review the plan in detail. This involves a thorough look at each of the plan steps, and the procedures that are invoked at that point in the plan. This ensures that the actual planned activities are accurately described in the plan. The checklist test is a method of testing the plan by distributing copies to each of the functional areas. The simulation test plays out different scenarios. The parallel test is essentially an operational test that is performed without interrupting current processing.
Source: HARE, Chris, CISSP Study Guide: Business Continuity Planning Domain.
Which of the following best defines a Computer Security Incident Response Team (CSIRT)?
An organization that provides a secure channel for receiving reports about suspected security incidents.
An organization that ensures that security incidents are reported to the authorities.
An organization that coordinates and supports the response to security incidents.
An organization that disseminates incident-related information to its constituency and other involved parties.
RFC 2828 (Internet Security Glossary) defines a Computer Security Incident Response Team (CSIRT) as an organization that coordinates and supports the response to security incidents that involves sites within a defined constituency. This is the proper definition for the CSIRT. To be considered a CSIRT, an organization must provide a secure channel for receiving reports about suspected security incidents, provide assistance to members of its constituency in handling the incidents and disseminate incident-related information to its constituency and other involved parties. Security-related incidents do not necessarily have to be reported to the authorities.
Source: SHIREY, Robert W., RFC 2828: Internet Security Glossary, May 2000.
Prior to a live disaster test also called a Full Interruption test, which of the following is most important?
Restore all files in preparation for the test.
Document expected findings.
Arrange physical security for the test site.
Conduct of a successful Parallel Test
A live disaster test, or full interruption test, is an actual execution of the Disaster Recovery Plan: all operations are shut down and brought back online at the alternate site. This test poses the biggest threat to an organization and should not be performed until a successful Parallel Test has been conducted.
1. A Checklist test would be conducted where each of the key players will get a copy of the plan and they read it to make sure it has been properly developed for the specific needs of their departments.
2. A Structured Walk-Through would be conducted next. This is when all key players meet together in a room and walk through the test together to identify shortcomings and dependencies between departments.
3. A simulation test would be next. In this case you go through a disaster scenario up to the point where you would move to the alternate site. You do not move to the alternate site and you learn from your mistakes and you improve the plan. It is the right time to find shortcomings.
4. A Parallel Test would be done. You go through a disaster scenario, move to the alternate site, and process from both sites simultaneously.
5. A full interruption test would be conducted. You move to the alternate site and you resume processing at the alternate site.
The following answers are incorrect:
Restore all files in preparation for the test. Is incorrect because you would restore the files at the alternate site as part of the test not in preparation for the test.
Document expected findings. Is incorrect because it is not the best answer. Documenting the expected findings won't help if you have not performed tests prior to a Full interruption test or live disaster test.
Arrange physical security for the test site. Is incorrect because it is not the best answer. Physical security for the test site is important, but if you have not performed a successful structured walk-through prior to performing a full interruption test or live disaster test, you might have some unexpected and disastrous results.
Which of the following is NOT a common backup method?
Full backup method
Daily backup method
Incremental backup method
Differential backup method
A daily backup is not a backup method; it merely defines the periodicity at which backups are made. There can be daily full, incremental, or differential backups.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 69).
Which backup method usually resets the archive bit on the files after they have been backed up?
Incremental backup method.
Differential backup method.
Partial backup method.
Tape backup method.
The incremental backup method usually resets the archive bit on the files after they have been backed up.
An incremental backup backs up all the files that have changed since the last full backup (the first time it is run after a full backup) or since the last incremental backup (for subsequent runs), and sets the archive bit to 0. This type of backup takes less time during the backup phase but more time to restore.
The other answers are all incorrect choices.
The following backup types also exists:
Full Backup - All data are backed up. The archive bit is cleared, which means that it is set to 0.
Differential Backup - Backs up the files that have been modified since the last full backup. The archive bit does not change. It takes more time during the backup phase but less time to restore.
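The archive-bit behavior described above can be simulated in a few lines. This sketch models each file's archive bit as a boolean (True = modified since the bit was last cleared); it illustrates the selection logic only and does not represent any particular backup tool:

```python
def run_backup(files, kind):
    """Simulate one backup run. `files` maps file name -> archive bit
    (True = modified since the bit was last cleared). Returns the list
    of file names this run would back up."""
    if kind == "full":
        selected = list(files)                 # everything, regardless of bit
        for name in files:
            files[name] = False                # full backup clears the bit
    elif kind == "incremental":
        selected = [n for n, bit in files.items() if bit]
        for name in selected:
            files[name] = False                # incremental also clears it
    elif kind == "differential":
        selected = [n for n, bit in files.items() if bit]
        # a differential backup does NOT reset the archive bit
    else:
        raise ValueError(f"unknown backup kind: {kind}")
    return selected

fs = {"a.doc": True, "b.doc": True}
run_backup(fs, "full")                # backs up both files, clears both bits
fs["a.doc"] = True                    # a.doc is modified afterwards
print(run_backup(fs, "incremental"))  # ['a.doc'] -- and its bit is reset
print(run_backup(fs, "incremental"))  # [] -- nothing changed since last run
```

Running the same scenario with "differential" instead would return ['a.doc'] on every run until the next full backup, since the differential pass leaves the bit set; that is why differentials grow over time but restores need only the last full plus the last differential.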
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 69.
Risk mitigation and risk reduction controls for providing information security are classified within three main categories. Which of the following lists those categories?
preventive, corrective, and administrative
detective, corrective, and physical
Physical, technical, and administrative
Administrative, operational, and logical
Security is generally defined as the freedom from danger or as the condition of safety. Computer security, specifically, is the protection of data in a system against unauthorized disclosure, modification, or destruction and protection of the computer system itself against unauthorized use, modification, or denial of service. Because certain computer security controls inhibit productivity, security is typically a compromise toward which security practitioners, system users, and system operations and administrative personnel work to achieve a satisfactory balance between security and productivity.
Controls for providing information security can be physical, technical, or administrative.
These three categories of controls can be further classified as either preventive or detective. Preventive controls attempt to avoid the occurrence of unwanted events, whereas detective controls attempt to identify unwanted events after they have occurred. Preventive controls inhibit the free use of computing resources and therefore can be applied only to the degree that the users are willing to accept. Effective security awareness programs can help increase users’ level of tolerance for preventive controls by helping them understand how such controls enable them to trust their computing systems. Common detective controls include audit trails, intrusion detection methods, and checksums.
Three other types of controls supplement preventive and detective controls. They are usually described as deterrent, corrective, and recovery.
Deterrent controls are intended to discourage individuals from intentionally violating information security policies or procedures. These usually take the form of constraints that make it difficult or undesirable to perform unauthorized activities or threats of consequences that influence a potential intruder to not violate security (e.g., threats ranging from embarrassment to severe punishment).
Corrective controls either remedy the circumstances that allowed the unauthorized activity or return conditions to what they were before the violation. Execution of corrective controls could result in changes to existing physical, technical, and administrative controls.
Recovery controls restore lost computing resources or capabilities and help the organization recover monetary losses caused by a security violation.
Deterrent, corrective, and recovery controls are considered to be special cases within the major categories of physical, technical, and administrative controls; they do not clearly belong in either preventive or detective categories. For example, it could be argued that deterrence is a form of prevention because it can cause an intruder to turn away; however, deterrence also involves detecting violations, which may be what the intruder fears most. Corrective controls, on the other hand, are not preventive or detective, but they are clearly linked with technical controls when antiviral software eradicates a virus or with administrative controls when backup procedures enable restoring a damaged database. Finally, recovery controls are neither preventive nor detective but are included in administrative controls as disaster recovery or contingency plans.
Reference(s) used for this question
Handbook of Information Security Management, Hal Tipton
A momentary high voltage is a:
spike
blackout
surge
fault
Too much voltage for a short period of time is a spike.
Too much voltage for a long period of time is a surge.
Not enough voltage for a short period of time is a sag or dip.
Not enough voltage for a long period of time is a brownout.
A short power interruption is a fault
A long power interruption is a blackout
You MUST know all of the power issues above for the purpose of the exam.
From: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 3rd. Edition McGraw-Hill/Osborne, 2005, page 368.
Which of the following specifically addresses cyber attacks against an organization's IT systems?
Continuity of support plan
Business continuity plan
Incident response plan
Continuity of operations plan
The incident response plan focuses on information security responses to incidents affecting systems and/or networks. It establishes procedures to address cyber attacks against an organization's IT systems. These procedures are designed to enable security personnel to identify, mitigate, and recover from malicious computer incidents, such as unauthorized access to a system or data, denial of service, or unauthorized changes to system hardware or software. The continuity of support plan is the same as an IT contingency plan. It addresses IT system disruptions and establishes procedures for recovering a major application or general support system. It is not business process focused. The business continuity plan addresses business processes and provides procedures for sustaining essential business operations while recovering from a significant disruption. The continuity of operations plan addresses the subset of an organization's missions that are deemed most critical and procedures to sustain these functions at an alternate site for up to 30 days.
Source: SWANSON, Marianne, & al., National Institute of Standards and Technology (NIST), NIST Special Publication 800-34, Contingency Planning Guide for Information Technology Systems, December 2001 (page 8).
Which of the following questions is less likely to help in assessing an organization's contingency planning controls?
Is damaged media stored and/or destroyed?
Are the backup storage site and alternate site geographically far enough from the primary site?
Is there an up-to-date copy of the plan stored securely off-site?
Is the location of stored backups identified?
Contingency planning involves more than planning for a move offsite after a disaster destroys a facility.
It also addresses how to keep an organization's critical functions operating in the event of disruptions, large and small.
Handling of damaged media is an operational task related to regular production and is not specific to contingency planning.
Source: SWANSON, Marianne, NIST Special Publication 800-26, Security Self-Assessment Guide for Information Technology Systems, November 2001 (Pages A-27 to A-28).
Which of the following categories of hackers poses the greatest threat?
Disgruntled employees
Student hackers
Criminal hackers
Corporate spies
According to the authors, hackers fall in these categories, in increasing threat order: security experts, students, underemployed adults, criminal hackers, corporate spies and disgruntled employees.
Disgruntled employees are the most dangerous security problem of all because they are most likely to have a good knowledge of the organization's IT systems and security measures.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 2: Hackers.
Which of the following cannot be undertaken in conjunction or while computer incident handling is ongoing?
System development activity
Help-desk function
System Imaging
Risk management process
If Incident Handling is underway an incident has potentially been identified. At that point all use of the system should stop because the system can no longer be trusted and any changes could contaminate the evidence. This would include all System Development Activity.
Every organization should have plans and procedures in place that deal with Incident Handling.
Employees should be instructed what steps are to be taken as soon as an incident occurs and how to report it. It is important that all parties involved are aware of these steps to protect not only any possible evidence but also to prevent any additional harm.
It is quite possible that the fraudster has planted malicious code that could cause destruction, or even a Trojan Horse with a back door into the system. As soon as an incident has been identified, the system can no longer be trusted and all use of the system should cease.
Shon Harris in her latest book mentions:
Although we commonly use the terms “event” and “incident” interchangeably, there are subtle differences between the two. An event is a negative occurrence that can be observed, verified, and documented, whereas an incident is a series of events that negatively affects the company and/or impacts its security posture. This is why we call reacting to these issues “incident response” (or “incident handling”), because something is negatively affecting the company and causing a security breach.
Many types of incidents (virus, insider attack, terrorist attacks, and so on) exist, and sometimes it is just human error. Indeed, many incident response individuals have received a frantic call in the middle of the night because a system is acting “weird.” The reasons could be that a deployed patch broke something, someone misconfigured a device, or the administrator just learned a new scripting language and rolled out some code that caused mayhem and confusion.
When a company endures a computer crime, it should leave the environment and evidence unaltered and contact whomever has been delegated to investigate these types of situations. Someone who is unfamiliar with the proper process of collecting data and evidence from a crime scene could instead destroy that evidence, and thus all hope of prosecuting individuals, and achieving a conviction would be lost.
Companies should have procedures for many issues in computer security such as enforcement procedures, disaster recovery and continuity procedures, and backup procedures. It is also necessary to have a procedure for dealing with computer incidents because they have become an increasingly important issue of today’s information security departments. This is a direct result of attacks against networks and information systems increasing annually. Even though we don’t have specific numbers due to a lack of universal reporting and reporting in general, it is clear that the volume of attacks is increasing.
Just think about all the spam, phishing scams, malware, distributed denial-of-service, and other attacks you see on your own network and hear about in the news. Unfortunately, many companies are at a loss as to who to call or what to do right after they have been the victim of a cybercrime. Therefore, all companies should have an incident response policy that indicates who has the authority to initiate an incident response, with supporting procedures set up before an incident takes place.
This policy should be managed by the legal department and security department. They need to work together to make sure the technical security issues are covered and the legal issues that surround criminal activities are properly dealt with. The incident response policy should be clear and concise. For example, it should indicate if systems can be taken offline to try to save evidence or if systems have to continue functioning at the risk of destroying evidence. Each system and functionality should have a priority assigned to it. For instance, if the file server is infected, it should be removed from the network, but not shut down. However, if the mail server is infected, it should not be removed from the network or shut down because of the priority the company attributes to the mail server over the file server. Tradeoffs and decisions will have to be made, but it is better to think through these issues before the situation occurs, because better logic is usually possible before a crisis, when there’s less emotion and chaos.
The Australian Computer Emergency Response Team’s General Guidelines for Computer Forensics:
Keep the handling and corruption of original data to a minimum.
Document all actions and explain changes.
Follow the Five Rules for Evidence (Admissible, Authentic, Complete, Accurate, Convincing).
Bring in more experienced help when handling and/or analyzing the evidence is beyond your knowledge, skills, or abilities.
Adhere to your organization’s security policy and obtain written permission to conduct a forensics investigation.
Capture as accurate an image of the system(s) as possible while working quickly.
Be ready to testify in a court of law.
Make certain your actions are repeatable.
Prioritize your actions, beginning with volatile and proceeding to persistent evidence.
Do not run any programs on the system(s) that are potential evidence.
Act ethically and in good faith while conducting a forensics investigation, and do not attempt to do any harm.
The following answers are incorrect:
help-desk function. Is incorrect because during an incident, employees need to be able to communicate with a central source, which would most likely be the help-desk. Also, the help-desk would need to be able to communicate with the employees to keep them informed.
system imaging. Is incorrect because once an incident has occurred you should perform a capture of evidence starting with the most volatile data, and imaging would be done using a bit-for-bit copy of storage media to protect the evidence.
risk management process. Is incorrect because incident handling is part of risk management, and should continue.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 21468-21476). McGraw-Hill. Kindle Edition.
and
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 21096-21121). McGraw-Hill. Kindle Edition.
and
NIST Computer Security incident handling http://csrc.nist.gov/publications/nistpubs/800-12/800-12-html/chapter12.html
Contracts and agreements are often unenforceable or hard to enforce in which of the following alternate facility recovery agreements?
hot site
warm site
cold site
reciprocal agreement
A reciprocal agreement is where two or more organizations mutually agree to provide facilities to the other if a disaster occurs. The organizations must have similar hardware and software configurations. Reciprocal agreements are often not legally binding.
Reciprocal agreements are not contracts and cannot be enforced. You cannot force someone you have such an agreement with to provide processing to you.
Government regulators do not accept reciprocal agreements as valid disaster recovery sites.
Cold sites are empty computer rooms consisting only of environmental systems, such as air conditioning and raised floors, etc. They do not meet the requirements of most regulators and boards of directors that the disaster plan be tested at least annually.
Time Brokers promise to deliver processing time on other systems. They charge a fee, but cannot guarantee that processing will always be available, especially in areas that have experienced multiple disasters.
With the exception of providing your own hot site, commercial hot sites provide the greatest protection. Most will allow you up to six weeks to restore your sites if you declare a disaster. They also permit an annual amount of time to test the Disaster Plan.
References:
OIG CBK Business Continuity and Disaster Recovery Planning (pages 368 - 369)
The following answers are incorrect:
hot site. Is incorrect because you have a contract in place stating what services are to be provided.
warm site. Is incorrect because you have a contract in place stating what services are to be provided.
cold site. Is incorrect because you have a contract in place stating what services are to be provided.
Which of the following statements do not apply to a hot site?
It is expensive.
There are cases of common overselling of processing capabilities by the service provider.
It provides a false sense of security.
It is accessible on a first come first serve basis. In case of large disaster it might not be accessible.
Remember this is a NOT question. Hot sites do not provide a false sense of security, since they are the best disaster recovery alternative among backup sites that you rent.
A Cold, Warm, and Hot site is always a rental place in the context of the CBK. This is definitely the best choice out of the rental options that exist. It is fully configured and can be activated in a very short period of time.
Cold and Warm sites, not hot sites, provide a false sense of security because you can never fully test your plan.
In reality, using a cold site will most likely make effective recovery impossible or could lead to business closure if it takes more than two weeks for recovery.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 8: Business Continuity Planning and Disaster Recovery Planning (page 284).
Which of the following is an Internet IPsec protocol to negotiate, establish, modify, and delete security associations, and to exchange key generation and authentication data, independent of the details of any specific key generation technique, key establishment protocol, encryption algorithm, or authentication mechanism?
OAKLEY
Internet Security Association and Key Management Protocol (ISAKMP)
Simple Key-management for Internet Protocols (SKIP)
IPsec Key exchange (IKE)
RFC 2828 (Internet Security Glossary) defines the Internet Security Association and Key Management Protocol (ISAKMP) as an Internet IPsec protocol to negotiate, establish, modify, and delete security associations, and to exchange key generation and authentication data, independent of the details of any specific key generation technique, key establishment protocol, encryption algorithm, or authentication mechanism.
Let's clear up some confusion here first. Internet Key Exchange (IKE) is a hybrid protocol; it consists of 3 "protocols":
ISAKMP: It's not a key exchange protocol per se; it's a framework on which key exchange protocols operate. ISAKMP is part of IKE. IKE establishes the shared security policy and authenticated keys. ISAKMP is the protocol that specifies the mechanics of the key exchange.
Oakley: Describes the "modes" of key exchange (e.g. perfect forward secrecy for keys, identity protection, and authentication). Oakley describes a series of key exchanges and services.
SKEME: Provides support for public-key-based key exchange, key distribution centres, and manual installation, it also outlines methods of secure and fast key refreshment.
So yes, IPSec does use IKE, but ISAKMP is part of IKE.
The question did not ask for the actual key negotiation being done but only for the "exchange of key generation and authentication data" being done. Under Oakley it would be Diffie-Hellman (DH) that would be used for the actual key negotiation.
The following are incorrect answers:
Simple Key-management for Internet Protocols (SKIP) is a key distribution protocol that uses hybrid encryption to convey session keys that are used to encrypt data in IP packets.
OAKLEY is a key establishment protocol (proposed for IPsec but superseded by IKE) based on the Diffie-Hellman algorithm and designed to be a compatible component of ISAKMP.
IPsec Key Exchange (IKE) is an Internet, IPsec, key-establishment protocol [R2409] (partly based on OAKLEY) that is intended for putting in place authenticated keying material for use with ISAKMP and for other security associations, such as in AH and ESP.
Reference used for this question:
SHIREY, Robert W., RFC2828: Internet Security Glossary, May 2000.
Which of the following was developed in order to protect against fraud in electronic fund transfers (EFT) by ensuring the message comes from its claimed originator and that it has not been altered in transmission?
Secure Electronic Transaction (SET)
Message Authentication Code (MAC)
Cyclic Redundancy Check (CRC)
Secure Hash Standard (SHS)
In order to protect against fraud in electronic fund transfers (EFT), the Message Authentication Code (MAC), ANSI X9.9, was developed. The MAC is a check value, which is derived from the contents of the message itself, that is sensitive to the bit changes in a message. It is similar to a Cyclic Redundancy Check (CRC).
The aim of message authentication in computer and communication systems is to verify that the message comes from its claimed originator and that it has not been altered in transmission. It is particularly needed for EFT (Electronic Funds Transfer). The protection mechanism is generation of a Message Authentication Code (MAC), attached to the message, which can be recalculated by the receiver and will reveal any alteration in transit. One standard method is described in (ANSI, X9.9). Message authentication mechanisms can also be used to achieve non-repudiation of messages.
The Secure Electronic Transaction (SET) was developed by a consortium including MasterCard and VISA as a means of preventing fraud from occurring during electronic payment.
The Secure Hash Standard (SHS), NIST FIPS 180, available at http://www.itl.nist.gov/fipspubs/fip180-1.htm, specifies the Secure Hash Algorithm (SHA-1).
Source:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 170)
also see:
http://luizfirmino.blogspot.com/2011/04/message-authentication-code-mac.html
and
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.22.2312&rep=rep1&type=pdf
Which of the following is NOT a property of the Rijndael block cipher algorithm?
The key sizes must be a multiple of 32 bits
Maximum block size is 256 bits
Maximum key size is 512 bits
The key size does not have to match the block size
The above statement is NOT true and thus the correct answer. The maximum key size on Rijndael is 256 bits.
There are some differences between Rijndael and the official FIPS-197 specification for AES.
The Rijndael specification per se allows block and key sizes that are any multiple of 32 bits, both with a minimum of 128 and a maximum of 256 bits. Namely, Rijndael allows for both key and block sizes to be chosen independently from the set of { 128, 160, 192, 224, 256 } bits. (And the key size does not in fact have to match the block size.)
However, FIPS-197 specifies that the block size must always be 128 bits in AES, and that the key size may be either 128, 192, or 256 bits. Therefore AES-128, AES-192, and AES-256 are actually:
Key Size (bits) Block Size (bits)
AES-128 128 128
AES-192 192 128
AES-256 256 128
So in short:
Rijndael and AES differ only in the range of supported values for the block length and cipher key length.
For Rijndael, the block length and the key length can be independently specified to any multiple of 32 bits, with a minimum of 128 bits, and a maximum of 256 bits.
AES fixes the block length to 128 bits, and supports key lengths of 128, 192 or 256 bits only.
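The parameter rules above can be sketched as a pair of validation functions (a minimal sketch; the helper names are made up for illustration):

```python
# Parameter rules: Rijndael allows key and block sizes independently chosen
# from multiples of 32 bits in the range 128-256; AES fixes the block at 128
# bits and restricts the key to 128, 192, or 256 bits.

RIJNDAEL_SIZES = {128, 160, 192, 224, 256}  # valid key AND block sizes (bits)
AES_KEY_SIZES = {128, 192, 256}             # valid AES key sizes (bits)
AES_BLOCK_SIZE = 128                        # AES fixes the block size

def valid_rijndael(key_bits: int, block_bits: int) -> bool:
    """Key and block size chosen independently; they need not match."""
    return key_bits in RIJNDAEL_SIZES and block_bits in RIJNDAEL_SIZES

def valid_aes(key_bits: int, block_bits: int) -> bool:
    """AES: block fixed at 128 bits; key must be 128, 192, or 256 bits."""
    return block_bits == AES_BLOCK_SIZE and key_bits in AES_KEY_SIZES

# Rijndael accepts a 192-bit block; AES does not. Neither accepts 512-bit keys.
print(valid_rijndael(256, 192))  # True
print(valid_aes(256, 192))       # False
print(valid_aes(512, 128))       # False
```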
References used for this question:
http://blogs.msdn.com/b/shawnfa/archive/2006/10/09/the-differences-between-rijndael-and-aes.aspx
and
http://csrc.nist.gov/CryptoToolkit/aes/rijndael/Rijndael.pdf
A public key algorithm that does both encryption and digital signature is which of the following?
RSA
DES
IDEA
Diffie-Hellman
RSA can be used for encryption, key exchange, and digital signatures.
Key Exchange versus key Agreement
KEY EXCHANGE
Key exchange (also known as "key establishment") is any method in cryptography by which cryptographic keys are exchanged between users, allowing use of a cryptographic algorithm.
If sender and receiver wish to exchange encrypted messages, each must be equipped to encrypt messages to be sent and decrypt messages received. The nature of the equipping they require depends on the encryption technique they might use. If they use a code, both will require a copy of the same codebook. If they use a cipher, they will need appropriate keys. If the cipher is a symmetric key cipher, both will need a copy of the same key. If an asymmetric key cipher with the public/private key property, both will need the other's public key.
KEY AGREEMENT
Diffie-Hellman is a key agreement algorithm used by two parties to agree on a shared secret. The Diffie Hellman (DH) key agreement algorithm describes a means for two parties to agree upon a shared secret over a public network in such a way that the secret will be unavailable to eavesdroppers. The DH algorithm converts the shared secret into an arbitrary amount of keying material. The resulting keying material is used as a symmetric encryption key.
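The DH agreement described above can be sketched with toy-sized numbers (illustrative only; a real deployment uses a prime of 2048 bits or more, such as the standardized MODP groups):

```python
# Toy Diffie-Hellman sketch: each side keeps a private value secret and
# publishes only g^x mod p, yet both arrive at the same shared secret.
import secrets

p = 23   # public prime modulus (toy-sized, NOT secure)
g = 5    # public generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value, kept secret
b = secrets.randbelow(p - 2) + 1   # Bob's private value, kept secret

A = pow(g, a, p)   # Alice sends A over the public network
B = pow(g, b, p)   # Bob sends B over the public network

# Each side combines its own private value with the other's public value.
shared_alice = pow(B, a, p)   # (g^b)^a mod p
shared_bob = pow(A, b, p)     # (g^a)^b mod p
assert shared_alice == shared_bob   # same shared secret, never transmitted
```

An eavesdropper sees only p, g, A, and B; recovering the secret requires solving the discrete logarithm problem, which is infeasible at real key sizes.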
The other answers are not correct because:
DES and IDEA are both symmetric algorithms.
Diffie-Hellman is a common asymmetric algorithm, but is used only for key agreement. It is not typically used for data encryption and does not have digital signature capability.
References:
http://tools.ietf.org/html/rfc2631
For Diffie-Hellman information: http://www.netip.com/articles/keith/diffie-helman.htm
Which encryption algorithm is BEST suited for communication with handheld wireless devices?
ECC (Elliptic Curve Cryptosystem)
RSA
SHA
RC4
ECC provides much of the same functionality that RSA provides: digital signatures, secure key distribution, and encryption. One differing factor is ECC's efficiency. ECC is more efficient than RSA and any other asymmetric algorithm.
The following answers are incorrect because :
RSA is incorrect as it is less efficient than ECC to be used in handheld devices.
SHA is also incorrect as it is a hashing algorithm.
RC4 is also incorrect as it is a symmetric algorithm.
Reference: Shon Harris AIO v3, Chapter 8: Cryptography, Pages 631, 638.
What can be defined as a value computed with a cryptographic algorithm and appended to a data object in such a way that any recipient of the data can use the signature to verify the data's origin and integrity?
A digital envelope
A cryptographic hash
A Message Authentication Code
A digital signature
RFC 2828 (Internet Security Glossary) defines a digital signature as a value computed with a cryptographic algorithm and appended to a data object in such a way that any recipient of the data can use the signature to verify the data's origin and integrity.
The steps to create a Digital Signature are very simple:
1. You create a Message Digest of the message you wish to send
2. You encrypt the message digest using your Private Key which is the action of Signing
3. You send the Message along with the Digital Signature to the recipient
To validate the Digital Signature the recipient will make use of the sender's Public Key. Here are the steps:
1. The receiver will decrypt the Digital Signature using the sender's Public Key, producing a clear text message digest.
2. The receiver will produce his own message digest of the message received.
3. At this point the receiver will compare the two message digests (the one sent and the one produced by the receiver); if the two match, it proves the authenticity of the message and confirms that the message was not modified in transit, validating the integrity as well. Digital Signatures provide for Authenticity and Integrity only. There is no confidentiality in place; if you wish to get confidentiality, the sender would need to encrypt everything with the receiver's public key as a last step before sending the message.
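The sign/verify steps above can be sketched with textbook RSA and tiny toy primes (illustrative only; real signatures use 2048+ bit keys with proper padding such as PSS, never raw exponentiation):

```python
# Toy digital signature: hash the message, "encrypt" the digest with the
# private key (signing), then recover and compare it with the public key.
import hashlib

n, e, d = 3233, 17, 2753   # toy RSA modulus, public and private exponents

def sign(message: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)            # step 2: sign digest with private key

def verify(message: bytes, signature: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest   # recover digest with public key

msg = b"transfer $100 to Bob"
sig = sign(msg)
print(verify(msg, sig))              # True -- origin and integrity check out
print(verify(msg, (sig + 1) % n))    # False -- a tampered signature is rejected
```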
A Digital Envelope is a combination of encrypted data and its encryption key in an encrypted form that has been prepared for use of the recipient. In simple terms it is a type of security that uses two layers of encryption to protect a message. First, the message itself is encoded using symmetric encryption, and then the key to decode the message is encrypted using public-key encryption. This technique overcomes one of the problems of public-key encryption, which is that it is slower than symmetric encryption. Because only the key is protected with public-key encryption, there is very little overhead.
A cryptographic hash is the result of a cryptographic hash function such as MD5, SHA-1, or SHA-2. A hash value, also called a Message Digest, is like a fingerprint of a message. It is used to prove integrity and ensure the message was not changed either in transit or in storage.
A Message Authentication Code (MAC) refers to an ANSI standard for a checksum that is computed with a keyed hash based on DES, or it can also be produced without using DES by concatenating the Secret Key at the end of the message (simply adding it at the end of the message) being sent and then producing a Message Digest of the Message+Secret Key together. The MAC is then attached and sent along with the message, but the Secret Key is NEVER sent in clear text over the network.
In cryptography, HMAC (Hash-based Message Authentication Code), is a specific construction for calculating a message authentication code (MAC) involving a cryptographic hash function in combination with a secret key. As with any MAC, it may be used to simultaneously verify both the data integrity and the authenticity of a message. Any cryptographic hash function, such as MD5 or SHA-1, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-MD5 or HMAC-SHA1 accordingly. The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output length in bits and on the size and quality of the cryptographic key.
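A minimal HMAC-SHA256 sketch using only the Python standard library (the key and message values are made up for illustration):

```python
# HMAC: a keyed hash that verifies both integrity and authenticity.
# The secret key is shared in advance and never transmitted with the message.
import hmac
import hashlib

secret_key = b"shared-secret-never-sent-in-clear"
message = b"pay 100 to account 42"

# Sender computes the MAC and transmits message + mac (but NOT the key).
mac = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC over the received message and compares.
received_ok = hmac.compare_digest(
    mac, hmac.new(secret_key, message, hashlib.sha256).hexdigest()
)
print(received_ok)  # True -- message unaltered, sender knows the key

# Any alteration in transit produces a different MAC.
tampered = b"pay 999 to account 42"
tampered_ok = hmac.compare_digest(
    mac, hmac.new(secret_key, tampered, hashlib.sha256).hexdigest()
)
print(tampered_ok)  # False
```

Note the use of `hmac.compare_digest` rather than `==`, which avoids leaking information through comparison timing.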
There is more than one type of MAC: Meet CBC-MAC
In cryptography, a Cipher Block Chaining Message Authentication Code, abbreviated CBC-MAC, is a technique for constructing a message authentication code from a block cipher. The message is encrypted with some block cipher algorithm in CBC mode to create a chain of blocks such that each block depends on the proper encryption of the previous block. This interdependence ensures that a change to any of the plaintext bits will cause the final encrypted block to change in a way that cannot be predicted or counteracted without knowing the key to the block cipher.
References:
SHIREY, Robert W., RFC2828: Internet Security Glossary, May 2000.
and
http://www.webopedia.com/TERM/D/digital_envelope.html
and
http://en.wikipedia.org/wiki/CBC-MAC
Where parties do not have a shared secret and large quantities of sensitive information must be passed, the most efficient means of transferring information is to use Hybrid Encryption Methods. What does this mean?
Use of public key encryption to secure a secret key, and message encryption using the secret key.
Use of the recipient's public key for encryption and decryption based on the recipient's private key.
Use of software encryption assisted by a hardware encryption accelerator.
Use of elliptic curve encryption.
Hybrid encryption uses a public key (asymmetric) algorithm to securely exchange a secret key, and then uses a symmetric algorithm with that secret key to encrypt the message itself. The slow asymmetric operation protects only the small secret key, while the much faster symmetric cipher handles the bulk of the data.
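A conceptual sketch of the hybrid scheme, with toy RSA numbers standing in for the recipient's real key pair and a SHA-256 XOR keystream standing in for a real symmetric cipher such as AES (all names and values are illustrative only, not secure):

```python
# Hybrid encryption sketch: a fresh random symmetric session key encrypts the
# bulk message; only that small key goes through the slow public-key operation.
import hashlib
import secrets

n, e, d = 3233, 17, 2753          # recipient's toy RSA key pair

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher: XOR with a SHA-256-derived keystream."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

# Sender: random session key, symmetric bulk encryption, RSA-wrap the key.
session_key = secrets.token_bytes(1)          # toy-sized so it fits under n
ciphertext = sym_encrypt(session_key, b"large sensitive payload ...")
wrapped_key = pow(int.from_bytes(session_key, "big"), e, n)

# Receiver: unwrap the session key with the private key, then decrypt.
recovered = pow(wrapped_key, d, n).to_bytes(1, "big")
plaintext = sym_encrypt(recovered, ciphertext)   # XOR is its own inverse
print(plaintext)  # b'large sensitive payload ...'
```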
The following answers are incorrect:
Use of the recipient's public key for encryption and decryption based on the recipient's private key. Is incorrect because this describes a purely asymmetric algorithm.
Use of software encryption assisted by a hardware encryption accelerator. This is incorrect; it is a distractor.
Use of Elliptic Curve Encryption. Is incorrect because this would use an asymmetric algorithm.
What enables users to validate each other's certificate when they are certified under different certification hierarchies?
Cross-certification
Multiple certificates
Redundant certification authorities
Root certification authorities
Cross-certification is the act or process by which two CAs each certify a public key of the other, issuing a public-key certificate to that other CA, enabling users that are certified under different certification hierarchies to validate each other's certificate.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, May 2000.
Attributable data should be:
always traced to individuals responsible for observing and recording the data
sometimes traced to individuals responsible for observing and recording the data
never traced to individuals responsible for observing and recording the data
often traced to individuals responsible for observing and recording the data
As per FDA data should be attributable, original, accurate, contemporaneous and legible. In an automated system attributability could be achieved by a computer system designed to identify individuals responsible for any input.
Source: U.S. Department of Health and Human Services, Food and Drug Administration, Guidance for Industry - Computerized Systems Used in Clinical Trials, April 1999, page 1.
One of the following statements about the differences between PPTP and L2TP is NOT true:
PPTP can run only on top of IP networks.
PPTP is an encryption protocol and L2TP is not.
L2TP works well with all firewalls and network devices that perform NAT.
L2TP supports AAA servers
L2TP is affected by packet header modification and cannot cope with firewalls and network devices that perform NAT.
"PPTP can run only on top of IP networks." is correct as PPTP encapsulates datagrams into an IP packet, allowing PPTP to route many network protocols across an IP network.
"PPTP is an encryption protocol and L2TP is not." is correct. When using PPTP, the PPP payload is encrypted with Microsoft Point-to-Point Encryption (MPPE) using MSCHAP or EAP-TLS.
"L2TP supports AAA servers" is correct as L2TP supports TACACS+ and RADIUS.
NOTE:
L2TP does work over NAT. It is possible to use a tunneled mode that wraps every packet into a UDP packet; port 4500 is used for this purpose. However, this is not true of PPTP, and it is also not true that L2TP works well with all firewalls and NAT devices.
References:
All in One Third Edition page 545
Official Guide to the CISSP Exam page 124-126
At which OSI/ISO layer is an encrypted authentication between a client software package and a firewall performed?
Network layer
Session layer
Transport layer
Data link layer
Encrypted authentication is a firewall feature that allows users on an external network to authenticate themselves to prove that they are authorized to access resources on the internal network. Encrypted authentication is convenient because it happens at the transport layer between a client software and a firewall, allowing all normal application software to run without hindrance.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 1: Understanding Firewalls.
Which of the following IEEE standards defines the token ring media access method?
802.3
802.11
802.5
802.2
The IEEE 802.5 standard defines the token ring media access method. 802.3 refers to Ethernet's CSMA/CD, 802.11 refers to wireless communications and 802.2 refers to the logical link control.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 109).
The IP header contains a protocol field. If this field contains the value of 6, what type of data is contained within the IP datagram?
TCP.
ICMP.
UDP.
IGMP.
If the protocol field has a value of 6 then it would indicate it was TCP.
The protocol field of the IP packet dictates what protocol the IP packet is using.
TCP=6, ICMP=1, UDP=17, IGMP=2
The following answers are incorrect:
ICMP. Is incorrect because the value for an ICMP protocol would be 1.
UDP. Is incorrect because the value for an UDP protocol would be 17.
IGMP. Is incorrect because the value for an IGMP protocol would be 2.
References:
SANS http://www.sans.org/resources/tcpip.pdf?ref=3871
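The field-value mapping above can be illustrated with a short, hypothetical Python sketch that reads the protocol byte directly out of a raw IPv4 header (the protocol field sits at byte offset 9 of the header; the helper name is illustrative, not from any cited source):

```python
import struct

# Protocol numbers from the IANA registry: 1=ICMP, 2=IGMP, 6=TCP, 17=UDP.
PROTOCOL_NAMES = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP"}

def protocol_of(ip_header: bytes) -> str:
    """Return the name of the protocol carried by an IPv4 datagram.

    The protocol field is the 10th byte (offset 9) of the IPv4 header.
    """
    proto = ip_header[9]
    return PROTOCOL_NAMES.get(proto, f"unassigned ({proto})")

# A minimal 20-byte IPv4 header with protocol=6 (TCP) at offset 9.
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 40, 0, 0, 64, 6, 0,
                     bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
print(protocol_of(header))  # TCP
```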
Why is Network File System (NFS) used?
It enables two different types of file systems to interoperate.
It enables two different types of file systems to share Sun applications.
It enables two different types of file systems to use IP/IPX.
It enables two different types of file systems to emulate each other.
Network File System (NFS) is a TCP/IP client/server application developed by Sun that enables different types of file systems to interoperate regardless of operating system or network architecture.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 88.
Before the advent of classless addressing, the address 128.192.168.16 would have been considered part of:
a class A network.
a class B network.
a class C network.
a class D network.
Before the advent of classless addressing, one could tell the size of a network by the first few bits of an IP address. If the first bit was set to zero (the first byte being from 0 to 127), the address was a class A network. Values from 128 to 191 were used for class B networks whereas values between 192 and 223 were used for class C networks. Class D, with values from 224 to 239 (the first three bits set to one and the fourth to zero), was reserved for IP multicast.
Source: STREBE, Matthew and PERKINS, Charles, Firewalls 24seven, Sybex 2000, Chapter 3: TCP/IP from a Security Viewpoint.
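The octet ranges described above can be sketched as a small function (a hypothetical helper for illustration, not part of the cited source):

```python
def address_class(ip: str) -> str:
    """Classify an IPv4 address under the old classful scheme
    by its first octet (equivalently, by its leading bits)."""
    first = int(ip.split(".")[0])
    if first <= 127:          # leading bit 0
        return "A"
    if first <= 191:          # leading bits 10
        return "B"
    if first <= 223:          # leading bits 110
        return "C"
    if first <= 239:          # leading bits 1110 (multicast)
        return "D"
    return "E"                # 240-255, reserved

print(address_class("128.192.168.16"))  # B
```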
Which of the following protocols that provide integrity and authentication for IPSec, can also provide non-repudiation in IPSec?
Authentication Header (AH)
Encapsulating Security Payload (ESP)
Secure Sockets Layer (SSL)
Secure Shell (SSH-2)
As per the RFC in reference, the Authentication Header (AH) protocol is a mechanism for providing strong integrity and authentication for IP datagrams. It might also provide non-repudiation, depending on which cryptographic algorithm is used and how keying is performed. For example, use of an asymmetric digital signature algorithm, such as RSA, could provide non-repudiation.
IPSec is often covered from a cryptography point of view, so we will cover it from a VPN point of view here. IPSec is a suite of protocols that was developed to specifically protect IP traffic. IPv4 does not have any integrated security, so IPSec was developed to bolt onto IP and secure the data the protocol transmits. Where PPTP and L2TP work at the data link layer, IPSec works at the network layer of the OSI model. The main protocols that make up the IPSec suite and their basic functionality are as follows:
A. Authentication Header (AH) provides data integrity, data origin authentication, and protection from replay attacks.
B. Encapsulating Security Payload (ESP) provides confidentiality, data-origin authentication, and data integrity.
C. Internet Security Association and Key Management Protocol (ISAKMP) provides a framework for security association creation and key exchange.
D. Internet Key Exchange (IKE) provides authenticated keying material for use with ISAKMP.
The following are incorrect answers:
ESP is a mechanism for providing integrity and confidentiality to IP datagrams. It may also provide authentication, depending on which algorithm and algorithm mode are used. Non-repudiation and protection from traffic analysis are not provided by ESP (RFC 1827).
SSL is a secure protocol used for transmitting private information over the Internet. It works by using a public key to encrypt data that is transferred over the SSL connection. OIG 2007, page 976
SSH-2 is a secure, efficient, and portable version of SSH (Secure Shell) which is a secure replacement for telnet.
Reference(s) used for this question:
Shon Harris, CISSP All In One, 6th Edition , Page 705
and
RFC 1826, http://tools.ietf.org/html/rfc1826, paragraph 1.
Which of the following is the BEST way to detect software license violations?
Implementing a corporate policy on copyright infringements and software use.
Requiring that all PCs be diskless workstations.
Installing metering software on the LAN so applications can be accessed through the metered software.
Regularly scanning PCs in use to ensure that unauthorized copies of software have not been loaded on the PC.
The best way to prevent and detect software license violations is to regularly scan the PCs in use, either over the LAN or directly, to ensure that unauthorized copies of software have not been loaded on them.
Other options are not detective.
A corporate policy is not necessarily enforced and followed by all employees.
Software can be installed by means other than floppies or CD-ROMs (from a LAN, or even downloaded from the Internet), and software metering only concerns applications that are registered.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, Chapter 3: Technical Infrastructure and Operational Practices (page 108).
A periodic review of user account management should not determine:
Conformity with the concept of least privilege.
Whether active accounts are still being used.
Strength of user-chosen passwords.
Whether management authorizations are up-to-date.
Organizations should have a process for (1) requesting, establishing, issuing, and closing user accounts; (2) tracking users and their respective access authorizations; and (3) managing these functions.
Reviews should examine the levels of access each individual has, conformity with the concept of least privilege, whether all accounts are still active, whether management authorizations are up-to-date, whether required training has been completed, and so forth. These reviews can be conducted on at least two levels: (1) on an application-by-application basis, or (2) on a system wide basis.
The strength of user passwords is beyond the scope of a simple user account management review, since it requires specific tools to try and crack the password file/database through either a dictionary or brute-force attack in order to check the strength of passwords.
Reference(s) used for this question:
SWANSON, Marianne & GUTTMAN, Barbara, National Institute of Standards and Technology (NIST), NIST Special Publication 800-14, Generally Accepted Principles and Practices for Securing Information Technology Systems, September 1996 (page 28).
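As the explanation notes, checking password strength requires tooling that runs a dictionary attack against the stored password hashes. A minimal, hypothetical sketch of the idea follows; unsalted SHA-256 is used purely for illustration (real password stores use salted, slow hashes such as bcrypt or scrypt), and all account names and the wordlist are invented:

```python
import hashlib

# Hypothetical stored hashes (unsalted SHA-256, for illustration only).
stored = {
    "alice": hashlib.sha256(b"password123").hexdigest(),
    "bob":   hashlib.sha256(b"x9$Tr!qL2#v").hexdigest(),
}

wordlist = ["123456", "password", "password123", "letmein"]  # assumed dictionary

def weak_accounts(hashes, dictionary):
    """Return accounts whose password appears in the dictionary -
    the essence of a dictionary attack used for strength auditing."""
    cracked = {hashlib.sha256(w.encode()).hexdigest(): w for w in dictionary}
    return {user: cracked[h] for user, h in hashes.items() if h in cracked}

print(weak_accounts(stored, wordlist))  # {'alice': 'password123'}
```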
What would be considered the biggest drawback of Host-based Intrusion Detection systems (HIDS)?
It can be very invasive to the host operating system
Monitors all processes and activities on the host system only
Virtually eliminates limits associated with encryption
They have an increased level of visibility and control compared to NIDS
The biggest drawback of HIDS, and the reason many organizations resist its use, is that it can be very invasive to the host operating system. HIDS must have the capability to monitor all processes and activities on the host system and this can sometimes interfere with normal system processing.
HIDS versus NIDS
A host-based IDS (HIDS) can be installed on individual workstations and/ or servers to watch for inappropriate or anomalous activity. HIDSs are usually used to make sure users do not delete system files, reconfigure important settings, or put the system at risk in any other way.
So, whereas the NIDS understands and monitors the network traffic, a HIDS’s universe is limited to the computer itself. A HIDS does not understand or review network traffic, and a NIDS does not “look in” and monitor a system’s activity. Each has its own job and stays out of the other’s way.
The ISC2 official study book defines an IDS as:
An intrusion detection system (IDS) is a technology that alerts organizations to adverse or unwanted activity. An IDS can be implemented as part of a network device, such as a router, switch, or firewall, or it can be a dedicated IDS device monitoring traffic as it traverses the network. When used in this way, it is referred to as a network IDS, or NIDS. IDS can also be used on individual host systems to monitor and report on file, disk, and process activity on that host. When used in this way it is referred to as a host-based IDS, or HIDS.
An IDS is informative by nature and provides real-time information when suspicious activities are identified. It is primarily a detective device and, acting in this traditional role, is not used to directly prevent the suspected attack.
What about IPS?
In contrast, an intrusion prevention system (IPS), is a technology that monitors activity like an IDS but will automatically take proactive preventative action if it detects unacceptable activity. An IPS permits a predetermined set of functions and actions to occur on a network or system; anything that is not permitted is considered unwanted activity and blocked. IPS is engineered specifically to respond in real time to an event at the system or network layer. By proactively enforcing policy, IPS can thwart not only attackers, but also authorized users attempting to perform an action that is not within policy. Fundamentally, IPS is considered an access control and policy enforcement technology, whereas IDS is considered network monitoring and audit technology.
The following answers were incorrect:
All of the other answers were advantages, not drawbacks, of using a HIDS.
TIP FOR THE EXAM:
Be familiar with the differences that exist between a HIDS, a NIDS, and an IPS. Know that IDSs are mostly detective while IPSs are preventive. IPSs are considered an access control and policy enforcement technology, whereas IDSs are considered a network monitoring and audit technology.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 5817-5822). McGraw-Hill. Kindle Edition.
and
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press), Domain1, Page 180-188 or on the kindle version look for Kindle Locations 3199-3203. Auerbach Publications.
In an online transaction processing system (OLTP), which of the following actions should be taken when erroneous or invalid transactions are detected?
The transactions should be dropped from processing.
The transactions should be processed after the program makes adjustments.
The transactions should be written to a report and reviewed.
The transactions should be corrected and reprocessed.
In an online transaction processing system (OLTP) all transactions are recorded as they occur. When erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
As explained in the ISC2 OIG:
OLTP is designed to record all of the business transactions of an organization as they occur. It is a data processing system facilitating and managing transaction-oriented applications. These are characterized as a system used by many concurrent users who are actively adding and modifying data to effectively change real-time data.
OLTP environments are frequently found in the finance, telecommunications, insurance, retail, transportation, and travel industries. For example, airline ticket agents enter data in the database in real-time by creating and modifying travel reservations, and these are increasingly joined by users directly making their own reservations and purchasing tickets through airline company Web sites as well as discount travel Web site portals. Therefore, millions of people may be accessing the same flight database every day, and dozens of people may be looking at a specific flight at the same time.
The security concerns for OLTP systems are concurrency and atomicity.
Concurrency controls ensure that two users cannot simultaneously change the same data, or that one user cannot make changes before another user is finished with it. In an airline ticket system, it is critical for an agent processing a reservation to complete the transaction, especially if it is the last seat available on the plane.
Atomicity ensures that all of the steps involved in the transaction complete successfully. If one step should fail, then the other steps should not be able to complete. Again, in an airline ticketing system, if the agent does not enter a name into the name data field correctly, the transaction should not be able to complete.
OLTP systems should act as a monitoring system and detect when individual processes abort, automatically restart an aborted process, back out of a transaction if necessary, allow distribution of multiple copies of application servers across machines, and perform dynamic load balancing.
A security feature uses transaction logs to record information on a transaction before it is processed, and then mark it as processed after it is done. If the system fails during the transaction, the transaction can be recovered by reviewing the transaction logs.
Checkpoint restart is the process of using the transaction logs to restart the machine by running through the log to the last checkpoint or good transaction. All transactions following the last checkpoint are applied before allowing users to access the data again.
Wikipedia has nice coverage on what is OLTP:
Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.
The technology is used in a number of industries, including banking, airlines, mail order, supermarkets, and manufacturing. Applications include electronic banking, order processing, employee time clock systems, e-commerce, and eTrading.
There are two security concerns for OLTP system: Concurrency and Atomicity
ATOMICITY
In database systems, atomicity (or atomicness) is one of the ACID transaction properties. In an atomic transaction, a series of database operations either all occur, or nothing occurs. A guarantee of atomicity prevents updates to the database occurring only partially, which can cause greater problems than rejecting the whole series outright.
The etymology of the phrase originates in the Classical Greek concept of a fundamental and indivisible component; see atom.
An example of atomicity is ordering an airline ticket where two actions are required: payment, and a seat reservation. The potential passenger must either:
both pay for and reserve a seat; OR
neither pay for nor reserve a seat.
The booking system does not consider it acceptable for a customer to pay for a ticket without securing the seat, nor to reserve the seat without payment succeeding.
CONCURRENCY
Database concurrency controls ensure that transactions occur in an ordered fashion.
The main job of these controls is to protect transactions issued by different users/applications from the effects of each other. They must preserve the four characteristics of database transactions ACID test: Atomicity, Consistency, Isolation, and Durability. Read http://en.wikipedia.org/wiki/ACID for more details on the ACID test.
Thus concurrency control is an essential element for correctness in any system where two or more database transactions, executed with time overlap, can access the same data, e.g., virtually any general-purpose database system. A well-established concurrency control theory exists for database systems: serializability theory, which makes it possible to effectively design and analyze concurrency control methods and mechanisms.
Concurrency is not an issue in itself; it is the lack of proper concurrency controls that makes it a serious issue.
The following answers are incorrect:
The transactions should be dropped from processing. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
The transactions should be processed after the program makes adjustments. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
The transactions should be corrected and reprocessed. Is incorrect because the transactions are processed and when erroneous or invalid transactions are detected the transaction can be recovered by reviewing the logs.
References:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 12749-12768). Auerbach Publications. Kindle Edition.
and
http://en.wikipedia.org/wiki/Online_transaction_processing
and
http://databases.about.com/od/administration/g/concurrency.htm
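The atomicity property described above (both the payment and the seat reservation succeed, or neither does) can be sketched with Python's built-in sqlite3 module, whose connection context manager commits on success and rolls back on any exception. The table and function names here are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE seats   (seat TEXT PRIMARY KEY, passenger TEXT);
    CREATE TABLE payments(passenger TEXT, amount REAL);
    INSERT INTO seats VALUES ('12A', NULL);
""")

def book(passenger, seat, amount):
    """Reserve a seat and record payment as one atomic transaction:
    either both steps commit, or neither does."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("INSERT INTO payments VALUES (?, ?)",
                         (passenger, amount))
            cur = conn.execute(
                "UPDATE seats SET passenger = ? WHERE seat = ? AND passenger IS NULL",
                (passenger, seat))
            if cur.rowcount == 0:
                raise RuntimeError("seat already taken")
    except Exception:
        return False
    return True

print(book("Alice", "12A", 99.0))  # True  - payment and seat both recorded
print(book("Bob",   "12A", 99.0))  # False - rolled back: no payment, no seat
```

Bob's failed booking leaves no partial state behind: his payment insert is rolled back along with the rejected seat update, which is exactly the all-or-nothing guarantee described in the atomicity section.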
Which of the following is NOT a characteristic of a host-based intrusion detection system?
A HIDS does not consume large amounts of system resources
A HIDS can analyse system logs, processes and resources
A HIDS looks for unauthorized changes to the system
A HIDS can notify system administrators when unusual events are identified
"A HIDS does not consume large amounts of system resources" is the correct choice. A HIDS can consume inordinate amounts of CPU and system resources in order to function effectively, especially during an event.
All the other answers are characteristics of HIDSes
A HIDS can:
scrutinize event logs, critical system files, and other auditable system resources;
look for unauthorized change or suspicious patterns of behavior or activity
can send alerts when unusual events are discovered
Which of the following monitors network traffic in real time?
network-based IDS
host-based IDS
application-based IDS
firewall-based IDS
This type of IDS is called a network-based IDS because it monitors network traffic in real time.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 48.
Which protocol is NOT implemented in the Network layer of the OSI Protocol Stack?
hyper text transport protocol
Open Shortest Path First
Internet Protocol
Routing Information Protocol
Hypertext Transfer Protocol (HTTP) is an Application-layer protocol, not a Network-layer one. Open Shortest Path First, Internet Protocol, and Routing Information Protocol are all protocols implemented in the Network layer.
Domain: Telecommunications and Network Security
References: AIO 3rd edition. Page 429
Official Guide to the CISSP CBK. Page 411
The viewing of recorded events after the fact using a closed-circuit TV camera is considered a
Preventative control.
Detective control
Compensating control
Corrective control
Detective security controls are like a burglar alarm. They detect and report an unauthorized or undesired event (or an attempted undesired event). Detective security controls are invoked after the undesirable event has occurred. Example detective security controls are log monitoring and review, system audit, file integrity checkers, and motion detection.
Visual surveillance or recording devices such as closed circuit television are used in conjunction with guards in order to enhance their surveillance ability and to record events for future analysis or prosecution.
When events are monitored, it is considered preventative whereas recording of events is considered detective in nature.
Below you have explanations of other types of security controls from a nice guide produced by James Purcell (see reference below):
Preventive security controls are put into place to prevent intentional or unintentional disclosure, alteration, or destruction (D.A.D.) of sensitive information. Some example preventive controls follow:
Policy – Unauthorized network connections are prohibited.
Firewall – Blocks unauthorized network connections.
Locked wiring closet – Prevents unauthorized equipment from being physically plugged into a network switch.
Notice in the preceding examples that preventive controls crossed administrative, technical, and physical categories discussed previously. The same is true for any of the controls discussed in this section.
Corrective security controls are used to respond to and fix a security incident. Corrective security controls also limit or reduce further damage from an attack. Examples follow:
Procedure to clean a virus from an infected system
A guard checking and locking a door left unlocked by a careless employee
Updating firewall rules to block an attacking IP address
Note that in many cases the corrective security control is triggered by a detective security control.
Recovery security controls are those controls that put a system back into production after an incident. Most Disaster Recovery activities fall into this category. For example, after a disk failure, data is restored from a backup tape.
Directive security controls are the equivalent of administrative controls. Directive controls direct that some action be taken to protect sensitive organizational information. The directive can be in the form of a policy, procedure, or guideline.
Deterrent security controls are controls that discourage security violations. For instance, “Unauthorized Access Prohibited” signage may deter a trespasser from entering an area. The presence of security cameras might deter an employee from stealing equipment. A policy that states access to servers is monitored could deter unauthorized access.
Compensating security controls are controls that provide an alternative to normal controls that cannot be used for some reason. For instance, a certain server cannot have antivirus software installed because it interferes with a critical application. A compensating control would be to increase monitoring of that server or isolate that server on its own network segment.
Note that there is a third popular taxonomy developed by NIST and described in NIST Special Publication 800-53, “Recommended Security Controls for Federal Information Systems.” NIST categorizes security controls into 3 classes and then further categorizes the controls within the classes into 17 families. Within each security control family are dozens of specific controls. The NIST taxonomy is not covered on the CISSP exam but is one the CISSP should be aware of if you are employed within the US federal workforce.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 10: Physical security (page 340).
and
CISSP Study Guide By Eric Conrad, Seth Misenar, Joshua Feldman, page 50-52
and
Security Control Types and Operational Security, James E. Purcell, http://www.giac.org/cissp-papers/207.pdf
Which of the following is NOT a valid reason to use external penetration service firms rather than corporate resources?
They are more cost-effective
They offer a lack of corporate bias
They use highly talented ex-hackers
They ensure a more complete reporting
Two points are important to consider when it comes to ethical hacking: integrity and independence.
By not using an ethical hacking firm that hires or subcontracts to ex-hackers or others who have criminal records, an entire subset of risks can be avoided by an organization. Also, it is not cost-effective for a single firm to fund the effort of the ongoing research and development, systems development, and maintenance that is needed to operate state-of-the-art proprietary and open source testing tools and techniques.
External penetration firms are more effective than internal penetration testers because they are not influenced by any previous system security decisions, knowledge of the current system environment, or future system security plans. Moreover, an employee performing penetration testing might be reluctant to fully report security gaps.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Appendix F: The Case for Ethical Hacking (page 517).
Which of the following is required in order to provide accountability?
Authentication
Integrity
Confidentiality
Audit trails
Accountability can actually be seen in two different ways:
1) Although audit trails are also needed for accountability, no user can be accountable for their actions unless properly authenticated.
2) Accountability is another facet of access control. Individuals on a system are responsible for their actions. This accountability property enables system activities to be traced to the proper individuals. Accountability is supported by audit trails that record events on the system and network. Audit trails can be used for intrusion detection and for the reconstruction of past events. Monitoring individual activities, such as keystroke monitoring, should be accomplished in accordance with the company policy and appropriate laws. Banners at the log-on time should notify the user of any monitoring that is being conducted.
The point is that unless you employ an appropriate auditing mechanism, you don't have accountability. Authorization only gives a user certain permissions on the network. Accountability is far more complex because it also includes intrusion detection, unauthorized actions by both unauthorized users and authorized users, and system faults. The audit trail provides the proof that unauthorized modifications by both authorized and unauthorized users took place. No proof, No accountability.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 50.
The Shon Harris AIO book, 4th Edition, on Page 243 also states:
Auditing capabilities ensure users are accountable for their actions, verify that the security policies are enforced,
and can be used as investigation tools. Accountability is tracked by recording user, system, and application activities.
This recording is done through auditing functions and mechanisms within an operating system or application.
Audit trails contain information about operating system activities, application events, and user actions.
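A minimal sketch of an audit trail in the spirit of the mechanisms described above: each action is recorded with the authenticated user, a timestamp, and the outcome, so activity can later be traced to an individual. All function and action names here are hypothetical:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(format="%(message)s", level=logging.INFO)
audit = logging.getLogger("audit")

def audited(action):
    """Decorator that writes an audit record for every invocation,
    whether the action succeeds or is denied."""
    def wrap(fn):
        def inner(user, *args, **kwargs):
            ts = datetime.now(timezone.utc).isoformat()
            try:
                result = fn(user, *args, **kwargs)
                audit.info(f"{ts} user={user} action={action} status=success")
                return result
            except Exception:
                audit.info(f"{ts} user={user} action={action} status=denied")
                raise
        return inner
    return wrap

@audited("delete_file")
def delete_file(user, path):
    if user != "admin":            # toy authorization check
        raise PermissionError(path)
    return f"deleted {path}"

delete_file("admin", "/tmp/report.txt")  # recorded in the audit log
```

Note that the record is written even when the action is denied; as the explanation stresses, unauthorized attempts by both authorized and unauthorized users must appear in the trail, or there is no proof and no accountability.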
Which of the following is NOT a fundamental component of an alarm in an intrusion detection system?
Communications
Enunciator
Sensor
Response
Response is the correct choice. A response would essentially be the action that is taken once an alarm has been produced by an IDS, but is not a fundamental component of the alarm.
The following are incorrect answers:
Communications is the component of an alarm that delivers alerts through a variety of channels such as email, pagers, instant messages and so on.
An Enunciator is the component of an alarm that uses business logic to compose the content and format of an alert and determine the recipients of that alert.
A sensor is a fundamental component of IDS alarms. A sensor detects an event and produces an appropriate notification.
Domain: Access Control
What is the name of the protocol use to set up and manage Security Associations (SA) for IP Security (IPSec)?
Internet Key Exchange (IKE)
Secure Key Exchange Mechanism
Oakley
Internet Security Association and Key Management Protocol
The key management protocol for IPSec is called the Internet Key Exchange (IKE).
Note: IKE underwent a series of improvements establishing IKEv2 with RFC 4306. The basis of this answer is IKEv2.
The IKE protocol is a hybrid of three other protocols: ISAKMP (Internet Security Association and Key Management Protocol), Oakley and SKEME. ISAKMP provides a framework for authentication and key exchange, but does not define them (neither authentication nor key exchange). The Oakley protocol describes a series of modes for key exchange and the SKEME protocol defines key exchange techniques.
IKE—Internet Key Exchange. A hybrid protocol that implements Oakley and Skeme key exchanges inside the ISAKMP framework. IKE can be used with other protocols, but its initial implementation is with the IPSec protocol. IKE provides authentication of the IPSec peers, negotiates IPSec keys, and negotiates IPSec security associations.
IKE is implemented in accordance with RFC 2409, The Internet Key Exchange.
The Internet Key Exchange (IKE) security protocol is a key management protocol standard that is used in conjunction with the IPSec standard. IPSec can be configured without IKE, but IKE enhances IPSec by providing additional features, flexibility, and ease of configuration for the IPSec standard.
IKE is a hybrid protocol that implements the Oakley key exchange and the SKEME key exchange inside the Internet Security Association and Key Management Protocol (ISAKMP) framework. (ISAKMP, Oakley, and SKEME are security protocols implemented by IKE.)
IKE automatically negotiates IPSec security associations (SAs) and enables IPSec secure communications without costly manual preconfiguration. Specifically, IKE provides these benefits:
•Eliminates the need to manually specify all the IPSec security parameters in the crypto maps at both peers.
•Allows you to specify a lifetime for the IPSec security association.
•Allows encryption keys to change during IPSec sessions.
•Allows IPSec to provide anti-replay services.
•Permits certification authority (CA) support for a manageable, scalable IPSec implementation.
•Allows dynamic authentication of peers.
About ISAKMP
The Internet Security Association and Key Management Protocol (ISAKMP) is a framework that defines the phases for establishing a secure relationship and supports negotiation of security attributes; it does not establish session keys by itself, but is used along with the Oakley session-key establishment protocol. The Secure Key Exchange Mechanism (SKEME) describes a secure exchange mechanism, and Oakley defines the modes of operation needed to establish a secure connection.
ISAKMP provides a framework for Internet key management and provides the specific protocol support for negotiation of security attributes. Alone, it does not establish session keys. However it can be used with various session key establishment protocols, such as Oakley, to provide a complete solution to Internet key management.
About Oakley
The Oakley protocol uses a hybrid Diffie-Hellman technique to establish session keys on Internet hosts and routers. Oakley provides the important security property of Perfect Forward Secrecy (PFS) and is based on cryptographic techniques that have survived substantial public scrutiny. Oakley can be used by itself, if no attribute negotiation is needed, or Oakley can be used in conjunction with ISAKMP. When ISAKMP is used with Oakley, key escrow is not feasible.
The ISAKMP and Oakley protocols have been combined into a hybrid protocol. The resolution of ISAKMP with Oakley uses the framework of ISAKMP to support a subset of Oakley key exchange modes. This new key exchange protocol provides optional PFS, full security association attribute negotiation, and authentication methods that provide both repudiation and non-repudiation. Implementations of this protocol can be used to establish VPNs and also allow for users from remote sites (who may have a dynamically allocated IP address) access to a secure network.
About IPSec
The IETF's IPSec Working Group develops standards for IP-layer security mechanisms for both IPv4 and IPv6. The group also is developing generic key management protocols for use on the Internet. For more information, refer to the IP Security and Encryption Overview.
IPSec is a framework of open standards developed by the Internet Engineering Task Force (IETF) that provides security for transmission of sensitive information over unprotected networks such as the Internet. It acts at the network level and implements the following standards:
IPSec
Internet Key Exchange (IKE)
Data Encryption Standard (DES)
MD5 (HMAC variant)
SHA (HMAC variant)
Authentication Header (AH)
Encapsulating Security Payload (ESP)
IPSec services provide a robust security solution that is standards-based. IPSec also provides data authentication and anti-replay services in addition to data confidentiality services.
For more information regarding IPSec, refer to the chapter "Configuring IPSec Network Security."
About SKEME
SKEME constitutes a compact protocol that supports a variety of realistic scenarios and security models over the Internet. It provides clear tradeoffs between security and performance as required by the different scenarios, without incurring unnecessary system complexity. The protocol supports key exchange based on public keys, key distribution centers, or manual installation, and provides for fast and secure key refreshment. In addition, SKEME selectively provides perfect forward secrecy, allows for replaceability and negotiation of the underlying cryptographic primitives, and addresses privacy issues such as anonymity and repudiability.
SKEME's basic mode is based on the use of public keys and a Diffie-Hellman shared secret generation.
However, SKEME is not restricted to the use of public keys, but also allows the use of a pre-shared key. This key can be obtained by manual distribution or by the intermediary of a key distribution center (KDC) such as Kerberos.
In short, SKEME contains four distinct modes:
Basic mode, which provides a key exchange based on public keys and ensures PFS thanks to Diffie-Hellman.
A key exchange based on the use of public keys, but without Diffie-Hellman.
A key exchange based on the use of a pre-shared key and on Diffie-Hellman.
A mechanism of fast rekeying based only on symmetrical algorithms.
In addition, SKEME is composed of three phases: SHARE, EXCH and AUTH.
During the SHARE phase, the peers exchange half-keys, encrypted with their respective public keys. These two half-keys are used to compute a secret key K. If anonymity is wanted, the identities of the two peers are also encrypted. If a shared secret already exists, this phase is skipped.
The exchange phase (EXCH) is used, depending on the selected mode, to exchange either Diffie-Hellman public values or nonces. The Diffie-Hellman shared secret will only be computed after the end of the exchanges.
The public values or nonces are authenticated during the authentication phase (AUTH), using the secret key established during the SHARE phase.
The messages from these three phases do not necessarily follow the order described above; in actual practice they are combined to minimize the number of exchanged messages.
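As an illustration only, the SHARE-phase idea of combining two exchanged half-keys into a secret key K can be sketched as follows. The hash-based derivation shown here is a simplification for clarity, not the actual SKEME construction, and the public-key encryption of each half-key in transit is omitted.

```python
import hashlib
import os

def share_phase():
    # Each peer contributes a random half-key. In real SKEME these are
    # exchanged encrypted under the other peer's public key, which also
    # hides the peers' identities when anonymity is wanted.
    half_a = os.urandom(16)
    half_b = os.urandom(16)
    return half_a, half_b

def derive_k(half_a, half_b):
    # Toy derivation: hash the concatenated half-keys into secret K.
    return hashlib.sha256(half_a + half_b).digest()

half_a, half_b = share_phase()
k_initiator = derive_k(half_a, half_b)   # initiator's view of K
k_responder = derive_k(half_a, half_b)   # responder's view of K
assert k_initiator == k_responder        # both peers compute the same K
```

K then serves to authenticate the Diffie-Hellman public values or nonces exchanged in the EXCH phase during AUTH.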
References used for this question:
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 172).
http://tools.ietf.org/html/rfc4306
http://tools.ietf.org/html/rfc4301
http://en.wikipedia.org/wiki/Internet_Key_Exchange
CISCO ISAKMP and OAKLEY information
CISCO Configuring Internet Key Exchange Protocol
http://www.hsc.fr/ressources/articles/ipsec-tech/index.html.en
You work in a police department forensics lab where you examine computers for evidence of crimes. Your work is vital to the success of the prosecution of criminals.
One day you receive a laptop and are part of a two-man team responsible for examining it together. However, it is lunch time, so after receiving the laptop you leave it on your desk and you both head out to lunch.
What critical step in forensic evidence have you forgotten?
Chain of custody
Locking the laptop in your desk
Making a disk image for examination
Cracking the admin password with chntpw
When evidence from a crime is to be used in the prosecution of a criminal, it is critical that you follow the law when handling that evidence. Part of that process, called the chain of custody, requires you to maintain proactive and documented control over ALL evidence involved in a crime.
Failure to do this can lead to the dismissal of charges, because the evidence may be ruled compromised if you failed to maintain the chain of custody.
A chain of custody is chronological documentation for evidence in a particular case, and is especially important with electronic evidence due to the possibility of fraudulent data alteration, deletion, or creation. A fully detailed chain of custody report is necessary to prove the physical custody of a piece of evidence and show all parties that had access to said evidence at any given time.
Evidence must be protected from the time it is collected until the time it is presented in court.
The following answers are incorrect:
- Locking the laptop in your desk: Even this wouldn't prevent the defense team from challenging the chain of custody handling. It's usually easy to break into a desk drawer, and evidence should be stored in approved safes or other secure storage facilities.
- Making a disk image for examination: This is a key part of system forensics, where we make a disk image of the evidence system and study the image rather than the original disk drive, which could otherwise lead to loss of evidence. However, if the original evidence is not secured, the chain of custody has not been maintained properly.
- Cracking the admin password with chntpw: This isn't correct. Your first mistake was to compromise the chain of custody of the laptop. The chntpw program is a Linux utility to (re)set the password of any user that has a valid (local) account on a Windows system, by modifying the encrypted password in the registry's SAM file. You do not need to know the old password to set a new one. It works offline, which means you must have physical access (i.e., you have to shut down the computer and boot off a Linux floppy disk). The bootdisk includes tools to access NTFS partitions and scripts to glue the whole thing together. This utility works with SYSKEY and includes the option to turn it off. A bootdisk image is provided on their website at http://freecode.com/projects/chntpw .
The following reference(s) was used to create this question:
http://en.wikipedia.org/wiki/Chain_of_custody
and
http://www.datarecovery.com/forensic_chain_of_custody.asp
How many rounds are used by DES?
16
32
64
48
DES is a block encryption algorithm using 56-bit keys and 64-bit blocks. Each block is divided in half, and the halves are put through 16 rounds of transposition and substitution functions. Triple DES uses 48 rounds.
Source: WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 3).
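For illustration, the 16-round half-block structure described above is a Feistel network, which can be sketched with a toy round function. This is not the real DES round function or key schedule; it only shows how the halves are swapped and mixed each round, and why decryption is the same network run with the round keys reversed.

```python
def feistel_encrypt(block, round_keys, round_fn):
    # Generic Feistel network: a 64-bit block is split into two
    # 32-bit halves that are swapped and mixed over successive rounds.
    left, right = block
    for k in round_keys:
        left, right = right, left ^ round_fn(right, k)
    return (right, left)  # final swap undone, as in DES

def feistel_decrypt(block, round_keys, round_fn):
    # Decryption is the same network with the round keys reversed;
    # the round function never needs to be invertible.
    return feistel_encrypt(block, list(reversed(round_keys)), round_fn)

# Toy round function and 16 round keys (DES uses 16 rounds).
toy_fn = lambda half, key: (half * 31 + key) & 0xFFFFFFFF
keys = list(range(1, 17))

ct = feistel_encrypt((0x01234567, 0x89ABCDEF), keys, toy_fn)
pt = feistel_decrypt(ct, keys, toy_fn)
assert pt == (0x01234567, 0x89ABCDEF)
```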
Which type of encryption is considered to be unbreakable if the stream is truly random and is as large as the plaintext and never reused in whole or part?
One Time Pad (OTP)
One time Cryptopad (OTC)
Cryptanalysis
Pretty Good Privacy (PGP)
OTP or One Time Pad is considered unbreakable if the key is truly random and is as large as the plaintext and never reused in whole or part AND kept secret.
In cryptography, a one-time pad is a system in which a randomly generated key is used only once to encrypt a message, which is then decrypted by the receiver using the matching one-time pad and key. Messages encrypted with keys based on true randomness have the advantage that there is theoretically no way to "break the code" by analyzing a succession of messages. Each encryption is unique and bears no relation to the next encryption, so that no pattern can be detected.
With a one-time pad, however, the decrypting party must have access to the same key used to encrypt the message, and this raises the problem of how to get the key to the decrypting party safely, or how to keep both keys secure. One-time pads have sometimes been used when both parties started out at the same physical location and then separated, each with knowledge of the keys in the one-time pad. The key used in a one-time pad is called a secret key because if it is revealed, the messages encrypted with it can easily be deciphered.
One-time pads figured prominently in secret message transmission and espionage before and during World War II and in the Cold War era. On the Internet, the difficulty of securely controlling secret keys led to the invention of public key cryptography.
The biggest challenge with OTP was getting the pad securely to the person or entity you wanted to communicate with. It had to be done in person or using a trusted courier or custodian. It certainly did not scale up very well, and it would not be usable for the large quantities of data that need to be encrypted today.
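A minimal sketch of the XOR-based one-time pad follows. It is illustrative only; a real OTP also requires truly random pad generation, secure distribution, and destruction of the pad after use.

```python
import os

def otp_encrypt(plaintext, pad):
    # The pad must be truly random, at least as long as the message,
    # used only once, and kept secret.
    assert len(pad) >= len(plaintext)
    return bytes(p ^ k for p, k in zip(plaintext, pad))

def otp_decrypt(ciphertext, pad):
    # XOR is its own inverse, so decryption is the same operation.
    return otp_encrypt(ciphertext, pad)

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))   # one-time, random key material
ciphertext = otp_encrypt(message, pad)
assert otp_decrypt(ciphertext, pad) == message
```

Reusing the pad, even in part, destroys the unconditional security: XORing two ciphertexts encrypted under the same pad cancels the key and leaks the XOR of the two plaintexts.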
The following answers are incorrect:
- One time Cryptopad: Almost, but this isn't correct. "Cryptopad" isn't a valid term in cryptography.
- Cryptanalysis: Sorry, incorrect. Cryptanalysis is the process of analyzing information in an effort to breach the cryptographic security systems.
- PGP - Pretty Good Privacy: PGP, written by Phil Zimmermann, is a data encryption and decryption program that provides cryptographic privacy and authentication for data. It still isn't the right answer, though.
The following reference(s) was used to create this question:
http://users.telenet.be/d.rijmenants/en/otp.htm
and
http://en.wikipedia.org/wiki/One-time_pad
and
http://searchsecurity.techtarget.com/definition/one-time-pad
Which of the following is true about link encryption?
Each entity has a common key with the destination node.
Encrypted messages are only decrypted by the final node.
This mode does not provide protection if anyone of the nodes along the transmission path is compromised.
Only secure nodes are used in this type of transmission.
In link encryption, each entity has keys in common with its two neighboring nodes in the transmission chain.
Thus, a node receives the encrypted message from its predecessor, decrypts it, and then re-encrypts it with a new key, common to the successor node. Obviously, this mode does not provide protection if anyone of the nodes along the transmission path is compromised.
Encryption can be performed at different communication levels, each with different types of protection and implications. Two general modes of encryption implementation are link encryption and end-to-end encryption.
Link encryption encrypts all the data along a specific communication path, as in a satellite link, T3 line, or telephone circuit. Not only is the user information encrypted, but the header, trailers, addresses, and routing data that are part of the packets are also encrypted. The only traffic not encrypted in this technology is the data link control messaging information, which includes instructions and parameters that the different link devices use to synchronize communication methods. Link encryption provides protection against packet sniffers and eavesdroppers.
In end-to-end encryption, the headers, addresses, routing, and trailer information are not encrypted, enabling attackers to learn more about a captured packet and where it is headed.
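As a toy illustration of the hop-by-hop decrypt/re-encrypt behavior described above, the sketch below relays a packet through intermediate nodes, each of which shares a key with its neighbors. XOR stands in for a real link cipher and is not secure; the point is that the packet is briefly in plaintext inside every node, which is exactly why a compromised node breaks link encryption.

```python
def xor_crypt(data, key):
    # Toy symmetric cipher (repeating-key XOR) standing in for a real
    # link cipher; encryption and decryption are the same operation.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def relay_link_encrypted(packet, hop_keys):
    # Link encryption: each node decrypts with the key shared with its
    # predecessor and re-encrypts with the key shared with its
    # successor, exposing the plaintext inside the node.
    data = xor_crypt(packet, hop_keys[0])          # sender encrypts
    for prev_key, next_key in zip(hop_keys, hop_keys[1:]):
        data = xor_crypt(data, prev_key)           # node decrypts
        data = xor_crypt(data, next_key)           # node re-encrypts
    return xor_crypt(data, hop_keys[-1])           # receiver decrypts

keys = [b"k1", b"k2", b"k3"]       # one key per link in the path
packet = b"header+payload"         # headers are encrypted too
assert relay_link_encrypted(packet, keys) == packet
```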
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (pp. 845-846). McGraw-Hill.
And:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 132).
What is the length of an MD5 message digest?
128 bits
160 bits
256 bits
varies depending upon the message size.
A hash algorithm (alternatively, hash "function") takes binary data, called the message, and produces a condensed representation, called the message digest. A cryptographic hash algorithm is a hash algorithm that is designed to achieve certain security properties. The Federal Information Processing Standard 180-3, Secure Hash Standard, specifies five cryptographic hash algorithms - SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 for federal use in the US; the standard was also widely adopted by the information technology industry and commercial companies.
The MD5 Message-Digest Algorithm is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value. Specified in RFC 1321, MD5 has been employed in a wide variety of security applications, and is also commonly used to check data integrity. MD5 was designed by Ron Rivest in 1991 to replace an earlier hash function, MD4. An MD5 hash is typically expressed as a 32-digit hexadecimal number.
However, it has since been shown that MD5 is not collision resistant; as such, MD5 is not suitable for applications like SSL certificates or digital signatures that rely on this property. In 1996, a flaw was found with the design of MD5, and while it was not a clearly fatal weakness, cryptographers began recommending the use of other algorithms, such as SHA-1 - which has since been found also to be vulnerable. In 2004, more serious flaws were discovered in MD5, making further use of the algorithm for security purposes questionable - specifically, a group of researchers described how to create a pair of files that share the same MD5 checksum. Further advances were made in breaking MD5 in 2005, 2006, and 2007. In December 2008, a group of researchers used this technique to fake SSL certificate validity, and US-CERT now says that MD5 "should be considered cryptographically broken and unsuitable for further use." and most U.S. government applications now require the SHA-2 family of hash functions.
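The fixed 128-bit output, independent of message size, can be confirmed with Python's standard hashlib module:

```python
import hashlib

digest = hashlib.md5(b"The quick brown fox").digest()
print(len(digest) * 8)     # 128 bits, regardless of message size
print(hashlib.md5(b"The quick brown fox").hexdigest())  # 32 hex digits

assert len(digest) == 16                               # 16 bytes = 128 bits
assert len(hashlib.md5(b"x" * 10_000).digest()) == 16  # size-independent
```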
NIST CRYPTOGRAPHIC HASH PROJECT
NIST announced a public competition in a Federal Register Notice on November 2, 2007 to develop a new cryptographic hash algorithm, called SHA-3, for standardization. The competition was NIST’s response to advances made in the cryptanalysis of hash algorithms.
NIST received sixty-four entries from cryptographers around the world by October 31, 2008, and selected fifty-one first-round candidates in December 2008, fourteen second-round candidates in July 2009, and five finalists – BLAKE, Grøstl, JH, Keccak and Skein, in December 2010 to advance to the third and final round of the competition.
Throughout the competition, the cryptographic community has provided an enormous amount of feedback. Most of the comments were sent to NIST and a public hash forum; in addition, many of the cryptanalysis and performance studies were published as papers in major cryptographic conferences or leading cryptographic journals. NIST also hosted a SHA-3 candidate conference in each round to obtain public feedback. Based on the public comments and internal review of the candidates, NIST announced Keccak as the winner of the SHA-3 Cryptographic Hash Algorithm Competition on October 2, 2012, and ended the five-year competition.
Which of the following can best define the "revocation request grace period"?
The period of time allotted within which the user must make a revocation request upon a revocation reason
Minimum response time for performing a revocation by the CA
Maximum response time for performing a revocation by the CA
Time period between the arrival of a revocation request and the publication of the revocation information
The length of time between the Issuer's receipt of a revocation request and the time the Issuer is required to revoke the certificate should bear a reasonable relationship to the amount of risk the participants are willing to assume that someone may rely on a certificate for which a proper revocation request has been given but has not yet been acted upon.
How quickly revocation requests need to be processed (and CRLs or certificate status databases need to be updated) depends upon the specific application for which the Policy Authority is drafting the Certificate Policy.
A Policy Authority should recognize that there may be risk and cost tradeoffs with respect to grace periods for revocation notices.
If the Policy Authority determines that its PKI participants are willing to accept a grace period of a few hours in exchange for a lower implementation cost, the Certificate Policy may reflect that decision.
Which of the following is more suitable for a hardware implementation?
Stream ciphers
Block ciphers
Cipher block chaining
Electronic code book
A stream cipher treats the message as a stream of bits or bytes and performs mathematical functions on them individually. The key is a random value input into the stream cipher, which it uses to ensure the randomness of the keystream data. Stream ciphers are more suitable for hardware implementations because they encrypt and decrypt one bit at a time; each bit must be manipulated individually, which works better at the silicon level. Block ciphers operate at the block level, dividing the message into blocks of bits. Cipher Block Chaining (CBC) and Electronic Code Book (ECB) are modes of operation of DES, a block encryption algorithm.
Source: WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 2).
Which of the following is NOT a symmetric key algorithm?
Blowfish
Digital Signature Standard (DSS)
Triple DES (3DES)
RC5
Digital Signature Standard (DSS) specifies a Digital Signature Algorithm (DSA) appropriate for applications requiring a digital signature, providing the capability to generate signatures (with the use of a private key) and verify them (with the use of the corresponding public key).
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 8: Cryptography (page 550).
Which of the following is a symmetric encryption algorithm?
RSA
Elliptic Curve
RC5
El Gamal
RC5 is a symmetric encryption algorithm. It is a block cipher of variable block length that encrypts through integer addition, the application of a bitwise exclusive OR (XOR), and variable rotations.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 153).
What can be defined as a digital certificate that binds a set of descriptive data items, other than a public key, either directly to a subject name or to the identifier of another certificate that is a public-key certificate?
A public-key certificate
An attribute certificate
A digital certificate
A descriptive certificate
The Internet Security Glossary (RFC2828) defines an attribute certificate as a digital certificate that binds a set of descriptive data items, other than a public key, either directly to a subject name or to the identifier of another certificate that is a public-key certificate. A public-key certificate binds a subject name to a public key value, along with information needed to perform certain cryptographic functions. Other attributes of a subject, such as a security clearance, may be certified in a separate kind of digital certificate, called an attribute certificate. A subject may have multiple attribute certificates associated with its name or with each of its public-key certificates.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
What can be defined as secret communications where the very existence of the message is hidden?
Clustering
Steganography
Cryptology
Vernam cipher
Steganography is a secret communication where the very existence of the message is hidden. For example, in a digital image, the least significant bit of each word can be used to comprise a message without causing any significant change in the image. Key clustering is a situation in which a plaintext message generates identical ciphertext messages using the same transformation algorithm but with different keys. Cryptology encompasses cryptography and cryptanalysis. The Vernam Cipher, also called a one-time pad, is an encryption scheme using a random key of the same size as the message and is used only once. It is said to be unbreakable, even with infinite resources.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 134).
Which of the following is not a disadvantage of symmetric cryptography when compared with Asymmetric Ciphers?
Provides Limited security services
Has no built in Key distribution
Speed
Large number of keys are needed
Symmetric cryptography ciphers are generally fast and hard to break, so speed is one of the key advantages of symmetric ciphers, NOT a disadvantage. Symmetric ciphers use simple encryption steps such as XOR, substitution, permutation, shifting columns, shifting rows, etc. Such steps do not require a large amount of processing power compared to the complex mathematical problems used within asymmetric ciphers.
Some of the weaknesses of Symmetric Ciphers are:
The lack of automated key distribution. Usually an asymmetric cipher would be used to protect the symmetric key if it needs to be communicated to another entity securely over a public network. In the good old days this was done manually, with the key distributed using the "Floppy Net", sometimes called the "Sneaker Net" (you run to someone's office to give them the key).
As for the total number of keys required to communicate securely between a large group of users, symmetric cryptography does not scale very well. Ten users would require 45 keys to communicate securely with each other. If you have 1,000 users, you would need almost half a million keys (499,500). With asymmetric ciphers, only 2,000 keys are required for 1,000 users. The formula to calculate the total number of keys required for a group of N users who wish to communicate securely with each other using symmetric encryption is N(N-1)/2.
Symmetric ciphers are limited when it comes to security services; they cannot provide all of the security services offered by asymmetric ciphers. Symmetric ciphers provide mostly confidentiality, but can also provide integrity and authentication if a Message Authentication Code (MAC) is used, and can provide user authentication if, for example, Kerberos is used. Symmetric ciphers cannot provide digital signatures or non-repudiation.
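The N(N-1)/2 key-count arithmetic above can be checked in a few lines:

```python
def symmetric_keys_needed(n):
    # Every pair of users needs its own shared key: n(n-1)/2 pairs.
    return n * (n - 1) // 2

def asymmetric_keys_needed(n):
    # One key pair (public + private) per user: 2n keys in total.
    return 2 * n

print(symmetric_keys_needed(10))     # 45
print(symmetric_keys_needed(1000))   # 499500
print(asymmetric_keys_needed(1000))  # 2000
```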
Reference used for this question:
WALLHOFF, John, CBK#5 Cryptography (CISSP Study Guide), April 2002 (page 2).
Which of the following statements pertaining to stream ciphers is correct?
A stream cipher is a type of asymmetric encryption algorithm.
A stream cipher generates what is called a keystream.
A stream cipher is slower than a block cipher.
A stream cipher is not appropriate for hardware-based encryption.
A stream cipher is a type of symmetric encryption algorithm that operates on continuous streams of plain text and is appropriate for hardware-based encryption.
Stream ciphers can be designed to be exceptionally fast, much faster than any block cipher. A stream cipher generates what is called a keystream (a sequence of bits used as a key).
Stream ciphers can be viewed as approximating the action of a proven unbreakable cipher, the one-time pad (OTP), sometimes known as the Vernam cipher. A one-time pad uses a keystream of completely random digits. The keystream is combined with the plaintext digits one at a time to form the ciphertext. This system was proved to be secure by Claude Shannon in 1949. However, the keystream must be (at least) the same length as the plaintext, and generated completely at random. This makes the system very cumbersome to implement in practice, and as a result the one-time pad has not been widely used, except for the most critical applications.
A stream cipher makes use of a much smaller and more convenient key — 128 bits, for example. Based on this key, it generates a pseudorandom keystream which can be combined with the plaintext digits in a similar fashion to the one-time pad. However, this comes at a cost: because the keystream is now pseudorandom, and not truly random, the proof of security associated with the one-time pad no longer holds: it is quite possible for a stream cipher to be completely insecure if it is not implemented properly as we have seen with the Wired Equivalent Privacy (WEP) protocol.
Encryption is accomplished by combining the keystream with the plaintext, usually with the bitwise XOR operation.
Source: DUPUIS, Clement, CISSP Open Study Guide on domain 5, cryptography, April 1999.
More details can be obtained on Stream Ciphers in RSA Security's FAQ on Stream Ciphers.
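A minimal sketch of the keystream-XOR idea described above follows. The hash-counter keystream generator is an illustration only, not a vetted stream cipher such as RC4 or ChaCha20.

```python
import hashlib

def keystream(key, nonce, length):
    # Toy pseudorandom keystream generator: hash the key, a nonce, and
    # a running counter to produce as many bytes as needed.
    out = b""
    counter = 0
    while len(out) < length:
        block = key + nonce + counter.to_bytes(8, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:length]

def stream_crypt(data, key, nonce):
    # Encryption and decryption are both keystream XOR data,
    # approximating the one-time pad with a pseudorandom keystream.
    ks = keystream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

msg = b"stream ciphers XOR a keystream with the plaintext"
ct = stream_crypt(msg, b"128-bit key here", b"nonce-01")
assert stream_crypt(ct, b"128-bit key here", b"nonce-01") == msg
```

Because the keystream is only pseudorandom, key/nonce reuse leaks the XOR of the two plaintexts, which is essentially the flaw that broke WEP.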
Which of the following is not an example of a block cipher?
Skipjack
IDEA
Blowfish
RC4
RC4 is a proprietary, variable-key-length stream cipher invented by Ron Rivest for RSA Data Security, Inc. Skipjack, IDEA and Blowfish are examples of block ciphers.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
Which of the following binds a subject name to a public key value?
A public-key certificate
A public key infrastructure
A secret key infrastructure
A private key certificate
Remember the term Public-Key Certificate is synonymous with Digital Certificate or Identity certificate.
The certificate itself provides the binding, but it is the certificate authority that, following its Certification Practice Statement (CPS), actually validates the binding and vouches for the identity of the owner of the key within the certificate.
As explained in Wikipedia:
In cryptography, a public key certificate (also known as a digital certificate or identity certificate) is an electronic document which uses a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.
In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme such as PGP or GPG, the signature is of either the user (a self-signed certificate) or other users ("endorsements"), obtained by getting people to sign each other's keys. In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.
RFC 2828 defines the certification authority (CA) as:
An entity that issues digital certificates (especially X.509 certificates) and vouches for the binding between the data items in a certificate.
An authority trusted by one or more users to create and assign certificates. Optionally, the certification authority may create the user's keys.
X.509 certificate users depend on the validity of information provided by a certificate. Thus, a CA should be someone that certificate users trust, and usually holds an official position created and granted power by a government, a corporation, or some other organization. A CA is responsible for managing the life cycle of certificates and, depending on the type of certificate and the CPS that applies, may be responsible for the life cycle of key pairs associated with the certificates.
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, may 2000.
and
http://en.wikipedia.org/wiki/Public_key_certificate
What does the directive of the European Union on Electronic Signatures deal with?
Encryption of classified data
Encryption of secret data
Non repudiation
Authentication of web servers
Which of the following is an issue with signature-based intrusion detection systems?
Only previously identified attack signatures are detected.
Signature databases must be augmented with inferential elements.
It runs only on the Windows operating system
Hackers can circumvent signature evaluations.
An issue with signature-based ID is that only attack signatures that are stored in their database are detected.
New attacks without a signature would not be reported. They do require constant updates in order to maintain their effectiveness.
Reference used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 49.
Which of the following are additional terms used to describe knowledge-based IDS and behavior-based IDS?
signature-based IDS and statistical anomaly-based IDS, respectively
signature-based IDS and dynamic anomaly-based IDS, respectively
anomaly-based IDS and statistical-based IDS, respectively
signature-based IDS and motion anomaly-based IDS, respectively.
The two current conceptual approaches to Intrusion Detection methodology are knowledge-based ID systems and behavior-based ID systems, sometimes referred to as signature-based ID and statistical anomaly-based ID, respectively.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 63.
Which of the following was not designed to be a proprietary encryption algorithm?
RC2
RC4
Blowfish
Skipjack
Blowfish is a symmetric block cipher with variable-length key (32 to 448 bits) designed in 1993 by Bruce Schneier as an unpatented, license-free, royalty-free replacement for DES or IDEA. See attributes below:
Block cipher: 64-bit block
Variable key length: 32 bits to 448 bits
Designed by Bruce Schneier
Much faster than DES and IDEA
Unpatented and royalty-free
No license required
Free source code available
Rivest Cipher #2 (RC2) is a proprietary, variable-key-length block cipher invented by Ron Rivest for RSA Data Security, Inc.
Rivest Cipher #4 (RC4) is a proprietary, variable-key-length stream cipher invented by Ron Rivest for RSA Data Security, Inc.
The Skipjack algorithm is a Type II block cipher [NIST] with a block size of 64 bits and a key size of 80 bits that was developed by NSA and formerly classified at the U.S. Department of Defense "Secret" level. The NSA announced on June 23, 1998, that Skipjack had been declassified.
References:
RSA Laboratories
http://www.rsa.com/rsalabs/node.asp?id=2250
RFC 2828 - Internet Security Glossary
http://www.faqs.org/rfcs/rfc2828.html
Which of the following can best be defined as a key distribution protocol that uses hybrid encryption to convey session keys. This protocol establishes a long-term key once, and then requires no prior communication in order to establish or exchange keys on a session-by-session basis?
Internet Security Association and Key Management Protocol (ISAKMP)
Simple Key-management for Internet Protocols (SKIP)
Diffie-Hellman Key Distribution Protocol
IPsec Key exchange (IKE)
RFC 2828 (Internet Security Glossary) defines Simple Key Management for Internet Protocols (SKIP) as:
A key distribution protocol that uses hybrid encryption to convey session keys that are used to encrypt data in IP packets.
SKIP is a hybrid key distribution protocol similar to SSL, except that it establishes a long-term key once and then requires no prior communication in order to establish or exchange keys on a session-by-session basis. Therefore, no connection setup overhead exists and new key values are not continually generated. SKIP uses the knowledge of its own secret key or private component and the destination's public component to calculate a unique key that can only be used between them.
IKE stands for Internet Key Exchange; it makes use of ISAKMP and OAKLEY internally.
Internet Key Exchange (IKE or IKEv2) is the protocol used to set up a security association (SA) in the IPsec protocol suite. IKE builds upon the Oakley protocol and ISAKMP. IKE uses X.509 certificates for authentication and a Diffie–Hellman key exchange to set up a shared session secret from which cryptographic keys are derived.
The following are incorrect answers:
ISAKMP is an Internet IPsec protocol to negotiate, establish, modify, and delete security associations, and to exchange key generation and authentication data, independent of the details of any specific key generation technique, key establishment protocol, encryption algorithm, or authentication mechanism.
IKE is an Internet, IPsec, key-establishment protocol (partly based on OAKLEY) that is intended for putting in place authenticated keying material for use with ISAKMP and for other security associations, such as in AH and ESP.
IPsec Key exchange (IKE) is only a distractor.
Reference(s) used for this question:
SHIREY, Robert W., RFC 2828: Internet Security Glossary, May 2000.
and
http://en.wikipedia.org/wiki/Simple_Key-Management_for_Internet_Protocol
Which of the following does NOT concern itself with key management?
Internet Security Association Key Management Protocol (ISAKMP)
Diffie-Hellman (DH)
Cryptology (CRYPTO)
Key Exchange Algorithm (KEA)
Cryptology is the science that includes both cryptography and cryptanalysis and is not directly concerned with key management. Cryptology is the mathematics, such as number theory, and the application of formulas and algorithms, that underpin cryptography and cryptanalysis.
The following are all concerned with Key Management which makes them the wrong choices:
Internet Security Association Key Management Protocol (ISAKMP) is a key management protocol used by IPSec. ISAKMP (Internet Security Association and Key Management Protocol) is a protocol defined by RFC 2408 for establishing Security Associations (SA) and cryptographic keys in an Internet environment. ISAKMP only provides a framework for authentication and key exchange. The actual key exchange is done by the Oakley Key Determination Protocol which is a key-agreement protocol that allows authenticated parties to exchange keying material across an insecure connection using the Diffie-Hellman key exchange algorithm.
Diffie-Hellman and one variation of the Diffie-Hellman algorithm called the Key Exchange Algorithm (KEA) are also key exchange protocols. Key exchange (also known as "key establishment") is any method in cryptography by which cryptographic keys are exchanged between users, allowing use of a cryptographic algorithm. Diffie–Hellman key exchange (D–H) is a specific method of exchanging keys. It is one of the earliest practical examples of key exchange implemented within the field of cryptography. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher.
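The mechanics of Diffie-Hellman key agreement can be sketched with Python's built-in modular exponentiation. The tiny prime and generator below are illustrative only; real deployments use standardized groups of 2048 bits or more.

```python
import secrets

# Toy Diffie-Hellman parameters; a real group uses a 2048-bit+ prime
# (e.g. the RFC 3526 groups). Small values keep the math visible.
p = 23  # public prime modulus
g = 5   # public generator

# Each party picks a private exponent and publishes g^x mod p.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# Each side raises the other's public value to its own private exponent;
# both arrive at g^(a*b) mod p without ever transmitting it.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared
```

An eavesdropper sees only p, g, and the two public values; recovering the shared secret from those is the discrete logarithm problem.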
Reference(s) used for this question:
Mike Meyers CISSP Certification Passport, by Shon Harris and Mike Meyers, page 228.
It is highlighted as an EXAM TIP, which tells you that it is a must-know for the purpose of the exam.
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, Fifth Edition, Chapter 8: Cryptography (page 713-715).
and
https://en.wikipedia.org/wiki/ISAKMP
and
http://searchsecurity.techtarget.com/definition/cryptology
The primary purpose for using one-way hashing of user passwords within a password file is which of the following?
It prevents an unauthorized person from trying multiple passwords in one logon attempt.
It prevents an unauthorized person from reading the password.
It minimizes the amount of storage required for user passwords.
It minimizes the amount of processing time used for encrypting passwords.
The whole idea behind a one-way hash is that it should be just that - one-way. In other words, an attacker should not be able to figure out your password from the hashed version of that password in any mathematically feasible way (or within any reasonable length of time).
Password Hashing and Encryption
In most situations, if an attacker sniffs your password from the network wire, she still has some work to do before she actually knows your password value, because most systems hash the password with a hashing algorithm, commonly MD4 or MD5, to ensure passwords are not sent in cleartext.
Although some people think the world is run by Microsoft, other types of operating systems are out there, such as Unix and Linux. These systems do not use registries and SAM databases, but contain their user passwords in a file cleverly called “shadow.” Now, this shadow file does not contain passwords in cleartext; instead, your password is run through a hashing algorithm, and the resulting value is stored in this file.
Unix-type systems zest things up by using salts in this process. Salts are random values added to the hashing process to add more complexity and randomness. The more randomness entered into the process, the harder it is for the bad guy to uncover your password. The use of a salt means that the same password can be hashed into several thousand different formats. This makes it much more difficult for an attacker to uncover the right format for your system.
Password Cracking tools
Note that the use of one-way hashes for passwords does not prevent password crackers from guessing passwords. A password cracker runs a plaintext string through the same one-way hash algorithm used by the system to generate a hash, then compares that generated hash with the one stored on the system. If they match, the password cracker has guessed your password.
This is very much the same process used to authenticate you to a system via a password. When you type your username and password, the system hashes the password you typed and compares that generated hash against the one stored on the system - if they match, you are authenticated.
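That hash-and-compare flow can be sketched as follows. This is an illustrative example using SHA-256 with a per-user salt, not a recommendation; a slow KDF such as bcrypt or PBKDF2 is preferred for real systems.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Generate a fresh random salt and return (salt, digest) for storage."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify_password(candidate: str, salt: bytes, stored: bytes) -> bool:
    """Re-hash the candidate with the stored salt and compare digests."""
    digest = hashlib.sha256(salt + candidate.encode()).digest()
    return hmac.compare_digest(digest, stored)  # constant-time compare

salt, stored = hash_password("correct horse")
assert verify_password("correct horse", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

The plaintext password is never stored; only the salt and digest are, and authentication simply repeats the hash and compares.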
Precomputed password tables exist today, and they allow you to crack Lan Manager (LM) passwords within a VERY short period of time through the use of Rainbow Tables. A Rainbow Table is a precomputed table for reversing cryptographic hash functions, usually for cracking password hashes. Tables are usually used in recovering a plaintext password up to a certain length consisting of a limited set of characters. It is a practical example of a space/time trade-off, also called a time-memory trade-off: using more computer processing time at the cost of less storage when calculating a hash on every attempt, or less processing time and more storage when compared to a simple lookup table with one entry per hash. Use of a key derivation function that employs a salt makes this attack unfeasible.
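A plain hash-to-password dictionary illustrates the core precomputation idea; real rainbow tables add hash chains and reduction functions to save storage. The wordlist and hashes below are made up for the sketch.

```python
import hashlib

# Attacker precomputes hash -> plaintext once and reuses it forever.
# (A true rainbow table compresses this with hash chains and
# reduction functions; a dict shows the precomputation idea.)
wordlist = ["password", "letmein", "123456", "dragon"]
table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

stolen_hash = hashlib.md5(b"letmein").hexdigest()  # unsalted hash from a breach
recovered = table.get(stolen_hash)
assert recovered == "letmein"

# A per-user salt defeats the table: the attacker's precomputed
# entries no longer match md5(salt + password).
```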
You may want to review "Rainbow Tables" at the links:
http://en.wikipedia.org/wiki/Rainbow_table
http://www.antsight.com/zsl/rainbowcrack/
Today's password crackers:
Meet oclHashcat, a GPGPU-based multi-hash cracker supporting brute-force (implemented as mask attack), combinator, dictionary, hybrid, mask, and rule-based attacks.
This GPU cracker is a merged version of oclHashcat-plus and oclHashcat-lite, both very well-known suites at the time but now deprecated. There also existed a much older oclHashcat GPU cracker that was replaced with plus and lite, which, as mentioned, were then merged into oclHashcat 1.00.
This cracker can crack NTLM version 2 hashes of up to 8 characters in less than a few hours. It is definitely a game changer. It can attempt hundreds of billions of guesses per second on a very large cluster of GPUs, and it supports up to 128 video cards at once.
I am stuck using passwords; what can I do to better protect myself?
You could look at safer alternatives such as bcrypt, PBKDF2, and scrypt.
bcrypt is a key derivation function for passwords designed by Niels Provos and David Mazières, based on the Blowfish cipher, and presented at USENIX in 1999. Besides incorporating a salt to protect against rainbow table attacks, bcrypt is an adaptive function: over time, the iteration count can be increased to make it slower, so it remains resistant to brute-force search attacks even with increasing computation power.
In cryptography, scrypt is a password-based key derivation function created by Colin Percival, originally for the Tarsnap online backup service. The algorithm was specifically designed to make it costly to perform large-scale custom hardware attacks by requiring large amounts of memory. In 2012, the scrypt algorithm was published by the IETF as an Internet Draft, intended to become an informational RFC, which has since expired. A simplified version of scrypt is used as a proof-of-work scheme by a number of cryptocurrencies, such as Litecoin and Dogecoin.
PBKDF2 (Password-Based Key Derivation Function 2) is a key derivation function that is part of RSA Laboratories' Public-Key Cryptography Standards (PKCS) series, specifically PKCS #5 v2.0, also published as Internet Engineering Task Force's RFC 2898. It replaces an earlier standard, PBKDF1, which could only produce derived keys up to 160 bits long.
PBKDF2 applies a pseudorandom function, such as a cryptographic hash, cipher, or HMAC to the input password or passphrase along with a salt value and repeats the process many times to produce a derived key, which can then be used as a cryptographic key in subsequent operations. The added computational work makes password cracking much more difficult, and is known as key stretching. When the standard was written in 2000, the recommended minimum number of iterations was 1000, but the parameter is intended to be increased over time as CPU speeds increase. Having a salt added to the password reduces the ability to use precomputed hashes (rainbow tables) for attacks, and means that multiple passwords have to be tested individually, not all at once. The standard recommends a salt length of at least 64 bits.
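PBKDF2 is available directly in Python's standard library; the salt length and iteration count below are illustrative values, to be tuned upward for real deployments.

```python
import hashlib
import os

salt = os.urandom(16)   # at least 64 bits per the standard
iterations = 100_000    # illustrative; tune upward for your hardware

key = hashlib.pbkdf2_hmac("sha256", b"my passphrase", salt, iterations, dklen=32)
assert len(key) == 32

# The same password, salt, and iteration count reproduce the key...
again = hashlib.pbkdf2_hmac("sha256", b"my passphrase", salt, iterations, dklen=32)
assert key == again

# ...while a different salt yields an unrelated key, which is what
# defeats precomputed rainbow tables.
other = hashlib.pbkdf2_hmac("sha256", b"my passphrase", os.urandom(16), iterations, dklen=32)
assert key != other
```

Raising the iteration count slows every guess an attacker makes by the same factor, which is the key-stretching effect described above.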
The other answers are incorrect:
"It prevents an unauthorized person from trying multiple passwords in one logon attempt." is incorrect because the fact that a password has been hashed does not prevent this type of brute force password guessing attempt.
"It minimizes the amount of storage required for user passwords" is incorrect because hash algorithms always generate the same number of bits, regardless of the length of the input. Therefore, even short passwords still produce a full-length hash and do not minimize storage requirements.
"It minimizes the amount of processing time used for encrypting passwords" is incorrect because the processing time to encrypt a password would be basically the same as that required to produce a one-way hash of the same password.
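The fixed-output-size point above is easy to verify: a hash digest has the same length whether the input is one byte or ten thousand.

```python
import hashlib

short = hashlib.sha256(b"a").hexdigest()
long_ = hashlib.sha256(b"a" * 10_000).hexdigest()
# SHA-256 always emits 256 bits (64 hex characters), regardless of input size.
assert len(short) == len(long_) == 64
```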
Reference(s) used for this question:
http://en.wikipedia.org/wiki/PBKDF2
http://en.wikipedia.org/wiki/Scrypt
http://en.wikipedia.org/wiki/Bcrypt
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 195) . McGraw-Hill. Kindle Edition.
Which of the following answers is described as a random value used in cryptographic algorithms to ensure that patterns are not created during the encryption process?
IV - Initialization Vector
Stream Cipher
OTP - One Time Pad
Ciphertext
The basic power in cryptography is randomness. This uncertainty is why encrypted data is unusable to someone without the key to decrypt.
Initialization Vectors are used with encryption keys to add an extra layer of randomness to encrypted data. If no IV is used, the attacker can possibly break the keyspace because of patterns arising in the encryption process. An implementation such as DES in Electronic Code Book (ECB) mode would allow a frequency analysis attack to take place.
In cryptography, an initialization vector (IV) or starting variable (SV) is a fixed-size input to a cryptographic primitive that is typically required to be random or pseudorandom. Randomization is crucial for encryption schemes to achieve semantic security, a property whereby repeated usage of the scheme under the same key does not allow an attacker to infer relationships between segments of the encrypted message. For block ciphers, the use of an IV is described by so-called modes of operation. Randomization is also required for other primitives, such as universal hash functions and message authentication codes based thereon.
It is defined by TechTarget as:
An initialization vector (IV) is an arbitrary number that can be used along with a secret key for data encryption. This number, also called a nonce, is employed only one time in any session.
The use of an IV prevents repetition in data encryption, making it more difficult for a hacker using a dictionary attack to find patterns and break a cipher. For example, a sequence might appear twice or more within the body of a message. If there are repeated sequences in encrypted data, an attacker could assume that the corresponding sequences in the message were also identical. The IV prevents the appearance of corresponding duplicate character sequences in the ciphertext.
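The effect of a fresh IV can be sketched with a deliberately toy keystream cipher (SHA-256 of the key and IV used as a keystream). This construction is for illustration only and is not secure.

```python
import hashlib
import os

def toy_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a keystream derived from key||IV.
    Toy construction for illustration; NOT a secure cipher."""
    keystream = hashlib.sha256(key + iv).digest()
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

key = b"sixteen byte key"
msg = b"ATTACK AT DAWN"

# With a fixed IV, identical messages yield identical ciphertexts --
# exactly the pattern an eavesdropper can exploit.
assert toy_encrypt(key, b"\x00" * 16, msg) == toy_encrypt(key, b"\x00" * 16, msg)

# With a fresh random IV per message, the ciphertexts differ even
# though the key and plaintext are unchanged.
c1 = toy_encrypt(key, os.urandom(16), msg)
c2 = toy_encrypt(key, os.urandom(16), msg)
assert c1 != c2  # equal only if the two 128-bit IVs collide
```

The IV is transmitted alongside the ciphertext in the clear; its job is uniqueness, not secrecy.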
The following answers are incorrect:
- Stream Cipher: This isn't correct. A stream cipher is a symmetric key cipher where plaintext digits are combined with a pseudorandom keystream to produce ciphertext.
- OTP - One Time Pad: This isn't correct, but an OTP is made up of random values used as key material (the encryption key). It is considered by most to be unbreakable, but each key must be discarded after a single use, which makes it impractical for common use.
- Ciphertext: Sorry, incorrect answer. Ciphertext is basically text that has been encrypted with key material (an encryption key).
The following reference(s) was used to create this question:
For more details on this topic and other questions of the Security+ CBK, subscribe to our Holistic Computer Based Tutorial (CBT) at http://www.cccure.tv
and
whatis.techtarget.com/definition/initialization-vector-IV
and
en.wikipedia.org/wiki/Initialization_vector
Crackers today are MOST often motivated by their desire to:
Help the community in securing their networks.
Seeing how far their skills will take them.
Getting recognition for their actions.
Gaining Money or Financial Gains.
A few years ago the best choice for this question would have been seeing how far their skills can take them. Today this has changed greatly, most crimes committed are financially motivated.
Profit is the most widespread motive behind all cybercrimes and, indeed, most crimes- everyone wants to make money. Hacking for money or for free services includes a smorgasbord of crimes such as embezzlement, corporate espionage and being a “hacker for hire”. Scams are easier to undertake but the likelihood of success is much lower. Money-seekers come from any lifestyle but those with persuasive skills make better con artists in the same way as those who are exceptionally tech-savvy make better “hacks for hire”.
"White Hats" are the security specialists (as opposed to Black Hats) interested in helping the community secure their networks. They will test systems and networks with the owner's authorization.
A Black Hat is someone who uses his skills for offensive purposes. They do not seek authorization before they attempt to compromise the security mechanisms in place.
"Grey Hats" are people who sometimes work as a White Hat and other times as a Black Hat; they have not yet made up their mind as to which side they prefer to be on.
The following are incorrect answers:
All the other choices could be possible reasons but the best one today is really for financial gains.
References used for this question:
http://library.thinkquest.org/04oct/00460/crimeMotives.html
and
http://www.informit.com/articles/article.aspx?p=1160835
and
http://www.aic.gov.au/documents/1/B/A/%7B1BA0F612-613A-494D-B6C5-06938FE8BB53%7Dhtcb006.pdf
What is malware that can spread itself over open network connections?
Worm
Rootkit
Adware
Logic Bomb
Computer worms are also known as Network Mobile Code, or a virus-like bit of code that can replicate itself over a network, infecting adjacent computers.
A computer worm is a standalone malware computer program that replicates itself in order to spread to other computers. Often, it uses a computer network to spread itself, relying on security failures on the target computer to access it. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.
A notable example is the SQL Slammer computer worm that spread globally in ten minutes on January 25, 2003. I myself came to work that day as a software tester and found all my SQL servers infected and actively trying to infect other computers on the test network.
A patch had been released a year prior by Microsoft, and if systems were not patched and were exposed to a 376-byte UDP packet from an infected host, the system would become compromised.
Ordinarily, infected computers are not to be trusted and must be rebuilt from scratch but the vulnerability could be mitigated by replacing a single vulnerable dll called sqlsort.dll.
Replacing that with the patched version completely disabled the worm which really illustrates to us the importance of actively patching our systems against such network mobile code.
The following answers are incorrect:
- Rootkit: Sorry, this isn't correct because a rootkit isn't ordinarily classified as network mobile code like a worm is. This isn't to say that a rootkit couldn't be included in a worm, just that a rootkit isn't usually classified like a worm. A rootkit is a stealthy type of software, typically malicious, designed to hide the existence of certain processes or programs from normal methods of detection and enable continued privileged access to a computer. The term rootkit is a concatenation of "root" (the traditional name of the privileged account on Unix operating systems) and the word "kit" (which refers to the software components that implement the tool). The term "rootkit" has negative connotations through its association with malware.
- Adware: Incorrect answer. Sorry but adware isn't usually classified as a worm. Adware, or advertising-supported software, is any software package which automatically renders advertisements in order to generate revenue for its author. The advertisements may be in the user interface of the software or on a screen presented to the user during the installation process. The functions may be designed to analyze which Internet sites the user visits and to present advertising pertinent to the types of goods or services featured there. The term is sometimes used to refer to software that displays unwanted advertisements.
- Logic Bomb: Logic bombs like adware or rootkits could be spread by worms if they exploit the right service and gain root or admin access on a computer.
The following reference(s) was used to create this question:
The CCCure CompTIA Holistic Security+ Tutorial and CBT
and
http://en.wikipedia.org/wiki/Rootkit
and
http://en.wikipedia.org/wiki/Computer_worm
and
http://en.wikipedia.org/wiki/Adware
Virus scanning and content inspection of SMIME encrypted e-mail without doing any further processing is:
Not possible
Only possible with key recovery scheme of all user keys
It is possible only if X509 Version 3 certificates are used
It is possible only by "brute force" decryption
Content security measures presume that the content is available in cleartext on the central mail server.
Encrypted emails have to be decrypted before they can be filtered (e.g. to detect viruses), so you need the decryption key on the central "crypto mail server".
There are several ways to manage such keys, e.g. by message or key recovery methods. However, that would certainly require further processing in order to achieve such a goal.
Which of the following computer crime is MORE often associated with INSIDERS?
IP spoofing
Password sniffing
Data diddling
Denial of service (DOS)
Data diddling refers to the alteration of existing data, most often before it is entered into an application. This type of crime is extremely common and can be prevented by using appropriate access controls and proper segregation of duties. It will more likely be perpetrated by insiders, who have access to data before it is processed.
The other answers are incorrect because :
IP Spoofing is not correct as the questions asks about the crime associated with the insiders. Spoofing is generally accomplished from the outside.
Password sniffing is also not the BEST answer as it requires a lot of technical knowledge in understanding the encryption and decryption process.
Denial of service (DOS) is also incorrect as most Denial of service attacks occur over the internet.
Reference : Shon Harris , AIO v3 , Chapter-10 : Law , Investigation & Ethics , Page : 758-760.
In which layer of the OSI Model are connection-oriented protocols located in the TCP/IP suite of protocols?
Transport layer
Application layer
Physical layer
Network layer
Connection-oriented protocols such as TCP provides reliability.
It is the responsibility of such protocols in the transport layer to ensure every byte is accounted for. The network layer does not provide reliability. It only provides the best route to get the traffic to the final destination address.
For your exam you should know the information below about OSI model:
The Open Systems Interconnection model (OSI) is a conceptual model that characterizes and standardizes the internal functions of a communication system by partitioning it into abstraction layers. The model is a product of the Open Systems Interconnection project at the International Organization for Standardization (ISO), maintained under the identification ISO/IEC 7498-1.
The model groups communication functions into seven logical layers. A layer serves the layer above it and is served by the layer below it. For example, a layer that provides error-free communications across a network provides the path needed by applications above it, while it calls the next lower layer to send and receive packets that make up the contents of that path. Two instances at one layer are connected by a horizontal connection in that layer.
OSI Model
Image source: http://www.petri.co.il/images/osi_model.JPG
PHYSICAL LAYER
The physical layer, the lowest layer of the OSI model, is concerned with the transmission and reception of the unstructured raw bit stream over a physical medium. It describes the electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the signals for all of the higher layers. It provides:
Data encoding: modifies the simple digital signal pattern (1s and 0s) used by the PC to better accommodate the characteristics of the physical medium, and to aid in bit and frame synchronization. It determines:
What signal state represents a binary 1
How the receiving station knows when a "bit-time" starts
How the receiving station delimits a frame
DATA LINK LAYER
The data link layer provides error-free transfer of data frames from one node to another over the physical layer, allowing layers above it to assume virtually error-free transmission over the link. To do this, the data link layer provides:
Link establishment and termination: establishes and terminates the logical link between two nodes.
Frame traffic control: tells the transmitting node to "back-off" when no frame buffers are available.
Frame sequencing: transmits/receives frames sequentially.
Frame acknowledgment: provides/expects frame acknowledgments. Detects and recovers from errors that occur in the physical layer by retransmitting non-acknowledged frames and handling duplicate frame receipt.
Frame delimiting: creates and recognizes frame boundaries.
Frame error checking: checks received frames for integrity.
Media access management: determines when the node "has the right" to use the physical medium.
NETWORK LAYER
The network layer controls the operation of the subnet, deciding which physical path the data should take based on network conditions, priority of service, and other factors. It provides:
Routing: routes frames among networks.
Subnet traffic control: routers (network layer intermediate systems) can instruct a sending station to "throttle back" its frame transmission when the router's buffer fills up.
Frame fragmentation: if it determines that a downstream router's maximum transmission unit (MTU) size is less than the frame size, a router can fragment a frame for transmission and re-assembly at the destination station.
Logical-physical address mapping: translates logical addresses, or names, into physical addresses.
Subnet usage accounting: has accounting functions to keep track of frames forwarded by subnet intermediate systems, to produce billing information.
Communications Subnet
The network layer software must build headers so that the network layer software residing in the subnet intermediate systems can recognize them and use them to route data to the destination address.
This layer relieves the upper layers of the need to know anything about the data transmission and intermediate switching technologies used to connect systems. It establishes, maintains and terminates connections across the intervening communications facility (one or several intermediate systems in the communication subnet).
In the network layer and the layers below, peer protocols exist between a node and its immediate neighbor, but the neighbor may be a node through which data is routed, not the destination station. The source and destination stations may be separated by many intermediate systems.
TRANSPORT LAYER
The transport layer ensures that messages are delivered error-free, in sequence, and with no losses or duplications. It relieves the higher layer protocols from any concern with the transfer of data between them and their peers.
The size and complexity of a transport protocol depends on the type of service it can get from the network layer. For a reliable network layer with virtual circuit capability, a minimal transport layer is required. If the network layer is unreliable and/or only supports datagrams, the transport protocol should include extensive error detection and recovery.
The transport layer provides:
Message segmentation: accepts a message from the (session) layer above it, splits the message into smaller units (if not already small enough), and passes the smaller units down to the network layer. The transport layer at the destination station reassembles the message.
Message acknowledgment: provides reliable end-to-end message delivery with acknowledgments.
Message traffic control: tells the transmitting station to "back-off" when no message buffers are available.
Session multiplexing: multiplexes several message streams, or sessions onto one logical link and keeps track of which messages belong to which sessions (see session layer).
Typically, the transport layer can accept relatively large messages, but there are strict message size limits imposed by the network (or lower) layer. Consequently, the transport layer must break up the messages into smaller units, or frames, prepending a header to each frame.
The transport layer header information must then include control information, such as message start and message end flags, to enable the transport layer on the other end to recognize message boundaries. In addition, if the lower layers do not maintain sequence, the transport header must contain sequence information to enable the transport layer on the receiving end to get the pieces back together in the right order before handing the received message up to the layer above.
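The segmentation, sequencing, and reassembly described above can be sketched as follows; the frame format and MTU value here are hypothetical, chosen only to keep the example small.

```python
import random

def segment(message: bytes, mtu: int) -> list[tuple[int, bytes]]:
    """Split a message into (sequence number, payload) frames of at most mtu bytes."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(frames: list[tuple[int, bytes]]) -> bytes:
    """Reorder frames by sequence number and rejoin the payloads."""
    return b"".join(payload for _, payload in sorted(frames))

msg = b"The transport layer ensures in-sequence delivery."
frames = segment(msg, mtu=8)
random.shuffle(frames)      # simulate out-of-order delivery by the network
assert reassemble(frames) == msg
```

The sequence number in each frame header is what lets the receiving transport layer restore the original order before handing the message up.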
End-to-end layers
Unlike the lower "subnet" layers whose protocol is between immediately adjacent nodes, the transport layer and the layers above are true "source to destination" or end-to-end layers, and are not concerned with the details of the underlying communications facility. Transport layer software (and software above it) on the source station carries on a conversation with similar software on the destination station by using message headers and control messages.
SESSION LAYER
The session layer allows session establishment between processes running on different stations. It provides:
Session establishment, maintenance and termination: allows two application processes on different machines to establish, use and terminate a connection, called a session.
Session support: performs the functions that allow these processes to communicate over the network, performing security, name recognition, logging, and so on.
PRESENTATION LAYER
The presentation layer formats the data to be presented to the application layer. It can be viewed as the translator for the network. This layer may translate data from a format used by the application layer into a common format at the sending station, then translate the common format to a format known to the application layer at the receiving station.
The presentation layer provides:
Character code translation: for example, ASCII to EBCDIC.
Data conversion: bit order, CR-CR/LF, integer-floating point, and so on.
Data compression: reduces the number of bits that need to be transmitted on the network.
Data encryption: encrypt data for security purposes. For example, password encryption.
APPLICATION LAYER
The application layer serves as the window for users and application processes to access network services. This layer contains a variety of commonly needed functions:
Resource sharing and device redirection
Remote file access
Remote printer access
Inter-process communication
Network management
Directory services
Electronic messaging (such as mail)
Network virtual terminals
The following were incorrect answers:
Application Layer - The application layer serves as the window for users and application processes to access network services.
Network layer - The network layer controls the operation of the subnet, deciding which physical path the data should take based on network conditions, priority of service, and other factors.
Physical Layer - The physical layer, the lowest layer of the OSI model, is concerned with the transmission and reception of the unstructured raw bit stream over a physical medium. It describes the electrical/optical, mechanical, and functional interfaces to the physical medium, and carries the signals for all of the higher layers.
The following reference(s) were/was used to create this question:
CISA review manual 2014 Page number 260
and
Official ISC2 guide to CISSP CBK 3rd Edition Page number 287
and
http://en.wikipedia.org/wiki/Tcp_protocol
In the UTP category rating, the tighter the wind:
the higher the rating and its resistance against interference and crosstalk.
the slower the rating and its resistance against interference and attenuation.
the shorter the rating and its resistance against interference and attenuation.
the longer the rating and its resistance against interference and attenuation.
The category rating is based on how tightly the copper cable is wound within the shielding: The tighter the wind, the higher the rating and its resistance against interference and crosstalk.
Twisted pair copper cabling is a form of wiring in which two conductors are wound together for the purposes of canceling out electromagnetic interference (EMI) from external sources and crosstalk from neighboring wires. Twisting wires decreases interference because the loop area between the wires (which determines the magnetic coupling into the signal) is reduced. In balanced pair operation, the two wires typically carry equal and opposite signals (differential mode) which are combined by subtraction at the destination. The noise from the two wires cancel each other in this subtraction because the two wires have been exposed to similar EMI.
The twist rate (usually defined in twists per metre) makes up part of the specification for a given type of cable. The greater the number of twists, the greater the attenuation of crosstalk. Where pairs are not twisted, as in most residential interior telephone wiring, one member of the pair may be closer to the source than the other, and thus exposed to slightly different induced EMF.
Encapsulating Security Payload (ESP) provides some of the services of Authentication Headers (AH), but it is primarily designed to provide:
Confidentiality
Cryptography
Digital signatures
Access Control
Source: TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 2, 2001, CRC Press, NY, page 164.
What type of attack involves IP spoofing, ICMP ECHO and a bounce site?
IP spoofing attack
Teardrop attack
SYN attack
Smurf attack
A smurf attack occurs when an attacker sends a spoofed (IP spoofing) PING (ICMP ECHO) packet to the broadcast address of a large network (the bounce site). Because the modified packet contains the address of the target system as its source, all devices on the bounce site's local network respond with an ICMP REPLY to the target system, which is then saturated with those replies. An IP spoofing attack is used to convince a system that it is communicating with a known entity that gives an intruder access. It involves modifying the source address of a packet to that of a trusted source. A teardrop attack consists of modifying the length and fragmentation offset fields in sequential IP packets so the target system becomes confused and crashes after it receives contradictory instructions on how the fragments are offset on these packets. A SYN attack is when an attacker floods a system with connection requests but does not respond when the target system replies to those requests.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 76).
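The amplification arithmetic behind a smurf attack can be sketched as a toy calculation (not attack code; the host count and packet size below are illustrative assumptions):

```python
def smurf_amplification(hosts_on_bounce_net: int, request_bytes: int = 64) -> int:
    """Estimate the traffic (in bytes) hitting the victim for ONE spoofed
    ICMP ECHO request sent to a bounce network's broadcast address.

    Every responding host sends an ICMP ECHO REPLY of roughly the same size
    as the request, so the attack multiplies the attacker's bandwidth by
    the number of responding hosts on the bounce site."""
    reply_bytes = request_bytes          # echo replies mirror the request payload
    return hosts_on_bounce_net * reply_bytes

# One 64-byte ping bounced off a network with 200 live hosts
# yields roughly 12,800 bytes aimed at the victim.
print(smurf_amplification(200))  # -> 12800
```

This is why the defense is to disable directed-broadcast forwarding at the router: it removes the multiplier.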
Which of the following is unlike the other three choices presented?
El Gamal
Teardrop
Buffer Overflow
Smurf
El Gamal is a public key cryptographic algorithm; Teardrop, Buffer Overflow and Smurf are all attacks.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, pages 76, 157.
A group of independent servers, which are managed as a single system, that provides higher availability, easier manageability, and greater scalability is:
server cluster
client cluster
guest cluster
host cluster
A server cluster is a group of independent servers, which are managed as a single system, that provides higher availability, easier manageability, and greater scalability.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 67.
Why is traffic across a packet switched network difficult to monitor?
Packets are link encrypted by the carrier
Government regulations forbid monitoring
Packets can take multiple paths when transmitted
The network factor is too high
With a packet switched network, packets are difficult to monitor because they can be transmitted using different paths.
A packet-switched network is a digital communications network that groups all transmitted data, irrespective of content, type, or structure into suitably sized blocks, called packets. The network over which packets are transmitted is a shared network which routes each packet independently from all others and allocates transmission resources as needed.
The principal goals of packet switching are to optimize utilization of available link capacity, minimize response times and increase the robustness of communication. When traversing network adapters, switches and other network nodes, packets are buffered and queued, resulting in variable delay and throughput, depending on the traffic load in the network.
Most modern Wide Area Network (WAN) protocols, including TCP/IP, X.25, and Frame Relay, are based on packet-switching technologies. In contrast, normal telephone service is based on a circuit-switching technology, in which a dedicated line is allocated for transmission between two parties. Circuit-switching is ideal when data must be transmitted quickly and must arrive in the same order in which it's sent. This is the case with most real-time data, such as live audio and video. Packet switching is more efficient and robust for data that can withstand some delays in transmission, such as e-mail messages and Web pages.
All of the other answers are wrong.
Reference(s) used for this question:
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
and
https://en.wikipedia.org/wiki/Packet-switched_network
and
http://www.webopedia.com/TERM/P/packet_switching.html
In stateful inspection firewalls, packets are:
Inspected at only one layer of the Open System Interconnection (OSI) model
Inspected at all Open System Interconnection (OSI) layers
Decapsulated at all Open Systems Interconnect (OSI) layers.
Encapsulated at all Open Systems Interconnect (OSI) layers.
Many times when a connection is opened, the firewall will inspect all layers of the packet. While this inspection is scaled back for subsequent packets to improve performance, this is the best of the four answers.
When packet filtering is used, a packet arrives at the firewall, and it runs through its ACLs to determine whether this packet should be allowed or denied. If the packet is allowed, it is passed on to the destination host, or to another network device, and the packet filtering device forgets about the packet. This is different from stateful inspection, which remembers and keeps track of what packets went where until each particular connection is closed. A stateful firewall is like a nosy neighbor who gets into people’s business and conversations. She keeps track of the suspicious cars that come into the neighborhood, who is out of town for the week, and the postman who stays a little too long at the neighbor lady’s house. This can be annoying until your house is burglarized. Then you and the police will want to talk to the nosy neighbor, because she knows everything going on in the neighborhood and would be the one most likely to know something unusual happened.
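The tracking behaviour described above can be sketched as a toy connection table (a minimal illustration; the rule base, addresses, and 5-tuple handling are simplified assumptions, and a real stateful firewall also tracks TCP flags and timeouts):

```python
# Minimal sketch of stateful inspection vs. stateless packet filtering.
# A 4-tuple identifies a connection here; once the first packet of a flow
# passes the rule base, later packets (in either direction) match the
# state table directly instead of re-running the ACLs.

ALLOWED_DESTS = {("10.0.0.5", 443)}          # hypothetical rule base
state_table = set()                           # remembered connections

def allow(packet: dict) -> bool:
    key = (packet["src"], packet["sport"], packet["dst"], packet["dport"])
    reverse = (packet["dst"], packet["dport"], packet["src"], packet["sport"])
    if key in state_table or reverse in state_table:
        return True                           # part of an existing connection
    if (packet["dst"], packet["dport"]) in ALLOWED_DESTS:
        state_table.add(key)                  # remember the new connection
        return True
    return False                              # unsolicited packet: drop

syn = {"src": "192.0.2.1", "sport": 5000, "dst": "10.0.0.5", "dport": 443}
reply = {"src": "10.0.0.5", "sport": 443, "dst": "192.0.2.1", "dport": 5000}
print(allow(syn), allow(reply))   # the reply passes only because of the state table
```

A pure packet filter, by contrast, would have to match the reply against its ACLs from scratch, with no memory of the outbound request.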
"Inspected at only one layer of the Open Systems Interconnection (OSI) model" is incorrect. To perform stateful packet inspection, the firewall must consider at least the network and transport layers.
"Decapsulated at all Open Systems Interconnection (OSI) layers" is incorrect. The headers are not stripped ("decapsulated") and are passed through in their entirety IF the packet is passed.
"Encapsulated at all Open Systems Interconnection (OSI) layers" is incorrect. Encapsulation refers to the adding of a layer's header/trailer to the information received from the level above. This is done when the packet is assembled, not at the firewall.
Reference(s) used for this question:
CBK, p. 466
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (pp. 632-633). McGraw-Hill. Kindle Edition.
What attack involves the perpetrator sending spoofed packet(s) which contain the same destination and source IP address as the remote host, the same port for the source and destination, have the SYN flag set, and target any ports that are open on the remote host?
Boink attack
Land attack
Teardrop attack
Smurf attack
The Land attack involves the perpetrator sending spoofed packet(s) with the SYN flag set to the victim's machine on any open port that is listening. The packet(s) contain the same destination and source IP address as the host, causing the victim's machine to reply to itself repeatedly. In addition, most systems experience a total freeze up, in which CTRL-ALT-DELETE fails to work, the mouse and keyboard become non-operational, and the only method of correction is to reboot via a reset button on the system or by turning the machine off.
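As a defensive illustration, the Land signature described above can be checked with a few field comparisons (a hedged sketch; the packet is modelled as a plain dictionary rather than a parsed frame):

```python
def is_land_packet(pkt: dict) -> bool:
    """Flag the signature of a Land attack: a SYN packet whose source and
    destination IP address AND port are identical, i.e. a packet the
    victim appears to have sent to itself."""
    return (pkt["flags"] == "SYN"
            and pkt["src_ip"] == pkt["dst_ip"]
            and pkt["src_port"] == pkt["dst_port"])

crafted = {"src_ip": "10.0.0.7", "dst_ip": "10.0.0.7",
           "src_port": 139, "dst_port": 139, "flags": "SYN"}
print(is_land_packet(crafted))  # -> True
```

Modern firewalls and operating systems drop such self-addressed packets by default, which is exactly this check applied at ingress.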
The Boink attack, a modified version of the original Teardrop and Bonk exploit programs, is very similar to the Bonk attack, in that it involves the perpetrator sending corrupt UDP packets to the host. It however allows the attacker to attack multiple ports where Bonk was mainly directed to port 53 (DNS).
The Teardrop attack involves the perpetrator sending overlapping packets to the victim, when their machine attempts to re-construct the packets the victim's machine hangs.
A Smurf attack is a network-level attack against hosts where a perpetrator sends a large amount of ICMP echo (ping) traffic at broadcast addresses, all of it having a spoofed source address of a victim. If the routing device delivering traffic to those broadcast addresses performs the IP broadcast to layer 2 broadcast function, most hosts on that IP network will take the ICMP echo request and reply to it with an echo reply each, multiplying the traffic by the number of hosts responding. On a multi-access broadcast network, there could potentially be hundreds of machines to reply to each packet.
Resources:
http://en.wikipedia.org/wiki/Denial-of-service_attack
http://en.wikipedia.org/wiki/LAND
While using IPsec, the ESP and AH protocols both provide integrity services. However, when using AH, some special attention needs to be paid if one of the peers uses NAT for address translation. Which of the items below would affect the use of AH and its Integrity Check Value (ICV) the most?
Key session exchange
Packet Header Source or Destination address
VPN cryptographic key size
Cryptographic algorithm used
It may seem odd to have two different protocols that provide overlapping functionality.
AH provides authentication and integrity, and ESP can provide those two functions and confidentiality.
Why even bother with AH then?
In most cases, the reason has to do with whether the environment is using network address translation (NAT). IPSec will generate an integrity check value (ICV), which is really the same thing as a MAC value, over a portion of the packet. Remember that the sender and receiver generate their own values. In IPSec, it is called an ICV value. The receiver compares her ICV value with the one sent by the sender. If the values match, the receiver can be assured the packet has not been modified during transmission. If the values are different, the packet has been altered and the receiver discards the packet.
The AH protocol calculates this ICV over the data payload, transport, and network headers. If the packet then goes through a NAT device, the NAT device changes the IP address of the packet. That is its job. This means a portion of the data (network header) that was included to calculate the ICV value has now changed, and the receiver will generate an ICV value that is different from the one sent with the packet, which means the packet will be discarded automatically.
The ESP protocol follows similar steps, except it does not include the network header portion when calculating its ICV value. When the NAT device changes the IP address, it will not affect the receiver’s ICV value because it does not include the network header when calculating the ICV.
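The effect of NAT on the two ICVs can be illustrated with a small HMAC sketch (illustrative only: real AH and ESP ICVs cover more fields and use negotiated algorithms; the key and header strings here are made-up placeholders):

```python
import hmac
import hashlib

KEY = b"shared-secret"  # hypothetical SA key agreed during the handshake

def icv(data: bytes) -> bytes:
    """Compute an integrity check value (a MAC) over the given bytes."""
    return hmac.new(KEY, data, hashlib.sha256).digest()

ip_header = b"src=198.51.100.4"
payload = b"application data"

# AH covers the network header; ESP's ICV does not.
ah_icv = icv(ip_header + payload)
esp_icv = icv(payload)

# A NAT device rewrites the source address in transit.
nat_header = b"src=203.0.113.9"

print(hmac.compare_digest(icv(nat_header + payload), ah_icv))  # False: AH check fails after NAT
print(hmac.compare_digest(icv(payload), esp_icv))              # True: ESP check survives NAT
```

Because the receiver recomputes the ICV over the translated header, every AH-protected packet that crossed the NAT device is discarded, which is exactly the problem the question targets.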
Here is a tutorial on IPSEC from the Shon Harris Blog:
The Internet Protocol Security (IPSec) protocol suite provides a method of setting up a secure channel for protected data exchange between two devices. The devices that share this secure channel can be two servers, two routers, a workstation and a server, or two gateways between different networks. IPSec is a widely accepted standard for providing network layer protection. It can be more flexible and less expensive than end-to end and link encryption methods.
IPSec has strong encryption and authentication methods, and although it can be used to enable tunneled communication between two computers, it is usually employed to establish virtual private networks (VPNs) among networks across the Internet.
IPSec is not a strict protocol that dictates the type of algorithm, keys, and authentication method to use. Rather, it is an open, modular framework that provides a lot of flexibility for companies when they choose to use this type of technology. IPSec uses two basic security protocols: Authentication Header (AH) and Encapsulating Security Payload (ESP). AH is the authenticating protocol, and ESP is an authenticating and encrypting protocol that uses cryptographic mechanisms to provide source authentication, confidentiality, and message integrity.
IPSec can work in one of two modes: transport mode, in which the payload of the message is protected, and tunnel mode, in which the payload and the routing and header information are protected. ESP in transport mode encrypts the actual message information so it cannot be sniffed and uncovered by an unauthorized entity. Tunnel mode provides a higher level of protection by also protecting the header and trailer data an attacker may find useful. Figure 8-26 shows the high-level view of the steps of setting up an IPSec connection.
Each device will have at least one security association (SA) for each VPN it uses. The SA, which is critical to the IPSec architecture, is a record of the configurations the device needs to support an IPSec connection. When two devices complete their handshaking process, which means they have agreed upon a long list of parameters they will use to communicate, these data must be recorded and stored somewhere, which is in the SA.
The SA can contain the authentication and encryption keys, the agreed-upon algorithms, the key lifetime, and the source IP address. When a device receives a packet via the IPSec protocol, it is the SA that tells the device what to do with the packet. So if device B receives a packet from device C via IPSec, device B will look to the corresponding SA to tell it how to decrypt the packet, how to properly authenticate the source of the packet, which key to use, and how to reply to the message if necessary.
SAs are directional, so a device will have one SA for outbound traffic and a different SA for inbound traffic for each individual communication channel. If a device is connecting to three devices, it will have at least six SAs, one for each inbound and outbound connection per remote device. So how can a device keep all of these SAs organized and ensure that the right SA is invoked for the right connection? With the mighty security parameter index (SPI), that’s how. Each device has an SPI that keeps track of the different SAs and tells the device which one is appropriate to invoke for the different packets it receives. The SPI value is in the header of an IPSec packet, and the device reads this value to tell it which SA to consult.
IPSec can authenticate the sending devices of the packet by using MAC (covered in the earlier section, “The One-Way Hash”). The ESP protocol can provide authentication, integrity, and confidentiality if the devices are configured for this type of functionality.
So if a company just needs to make sure it knows the source of the sender and must be assured of the integrity of the packets, it would choose to use AH. If the company would like to use these services and also have confidentiality, it would use the ESP protocol because it provides encryption functionality. In most cases, the reason ESP is employed is because the company must set up a secure VPN connection.
Because IPSec is a framework, it does not dictate which hashing and encryption algorithms are to be used or how keys are to be exchanged between devices. Key management can be handled manually or automated by a key management protocol. The de facto standard for IPSec is to use Internet Key Exchange (IKE), which is a combination of the ISAKMP and OAKLEY protocols. The Internet Security Association and Key Management Protocol (ISAKMP) is a key exchange architecture that is independent of the type of keying mechanisms used. Basically, ISAKMP provides the framework of what can be negotiated to set up an IPSec connection (algorithms, protocols, modes, keys). The OAKLEY protocol is the one that carries out the negotiation process. You can think of ISAKMP as providing the playing field (the infrastructure) and OAKLEY as the guy running up and down the playing field (carrying out the steps of the negotiation).
IPSec is very complex with all of its components and possible configurations. This complexity is what provides for a great degree of flexibility, because a company has many different configuration choices to achieve just the right level of protection. If this is all new to you and still confusing, please review one or more of the following references to help fill in the gray areas.
The following answers are incorrect:
The other options are distractors.
The following reference(s) were/was used to create this question:
Shon Harris, CISSP All-in-One Exam Guide, fifth edition, page 759
and
https://neodean.wordpress.com/tag/security-protocol/
Which one of the following is used to provide authentication and confidentiality for e-mail messages?
Digital signature
PGP
IPSEC AH
MD4
Instead of using a Certificate Authority, PGP uses a "Web of Trust", where users can certify each other in a mesh model, which is best applied to smaller groups.
In cryptography, a web of trust is a concept used in PGP, GnuPG, and other OpenPGP compatible systems to establish the authenticity of the binding between a public key and its owner. Its decentralized trust model is an alternative to the centralized trust model of a public key infrastructure (PKI), which relies exclusively on a certificate authority (or a hierarchy of such). The web of trust concept was first put forth by PGP creator Phil Zimmermann in 1992 in the manual for PGP version 2.0.
Pretty Good Privacy (PGP) is a data encryption and decryption computer program that provides cryptographic privacy and authentication for data communication. PGP is often used for signing, encrypting and decrypting texts, E-mails, files, directories and whole disk partitions to increase the security of e-mail communications. It was created by Phil Zimmermann in 1991.
As per Shon Harris's book:
Pretty Good Privacy (PGP) was designed by Phil Zimmermann as a freeware e-mail security program and was released in 1991. It was the first widespread public key encryption program. PGP is a complete cryptosystem that uses cryptographic protection to protect e-mail and files. It can use RSA public key encryption for key management and the IDEA symmetric cipher for bulk encryption of data, although the user has the option of picking different types of algorithms for these functions. PGP can provide confidentiality by using the IDEA encryption algorithm, integrity by using the MD5 hashing algorithm, authentication by using public key certificates, and nonrepudiation by using cryptographically signed messages. PGP initially used its own type of digital certificates rather than what is used in PKI, but they both have similar purposes. Today PGP supports X.509 V3 digital certificates.
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 169).
Shon Harris, CISSP All in One book
https://en.wikipedia.org/wiki/Pretty_Good_Privacy
TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
In an SSL session between a client and a server, who is responsible for generating the master secret that will be used as a seed to generate the symmetric keys used during the session?
Both client and server
The client's browser
The web server
The merchant's Certificate Server
Once the merchant server has been authenticated by the browser client, the browser generates a master secret that is to be shared only between the server and client. This secret serves as a seed to generate the session (private) keys. The master secret is then encrypted with the merchant's public key and sent to the server. The fact that the master secret is generated by the client's browser provides the client assurance that the server is not reusing keys that would have been used in a previous session with another client.
Source: ANDRESS, Mandy, Exam Cram CISSP, Coriolis, 2001, Chapter 6: Cryptography (page 112).
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2001, page 569.
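A simplified sketch of the idea (this is not the real TLS PRF; the labels and key lengths are illustrative assumptions, and real TLS also mixes in both parties' random values):

```python
import hmac
import hashlib
import os

# The client picks the master secret; both sides then expand it into the
# symmetric session keys with an HMAC-based derivation.
master_secret = os.urandom(48)          # generated by the client's browser

def derive(secret: bytes, label: bytes, length: int = 32) -> bytes:
    """Derive a session key of `length` bytes from the shared secret."""
    return hmac.new(secret, label, hashlib.sha256).digest()[:length]

client_write_key = derive(master_secret, b"client write key")
server_write_key = derive(master_secret, b"server write key")

# After the client sends the master secret (encrypted under the server's
# public key), the server derives the very same keys independently:
assert derive(master_secret, b"client write key") == client_write_key
```

Because the secret is fresh per session and chosen by the client, the client is assured the server cannot be replaying keys from an earlier session with a different client.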
Which of the following offers security to wireless communications?
S-WAP
WTLS
WSP
WDP
Wireless Transport Layer Security (WTLS) is a communication protocol that allows wireless devices to send and receive encrypted information over the Internet. S-WAP is not defined. WSP (Wireless Session Protocol) and WDP (Wireless Datagram Protocol) are part of the Wireless Application Protocol (WAP) stack.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 4: Cryptography (page 173).
Which of the following is the biggest concern with firewall security?
Internal hackers
Complex configuration rules leading to misconfiguration
Buffer overflows
Distributed denial of service (DDOS) attacks
Firewalls tend to give a false sense of security. They can be very hard to bypass, but they need to be properly configured. The complexity of configuration rules can introduce a vulnerability when the person responsible for the configuration does not fully understand all possible options and switches. Denial of service attacks mainly concern availability.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 3: Telecommunications and Network Security (page 412).
Which of the following is an IP address that is private (i.e. reserved for internal networks, and not a valid address to use on the Internet)?
192.168.42.5
192.166.42.5
192.175.42.5
192.1.42.5
This is a valid Class C reserved address. For Class C, the reserved addresses are 192.168.0.0 - 192.168.255.255.
The private IP address ranges are defined within RFC 1918:
10.0.0.0 - 10.255.255.255 (one Class A network)
172.16.0.0 - 172.31.255.255 (16 contiguous Class B networks)
192.168.0.0 - 192.168.255.255 (256 contiguous Class C networks)
The following answers are incorrect:
192.166.42.5 Is incorrect because it is not a Class C reserved address.
192.175.42.5 Is incorrect because it is not a Class C reserved address.
192.1.42.5 Is incorrect because it is not a Class C reserved address.
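The answer can be verified with Python's standard ipaddress module, whose is_private check covers the RFC 1918 ranges:

```python
import ipaddress

# Check each answer option against the reserved (private) address ranges.
for addr in ["192.168.42.5", "192.166.42.5", "192.175.42.5", "192.1.42.5"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# Only 192.168.42.5 falls inside 192.168.0.0/16 and prints True.
```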
What is the maximum length of cable that can be used for a twisted-pair, Category 5 10Base-T cable?
80 meters
100 meters
185 meters
500 meters
As a signal travels through a medium, it attenuates (loses strength) and at some point will become indistinguishable from noise. To assure trouble-free communication, maximum cable lengths are set between nodes to assure that attenuation will not cause a problem. The maximum CAT-5 UTP cable length between two nodes for 10BASE-T is 100 meters.
The following answers are incorrect:
80 meters. It is only a distractor.
185 meters. Is incorrect because it is the maximum length for 10Base-2
500 meters. Is incorrect because it is the maximum length for 10Base-5
The controls that usually require a human to evaluate the input from sensors or cameras to determine if a real threat exists are associated with:
Preventive/physical
Detective/technical
Detective/physical
Detective/administrative
Detective/physical controls usually require a human to evaluate the input from sensors or cameras to determine if a real threat exists.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 36.
Access control in which a central authority determines what subjects can have access to certain objects based on the organizational security policy is called:
Mandatory Access Control
Discretionary Access Control
Non-Discretionary Access Control
Rule-based Access control
A central authority determines what subjects can have access to certain objects based on the organizational security policy.
The key focal point of this question is the 'central authority' that determines access rights.
Cecilia, one of the quiz users, has sent me feedback informing me that NIST defines MAC as: "MAC Policy means that Access Control Policy Decisions are made by a CENTRAL AUTHORITY", which seems to indicate there could be two good answers to this question.
However, if you read the NISTIR document mentioned in the references below, it is also mentioned that MAC is the most mentioned NDAC policy. So MAC is a form of NDAC policy.
Within the same document it is also mentioned: "In general, all access control policies other than DAC are grouped in the category of non- discretionary access control (NDAC). As the name implies, policies in this category have rules that are not established at the discretion of the user. Non-discretionary policies establish controls that cannot be changed by users, but only through administrative action."
Under NDAC you have two choices:
Rule Based Access control and Role Base Access Control
MAC is implemented using RULES, which makes it fall under Rule-Based Access Control, which is a form of NDAC. It is a subset of NDAC.
This question is representative of what you can expect on the real exam, where you have more than one choice that seems to be right. However, you have to look closely to see whether one of the choices is higher level or whether one of the choices falls under another choice. In this case NDAC is the better choice because MAC falls under NDAC through the use of Rule-Based Access Control.
The following are incorrect answers:
MANDATORY ACCESS CONTROL
In Mandatory Access Control the labels of the object and the clearance of the subject determines access rights, not a central authority. Although a central authority (Better known as the Data Owner) assigns the label to the object, the system does the determination of access rights automatically by comparing the Object label with the Subject clearance. The subject clearance MUST dominate (be equal or higher) than the object being accessed.
The need for a MAC mechanism arises when the security policy of a system dictates that:
1. Protection decisions must not be decided by the object owner.
2. The system must enforce the protection decisions (i.e., the system enforces the security policy over the wishes or intentions of the object owner).
Usually a labeling mechanism and a set of interfaces are used to determine access based on the MAC policy; for example, a user who is running a process at the Secret classification should not be allowed to read a file with a label of Top Secret. This is known as the “simple security rule,” or “no read up.”
Conversely, a user who is running a process with a label of Secret should not be allowed to write to a file with a label of Confidential. This rule is called the “*-property” (pronounced “star property”) or “no write down.” The *-property is required to maintain system security in an automated environment.
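The two rules above can be sketched in a few lines (a toy model of the checks only; the numeric level mapping is an illustrative assumption):

```python
# Sketch of the two MAC rules from the text: "no read up" (the simple
# security rule) and "no write down" (the *-property).
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # Simple security rule: the subject's clearance must dominate
    # (be equal to or higher than) the object's label.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    # *-property: the subject may not write below its own level.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

print(can_read("Secret", "Top Secret"))     # False: no read up
print(can_write("Secret", "Confidential"))  # False: no write down
```

Note that the system applies these comparisons automatically; neither the subject nor the object owner can override them, which is the defining trait of MAC.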
DISCRETIONARY ACCESS CONTROL
In Discretionary Access Control the rights are determined by many different entities, each of the persons who have created files and they are the owner of that file, not one central authority.
DAC leaves a certain amount of access control to the discretion of the object's owner or anyone else who is authorized to control the object's access. For example, it is generally used to limit a user's access to a file; it is the owner of the file who controls other users' accesses to the file. Only those users specified by the owner may have some combination of read, write, execute, and other permissions to the file.
DAC policy tends to be very flexible and is widely used in the commercial and government sectors. However, DAC is known to be inherently weak for two reasons:
First, granting read access is transitive; for example, when Ann grants Bob read access to a file, nothing stops Bob from copying the contents of Ann’s file to an object that Bob controls. Bob may now grant any other user access to the copy of Ann’s file without Ann’s knowledge.
Second, DAC policy is vulnerable to Trojan horse attacks. Because programs inherit the identity of the invoking user, Bob may, for example, write a program for Ann that, on the surface, performs some useful function, while at the same time destroys the contents of Ann’s files. When investigating the problem, the audit files would indicate that Ann destroyed her own files. Thus, formally, the drawbacks of DAC are as follows:
Discretionary Access Control (DAC) Information can be copied from one object to another; therefore, there is no real assurance on the flow of information in a system.
No restrictions apply to the usage of information when the user has received it.
The privileges for accessing objects are decided by the owner of the object, rather than through a system-wide policy that reflects the organization’s security requirements.
ACLs and owner/group/other access control mechanisms are by far the most common mechanism for implementing DAC policies. Other mechanisms, even though not designed with DAC in mind, may have the capabilities to implement a DAC policy.
RULE BASED ACCESS CONTROL
In Rule-based Access Control a central authority could in fact determine what subjects can have access when assigning the rules for access. However, the rules actually determine the access and so this is not the most correct answer.
RuBAC (as opposed to RBAC, role-based access control) allows users to access systems and information based on predetermined and configured rules. It is important to note that there is no commonly understood definition or formally defined standard for rule-based access control as there is for DAC, MAC, and RBAC. "Rule-based access" is a generic term applied to systems that allow some form of organization-defined rules, and therefore rule-based access control encompasses a broad range of systems. RuBAC may in fact be combined with other models, particularly RBAC or DAC. A RuBAC system intercepts every access request and compares the rules with the rights of the user to make an access decision. Most rule-based access control relies on a security label system, which dynamically composes a set of rules defined by a security policy. Security labels are attached to all objects, including files, directories, and devices. Sometimes roles are assigned to subjects as well (based on their attributes). RuBAC meets the business needs as well as the technical needs of controlling service access. It allows business rules to be applied to access control; for example, customers who have overdue balances may be denied service access. As a mechanism for MAC, rules of RuBAC cannot be changed by users. The rules can be established by any attributes of a system related to the users, such as domain, host, protocol, network, or IP addresses. For example, suppose that a user wants to access an object in another network on the other side of a router. The router employs RuBAC with the rule composed by the network addresses, domain, and protocol to decide whether or not the user can be granted access. If employees change their roles within the organization, their existing authentication credentials remain in effect and do not need to be reconfigured. Using rules in conjunction with roles adds greater flexibility because rules can be applied to people as well as to devices.
Rule-based access control can be combined with role-based access control, such that the role of a user is one of the attributes in rule setting. Some provisions of access control systems have rule-based policy engines in addition to a role-based policy engine and certain implemented dynamic policies [Des03]. For example, suppose that two of the primary types of software users are product engineers and quality engineers. Both groups usually have access to the same data, but they have different roles to perform in relation to the data and the application's function. In addition, individuals within each group have different job responsibilities that may be identified using several types of attributes such as developing programs and testing areas. Thus, the access decisions can be made in real time by a scripted policy that regulates the access between the groups of product engineers and quality engineers, and each individual within these groups. Rules can either replace or complement role-based access control. However, the creation of rules and security policies is also a complex process, so each organization will need to strike the appropriate balance.
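As a hedged sketch of the intercept-and-compare behavior described above, a RuBAC engine can be modeled as an ordered rule list with an implicit deny, exactly like a firewall. The attribute names and addresses below are invented for illustration, not from any standard:

```python
# Minimal rule-based access control (RuBAC) sketch: every request is
# intercepted and compared against organization-defined rules.
import ipaddress

RULES = [
    # Each rule matches on request attributes and carries a decision.
    {"src_network": "10.0.0.0/8", "protocol": "ssh", "allow": True},
    {"src_network": "0.0.0.0/0", "protocol": "telnet", "allow": False},
]

def check_access(src_ip: str, protocol: str) -> bool:
    """Return the decision of the first matching rule; deny by default."""
    for rule in RULES:
        net = ipaddress.ip_network(rule["src_network"])
        if ipaddress.ip_address(src_ip) in net and protocol == rule["protocol"]:
            return rule["allow"]
    return False  # implicit deny, as in a firewall

print(check_access("10.1.2.3", "ssh"))       # True
print(check_access("192.168.1.5", "telnet")) # False
```

Note the design choice: the rules reference system attributes (network, protocol), never user identity, which is what distinguishes this from DAC or RBAC.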
References used for this question:
http://csrc.nist.gov/publications/nistir/7316/NISTIR-7316.pdf
and
AIO v3 p162-167 and OIG (2007) p.186-191
also
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 33.
Which of the following is not a service provided by AAA servers (Radius, TACACS and DIAMETER)?
Authentication
Administration
Accounting
Authorization
Radius, TACACS and DIAMETER are classified as authentication, authorization, and accounting (AAA) servers.
Source: TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 2, 2001, CRC Press, NY, Page 33.
also see:
The term "AAA" is often used, describing cornerstone concepts [of the AIC triad] Authentication, Authorization, and Accountability. Left out of the AAA acronym is Identification which is required before the three "A's" can follow. Identity is a claim, Authentication proves an identity, Authorization describes the action you can perform on a system once you have been identified and authenticated, and accountability holds users accountable for their actions.
What kind of certificate is used to validate a user identity?
Public key certificate
Attribute certificate
Root certificate
Code signing certificate
In cryptography, a public key certificate (or identity certificate) is an electronic document which incorporates a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.
In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.
In computer security, an authorization certificate (also known as an attribute certificate) is a digital document that describes a written permission from the issuer to use a service or a resource that the issuer controls or has access to use. The permission can be delegated.
Some people constantly confuse PKCs and ACs. An analogy may make the distinction clear. A PKC can be considered to be like a passport: it identifies the holder, tends to last for a long time, and should not be trivial to obtain. An AC is more like an entry visa: it is typically issued by a different authority and does not last for as long a time. As acquiring an entry visa typically requires presenting a passport, getting a visa can be a simpler process.
A real-life example of this can be found in mobile software deployments by large service providers, typically applied to platforms such as Microsoft Smartphone (and related), Symbian OS, J2ME, and others.
In each of these systems a mobile communications service provider may customize the mobile terminal client distribution (i.e., the mobile phone operating system or application environment) to include one or more root certificates, each associated with a set of capabilities or permissions such as "update firmware", "access address book", "use radio interface", and the most basic one, "install and execute". When developers wish to enable distribution and execution in one of these controlled environments, they must acquire a certificate from an appropriate CA, typically a large commercial CA, and in the process they usually have their identity verified using out-of-band mechanisms such as a combination of phone call, validation of their legal entity through government and commercial databases, etc., similar to the high-assurance SSL certificate vetting process, though often there are additional specific requirements imposed on would-be developers/publishers.
Once the identity has been validated, they are issued an identity certificate they can use to sign their software. Generally the software signed by the developer's or publisher's identity certificate is not distributed directly; rather, it is submitted to a processor to possibly test or profile the content before generating an authorization certificate which is unique to the particular software release. That certificate is then used with an ephemeral asymmetric key-pair to sign the software as the last step of preparation for distribution. There are many advantages to separating the identity and authorization certificates, especially relating to risk mitigation of new content being accepted into the system, key management, and recovery from errant software which can be used as an attack vector.
References:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 2001, McGraw-Hill/Osborne, page 540.
http://en.wikipedia.org/wiki/Attribute_certificate
http://en.wikipedia.org/wiki/Public_key_certificate
What is the Biba security model concerned with?
Confidentiality
Reliability
Availability
Integrity
The Biba security model addresses the integrity of data being threatened when subjects at lower security levels are able to write to objects at higher security levels and when subjects can read data at lower levels.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 5: Security Models and Architecture (Page 244).
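Biba's two rules can be sketched with integer integrity levels (higher = more trusted). This is a hedged illustration only; real implementations use labels, and the level values here are invented:

```python
# Biba integrity model sketch: information may flow only from higher
# integrity to lower integrity, never the other way around.

def can_read(subject_level: int, object_level: int) -> bool:
    """Simple integrity axiom: no read down. A subject may not read
    lower-integrity data that could contaminate its work."""
    return object_level >= subject_level

def can_write(subject_level: int, object_level: int) -> bool:
    """* (star) integrity axiom: no write up. A subject may not
    corrupt data of higher integrity than its own."""
    return object_level <= subject_level

print(can_write(2, 3))  # False: writing up is blocked
print(can_read(2, 1))   # False: reading down is blocked
```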
Which of the following is most appropriate to notify an internal user that session monitoring is being conducted?
Logon Banners
Wall poster
Employee Handbook
Written agreement
This is a tricky question; the keyword is "internal users".
There are two possible answers depending on how the question is presented: it could apply either to internal users or to any anonymous/external users.
Internal users should always have a written agreement first; logon banners then serve as a constant reminder.
Banners at logon time should be used to notify external users of any monitoring that is being conducted. A good banner gives you a better legal standing and makes it obvious who should access the system and who is authorized or unauthorized, so an unauthorized user is fully aware he is trespassing. For anonymous/external users, such as those logging into a web site, FTP server, or even a mail server, the only notification mechanism is the logon banner.
References used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 50.
and
Shon Harris, CISSP All-in-one, 5th edition, pg 873
What does the (star) property mean in the Bell-LaPadula model?
No write up
No read up
No write down
No read down
The (star) property of the Bell-LaPadula access control model states that writing of information by a subject at a higher level of sensitivity to an object at a lower level of sensitivity is not permitted (no write down).
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security Architectures and Models (page 202).
Also check out: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 5: Security Models and Architecture (page 242, 243).
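The Bell-LaPadula rules can be sketched with integer sensitivity levels (higher = more sensitive). A hedged illustration with invented level values, not a full implementation (real BLP labels also carry categories):

```python
# Bell-LaPadula confidentiality sketch: information may flow only
# upward in sensitivity, so reads go down and writes go up.

def simple_security(subject_clearance: int, object_level: int) -> bool:
    """ss-property: no read up. A subject may only read objects at
    or below its own clearance."""
    return subject_clearance >= object_level

def star_property(subject_clearance: int, object_level: int) -> bool:
    """* (star) property: no write down. A subject may only write to
    objects at or above its own clearance, so secrets cannot leak."""
    return object_level >= subject_clearance

print(star_property(2, 1))    # False: writing down is blocked
print(simple_security(2, 1))  # True: reading down is allowed
```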
What is called the use of technologies such as fingerprint, retina, and iris scans to authenticate the individuals requesting access to resources?
Micrometrics
Macrometrics
Biometrics
MicroBiometrics
Biometrics authenticates individuals based on unique physiological attributes (such as fingerprints, retina scans, and iris scans) or behavioral characteristics.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 35.
Which access control model would a lattice-based access control model be an example of?
Mandatory access control.
Discretionary access control.
Non-discretionary access control.
Rule-based access control.
In a lattice model, there are pairs of elements that have a least upper bound and a greatest lower bound of values. In a Mandatory Access Control (MAC) model, users and data owners do not have as much freedom to determine who can access files.
TIPS FROM CLEMENT
Mandatory Access Control is in place whenever you have permissions that are being imposed on the subject and the subject cannot arbitrarily change them. When the subject/owner of the file can change permissions at will, it is discretionary access control.
Here is a breakdown largely based on explanations provided by Doug Landoll. I am reproducing it below in my own words, not exactly how Doug explained it:
FIRST: The Lattice
A lattice is simply an access control tool usually used to implement Mandatory Access Control (MAC); it could also be used to implement RBAC, but this is not as common. The lattice model can be used for integrity levels or file permissions as well. The lattice has a least upper bound and a greatest lower bound. It makes use of pairs of elements, such as a subject's security clearance paired with an object's sensitivity label.
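As a hedged sketch, a lattice label can be modeled as a (level, categories) pair; "dominates", least upper bound, and greatest lower bound then look like this (the level numbers and category names are invented for illustration):

```python
# Lattice-based label sketch: a label is (level, set of categories).
# "a dominates b" means a's level is equal or higher AND a's
# categories are a superset of b's (the need-to-know check).

def dominates(a, b):
    level_a, cats_a = a
    level_b, cats_b = b
    return level_a >= level_b and cats_a >= cats_b  # set >= is superset

def least_upper_bound(a, b):
    """Smallest label dominating both: max level, union of categories."""
    return (max(a[0], b[0]), a[1] | b[1])

def greatest_lower_bound(a, b):
    """Largest label dominated by both: min level, intersection."""
    return (min(a[0], b[0]), a[1] & b[1])

secret_crypto = (2, {"crypto"})
secret_nuclear = (2, {"nuclear"})
print(dominates((3, {"crypto"}), secret_crypto))  # True
lub = least_upper_bound(secret_crypto, secret_nuclear)  # (2, both categories)
```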
SECOND: DAC (Discretionary Access Control)
Let's get into Discretionary Access Control: it is an access control method where the owner (read: the creator of the object) decides who has access at his own discretion. As we all know, users are sometimes insane. They will share their files with other users based on identity, but nothing prevents those users from further sharing the files with others on the network. Very quickly you lose control of the flow of information and of who has access to what. DAC is used in small and friendly environments where only a low level of security is required.
THIRD: MAC (Mandatory Access Control)
All of the following are forms of Mandatory Access Control:
Mandatory Access control (MAC) (Implemented using the lattice)
You must remember that MAC makes use of a security clearance for the subject, and labels are assigned to the objects. The clearance of the subject must dominate (be equal to or higher than) the classification of the object being accessed. The label attached to the object indicates its sensitivity level and the categories the object belongs to. The categories are used to implement need to know.
All of the following are forms of Non Discretionary Access Control:
Role Based Access Control (RBAC)
Rule Based Access Control (Think Firewall in this case)
The official ISC2 book says that RBAC (synonymous with Non Discretionary Access Control) is a form of DAC, but they are simply wrong. RBAC is a form of Non Discretionary Access Control. Non Discretionary DOES NOT equal Mandatory Access Control, as there are no labels or clearances involved.
I hope this clarifies the whole drama related to what is what in the world of access control.
In the same line of thought, you should be familiar with the difference between Explicit permissions (the user has his own profile) and Implicit permissions (the user inherits permissions by being a member of a role, for example).
The following answers are incorrect:
Discretionary access control. Is incorrect because in a Discretionary Access Control (DAC) model, access is restricted based on the authorization granted to the users. It is identity based access control only. It does not make use of a lattice.
Non-discretionary access control. Is incorrect because Non-discretionary Access Control (NDAC) uses the role-based access control method to determine access rights and permissions. It is often used as a synonym for RBAC (Role Based Access Control). Users inherit permissions from the role when they are assigned to it. This type of access could make use of a lattice, but can also be implemented without one in some cases. RBAC could make use of a lattice, but Mandatory Access Control was the better choice; the BEST answer was MAC.
Rule-based access control. Is incorrect because it is an example of a Non-discretionary Access Control (NDAC) model. Rules are globally applied to all users. There is no lattice being used in Rule-Based Access Control.
References:
AIOv3 Access Control (pages 161 - 168)
AIOv3 Security Models and Architecture (pages 291 - 293)
In an organization where there are frequent personnel changes, non-discretionary access control using Role Based Access Control (RBAC) is useful because:
people need not use discretion
the access controls are based on the individual's role or title within the organization.
the access controls are not based on the individual's role or title within the organization
the access controls are often based on the individual's role or title within the organization
In an organization where there are frequent personnel changes, non-discretionary access control (also called Role Based Access Control) is useful because the access controls are based on the individual's role or title within the organization. You can easily configure a new employee's access by assigning the user to a predefined role. The user implicitly inherits the permissions of the role by being a member of that role.
These access permissions defined within the role do not need to be changed whenever a new person takes over the role.
Another type of non-discretionary access control model is Rule Based Access Control (RuBAC), where a global set of rules is uniformly applied to all subjects accessing the resources. A good example of RuBAC would be a firewall.
This question is a sneaky one: one of the choices differs only by the added word "often". Reading questions and their choices very carefully is a must for the real exam; read them twice if needed.
Shon Harris in her book lists the following ways of managing RBAC:
Role-based access control can be managed in the following ways:
Non-RBAC Users are mapped directly to applications and no roles are used. (No roles being used)
Limited RBAC Users are mapped to multiple roles and mapped directly to other types of applications that do not have role-based access functionality. (A mix of roles for applications that supports roles and explicit access control would be used for applications that do not support roles)
Hybrid RBAC Users are mapped to multiapplication roles with only selected rights assigned to those roles.
Full RBAC Users are mapped to enterprise roles. (Roles are used for all access being granted)
NIST defines RBAC as:
Security administration can be costly and prone to error because administrators usually specify access control lists for each user on the system individually. With RBAC, security is managed at a level that corresponds closely to the organization's structure. Each user is assigned one or more roles, and each role is assigned one or more privileges that are permitted to users in that role. Security administration with RBAC consists of determining the operations that must be executed by persons in particular jobs, and assigning employees to the proper roles. Complexities introduced by mutually exclusive roles or role hierarchies are handled by the RBAC software, making security administration easier.
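The NIST description above, where users are assigned roles and roles carry the privileges, can be sketched in a few lines. The role and permission names are invented for illustration:

```python
# Minimal RBAC sketch: users inherit permissions implicitly through
# role membership, so a personnel change is a single reassignment
# rather than editing per-user access control lists.

ROLE_PERMISSIONS = {
    "accounts_clerk": {"read_ledger", "post_invoice"},
    "auditor": {"read_ledger", "read_audit_log"},
}

USER_ROLES = {"alice": {"accounts_clerk"}, "bob": {"auditor"}}

def permissions_for(user: str) -> set:
    """Union of the permissions of every role the user holds."""
    perms = set()
    for role in USER_ROLES.get(user, set()):
        perms |= ROLE_PERMISSIONS[role]
    return perms

# A new hire replacing Alice needs only a role assignment, not new ACLs:
USER_ROLES["carol"] = {"accounts_clerk"}
print(permissions_for("carol") == permissions_for("alice"))  # True
```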
Reference(s) used for this question:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 32.
and
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition McGraw-Hill.
and
http://csrc.nist.gov/groups/SNS/rbac/
Guards are appropriate whenever the function required by the security program involves which of the following?
The use of discriminating judgment
The use of physical force
The operation of access control devices
The need to detect unauthorized access
The Answer: The use of discriminating judgment. A guard can make determinations that hardware or other automated security devices cannot, thanks to the ability to adjust to rapidly changing conditions, to learn and alter recognizable patterns, and to respond to various conditions in the environment. Guards are better at making value decisions at times of incidents. They are appropriate whenever immediate, discriminating judgment is required by the security entity.
The following answers are incorrect:
The use of physical force This is not the best answer. A guard provides discriminating judgment, and the ability to discern the need for physical force.
The operation of access control devices A guard is often uninvolved in the operations of an automated access control device such as a biometric reader, a smart lock, mantrap, etc.
The need to detect unauthorized access The primary function of a guard is not to detect unauthorized access, but to prevent unauthorized physical access attempts; a guard may also deter social engineering attempts.
The following reference(s) were/was used to create this question:
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 10: Physical security (page 339).
Source: ISC2 Offical Guide to the CBK page 288-289.
What refers to legitimate users accessing networked services that would normally be restricted to them?
Spoofing
Piggybacking
Eavesdropping
Logon abuse
Unauthorized access of restricted network services by the circumvention of security access controls is known as logon abuse. This type of abuse refers to users who may be internal to the network but access resources they would not normally be allowed.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 3: Telecommunications and Network Security (page 74).
Which of the following Kerberos components holds all users' and services' cryptographic keys?
The Key Distribution Service
The Authentication Service
The Key Distribution Center
The Key Granting Service
The Key Distribution Center (KDC) holds all users' and services' cryptographic keys. It provides authentication services, as well as key distribution functionality. The Authentication Service is the part of the KDC that authenticates a principal. The Key Distribution Service and Key Granting Service are distracters and are not defined Kerberos components.
Source: WALLHOFF, John, CISSP Summary 2002, April 2002, CBK#1 Access Control System & Methodology (page 3)
Which access control model was proposed for enforcing access control in government and military applications?
Bell-LaPadula model
Biba model
Sutherland model
Brewer-Nash model
The Bell-LaPadula model, mostly concerned with confidentiality, was proposed for enforcing access control in government and military applications. It supports mandatory access control by determining the access rights from the security levels associated with subjects and objects. It also supports discretionary access control by checking access rights from an access matrix. The Biba model, introduced in 1977, the Sutherland model, published in 1986, and the Brewer-Nash model, published in 1989, are concerned with integrity.
Source: ANDRESS, Mandy, Exam Cram CISSP, Coriolis, 2001, Chapter 2: Access Control Systems and Methodology (page 11).
Which of the following access control models requires security clearance for subjects?
Identity-based access control
Role-based access control
Discretionary access control
Mandatory access control
With mandatory access control (MAC), the authorization of a subject's access to an object is dependent upon labels, which indicate the subject's clearance. Identity-based access control is a type of discretionary access control. A role-based access control is a type of non-discretionary access control.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 2: Access control systems (page 33).
Which of the following is NOT true of the Kerberos protocol?
Only a single login is required per session.
The initial authentication steps are done using a public key algorithm.
The KDC is aware of all systems in the network and is trusted by all of them
It performs mutual authentication
Kerberos is a network authentication protocol. It is designed to provide strong authentication for client/server applications by using secret-key cryptography; the initial authentication steps rely on symmetric keys, not a public key algorithm, which is why that statement is not true. Kerberos has the following characteristics:
It is secure: it never sends a password unless it is encrypted.
Only a single login is required per session. Credentials defined at login are then passed between resources without the need for additional logins.
The concept depends on a trusted third party – a Key Distribution Center (KDC). The KDC is aware of all systems in the network and is trusted by all of them.
It performs mutual authentication, where a client proves its identity to a server and a server proves its identity to the client.
Kerberos introduces the concept of a Ticket-Granting Server/Service (TGS). A client that wishes to use a service has to receive a ticket from the TGS – a ticket is a time-limited cryptographic message – giving it access to the server. Kerberos also requires an Authentication Server (AS) to verify clients. The two servers combined make up a KDC.
Within the Windows environment, Active Directory performs the functions of the KDC. The following figure shows the sequence of events required for a client to gain access to a service using Kerberos authentication. Each step is shown with the Kerberos message associated with it, as defined in RFC 4120 “The Kerberos Network Authentication Service (V5)”.
Kerberos Authentication Step by Step
Step 1: The user logs on to the workstation and requests service on the host. The workstation sends a message to the Authentication Server requesting a ticket granting ticket (TGT).
Step 2: The Authentication Server verifies the user’s access rights in the user database and creates a TGT and session key. The Authentication Server encrypts the results using a key derived from the user’s password and sends a message back to the user workstation.
The workstation prompts the user for a password and uses the password to decrypt the incoming message. When decryption succeeds, the user will be able to use the TGT to request a service ticket.
Step 3: When the user wants access to a service, the workstation client application sends a request to the Ticket Granting Service containing the client name, realm name and a timestamp. The user proves his identity by sending an authenticator encrypted with the session key received in Step 2.
Step 4: The TGS decrypts the ticket and authenticator, verifies the request, and creates a ticket for the requested server. The ticket contains the client name and optionally the client IP address. It also contains the realm name and ticket lifespan. The TGS returns the ticket to the user workstation. The returned message contains two copies of a server session key – one encrypted with the client password, and one encrypted by the service password.
Step 5: The client application now sends a service request to the server containing the ticket received in Step 4 and an authenticator. The service authenticates the request by decrypting the session key. The server verifies that the ticket and authenticator match, and then grants access to the service.
Step 6: If mutual authentication is required, then the server will reply with a server authentication message.
The Kerberos server knows "secrets" (encrypted passwords) for all clients and servers under its control, or it is in contact with other secure servers that have this information. These "secrets" are used to encrypt all of the messages shown in the figure above.
To prevent replay attacks, Kerberos uses timestamps as part of its protocol definition. For timestamps to work properly, the clocks of the client and the server need to be in sync as much as possible; in other words, both computers need to be set to the same time and date. Since the clocks of two computers are often out of sync, administrators can establish a policy specifying the maximum acceptable difference between a client's clock and the server's clock. If the difference between the two clocks is less than the maximum specified in this policy, any timestamp used in a session between the two computers will be considered authentic. The maximum difference is usually set to five minutes.
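The skew policy just described can be sketched as follows. This is a hedged illustration of the timestamp check, not real Kerberos code; the five-minute default mirrors the common configuration mentioned above:

```python
# Kerberos-style timestamp freshness check: an authenticator is
# accepted only if its timestamp falls within the configured maximum
# clock skew of the server's own clock.
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)

def timestamp_is_fresh(client_ts: datetime, server_now: datetime,
                       max_skew: timedelta = MAX_SKEW) -> bool:
    """Reject replayed or badly skewed authenticators."""
    return abs(server_now - client_ts) <= max_skew

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
print(timestamp_is_fresh(now - timedelta(minutes=3), now))  # True
print(timestamp_is_fresh(now - timedelta(minutes=6), now))  # False
```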
Note that if a client application wishes to use a service that is "Kerberized" (the service is configured to perform Kerberos authentication), the client must also be Kerberized so that it expects to support the necessary message responses.
For more information about Kerberos, see http://web.mit.edu/kerberos/www/.
References:
Introduction to Kerberos Authentication from Intel
and
http://www.zeroshell.net/eng/kerberos/Kerberos-definitions/#1.3.5.3
and
http://www.ietf.org/rfc/rfc4120.txt
The control measures that are intended to reveal the violations of security policy using software and hardware are associated with:
Preventive/physical
Detective/technical
Detective/physical
Detective/administrative
The detective/technical control measures are intended to reveal the violations of security policy using technical means.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 35.
Identification and authentication are the keystones of most access control systems. Identification establishes:
User accountability for the actions on the system.
Top management accountability for the actions on the system.
EDP department accountability for the actions of users on the system.
Authentication for actions on the system
Identification and authentication are the keystones of most access control systems. Identification establishes user accountability for the actions on the system.
The control environment can be established to log activity regarding the identification, authentication, authorization, and use of privileges on a system. This can be used to detect the occurrence of errors, the attempts to perform an unauthorized action, or to validate when provided credentials were exercised. The logging system as a detective device provides evidence of actions (both successful and unsuccessful) and tasks that were executed by authorized users.
Once a person has been identified through the user ID or a similar value, she must be authenticated, which means she must prove she is who she says she is. Three general factors can be used for authentication: something a person knows, something a person has, and something a person is. They are also commonly called authentication by knowledge, authentication by ownership, and authentication by characteristic.
For a user to be able to access a resource, he first must prove he is who he claims to be, has the necessary credentials, and has been given the necessary rights or privileges to perform the actions he is requesting. Once these steps are completed successfully, the user can access and use network resources; however, it is necessary to track the user’s activities and enforce accountability for his actions.
Identification describes a method of ensuring that a subject (user, program, or process) is the entity it claims to be. Identification can be provided with the use of a username or account number. To be properly authenticated, the subject is usually required to provide a second piece to the credential set. This piece could be a password, passphrase, cryptographic key, personal identification number (PIN), anatomical attribute, or token.
These two credential items are compared to information that has been previously stored for this subject. If these credentials match the stored information, the subject is authenticated. But we are not done yet. Once the subject provides its credentials and is properly identified, the system it is trying to access needs to determine if this subject has been given the necessary rights and privileges to carry out the requested actions. The system will look at some type of access control matrix or compare security labels to verify that this subject may indeed access the requested resource and perform the actions it is attempting. If the system determines that the subject may access the resource, it authorizes the subject.
Although identification, authentication, authorization, and accountability have close and complementary definitions, each has distinct functions that fulfill a specific requirement in the process of access control. A user may be properly identified and authenticated to the network, but he may not have the authorization to access the files on the file server. On the other hand, a user may be authorized to access the files on the file server, but until she is properly identified and authenticated, those resources are out of reach.
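The identification, authentication, and authorization sequence above can be sketched in a few lines. The user ID, password, and action names are invented, and a real system would store salted password hashes from a key-derivation function and log every decision for accountability:

```python
# Sketch of identification -> authentication -> authorization.
import hashlib

USERS = {"jsmith": hashlib.sha256(b"s3cret").hexdigest()}  # identification data
ACL = {"jsmith": {"fileserver:read"}}                      # authorization data

def access(user_id: str, password: str, action: str) -> str:
    # Identification: does the claimed identity exist?
    if user_id not in USERS:
        return "unknown identity"
    # Authentication: does the credential prove the claim?
    if hashlib.sha256(password.encode()).hexdigest() != USERS[user_id]:
        return "authentication failed"
    # Authorization: is this subject allowed this action?
    if action not in ACL.get(user_id, set()):
        return "not authorized"
    return "granted"  # accountability: a real system would also log this

print(access("jsmith", "s3cret", "fileserver:read"))   # granted
print(access("jsmith", "s3cret", "fileserver:write"))  # not authorized
```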
Reference(s) used for this question:
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition: Access Control ((ISC)2 Press) (Kindle Locations 889-892). Auerbach Publications. Kindle Edition.
and
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 3875-3878). McGraw-Hill. Kindle Edition.
and
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 3833-3848). McGraw-Hill. Kindle Edition.
and
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 36.
Which of the following was developed by the National Computer Security Center (NCSC) for the US Department of Defense ?
TCSEC
ITSEC
DIACAP
NIACAP
The Answer: TCSEC. The TCSEC, frequently referred to as the Orange Book, is the centerpiece of the DoD Rainbow Series publications.
Initially issued by the National Computer Security Center (NCSC), an arm of the National Security Agency, in 1983 and then updated in 1985, the TCSEC was eventually replaced by the Common Criteria international standard (ISO/IEC 15408).
References:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, pages 197-199.
Wikipedia
http://en.wikipedia.org/wiki/TCSEC
Considerations of privacy, invasiveness, and psychological and physical comfort when using the system are important elements for which of the following?
Accountability of biometrics systems
Acceptability of biometrics systems
Availability of biometrics systems
Adaptability of biometrics systems
Acceptability refers to considerations of privacy, invasiveness, and psychological and physical comfort when using the system.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 39.
Which of the following is the most reliable authentication method for remote access?
Variable callback system
Synchronous token
Fixed callback system
Combination of callback and caller ID
A Synchronous token generates a one-time password that is only valid for a short period of time. Once the password is used it is no longer valid, and it expires if not entered in the acceptable time frame.
The following answers are incorrect:
Variable callback system. Although variable callback systems are more flexible than fixed callback systems, the system assumes the identity of the individual unless two-factor authentication is also implemented. By itself, this method might allow an attacker access as a trusted user.
Fixed callback system. Authentication provides assurance that someone or something is who or what he/it is supposed to be. Callback systems authenticate a person, but anyone can pretend to be that person. They are tied to a specific place and phone number, which can be spoofed by implementing call-forwarding.
Combination of callback and Caller ID. The caller ID and callback functionality provides greater confidence and auditability of the caller's identity. By disconnecting and calling back only authorized phone numbers, the system has a greater confidence in the location of the call. However, unless combined with strong authentication, any individual at the location could obtain access.
The following reference(s) were/was used to create this question:
Shon Harris AIO v3 p. 140, 548
ISC2 OIG 2007 p. 152-153, 126-127
Which of the following control pairings include: organizational policies and procedures, pre-employment background checks, strict hiring practices, employment agreements, employee termination procedures, vacation scheduling, labeling of sensitive materials, increased supervision, security awareness training, behavior awareness, and sign-up procedures to obtain access to information systems and networks?
Preventive/Administrative Pairing
Preventive/Technical Pairing
Preventive/Physical Pairing
Detective/Administrative Pairing
The Answer: Preventive/Administrative Pairing: These mechanisms include organizational policies and procedures, pre-employment background checks, strict hiring practices, employment agreements, friendly and unfriendly employee termination procedures, vacation scheduling, labeling of sensitive materials, increased supervision, security awareness training, behavior awareness, and sign-up procedures to obtain access to information systems and networks.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 34.
This is a common security issue that is extremely hard to control in large environments. It occurs when a user has more computer rights, permissions, and access than what is required for the tasks the user needs to fulfill. What best describes this scenario?
Excessive Rights
Excessive Access
Excessive Permissions
Excessive Privileges
Even though all four terms are closely related, the best choice is Excessive Privileges, which encompasses the other three choices presented.
Reference(s) used for this question:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2001, Page 645.
What Orange Book security rating is reserved for systems that have been evaluated but fail to meet the criteria and requirements of the higher divisions?
A
D
E
F
D or "minimal protection" is reserved for systems that were evaluated under the TCSEC but did not meet the requirements for a higher trust level.
A is incorrect. A or "Verified Protection" is the highest trust level under the TCSEC.
E is incorrect. The trust levels are A - D so "E" is not a valid trust level.
F is incorrect. The trust levels are A - D so "F" is not a valid trust level.
CBK, pp. 329 - 330
AIO3, pp. 302 - 306
Which of the following describes the major disadvantage of many Single Sign-On (SSO) implementations?
Once an individual obtains access to the system through the initial log-on, they have access to all resources within the environment that the account has access to.
The initial logon process is cumbersome to discourage potential intruders.
Once a user obtains access to the system through the initial log-on, they only need to logon to some applications.
Once a user obtains access to the system through the initial log-on, he has to logout from all other systems
Single Sign-On is a distributed access control methodology where an individual only has to authenticate once and then has access to all primary and secondary network domains. The individual is not required to re-authenticate when accessing additional resources. The security issue this creates is that if an attacker is able to compromise those credentials, he too has access to all the resources that account can access.
All the other answers are incorrect as they are distractors.
What can be defined as a table of subjects and objects indicating what actions individual subjects can take upon individual objects?
A capacity table
An access control list
An access control matrix
A capability table
The matrix lists the users, groups and roles down the left side and the resources and functions across the top. The cells of the matrix can either indicate that access is allowed or indicate the type of access. CBK pp 317 - 318.
AIO3, p. 169 describes it as a table of subjects and objects specifying the access rights a certain subject possesses pertaining to specific objects.
In either case, the matrix is a way of analyzing the access control needed by a population of subjects to a population of objects. This access control can be applied using rules, ACL's, capability tables, etc.
"A capacity table" is incorrect.
This answer is a trap for the unwary -- it sounds a little like "capability table" but is just there to distract you.
"An access control list" is incorrect.
"It [ACL] specifies a list of users [subjects] who are allowed access to each object" CBK, p. 188 Access control lists (ACL) could be used to implement the rules identified by an access control matrix but is different from the matrix itself.
"A capability table" is incorrect.
"Capability tables are used to track, manage and apply controls based on the object and rights, or capabilities of a subject. For example, a table identifies the object, specifies access rights allowed for a subject, and permits access based on the user's possession of a capability (or ticket) for the object." CBK, pp. 191-192. To put it another way, as noted in AIO3 on p. 169, "A capability table is different from an ACL because the subject is bound to the capability table, whereas the object is bound to the ACL."
Again, a capability table could be used to implement the rules identified by an access control matrix but is different from the matrix itself.
References:
CBK pp. 191-192, 317-318
AIO3, p. 169
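The relationship between the matrix, ACLs, and capability tables can be shown in a small sketch. The subjects and objects below are hypothetical, invented only for illustration:

```python
# A tiny access control matrix: rows are subjects, columns are objects.
# An ACL is one column of the matrix; a capability table is one row.
matrix = {
    "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
    "bob":   {"report.doc": {"read", "write"}},
}

def acl(obj):
    """One COLUMN of the matrix: for a given object, which subjects
    may access it and how (the object-bound view)."""
    return {subj: rights[obj] for subj, rights in matrix.items() if obj in rights}

def capability_table(subj):
    """One ROW of the matrix: for a given subject, which objects it
    may access and how (the subject-bound view)."""
    return matrix.get(subj, {})
```

This makes the CBK distinction concrete: the ACL is bound to the object, the capability table to the subject, and both are projections of the same matrix.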
Who developed one of the first mathematical models of a multilevel-security computer system?
Diffie and Hellman.
Clark and Wilson.
Bell and LaPadula.
Gasser and Lipner.
In 1973 Bell and LaPadula created the first mathematical model of a multi-level security system.
The following answers are incorrect:
Diffie and Hellman. This is incorrect because Diffie and Hellman was involved with cryptography.
Clark and Wilson. This is incorrect because Bell and LaPadula was the first model. The Clark-Wilson model came later, 1987.
Gasser and Lipner. This is incorrect, it is a distractor. Bell and LaPadula was the first model.
What does it mean to say that sensitivity labels are "incomparable"?
The number of classifications in the two labels is different.
Neither label contains all the classifications of the other.
The number of categories in the two labels is different.
Neither label contains all the categories of the other.
If a category does not exist in both labels, then you cannot compare them. Incomparable means the two sensitivity labels are disjoint, that is, a category in one of the labels is not in the other label. "Because neither label contains all the categories of the other, the labels can't be compared. They're said to be incomparable"
COMPARABILITY:
The label:
TOP SECRET [VENUS ALPHA]
is "higher" than either of the labels:
SECRET [VENUS ALPHA] TOP SECRET [VENUS]
But you can't really say that the label:
TOP SECRET [VENUS]
is higher than the label:
SECRET [ALPHA]
Because neither label contains all the categories of the other, the labels can't be compared. They're said to be incomparable. In a mandatory access control system, you won't be allowed access to a file whose label is incomparable to your clearance.
The Multilevel Security policy uses an ordering relationship between labels known as the dominance relationship. Intuitively, we think of a label that dominates another as being "higher" than the other. Similarly, we think of a label that is dominated by another as being "lower" than the other. The dominance relationship is used to determine permitted operations and information flows.
DOMINANCE
The dominance relationship is determined by the ordering of the Sensitivity/Clearance component of the label and the intersection of the set of Compartments.
Sample Sensitivity/Clearance ordering are:
Top Secret > Secret > Confidential > Unclassified
s3 > s2 > s1 > s0
Formally, for label one to dominate label two, both of the following must be true:
The sensitivity/clearance of label one must be greater than or equal to the sensitivity/clearance of label two.
The intersection of the compartments of label one and label two must equal the compartments of label two.
Additionally:
Two labels are said to be equal if their sensitivity/clearance and set of compartments are exactly equal. Note that dominance includes equality.
One label is said to strictly dominate the other if it dominates the other but is not equal to the other.
Two labels are said to be incomparable if each label has at least one compartment that is not included in the other's set of compartments.
The dominance relationship will produce a partial ordering over all possible MLS labels, resulting in what is known as the MLS Security Lattice.
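The dominance and incomparability rules above can be expressed directly in code. This is a minimal sketch; the levels and compartments are taken from the document's own examples:

```python
# Ordering of the sensitivity/clearance component of a label.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(label1, label2):
    """Label one dominates label two iff its clearance level is >= that of
    label two AND its compartment set contains all of label two's
    compartments (the intersection rule stated above)."""
    (lvl1, comps1), (lvl2, comps2) = label1, label2
    return LEVELS[lvl1] >= LEVELS[lvl2] and comps1 >= comps2

def incomparable(label1, label2):
    """Each label has at least one compartment the other lacks."""
    comps1, comps2 = label1[1], label2[1]
    return bool(comps1 - comps2) and bool(comps2 - comps1)
```

Using the document's example, TOP SECRET [VENUS] and SECRET [ALPHA] are incomparable, while TOP SECRET [VENUS ALPHA] dominates both SECRET [VENUS ALPHA] and TOP SECRET [VENUS].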
The following answers are incorrect:
The number of classifications in the two labels is different. This is incorrect because the categories are what is being compared, not the classifications.
Neither label contains all the classifications of the other. This is incorrect because the categories are what is being compared, not the classifications.
The number of categories in the two labels is different. This is incorrect because it is possible for the number of categories to differ while one label still contains all the categories of the other, in which case the labels are still comparable.
Reference(s) used for this question:
OReilly - Computer Systems and Access Control (Chapter 3)
http://www.oreilly.com/catalog/csb/chapter/ch03.html
and
http://rubix.com/cms/mls_dom
Kerberos is vulnerable to replay in which of the following circumstances?
When a private key is compromised within an allotted time window.
When a public key is compromised within an allotted time window.
When a ticket is compromised within an allotted time window.
When the KSD is compromised within an allotted time window.
Replay can be accomplished on Kerberos if the compromised tickets are used within an allotted time window.
The security depends on careful implementation: enforcing limited lifetimes for authentication credentials minimizes the threat of replayed credentials, the KDC must be physically secured, and it should be hardened, not permitting any non-Kerberos activity.
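The time-window defense can be sketched in a few lines. This is a simplified illustration, not real Kerberos (a real KDC also caches authenticators seen inside the window to catch immediate replays); the 5-minute skew is a commonly cited default:

```python
from datetime import datetime, timedelta, timezone

# A commonly cited default; deployments tune this value.
ALLOWED_SKEW = timedelta(minutes=5)

def ticket_is_fresh(ticket_time, now=None):
    """Reject an authenticator whose timestamp falls outside the allowed
    window. A captured ticket replayed AFTER the window fails this check;
    replay within the window is exactly the vulnerability the question
    describes."""
    now = now or datetime.now(timezone.utc)
    return abs(now - ticket_time) <= ALLOWED_SKEW
```

A ticket stamped two minutes ago passes; one stamped ten minutes ago is rejected.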
What would be the name of a Logical or Virtual Table dynamically generated to restrict the information a user can access in a database?
Database Management system
Database views
Database security
Database shadowing
The Answer: Database views; Database views are mechanisms that restrict access to the information that a user can access in a database.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 35.
Wikipedia has a detailed explanation as well:
In database theory, a view is a virtual or logical table composed of the result set of a query. Unlike ordinary tables (base tables) in a relational database, a view is not part of the physical schema: it is a dynamic, virtual table computed or collated from data in the database. Changing the data in a table alters the data shown in the view.
Views can provide advantages over tables:
They can subset the data contained in a table
They can join and simplify multiple tables into a single virtual table
Views can act as aggregated tables, where aggregated data (sum, average etc.) are calculated and presented as part of the data
Views can hide the complexity of data, for example a view could appear as Sales2000 or Sales2001, transparently partitioning the actual underlying table
Views do not incur any extra storage overhead
Depending on the SQL engine used, views can provide extra security.
Limit the exposure to which a table or tables are exposed to outer world
Just like functions (in programming) provide abstraction, views can be used to create abstraction. Also, just like functions, views can be nested, thus one view can aggregate data from other views. Without the use of views it would be much harder to normalise databases above second normal form. Views can make it easier to create lossless join decomposition.
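A view as a security restriction can be demonstrated with SQLite. The table name, columns, and data below are hypothetical, chosen only to show a view hiding a sensitive column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Ann", "HR", 50000), ("Ben", "IT", 60000)])

# The view is a virtual table computed from a query: it exposes names
# and departments but hides the salary column from its users.
conn.execute("CREATE VIEW staff_directory AS SELECT name, dept FROM employees")

rows = conn.execute("SELECT * FROM staff_directory ORDER BY name").fetchall()
```

A user granted access only to `staff_directory` can never select salaries, even though the underlying base table contains them.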
Which type of control is concerned with restoring controls?
Compensating controls
Corrective controls
Detective controls
Preventive controls
Corrective controls are concerned with remedying circumstances and restoring controls.
Detective controls are concerned with investigating what happened after the fact, such as reviewing logs and video surveillance tapes.
Compensating controls are alternative controls, used to compensate for weaknesses in other controls.
Preventive controls are concerned with avoiding occurrences of risks.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which one of the following factors is NOT one on which Authentication is based?
Type 1. Something you know, such as a PIN or password
Type 2. Something you have, such as an ATM card or smart card
Type 3. Something you are (based upon one or more intrinsic physical or behavioral traits), such as a fingerprint or retina scan
Type 4. Something you are, such as a system administrator or security administrator
Authentication is based on the following three factor types:
Type 1. Something you know, such as a PIN or password
Type 2. Something you have, such as an ATM card or smart card
Type 3. Something you are (Unique physical characteristic), such as a fingerprint or retina scan
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 36.
Also: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 4: Access Control (pages 132-133).
In biometric identification systems, the parts of the body conveniently available for identification are:
neck and mouth
hands, face, and eyes
feet and hair
voice and neck
Today, implementations of fast, accurate, reliable, and user-acceptable biometric identification systems are already under way. Because most identity authentication takes place when people are fully clothed (neck to feet and wrists), the parts of the body conveniently available for this purpose are hands, face, and eyes.
From: TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 1, Page 7.
Which TCSEC class specifies discretionary protection?
B2
B1
C2
C1
C1 involves discretionary protection, C2 involves controlled access protection, B1 involves labeled security protection and B2 involves structured protection.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following classes is defined in the TCSEC (Orange Book) as discretionary protection?
C
B
A
D
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, page 197.
Also: THE source for all TCSEC "level" questions: http://csrc.nist.gov/publications/secpubs/rainbow/std001.txt
Which of the following division is defined in the TCSEC (Orange Book) as minimal protection?
Division D
Division C
Division B
Division A
The criteria are divided into four divisions: D, C, B, and A ordered in a hierarchical manner with the highest division (A) being reserved for systems providing the most comprehensive security.
Each division represents a major improvement in the overall confidence one can place in the system for the protection of sensitive information.
Within divisions C and B there are a number of subdivisions known as classes. The classes are also ordered in a hierarchical manner with systems representative of division C and lower classes of division B being characterized by the set of computer security mechanisms that they possess.
Assurance of correct and complete design and implementation for these systems is gained mostly through testing of the security- relevant portions of the system. The security-relevant portions of a system are referred to throughout this document as the Trusted Computing Base (TCB).
Systems representative of higher classes in division B and division A derive their security attributes more from their design and implementation structure. Increased assurance that the required features are operative, correct, and tamperproof under all circumstances is gained through progressively more rigorous analysis during the design process.
TCSEC provides a classification system that is divided into hierarchical divisions of assurance levels:
Division D - minimal protection
Division C - discretionary protection
Division B - mandatory protection
Division A - verified protection
Which integrity model defines a constrained data item, an integrity verification procedure and a transformation procedure?
The Take-Grant model
The Biba integrity model
The Clark Wilson integrity model
The Bell-LaPadula integrity model
The Clark Wilson integrity model addresses the three following integrity goals: 1) data is protected from modification by unauthorized users; 2) data is protected from unauthorized modification by authorized users; and 3) data is internally and externally consistent. It also defines a Constrained Data Item (CDI), an Integrity Verification Procedure (IVP), a Transformation Procedure (TP) and an Unconstrained Data item. The Bell-LaPadula and Take-Grant models are not integrity models.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security Architecture and Models (page 205).
Which access control model is also called Non Discretionary Access Control (NDAC)?
Lattice based access control
Mandatory access control
Role-based access control
Label-based access control
RBAC is sometimes also called non-discretionary access control (NDAC) (as Ferraiolo says, "to distinguish it from the policy-based specifics of MAC"). Another model that fits within the NDAC category is Rule-Based Access Control (RuBAC or RBAC). Most of the CISSP books use the same acronym for both models, but NIST tends to use a lowercase "u" between the R and B to differentiate the two models.
You can certainly mimic MAC using RBAC but true MAC makes use of Labels which contains the sensitivity of the objects and the categories they belong to. No labels means MAC is not being used.
One of the most fundamental data access control decisions an organization must make is the amount of control it will give system and data owners to specify the level of access users of that data will have. In every organization there is a balancing point between the access controls enforced by organization and system policy and the ability for information owners to determine who can have access based on specific business requirements. The process of translating that balance into a workable access control model can be defined by three general access frameworks:
Discretionary access control
Mandatory access control
Nondiscretionary access control
A role-based access control (RBAC) model bases the access control authorizations on the roles (or functions) that the user is assigned within an organization. The determination of what roles have access to a resource can be governed by the owner of the data, as with DACs, or applied based on policy, as with MACs.
Access control decisions are based on job function, previously defined and governed by policy, and each role (job function) will have its own access capabilities. Objects associated with a role will inherit privileges assigned to that role. This is also true for groups of users, allowing administrators to simplify access control strategies by assigning users to groups and groups to roles.
There are several approaches to RBAC. As with many system controls, there are variations on how they can be applied within a computer system.
There are four basic RBAC architectures:
1. Non-RBAC: Non-RBAC is simply a user-granted access to data or an application by traditional mapping, such as with ACLs. There are no formal “roles” associated with the mappings, other than any identified by the particular user.
2. Limited RBAC: Limited RBAC is achieved when users are mapped to roles within a single application rather than through an organization-wide role structure. Users in a limited RBAC system are also able to access non-RBAC-based applications or data. For example, a user may be assigned to multiple roles within several applications and, in addition, have direct access to another application or system independent of his or her assigned role. The key attribute of limited RBAC is that the role for that user is defined within an application and not necessarily based on the user’s organizational job function.
3. Hybrid RBAC: Hybrid RBAC introduces the use of a role that is applied to multiple applications or systems based on a user’s specific role within the organization. That role is then applied to applications or systems that subscribe to the organization’s role-based model. However, as the term “hybrid” suggests, there are instances where the subject may also be assigned to roles defined solely within specific applications, complementing (or, perhaps, contradicting) the larger, more encompassing organizational role used by other systems.
4. Full RBAC: Full RBAC systems are controlled by roles defined by the organization’s policy and access control infrastructure and then applied to applications and systems across the enterprise. The applications, systems, and associated data apply permissions based on that enterprise definition, and not one defined by a specific application or system.
Be careful not to try to make MAC and DAC opposites of each other -- they are two different access control strategies with RBAC being a third strategy that was defined later to address some of the limitations of MAC and DAC.
The other answers are not correct because:
Mandatory access control is incorrect because though it is by definition not discretionary, it is not called "non-discretionary access control." MAC makes use of label to indicate the sensitivity of the object and it also makes use of categories to implement the need to know.
Label-based access control is incorrect because this is not a name for a type of access control but simply a bogus distractor.
Lattice based access control is not adequate either. A lattice is a series of levels and a subject will be granted an upper and lower bound within the series of levels. These levels could be sensitivity levels or they could be confidentiality levels or they could be integrity levels.
Reference(s) used for this question:
All in One, third edition, page 165.
Ferraiolo, D., Kuhn, D. & Chandramouli, R. (2003). Role-Based Access Control, p. 18.
Ferraiolo, D., Kuhn, D. (1992). Role-Based Access Controls. http://csrc.nist.gov/rbac/Role_Based_Access_Control-1992.html
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 1557-1584). Auerbach Publications. Kindle Edition.
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Access Control ((ISC)2 Press) (Kindle Locations 1474-1477). Auerbach Publications. Kindle Edition.
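The core RBAC idea described above, users mapped to roles and roles mapped to permissions, with the access check consulting only the role layer, can be sketched in a few lines. All names here are hypothetical:

```python
# Role -> set of (object, action) permissions; policy-defined, not per-user.
role_permissions = {
    "clerk":   {("invoice", "read")},
    "manager": {("invoice", "read"), ("invoice", "approve")},
}

# User -> set of roles; administrators manage this mapping, not ACL entries.
user_roles = {"dana": {"clerk"}, "erin": {"clerk", "manager"}}

def can(user, obj, action):
    """Grant access iff ANY of the user's roles carries the permission.
    The user inherits privileges from the role; nothing is granted to
    the individual directly."""
    return any((obj, action) in role_permissions.get(role, set())
               for role in user_roles.get(user, set()))
```

Changing what a job function may do means editing one role, not every user, which is the administrative simplification RBAC promises.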
A timely review of system access audit records would be an example of which of the basic security functions?
avoidance.
deterrence.
prevention.
detection.
By reviewing system logs you can detect events that have occurred.
The following answers are incorrect:
avoidance. This is incorrect, avoidance is a distractor. By reviewing system logs you have not avoided anything.
deterrence. This is incorrect because system logs are a history of past events. You cannot deter something that has already occurred.
prevention. This is incorrect because system logs are a history of past events. You cannot prevent something that has already occurred.
Which of the following is NOT a form of detective administrative control?
Rotation of duties
Required vacations
Separation of duties
Security reviews and audits
Detective administrative controls warn of administrative control violations. Rotation of duties, required vacations and security reviews and audits are forms of detective administrative controls. Separation of duties is the practice of dividing the steps in a system function among different individuals, so as to keep a single individual from subverting the process, thus a preventive control rather than a detective control.
Source: DUPUIS, Clément, Access Control Systems and Methodology CISSP Open Study Guide, version 1.0 (March 2002).
The type of discretionary access control (DAC) that is based on an individual's identity is also called:
Identity-based Access control
Rule-based Access control
Non-Discretionary Access Control
Lattice-based Access control
An identity-based access control is a type of Discretionary Access Control (DAC) that is based on an individual's identity.
DAC is suitable for low-level security environments. The owner of the file decides who has access to the file.
If a user creates a file, he is the owner of that file. An identifier for this user is placed in the file header and/or in an access control matrix within the operating system.
Ownership might also be granted to a specific individual. For example, a manager for a certain department might be made the owner of the files and resources within her department. A system that uses discretionary access control (DAC) enables the owner of the resource to specify which subjects can access specific resources.
This model is called discretionary because the control of access is based on the discretion of the owner. Many times department managers or business unit managers are the owners of the data within their specific departments. Being the owner, they can specify who should have access and who should not.
Reference(s) used for this question:
Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 220). McGraw-Hill. Kindle Edition.
Which of the following statements pertaining to RADIUS is incorrect:
A RADIUS server can act as a proxy server, forwarding client requests to other authentication domains.
Most RADIUS clients have the capability to query secondary RADIUS servers for redundancy.
Most RADIUS servers have built-in database connectivity for billing and reporting purposes.
Most RADIUS servers can work with DIAMETER servers.
This is the correct answer because the statement is FALSE: most RADIUS servers cannot work with DIAMETER servers.
Diameter is an AAA protocol, AAA stands for authentication, authorization and accounting protocol for computer networks, and it is a successor to RADIUS.
The name is a pun on the RADIUS protocol, which is the predecessor (a diameter is twice the radius).
The main differences are as follows:
Reliable transport protocols (TCP or SCTP, not UDP)
The IETF is in the process of standardizing TCP Transport for RADIUS
Network or transport layer security (IPsec or TLS)
The IETF is in the process of standardizing Transport Layer Security for RADIUS
Transition support for RADIUS, although Diameter is not fully compatible with RADIUS
Larger address space for attribute-value pairs (AVPs) and identifiers (32 bits instead of 8 bits)
Client–server protocol, with exception of supporting some server-initiated messages as well
Both stateful and stateless models can be used
Dynamic discovery of peers (using DNS SRV and NAPTR)
Capability negotiation
Supports application layer acknowledgements, defines failover methods and state machines (RFC 3539)
Error notification
Better roaming support
More easily extended; new commands and attributes can be defined
Aligned on 32-bit boundaries
Basic support for user-sessions and accounting
A Diameter Application is not a software application, but a protocol based on the Diameter base protocol (defined in RFC 3588). Each application is defined by an application identifier and can add new command codes and/or new mandatory AVPs. Adding a new optional AVP does not require a new application.
Examples of Diameter applications:
Diameter Mobile IPv4 Application (MobileIP, RFC 4004)
Diameter Network Access Server Application (NASREQ, RFC 4005)
Diameter Extensible Authentication Protocol (EAP) Application (RFC 4072)
Diameter Credit-Control Application (DCCA, RFC 4006)
Diameter Session Initiation Protocol Application (RFC 4740)
Various applications in the 3GPP IP Multimedia Subsystem
All of the other choices presented are true. Diameter is backward compatible with RADIUS (to some extent), but the reverse is not true.
Reference(s) used for this question:
TIPTON, Harold F. & KRAUSE, MICKI, Information Security Management Handbook, 4th Edition, Volume 2, 2001, CRC Press, NY, Page 38.
and
https://secure.wikimedia.org/wikipedia/en/wiki/Diameter_%28protocol%29
Which of the following biometric devices offers the LOWEST CER?
Keystroke dynamics
Voice verification
Iris scan
Fingerprint
From most effective (lowest CER) to least effective (highest CER) are:
Iris scan, fingerprint, voice verification, keystroke dynamics.
Reference: Shon Harris, AIO v3, Chapter 4: Access Control, Page 131.
Also see: http://www.sans.org/reading_room/whitepapers/authentication/biometric-selection-body-parts-online_139
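The Crossover Error Rate (CER) is the point where the False Acceptance Rate (FAR) and False Rejection Rate (FRR) curves meet as the matching threshold is swept. A sketch of locating that point, using hypothetical sample rates rather than real device data:

```python
def crossover_error_rate(thresholds, far, frr):
    """Return (threshold, rate) where the FAR and FRR curves are closest,
    i.e. the approximate CER. A lower CER means a more accurate device."""
    best = min(zip(thresholds, far, frr), key=lambda point: abs(point[1] - point[2]))
    threshold, accept_rate, reject_rate = best
    return threshold, (accept_rate + reject_rate) / 2

# Hypothetical threshold sweep: loosening the threshold raises FAR
# (more impostors accepted) and lowers FRR (fewer genuine users rejected).
thresholds = [1, 2, 3, 4, 5]
far = [0.01, 0.05, 0.10, 0.20, 0.40]
frr = [0.40, 0.20, 0.10, 0.05, 0.01]
```

On these sample curves the crossover sits at threshold 3 with a CER of 10%; an iris scanner would typically show a far lower crossover than a keystroke-dynamics system.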
Who first described the DoD multilevel military security policy in abstract, formal terms?
David Bell and Leonard LaPadula
Rivest, Shamir and Adleman
Whitfield Diffie and Martin Hellman
David Clark and David Wilson
It was David Bell and Leonard LaPadula who, in 1973, first described the DoD multilevel military security policy in abstract, formal terms. The Bell-LaPadula is a Mandatory Access Control (MAC) model concerned with confidentiality. Rivest, Shamir and Adleman (RSA) developed the RSA encryption algorithm. Whitfield Diffie and Martin Hellman published the Diffie-Hellman key agreement algorithm in 1976. David Clark and David Wilson developed the Clark-Wilson integrity model, more appropriate for security in commercial activities.
Source: RUSSEL, Deborah & GANGEMI, G.T. Sr., Computer Security Basics, O'Reilly, July 1992 (pages 78,109).
In Synchronous dynamic password tokens:
The token generates a new password value at fixed time intervals (this password could be based on the time of day encrypted with a secret key).
The token generates a new non-unique password value at fixed time intervals (this password could be based on the time of day encrypted with a secret key).
The unique password is not entered into a system or workstation along with an owner's PIN.
The authentication entity in a system or workstation knows an owner's secret key and PIN, and the entity verifies that the entered password is invalid and that it was entered during the invalid time window.
Synchronous dynamic password tokens:
- The token generates a new password value at fixed time intervals (this password could be the time of day encrypted with a secret key).
- The unique password is entered into a system or workstation along with an owner's PIN.
- The authentication entity in a system or workstation knows an owner's secret key and PIN, and the entity verifies that the entered password is valid and that it was entered during the valid time window.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 37.
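The mechanism the correct answer describes can be sketched in a few lines of Python. This is a simplified illustration only, not a real token implementation: the 60-second interval, the HMAC-SHA256 construction, and the 8-character truncation are all assumptions made for the example.

```python
import hashlib
import hmac
import time

def token_password(secret_key, now=None, interval=60):
    """Derive a one-time password from the current time window and a
    shared secret. The token and the authentication server compute this
    value independently; the values agree as long as their clocks fall
    in the same time window."""
    if now is None:
        now = time.time()
    window = int(now // interval)
    digest = hmac.new(secret_key, str(window).encode(), hashlib.sha256)
    return digest.hexdigest()[:8]  # truncated to a typist-friendly code

def verify(secret_key, candidate, interval=60):
    """Accept the password for the current window, or the previous one,
    to tolerate small clock drift between token and server."""
    now = time.time()
    return candidate in (token_password(secret_key, now, interval),
                         token_password(secret_key, now - interval, interval))
```

Because both sides derive the password from the shared secret and the time window, nothing secret ever crosses the wire except the short-lived password itself.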
In biometrics, the "one-to-one" search used to verify claim to an identity made by a person is considered:
Authentication
Identification
Auditing
Authorization
Biometric devices can be used for either IDENTIFICATION or AUTHENTICATION.
ONE TO ONE is for AUTHENTICATION
This means that you, as a user, would provide a biometric credential such as your fingerprint. The template you provide is then compared with the one stored in the database. If the two match, that proves you are who you claim to be.
ONE TO MANY is for IDENTIFICATION
A good example of this is an airport. Many airports today have facial-recognition cameras: as you walk through the airport, a camera takes a picture of your face and compares that template (your face) against a database full of templates to see whether there is a match. This is IDENTIFICATION of a person.
Some additional clarification or comments that might be helpful are: Biometrics establish authentication using specific information and comparing results to expected data. It does not perform well for identification purposes such as scanning for a person's face in a moving crowd for example.
Identification methods could include: username, user ID, account number, PIN, certificate, token, smart card, biometric device or badge.
Auditing is a process of logging or tracking what was done after the identity and authentication process is completed.
Authorization is the rights the subject is given and is performed after the identity is established.
Reference OIG (2007) p148, 167
Authentication in biometrics is a "one-to-one" search to verify claim to an identity made by a person.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 38.
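The one-to-one versus one-to-many distinction above can be sketched with a toy matcher. All names here are illustrative, and the similarity check is deliberately trivial; real biometric matchers use far more sophisticated scoring.

```python
def matches(template_a, template_b, threshold=0.9):
    """Toy similarity check: fraction of features that agree."""
    same = sum(1 for a, b in zip(template_a, template_b) if a == b)
    return same / len(template_a) >= threshold

def authenticate(claimed_id, live_template, enrolled):
    """One-to-one (AUTHENTICATION): compare the live sample only against
    the template stored for the identity the person claims."""
    stored = enrolled.get(claimed_id)
    return stored is not None and matches(live_template, stored)

def identify(live_template, enrolled):
    """One-to-many (IDENTIFICATION): search every enrolled template for
    a match, with no identity claimed up front."""
    for user_id, stored in enrolled.items():
        if matches(live_template, stored):
            return user_id
    return None
```

The authentication path does one comparison; the identification path may need as many comparisons as there are enrolled users, which is why identification performs worse at scale.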
What can best be defined as the sum of protection mechanisms inside the computer, including hardware, firmware and software?
Trusted system
Security kernel
Trusted computing base
Security perimeter
The Trusted Computing Base (TCB) is defined as the total combination of protection mechanisms within a computer system. The TCB includes hardware, software, and firmware. These are part of the TCB because the system is sure that these components will enforce the security policy and not violate it.
The security kernel is made up of hardware, software, and firmware components that fall within the TCB and that implement and enforce the reference monitor concept.
If an operating system permits shared resources such as memory to be used sequentially by multiple users/application or subjects without a refresh of the objects/memory area, what security problem is MOST likely to exist?
Disclosure of residual data.
Unauthorized obtaining of a privileged execution state.
Data leakage through covert channels.
Denial of service through a deadly embrace.
Allowing objects to be used sequentially by multiple users without a refresh of the objects can lead to disclosure of residual data. It is important that steps be taken to eliminate the chance for the disclosure of residual data.
Object reuse refers to the allocation or reallocation of system resources to a user or, more appropriately, to an application or process. Applications and services on a computer system may create or use objects in memory and in storage to perform programmatic functions. In some cases, it is necessary to share these resources between various system applications. However, some objects may be employed by an application to perform privileged tasks on behalf of an authorized user or upstream application. If object usage is not controlled or the data in those objects is not erased after use, they may become available to unauthorized users or processes.
Disclosure of residual data and unauthorized obtaining of a privileged execution state are both problems with shared memory and resources. Not clearing the heap/stack can leave residual data behind and may also allow a user to step into somebody else's session if the security token/identity was maintained in that space, although that is generally more malicious and intentional than accidental. The MOST common issue would be disclosure of residual data.
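The residual-data problem can be shown with a toy model of a memory area handed sequentially to different subjects. The class and method names are illustrative only, not taken from any real allocator.

```python
class SharedBuffer:
    """Toy model of a memory area used sequentially by several subjects."""

    def __init__(self, size):
        self._mem = bytearray(size)

    def use(self, data):
        """A subject writes its (possibly sensitive) data into the area."""
        self._mem[:len(data)] = data

    def handoff(self, refresh):
        """Give the area to the next subject. Without a refresh, the next
        subject can read the previous subject's residual data."""
        if refresh:
            self._mem[:] = bytes(len(self._mem))  # scrub before reuse
        return bytes(self._mem)
```

The `refresh=True` path is the object reuse control the explanation calls for: the object's contents are cleared before the resource is reallocated.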
The following answers are incorrect:
Unauthorized obtaining of a privileged execution state. Is incorrect because this is not a problem with Object Reuse.
Data leakage through covert channels. Is incorrect because it is not the best answer. A covert channel is a communication path; data leakage would not be a problem created by object reuse. In computer security, a covert channel is a type of computer security attack that creates a capability to transfer information between processes that are not supposed to be allowed to communicate by the computer security policy. The term, coined by Lampson in 1973, is defined as "(channels) not intended for information transfer at all, such as the service program's effect on system load," to distinguish it from legitimate channels that are subjected to access controls by COMPUSEC.
Denial of service through a deadly embrace. Is incorrect because it is only a detractor.
References:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 4174-4179). Auerbach Publications. Kindle Edition.
and
https://www.fas.org/irp/nsa/rainbow/tg018.htm
and
http://en.wikipedia.org/wiki/Covert_channel
What is defined as the hardware, firmware and software elements of a trusted computing base that implement the reference monitor concept?
The reference monitor
Protection rings
A security kernel
A protection domain
A security kernel is defined as the hardware, firmware and software elements of a trusted computing base that implement the reference monitor concept. A reference monitor is a system component that enforces access controls on an object. A protection domain consists of the execution and memory space assigned to each process. The use of protection rings is a scheme that supports multiple protection domains.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security Architecture and Models (page 194).
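The reference monitor concept that the security kernel implements can be sketched as a single mediation point that every access attempt must pass through. The subjects, objects, and policy below are hypothetical; a real security kernel enforces this with hardware, firmware, and software rather than application code.

```python
audit_log = []  # every mediated access attempt is recorded

ACL = {  # the policy: (subject, object) -> permitted operations
    ("alice", "payroll.db"): {"read"},
    ("bob", "payroll.db"): {"read", "write"},
}

def reference_monitor(subject, obj, operation, acl=ACL):
    """Mediate an access attempt against the policy. This sketch shows
    the two key properties: complete mediation (every access goes
    through here) and an audit trail of each decision."""
    allowed = operation in acl.get((subject, obj), set())
    audit_log.append((subject, obj, operation, allowed))
    return allowed
```

Note that unknown subjects simply have no entry in the policy, so the default is deny.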
A security evaluation report and an accreditation statement are produced in which of the following phases of the system development life cycle?
project initiation and planning phase
system design specification phase
development & documentation phase
acceptance phase
The Answer: "acceptance phase". Note that the question asks about an "evaluation report", which details how the system was evaluated, and an "accreditation statement", which describes the level at which the system is allowed to operate. Because those two activities are part of testing, and testing is part of the acceptance phase, the only answer above that can be correct is "acceptance phase".
The other answers are not correct because:
The "project initiation and planning phase" is just the idea phase. Nothing has been developed yet to be evaluated, tested, accredited, etc.
The "system design specification phase" is essentially where the initiation and planning phase is fleshed out. For example, in the initiation and planning phase, we might decide we want the system to have authentication. In the design specification phase, we decide that that authentication will be accomplished via username/password. But there is still nothing actually developed at this point to evaluate or accredit.
The "development & documentation phase" is where the system is created and documented. Part of the documentation includes specific evaluation and accreditation criteria. That is the criteria that will be used to evaluate and accredit the system during the "acceptance phase".
In other words - you cannot evaluate or accredit a system that has not been created yet. Of the four answers listed, only the acceptance phase is dealing with an existing system. The others deal with planning and creating the system, but the actual system isn't there yet.
An effective information security policy should not have which of the following characteristics?
Include separation of duties
Be designed with a short- to mid-term focus
Be understandable and supported by all stakeholders
Specify areas of responsibility and authority
An effective information security policy should be designed with a long-term focus. All other characteristics apply.
Source: ALLEN, Julia H., The CERT Guide to System and Network Security Practices, Addison-Wesley, 2001, Appendix B, Practice-Level Policy Considerations (page 397).
When attempting to establish liability, which of the following would be described as performing the ongoing maintenance necessary to keep something in proper working order, updated, effective, or to abide by what is commonly expected in a situation?
Due care
Due concern
Due diligence
Due practice
My friend JD Murray at Techexams.net has a nice definition of both; see his explanation below:
Oh, I hate these two. It's like describing the difference between "jealousy" and "envy." Kinda the same thing, but not exactly. Here it goes:
Due diligence is performing reasonable examination and research before committing to a course of action. Basically, "look before you leap." In law, you would perform due diligence by researching the terms of a contract before signing it. The opposite of due diligence might be "haphazard" or "not doing your homework."
Due care is performing the ongoing maintenance necessary to keep something in proper working order, or to abide by what is commonly expected in a situation. This is especially important if the due care situation exists because of a contract, regulation, or law. The opposite of due care is "negligence."
In summary, Due Diligence is Identifying threats and risks while Due Care is Acting upon findings to mitigate risks
EXAM TIP:
Due Diligence refers to the steps taken to identify the risks that exist within the environment. This is based on best practices, standards such as ISO 27001 and ISO 17799, and other consensus. The first letters of the words Due and Diligence should remind you of this: DD = Do Detect.
In the case of Due Care, it is the actions that you have taken (implementing, designing, enforcing, updating) to reduce the risks identified and keep them at an acceptable level. The same applies here: the first letters of the words Due and Care are DC, which should remind you that DC = Do Correct.
The other answers are only detractors and not valid.
Reference(s) used for this question:
CISSP Study Guide, Syngress, By Eric Conrad, Page 419
HARRIS, Shon, All-In-One CISSP Certification Exam Guide Fifth Edition, McGraw-Hill, Page 49 and 110.
and
Corporate; (Isc)² (2010-04-20). Official (ISC)2 Guide to the CISSP CBK, Second Edition ((ISC)2 Press) (Kindle Locations 11494-11504). Taylor & Francis. Kindle Edition.
and
My friend JD Murray at Techexams.net
Which of the following phases of a software development life cycle normally addresses Due Care and Due Diligence?
Implementation
System feasibility
Product design
Software plans and requirements
The software plans and requirements phase addresses threats, vulnerabilities, security requirements, reasonable care, due diligence, legal liabilities, cost/benefit analysis, level of protection desired, test plans.
Implementation is incorrect because it deals with Installing security software, running the system, acceptance testing, security software testing, and complete documentation certification and accreditation (where necessary).
System Feasibility is incorrect because it deals with information security policy, standards, legal issues, and the early validation of concepts.
Product design is incorrect because it deals with incorporating security specifications, adjusting test plans and data,
determining access controls, design documentation, evaluating encryption options, and verification.
Sources:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 7: Applications and Systems Development (page 252).
KRUTZ, Ronald & VINES, Russel, The CISSP Prep Guide: Gold Edition, Wiley Publishing Inc., 2003, Chapter 7: Security Life Cycle Components, Figure 7.5 (page 346).
Which of the following is NOT a technical control?
Password and resource management
Identification and authentication methods
Monitoring for physical intrusion
Intrusion Detection Systems
It is considered to be a 'Physical Control'
There are three broad categories of access control: administrative, technical, and physical. Each category has different access control mechanisms that can be carried out manually or automatically. All of these access control mechanisms should work in concert with each other to protect an infrastructure and its data.
Each category of access control has several components that fall within it, a partial list is shown here. Not all controls fall into a single category, many of the controls will be in two or more categories. Below you have an example with backups where it is in all three categories:
Administrative Controls
Policy and procedures
- A backup policy would be in place
Personnel controls
Supervisory structure
Security-awareness training
Testing
Physical Controls
Network segregation
Perimeter security
Computer controls
Work area separation
Data backups (actual storage of the media, i.e., at an offsite storage facility)
Cabling
Technical Controls
System access
Network architecture
Network access
Encryption and protocols
Control zone
Auditing
Backup (Actual software doing the backups)
The following answers are incorrect :
Password and resource management is considered to be a logical or technical control.
Identification and authentication methods is considered to be a logical or technical control.
Intrusion Detection Systems is considered to be a logical or technical control.
Reference: Shon Harris, AIO v3, Chapter 4: Access Control, Pages 180-185
Which software development model is actually a meta-model that incorporates a number of the software development models?
The Waterfall model
The modified Waterfall model
The Spiral model
The Critical Path Model (CPM)
The spiral model is actually a meta-model that incorporates a number of the software development models. This model depicts a spiral that incorporates the various phases of software development. The model states that each cycle of the spiral involves the same series of steps for each part of the project. CPM refers to the Critical Path Methodology.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 7: Applications and Systems Development (page 246).
What can best be defined as high-level statements, beliefs, goals and objectives?
Standards
Policies
Guidelines
Procedures
Policies are high-level statements, beliefs, goals and objectives and the general means for their attainment for a specific subject area. Standards are mandatory activities, action, rules or regulations designed to provide policies with the support structure and specific direction they require to be effective. Guidelines are more general statements of how to achieve the policies objectives by providing a framework within which to implement procedures. Procedures spell out the specific steps of how the policy and supporting standards and how guidelines will be implemented.
Source: HARE, Chris, Security management Practices CISSP Open Study Guide, version 1.0, april 1999.
Which of the following is responsible for ensuring that proper controls are in place to address integrity, confidentiality, and availability of IT systems and data?
Business and functional managers
IT Security practitioners
System and information owners
Chief information officer
The system and information owners are responsible for ensuring that proper controls are in place to address integrity, confidentiality, and availability of the IT systems and data they own. IT security practitioners are responsible for proper implementation of security requirements in their IT systems.
Source: STONEBURNER, Gary et al., NIST Special publication 800-30, Risk management Guide for Information Technology Systems, 2001 (page 6).
Which of the following should NOT be performed by an operator?
Implementing the initial program load
Monitoring execution of the system
Data entry
Controlling job flow
Under the principle of separation of duties, an operator should not be performing data entry. This should be left to data entry personnel.
System operators represent a class of users typically found in data center environments where mainframe systems are used. They provide day-to-day operations of the mainframe environment, ensuring that scheduled jobs are running effectively and troubleshooting problems that may arise. They also act as the arms and legs of the mainframe environment, loading and unloading tapes and the output of job print runs. Operators have elevated privileges, but less than those of system administrators. If misused, these privileges may be used to circumvent the system's security policy. As such, use of these privileges should be monitored through audit logs.
Some of the privileges and responsibilities assigned to operators include:
Implementing the initial program load: This is used to start the operating system. The boot process or initial program load of a system is a critical time for ensuring system security. Interruptions to this process may reduce the integrity of the system or cause the system to crash, precluding its availability.
Monitoring execution of the system: Operators respond to various events, to include errors, interruptions, and job completion messages.
Volume mounting: This allows the desired application access to the system and its data.
Controlling job flow: Operators can initiate, pause, or terminate programs. This may allow an operator to affect the scheduling of jobs. Controlling job flow involves the manipulation of configuration information needed by the system. Operators with the ability to control a job or application can cause output to be altered or diverted, which can threaten confidentiality.
Bypass label processing: This allows the operator to bypass security label information to run foreign tapes (foreign tapes are those from a different data center that would not be using the same label format that the system could run). This privilege should be strictly controlled to prevent unauthorized access.
Renaming and relabeling resources: This is sometimes necessary in the mainframe environment to allow programs to properly execute. Use of this privilege should be monitored, as it can allow the unauthorized viewing of sensitive information.
Reassignment of ports and lines: Operators are allowed to reassign ports or lines. If misused, reassignment can cause program errors, such as sending sensitive output to an unsecured location. Furthermore, an incidental port may be opened, subjecting the system to an attack through the creation of a new entry point into the system.
Reference(s) used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 19367-19395). Auerbach Publications. Kindle Edition.
Which of the following should be performed by an operator?
A. Changing profiles
B. Approving changes
C. Adding and removal of users
D. Installing system software
Answer: D
Of the listed tasks, installing system software is the only task that should normally be performed by an operator in a properly segregated environment.
Source: MOSHER, Richard & ROTHKE, Ben, CISSP CBK Review presentation on domain 7.
What is RAD?
A development methodology
A project management technique
A measure of system complexity
Risk-assessment diagramming
RAD stands for Rapid Application Development.
RAD is a methodology that enables organizations to develop strategically important systems faster while reducing development costs and maintaining quality.
RAD is a programming system that enables programmers to quickly build working programs.
In general, RAD systems provide a number of tools to help build graphical user interfaces that would normally take a large development effort.
Two of the most popular RAD systems for Windows are Visual Basic and Delphi. Historically, RAD systems have tended to emphasize reducing development time, sometimes at the expense of generating inefficient executable code. Nowadays, though, many RAD systems produce highly optimized executable code.
Conversely, many traditional programming environments now come with a number of visual tools to aid development. Therefore, the line between RAD systems and other development environments has become blurred.
Which of the following is not a form of passive attack?
Scavenging
Data diddling
Shoulder surfing
Sniffing
Data diddling involves alteration of existing data and is extremely common. It is one of the easiest types of crimes to prevent by using access and accounting controls, supervision, auditing, separation of duties, and authorization limits. It is a form of active attack. All other choices are examples of passive attacks, only affecting confidentiality.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, Chapter 10: Law, Investigation, and Ethics (page 645).
Which of the following security modes of operation involves the highest risk?
Compartmented Security Mode
Multilevel Security Mode
System-High Security Mode
Dedicated Security Mode
In multilevel mode, two or more classification levels of data exist, some people are not cleared for all the data on the system.
Risk is higher because sensitive data could be made available to someone not validated as being capable of maintaining secrecy of that data (i.e., not cleared for it).
In other security modes, all users have the necessary clearance for all data on the system.
Source: LaROSA, Jeanette (domain leader), Application and System Development Security CISSP Open Study Guide, version 3.0, January 2002.
Making sure that only those who are supposed to access the data can access it is which of the following?
confidentiality.
capability.
integrity.
availability.
From the published (ISC)2 goals for the Certified Information Systems Security Professional candidate, domain definition. Confidentiality is making sure that only those who are supposed to access the data can access it.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
What is the difference between Advisory and Regulatory security policies?
there is no difference between them
regulatory policies are high level policy, while advisory policies are very detailed
Advisory policies are not mandated. Regulatory policies must be implemented.
Advisory policies are mandated while Regulatory policies are not
Advisory policies are security polices that are not mandated to be followed but are strongly suggested, perhaps with serious consequences defined for failure to follow them (such as termination, a job action warning, and so forth). A company with such policies wants most employees to consider these policies mandatory.
Most policies fall under this broad category.
Advisory policies can have many exclusions or application levels. Thus, these policies can control some employees more than others, according to their roles and responsibilities within that organization. For example, a policy that
requires a certain procedure for transaction processing might allow for an alternative procedure under certain, specified conditions.
Regulatory
Regulatory policies are security policies that an organization must implement due to compliance, regulation, or other legal requirements. These companies might be financial institutions, public utilities, or some other type of organization that operates in the public interest. These policies are usually very detailed and are specific to the industry in which the organization operates.
Regulatory policies commonly have two main purposes:
1. To ensure that an organization is following the standard procedures or base practices of operation in its specific industry
2. To give an organization the confidence that it is following the standard and accepted industry policy
Informative
Informative policies are policies that exist simply to inform the reader. There are no implied or specified requirements, and the audience for this information could be certain internal (within the organization) or external parties. This does not mean that the policies are authorized for public consumption but that they are general enough to be distributed to external parties (vendors accessing an extranet, for example) without a loss of confidentiality.
References:
KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Page 12, Chapter 1: Security Management Practices.
also see:
The CISSP Prep Guide:Mastering the Ten Domains of Computer Security by Ronald L. Krutz, Russell Dean Vines, Edward M. Stroz
also see:
http://i-data-recovery.com/information-security/information-security-policies-standards-guidelines-and-procedures
An Architecture where there are more than two execution domains or privilege levels is called:
Ring Architecture.
Ring Layering
Network Environment.
Security Models
In computer science, hierarchical protection domains, often called protection rings, are a mechanism to protect data and functionality from faults (fault tolerance) and malicious behavior (computer security). This approach is diametrically opposite to that of capability-based security.
Computer operating systems provide different levels of access to resources. A protection ring is one of two or more hierarchical levels or layers of privilege within the architecture of a computer system. This is generally hardware-enforced by some CPU architectures that provide different CPU modes at the hardware or microcode level. Rings are arranged in a hierarchy from most privileged (most trusted, usually numbered zero) to least privileged (least trusted, usually with the highest ring number). On most operating systems, Ring 0 is the level with the most privileges and interacts most directly with the physical hardware such as the CPU and memory.
Special gates between rings are provided to allow an outer ring to access an inner ring's resources in a predefined manner, as opposed to allowing arbitrary usage. Correctly gating access between rings can improve security by preventing programs from one ring or privilege level from misusing resources intended for programs in another. For example, spyware running as a user program in Ring 3 should be prevented from turning on a web camera without informing the user, since hardware access should be a Ring 1 function reserved for device drivers. Programs such as web browsers running in higher numbered rings must request access to the network, a resource restricted to a lower numbered ring.
Ring Architecture
All of the other answers are incorrect because they are detractors.
References:
OIG CBK Security Architecture and Models (page 311)
and
https://en.wikipedia.org/wiki/Ring_%28computer_security%29
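The hierarchy and gating described above can be sketched as follows. This is a toy illustration of the privilege ordering only; real rings are enforced by the CPU in hardware, and the service name and ring assignments here are assumptions made for the example.

```python
def can_access(caller_ring, resource_ring):
    """Rings are hierarchical: a subject may directly use resources at
    its own privilege level or at any less privileged (higher-numbered)
    level, never at a more privileged (lower-numbered) one."""
    return caller_ring <= resource_ring

GATES = {"open_socket": 2}  # hypothetical predefined entry points inward

def call_inward(caller_ring, service):
    """An outer ring reaches an inner ring's resources only through a
    predefined gate, as opposed to arbitrary access."""
    target = GATES.get(service)
    if target is None:
        raise PermissionError("no gate defined for this service")
    return min(caller_ring, target)  # execution continues at the gated level
```

A browser in a high-numbered ring cannot touch the network hardware directly; it must invoke the gate, which transfers execution to the inner ring's code in a controlled way.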
Which of the following is not a method to protect objects and the data within the objects?
Layering
Data mining
Abstraction
Data hiding
Data mining is used to reveal hidden relationships, patterns and trends by running queries on large data stores.
Data mining is the act of collecting and analyzing large quantities of information to determine patterns of use or behavior and use those patterns to form conclusions about past, current, or future behavior. Data mining is typically used by large organizations with large databases of customer or consumer behavior. Retail and credit companies will use data mining to identify buying patterns or trends in geographies, age groups, products, or services. Data mining is essentially the statistical analysis of general information in the absence of specific data.
The following are incorrect answers:
They are incorrect as they all apply to protecting objects and the data within them. Layering, abstraction, and data hiding are related concepts that can work together to produce modular software that implements an organization's security policies and is more reliable in operation.
Layering is incorrect. Layering assigns specific functions to each layer and communication between layers is only possible through well-defined interfaces. This helps preclude tampering in violation of security policy. In computer programming, layering is the organization of programming into separate functional components that interact in some sequential and hierarchical way, with each layer usually having an interface only to the layer above it and the layer below it.
Abstraction is incorrect. Abstraction "hides" the particulars of how an object functions or stores information and requires the object to be manipulated through well-defined interfaces that can be designed to enforce security policy. Abstraction involves the removal of characteristics from an entity in order to easily represent its essential properties.
Data hiding is incorrect. Data hiding conceals the details of information storage and manipulation within an object by exposing only well-defined interfaces to the information rather than the information itself. For example, the details of how passwords are stored could be hidden inside a password object with exposed interfaces such as check_password, set_password, etc. When a password needs to be verified, the test password is passed to the check_password method and a boolean (true/false) result is returned to indicate whether the password is correct, without revealing any details of how or where the real passwords are stored. Data hiding maintains activities at different security levels to separate these levels from each other.
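The check_password/set_password example described above can be sketched like this. The salted PBKDF2 storage scheme is an assumption for the sketch; the point being illustrated is that callers only ever see the two interfaces, never the stored salt or hash.

```python
import hashlib
import os

class PasswordStore:
    """Data hiding: the storage details (salt, hash) are internal; only
    the well-defined interfaces set_password/check_password are exposed."""

    def __init__(self):
        self._salt = os.urandom(16)  # internal detail, never exposed
        self._hash = None

    def set_password(self, password):
        self._hash = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self._salt, 100_000)

    def check_password(self, candidate):
        """Return True/False only; how and where passwords are stored
        stays hidden behind this interface."""
        if self._hash is None:
            return False
        return hashlib.pbkdf2_hmac(
            "sha256", candidate.encode(), self._salt, 100_000) == self._hash
```

If the storage scheme later changes (say, to a different hash), callers of check_password are unaffected, which is exactly the maintainability benefit data hiding provides.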
The following reference(s) were used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 27535-27540). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 4269-4273). Auerbach Publications. Kindle Edition.
Who should DECIDE how a company should approach security and what security measures should be implemented?
Senior management
Data owner
Auditor
The information security specialist
Senior management is responsible for the security of the organization and the protection of its assets.
The following answers are incorrect because :
Data owner is incorrect as data owners should not decide as to what security measures should be applied.
Auditor is also incorrect as auditor cannot decide as to what security measures should be applied.
The information security specialist is also incorrect: they may have the technical knowledge of how security measures should be implemented and configured, but they should not be in a position of deciding what measures should be applied.
Reference: Shon Harris, AIO v3, Chapter 3: Security Management Practices, Page 51.
It is a violation of the "separation of duties" principle when which of the following individuals access the software on systems implementing security?
security administrator
security analyst
systems auditor
systems programmer
Reason: The security administrator, security analyst, and the systems auditor need access to portions of the security systems to accomplish their jobs. The systems programmer does not need access to the working (AKA production) security systems.
Programmers should not be allowed to have ongoing direct access to computers running production systems (systems used by the organization to operate its business). To maintain system integrity, any changes they make to production systems should be tracked by the organization’s change management control system.
Because the security administrator’s job is to perform security functions, the performance of non-security tasks must be strictly limited. This separation of duties reduces the likelihood of loss that results from users abusing their authority by taking actions outside of their assigned functional responsibilities.
References:
OFFICIAL (ISC)2® GUIDE TO THE CISSP® EXAM (2003), Hansche, S., Berti, J., Hare, H., Auerbach Publication, FL, Chapter 5 - Operations Security, section 5.3,”Security Technology and Tools,” Personnel section (page 32).
KRUTZ, R. & VINES, R. The CISSP Prep Guide: Gold Edition (2003), Wiley Publishing Inc., Chapter 6: Operations Security, Separations of Duties (page 303).
Which of the following best defines add-on security?
Physical security complementing logical security measures.
Protection mechanisms implemented as an integral part of an information system.
Layer security.
Protection mechanisms implemented after an information system has become operational.
The Internet Security Glossary (RFC2828) defines add-on security as "The retrofitting of protection mechanisms, implemented by hardware or software, after the [automatic data processing] system has become operational."
Source: SHIREY, Robert W., RFC2828: Internet Security Glossary, May 2000.
Which of the following would be the best reason for separating the test and development environments?
To restrict access to systems under test.
To control the stability of the test environment.
To segregate user and development staff.
To secure access to systems under development.
The test environment must be controlled and stable in order to ensure that development projects are tested in a realistic environment which, as far as possible, mirrors the live environment.
Reference(s) used for this question:
Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 309).
What is the main purpose of Corporate Security Policy?
To transfer the responsibility for the information security to all users of the organization
To communicate management's intentions in regards to information security
To provide detailed steps for performing specific actions
To provide a common framework for all development activities
A Corporate Security Policy is a high-level document that indicates what management's intentions are in regard to Information Security within the organization. It is high level in purpose; it does not give you details about specific products that would be used, specific steps, etc.
The organization’s requirements for access control should be defined and documented in its security policies. Access rules and rights for each user or group of users should be clearly stated in an access policy statement. The access control policy should minimally consider:
Statements of general security principles and their applicability to the organization
Security requirements of individual enterprise applications, systems, and services
Consistency between the access control and information classification policies of different systems and networks
Contractual obligations or regulatory compliance regarding protection of assets
Standards defining user access profiles for organizational roles
Details regarding the management of the access control system
As a Certified Information System Security Professional (CISSP) you would be involved directly in the drafting and coordination of security policies, standards and supporting guidelines, procedures, and baselines.
Guidance provided by the CISSP for technical security issues, and emerging threats are considered for the adoption of new policies. Activities such as interpretation of government regulations and industry trends and analysis of vendor solutions to include in the security architecture that advances the security of the organization are performed by the CISSP as well.
The following are incorrect answers:
To transfer the responsibility for the information security to all users of the organization is bogus. You CANNOT transfer responsibility, you can only transfer authority. Responsibility will always sit with upper management. The keywords ALL and USERS are also an indication that it is the wrong choice.
To provide detailed steps for performing specific actions is also a bogus distractor. A step-by-step document is referred to as a procedure. It details how to accomplish a specific task.
To provide a common framework for all development activities is also an invalid choice. Security Policies are not restricted only to development activities.
Reference Used for this question:
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 1551-1565). Auerbach Publications. Kindle Edition.
and
Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 9109-9112). Auerbach Publications. Kindle Edition.
Memory management in TCSEC levels B3 and A1 operating systems may utilize "data hiding". What does this mean?
System functions are layered, and none of the functions in a given layer can access data outside that layer.
Auditing processes and their memory addresses cannot be accessed by user processes.
Only security processes are allowed to write to ring zero memory.
It is a form of strong encryption cipher.
Data Hiding is protecting data so that it is only available to higher levels this is done and is also performed by layering, when the software in each layer maintains its own global data and does not directly reference data outside its layers.
The following answers are incorrect:
Auditing processes and their memory addresses cannot be accessed by user processes is incorrect because this does not offer data hiding.
Only security processes are allowed to write to ring zero memory is incorrect; the security kernel would be responsible for this.
It is a form of strong encryption cipher is incorrect because this does not conform to the definition of data hiding.
During which phase of an IT system life cycle are security requirements developed?
Operation
Initiation
Functional design analysis and Planning
Implementation
The software development life cycle (SDLC) (sometimes referred to as the System Development Life Cycle) is the process of creating or altering software systems, and the models and methodologies that people use to develop these systems.
The NIST SP 800-64 revision 2 has within the description section of para 3.2.1:
This section addresses security considerations unique to the second SDLC phase. Key security activities for this phase include:
• Conduct the risk assessment and use the results to supplement the baseline security controls;
• Analyze security requirements;
• Perform functional and security testing;
• Prepare initial documents for system certification and accreditation; and
• Design security architecture.
Reviewing this publication, you may be tempted to pick development/acquisition. Although initiation might seem a decent choice, during that phase you would only brainstorm the idea of security requirements. Once you start to develop and acquire hardware/software components, you would also develop the security controls for them. The Shon Harris reference below is correct as well.
Shon Harris' Book (All-in-One CISSP Certification Exam Guide) divides the SDLC differently:
Project initiation
Functional design analysis and planning
System design specifications
Software development
Installation
Maintenance support
Revision and replacement
According to the author (Shon Harris), security requirements should be developed during the functional design analysis and planning phase.
SDLC POSITIONING FROM NIST 800-64
SDLC Positioning in the enterprise
Information system security processes and activities provide valuable input into managing IT systems and their development, enabling risk identification, planning and mitigation. A risk management approach involves continually balancing the protection of agency information and assets with the cost of security controls and mitigation strategies throughout the complete information system development life cycle (see Figure 2-1 above). The most effective way to implement risk management is to identify critical assets and operations, as well as systemic vulnerabilities across the agency. Risks are shared and not bound by organization, revenue source, or topologies. Identification and verification of critical assets and operations and their interconnections can be achieved through the system security planning process, as well as through the compilation of information from the Capital Planning and Investment Control (CPIC) and Enterprise Architecture (EA) processes to establish insight into the agency’s vital business operations, their supporting assets, and existing interdependencies and relationships.
With critical assets and operations identified, the organization can and should perform a business impact analysis (BIA). The purpose of the BIA is to relate systems and assets with the critical services they provide and assess the consequences of their disruption. By identifying these systems, an agency can manage security effectively by establishing priorities. This positions the security office to facilitate the IT program’s cost-effective performance as well as articulate its business impact and value to the agency.
SDLC OVERVIEW FROM NIST 800-64
SDLC Overview from NIST 800-64 Revision 2
NIST 800-64 Revision 2 is one publication within the NIST standards that I would recommend you look at for more details about the SDLC. It describes in great detail what activities would take place, and it has a nice diagram for each of the phases of the SDLC. You will find a copy at:
http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf
DISCUSSION:
Different sources present slightly different info as far as the phases names are concerned.
People sometimes get confused by some of the NIST standards. For example, NIST 800-64 Security Considerations in the Information System Development Life Cycle has slightly different names, but the activities mostly remain the same.
NIST clearly specifies that security requirements should be considered throughout ALL of the phases. The keyword here is considered; if a question asks in which phase they would be developed, then Functional Design Analysis would be the correct choice.
Within the NIST standard they use different phase names; however, under the second phase you will see that they talk specifically about security functional requirements analysis, which confirms it is not at the initiation stage, so it becomes easier to come up with the answer to this question. Here is what is stated:
The security functional requirements analysis considers the system security environment, including the enterprise information security policy and the enterprise security architecture. The analysis should address all requirements for confidentiality, integrity, and availability of information, and should include a review of all legal, functional, and other security requirements contained in applicable laws, regulations, and guidance.
At the initiation step you would NOT have enough detail yet to produce the security requirements. You are mostly brainstorming on all of the issues listed, but you do not develop them all at that stage.
By considering security early in the information system development life cycle (SDLC), you may be able to avoid higher costs later on and develop a more secure system from the start.
NIST says:
NIST's Information Technology Laboratory recently issued Special Publication (SP) 800-64, Security Considerations in the Information System Development Life Cycle, by Tim Grance, Joan Hash, and Marc Stevens, to help organizations include security requirements in their planning for every phase of the system life cycle, and to select, acquire, and use appropriate and cost-effective security controls.
I must admit this is all very tricky but reading skills and paying attention to KEY WORDS is a must for this exam.
References:
HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, Fifth Edition, Page 956
and
NIST SP 800-64 Revision 2 at http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf
and
http://www.mks.com/resources/resource-pages/software-development-life-cycle-sdlc-system-development
Related to information security, confidentiality is the opposite of which of the following?
closure
disclosure
disposal
disaster
Confidentiality is the opposite of disclosure.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
Which of the following is a CHARACTERISTIC of a decision support system (DSS) in regards to Threats and Risks Analysis?
DSS is aimed at solving highly structured problems.
DSS emphasizes flexibility in the decision making approach of users.
DSS supports only structured decision-making tasks.
DSS combines the use of models with non-traditional data access and retrieval functions.
DSS emphasizes flexibility in the decision-making approach of users. It is aimed at solving less structured problems, combines the use of models and analytic techniques with traditional data access and retrieval functions and supports semi-structured decision-making tasks.
DSS is sometimes referred to as the Delphi Method or Delphi Technique:
The Delphi technique is a group decision method used to ensure that each member gives an honest opinion of what he or she thinks the result of a particular threat will be. This avoids a group of individuals feeling pressured to go along with others’ thought processes and enables them to participate in an independent and anonymous way. Each member of the group provides his or her opinion of a certain threat and turns it in to the team that is performing the analysis. The results are compiled and distributed to the group members, who then write down their comments anonymously and return them to the analysis group. The comments are compiled and redistributed for more comments until a consensus is formed. This method is used to obtain an agreement on cost, loss values, and probabilities of occurrence without individuals having to agree verbally.
Here is the ISC2 book coverage of the subject:
One of the methods that uses consensus relative to valuation of information is the consensus/modified Delphi method. Participants in the valuation exercise are asked to comment anonymously on the task being discussed. This information is collected and disseminated to a participant other than the original author. This participant comments upon the observations of the original author. The information gathered is discussed in a public forum and the best course is agreed upon by the group (consensus).
EXAM TIP:
The DSS is what some of the books are referring to as the Delphi Method or Delphi Technique. Be familiar with both terms for the purpose of the exam.
The other answers are incorrect:
'DSS is aimed at solving highly structured problems' is incorrect because it is aimed at solving less structured problems.
'DSS supports only structured decision-making tasks' is also incorrect as it supports semi-structured decision-making tasks.
'DSS combines the use of models with non-traditional data access and retrieval functions' is also incorrect as it combines the use of models and analytic techniques with traditional data access and retrieval functions.
Reference(s) used for this question:
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (p. 91). McGraw-Hill. Kindle Edition.
and
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Information Security Governance and Risk Management ((ISC)2 Press) (Kindle Locations 1424-1426). Auerbach Publications. Kindle Edition.
Which of the following statements pertaining to software testing is incorrect?
Unit testing should be addressed and considered when the modules are being designed.
Test data should be part of the specifications.
Testing should be performed with live data to cover all possible situations.
Test data generators can be used to systematically generate random test data that can be used to test programs.
Live or actual field data is not recommended for use in testing procedures because such data may not cover out-of-range situations and the correct outputs of the test are unknown. Live data would not be the best data to use because of the lack of anomalies and also because of the risk of exposing your live data.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 7: Applications and Systems Development (page 251).
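A test data generator of the kind the correct answer describes can be sketched in Python. The field names, value ranges, and boundary cases below are hypothetical, chosen only to illustrate why generated data covers out-of-range situations that live data rarely does.

```python
import random

def generate_test_records(count, seed=42):
    """Systematically generate random but reproducible test records.

    Deliberately seeds the output with boundary and out-of-range
    values that live field data would rarely contain.
    """
    rng = random.Random(seed)  # fixed seed => reproducible test runs
    boundary_ages = [-1, 0, 1, 120, 121]  # edges of an assumed 0-120 range
    records = []
    for i in range(count):
        if i < len(boundary_ages):
            age = boundary_ages[i]        # force the edge cases first
        else:
            age = rng.randint(-10, 200)   # include out-of-range values
        records.append({"id": i, "age": age})
    return records
```

Because the generator is seeded, every test run sees the same data, so expected outputs can be computed in advance, unlike live data, where the correct outputs are unknown.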
Who can best decide what are the adequate technical security controls in a computer-based application system in regards to the protection of the data being used, the criticality of the data, and its sensitivity level?
System Auditor
Data or Information Owner
System Manager
Data or Information user
The data or information owner also referred to as "Data Owner" would be the best person. That is the individual or officer who is ultimately responsible for the protection of the information and can therefore decide what are the adequate security controls according to the data sensitivity and data criticality. The auditor would be the best person to determine the adequacy of controls and whether or not they are working as expected by the owner.
The function of the auditor is to come around periodically and make sure you are doing what you are supposed to be doing. They ensure the correct controls are in place and are being maintained securely. The goal of the auditor is to make sure the organization complies with its own policies and the applicable laws and regulations.
Organizations can have internal auditors and/or external auditors. The external auditors commonly work on behalf of a regulatory body to make sure compliance is being met. For example, COBIT is a model that most information security auditors follow when evaluating a security program. While many security professionals fear and dread auditors, they can be valuable tools in ensuring the overall security of the organization. Their goal is to find the things you have missed and help you understand how to fix the problem.
The Official ISC2 Guide (OIG) says:
IT auditors determine whether users, owners, custodians, systems, and networks are in compliance with the security policies, procedures, standards, baselines, designs, architectures, management direction, and other requirements placed on systems. The auditors provide independent assurance to the management on the appropriateness of the security controls. The auditor examines the information systems and determines whether they are designed, configured, implemented, operated, and managed in a way ensuring that the organizational objectives are being achieved. The auditors provide top company management with an independent view of the controls and their effectiveness.
Example:
Bob is the head of payroll. He is therefore the individual with primary responsibility over the payroll database, and is therefore the information/data owner of the payroll database. In Bob's department, he has Sally and Richard working for him. Sally is responsible for making changes to the payroll database, for example if someone is hired or gets a raise. Richard is only responsible for printing paychecks. Given those roles, Sally requires both read and write access to the payroll database, but Richard requires only read access to it. Bob communicates these requirements to the system administrators (the "information/data custodians") and they set the file permissions for Sally's and Richard's user accounts so that Sally has read/write access, while Richard has only read access.
So in short, Bob will determine what controls are required and what the sensitivity and criticality of the data are. Bob will communicate this to the custodians, who will implement the requirements on the systems/DB. The auditor will assess whether the controls are in fact providing the level of security the data owner expects within the systems/DB. The auditor does not determine the sensitivity of the data or the criticality of the data.
The other answers are not correct because:
A "system auditor" is never responsible for anything but auditing... not actually making control decisions but the auditor would be the best person to determine the adequacy of controls and then make recommendations.
A "system manager" is really just another name for a system administrator, which is actually an information custodian as explained above.
A "Data or information user" is responsible for implementing security controls on a day-to-day basis as they utilize the information, but not for determining what the controls should be or if they are adequate.
References:
Official ISC2 Guide to the CISSP CBK, Third Edition , Page 477
Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition : Information Security Governance and Risk Management ((ISC)2 Press) (Kindle Locations 294-298). Auerbach Publications. Kindle Edition.
Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 3108-3114).
Information Security Glossary
Responsibility for use of information resources
Risk analysis is MOST useful when applied during which phase of the system development process?
Project initiation and Planning
Functional Requirements definition
System Design Specification
Development and Implementation
In most projects the conditions for failure are established at the beginning of the project. Thus risk management should be established at the commencement of the project with a risk assessment during project initiation.
As it is clearly stated in the ISC2 book: Security should be included at the first phase of development and throughout all of the phases of the system development life cycle. This is a key concept to understand for the purpose for the exam.
The most useful time is to undertake it at project initiation, although it is often valuable to update the current risk analysis at later stages.
Attempting to retrofit security after the SDLC is completed would cost a lot more money and might be impossible in some cases. Look at the family of browsers we use today: for years, each release has been claimed to be the most secure version yet, and within days vulnerabilities are found.
Risks should be monitored throughout the SDLC of the project and reassessed when appropriate.
The phases of the SDLC can vary from one source to another. It could be as simple as Concept, Design, and Implementation. It could also be expanded to include more phases, such as this list proposed within the ISC2 Official Study book:
Project Initiation and Planning
Functional Requirements Definition
System Design Specification
Development and Implementation
Documentations and Common Program Controls
Testing and Evaluation Control, certification and accreditation (C&A)
Transition to production (Implementation)
And there are two phases that will extend beyond the SDLC, they are:
Operation and Maintenance Support (O&M)
Revisions and System Replacement (Disposal)
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 291).
and
The Official ISC2 Guide to the CISSP CBK , Second Edition, Page 182-185
Why does compiled code pose more of a security risk than interpreted code?
Because malicious code can be embedded in compiled code and be difficult to detect.
If the executed compiled code fails, there is a chance it will fail insecurely.
Because compilers are not reliable.
There is no risk difference between interpreted code and compiled code.
From a security standpoint, a compiled program is less desirable than an interpreted one because malicious code can be resident somewhere in the compiled code, and it is difficult to detect in a very large program.
Which of the following would be the best criterion to consider in determining the classification of an information asset?
Value
Age
Useful life
Personal association
Information classification should be based on the value of the information to the organization and its sensitivity (reflection of how much damage would accrue due to disclosure).
Age is incorrect. While age might be a consideration in some cases, the guiding principles should be value and sensitivity.
Useful life. While useful lifetime is relevant to how long data protections should be applied, the classification is based on information value and sensitivity.
Personal association is incorrect. Information classification decisions should be based on the value of the information and its sensitivity.
References
CBK, pp. 101 - 102.
Which of the following would best describe the difference between white-box testing and black-box testing?
White-box testing is performed by an independent programmer team.
Black-box testing uses the bottom-up approach.
White-box testing examines the program internal logical structure.
Black-box testing involves the business units.
Black-box testing observes the system external behavior, while white-box testing is a detailed exam of a logical path, checking the possible conditions.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 299).
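The distinction can be made concrete with a small Python sketch. The function under test and its business rule are invented for illustration; only the contrast between the two testing styles comes from the explanation above.

```python
def shipping_fee(weight_kg):
    """Toy function under test (hypothetical business rule)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:                        # internal branch 1: light parcel
        return 5.0
    return 5.0 + (weight_kg - 2) * 1.5        # internal branch 2: heavy parcel

def black_box_tests():
    # Black-box: checks only external behavior (inputs vs outputs),
    # with no knowledge of the function's internal structure.
    assert shipping_fee(1) == 5.0
    assert shipping_fee(4) == 8.0

def white_box_tests():
    # White-box: cases chosen by examining the internal logical
    # structure, so every branch and error path is exercised.
    assert shipping_fee(2) == 5.0             # boundary of branch 1
    assert shipping_fee(2.1) > 5.0            # enters branch 2
    try:
        shipping_fee(0)                       # error path
        assert False, "expected ValueError"
    except ValueError:
        pass
```

The black-box tests could be written from the specification alone; the white-box tests require reading the code to find the `<= 2` boundary and the error branch.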
Which of the following is an advantage in using a bottom-up versus a top-down approach to software testing?
Interface errors are detected earlier.
Errors in critical modules are detected earlier.
Confidence in the system is achieved earlier.
Major functions and processing are tested earlier.
The bottom-up approach to software testing begins with the testing of atomic units, such as programs and modules, and works upwards until complete system testing has taken place. The advantages of using a bottom-up approach to software testing are the fact that there is no need for stubs or drivers and that errors in critical modules are found earlier. The other choices refer to advantages of a top-down approach, which follows the opposite path.
Source: Information Systems Audit and Control Association, Certified Information Systems Auditor 2002 review manual, chapter 6: Business Application System Development, Acquisition, Implementation and Maintenance (page 299).
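A bottom-up test order can be sketched in Python. The routines and their tax/invoice logic are hypothetical; the point is only the ordering: atomic units are verified first, then the module that composes them, with no stubs or drivers needed.

```python
def parse_amount(text):
    """Atomic unit 1: lowest-level routine, tested first."""
    return round(float(text.strip()), 2)

def apply_tax(amount, rate=0.1):
    """Atomic unit 2: another low-level routine (rate is an assumed default)."""
    return round(amount * (1 + rate), 2)

def invoice_total(raw_amounts):
    """Higher-level module composed from the already-tested units."""
    return round(sum(apply_tax(parse_amount(a)) for a in raw_amounts), 2)

def run_bottom_up_tests():
    # Step 1: test the atomic units in isolation -- errors in these
    # critical low-level modules surface here, before integration.
    assert parse_amount(" 10.00 ") == 10.0
    assert apply_tax(10.0) == 11.0
    # Step 2: only then test the composed, higher-level module.
    assert invoice_total(["10.00", "20.00"]) == 33.0
```

In a top-down order, testing `invoice_total` first would require stub versions of `parse_amount` and `apply_tax`; bottom-up avoids that entirely.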
What is called a system that is capable of detecting that a fault has occurred and has the ability to correct the fault or operate around it?
A fail safe system
A fail soft system
A fault-tolerant system
A failover system
A fault-tolerant system is capable of detecting that a fault has occurred and has the ability to correct the fault or operate around it. In a fail-safe system, program execution is terminated, and the system is protected from being compromised when a hardware or software failure occurs and is detected. In a fail-soft system, when a hardware or software failure occurs and is detected, selected, non-critical processing is terminated. The term failover refers to switching to a duplicate "hot" backup component in real-time when a hardware or software failure occurs, enabling processing to continue.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, John Wiley & Sons, 2001, Chapter 5: Security Architecture and Models (page 196).
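The fault-tolerance pattern described above, detecting a fault and operating around it, can be sketched in Python. The class and the failure scenario are illustrative assumptions, not from the source.

```python
class FaultTolerantReader:
    """Sketch of a fault-tolerant component.

    Tries a primary source; on failure it detects the fault and
    fails over to a replica, so processing continues around the fault.
    """

    def __init__(self, primary, replica):
        self.primary = primary
        self.replica = replica
        self.faults_detected = 0

    def read(self, key):
        try:
            return self.primary(key)       # normal path
        except Exception:
            self.faults_detected += 1      # fault detected ...
            return self.replica(key)       # ... operate around it

# Hypothetical components: a primary that fails, a healthy replica.
def failing_primary(key):
    raise IOError("disk fault")

def healthy_replica(key):
    return f"value-for-{key}"
```

Contrast this with fail-safe (terminate execution on fault) and fail-soft (terminate only selected non-critical processing): here the read still succeeds, which is the defining property of fault tolerance.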
Which of the following describes a computer processing architecture in which a language compiler or pre-processor breaks program instructions down into basic operations that can be performed by the processor at the same time?
Very-Long Instruction-Word Processor (VLIW)
Complex-Instruction-Set-Computer (CISC)
Reduced-Instruction-Set-Computer (RISC)
Super Scalar Processor Architecture (SCPA)
Very long instruction word (VLIW) describes a computer processing architecture in which a language compiler or pre-processor breaks program instructions down into basic operations that can be performed by the processor in parallel (that is, at the same time). These operations are put into a very long instruction word which the processor can then take apart without further analysis, handing each operation to an appropriate functional unit.
The following answers are incorrect:
The term "CISC" (complex instruction set computer or computing) refers to computers designed with a full set of computer instructions that were intended to provide needed capabilities in the most efficient way. Later, it was discovered that, by reducing the full set to only the most frequently used instructions, the computer would get more work done in a shorter amount of time for most applications. Intel's Pentium microprocessors are CISC microprocessors.
The PowerPC microprocessor, used in IBM's RISC System/6000 workstation and Macintosh computers, is a RISC microprocessor. RISC takes each of the longer, more complex instructions from a CISC design and reduces it to multiple instructions that are shorter and faster to process. RISC technology has been a staple of mobile devices for decades, but it is now finally poised to take on a serious role in data center servers and server virtualization. The latest RISC processors support virtualization and will change the way computing resources scale to meet workload demands.
A superscalar CPU architecture implements a form of parallelism called instruction level parallelism within a single processor. It therefore allows faster CPU throughput than would otherwise be possible at a given clock rate. A superscalar processor executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU such as an arithmetic logic unit, a bit shifter, or a multiplier.
Reference(s) Used for this question:
http://whatis.techtarget.com/definition/0,,sid9_gci214395,00.html
and
http://searchcio-midmarket.techtarget.com/definition/CISC
and
http://en.wikipedia.org/wiki/Superscalar
What mechanism does a system use to compare the security labels of a subject and an object?
Validation Module.
Reference Monitor.
Clearance Check.
Security Module.
Because the Reference Monitor is responsible for access control to the objects by the subjects, it compares the security labels of a subject and an object.
According to the OIG: The reference monitor is an access control concept referring to an abstract machine that mediates all accesses to objects by subjects based on information in an access control database. The reference monitor must mediate all access, be protected from modification, be verifiable as correct, and must always be invoked. The reference monitor, in accordance with the security policy, controls the checks that are made in the access control database.
The following are incorrect:
Validation Module. A Validation Module is typically found in application source code and is used to validate input data.
Clearance Check. This is a distractor; no such mechanism exists, beyond the informal sense of checking whether someone is authorized to enter a secure facility.
Security Module. This is typically a general-purpose module that performs a variety of security-related functions.
References:
OIG CBK, Security Architecture and Design (page 324)
AIO, 4th Edition, Security Architecture and Design, pp 328-328.
Wikipedia - http://en.wikipedia.org/wiki/Reference_monitor
Which of the following is NOT an administrative control?
Logical access control mechanisms
Screening of personnel
Development of policies, standards, procedures and guidelines
Change control procedures
Logical access control mechanisms are considered to be a technical control. "Logical" is synonymous with "technical" in this context, which makes this the easy answer.
There are three broad categories of access control: Administrative, Technical, and Physical.
Each category has different access control mechanisms that can be carried out manually or automatically. All of these access control mechanisms should work in concert with each other to protect an infrastructure and its data.
Each category of access control has several components that fall within it, as shown here:
Administrative Controls
• Policy and procedures
• Personnel controls
• Supervisory structure
• Security-awareness training
• Testing
Physical Controls
• Network segregation
• Perimeter security
• Computer controls
• Work area separation
• Data backups
Technical Controls
• System access
• Network architecture
• Network access
• Encryption and protocols
• Control zone
• Auditing
The following answers are incorrect :
Screening of personnel is considered to be an administrative control
Development of policies, standards, procedures and guidelines is considered to be an administrative control
Change control procedures is considered to be an administrative control.
Reference: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, 3rd Edition, Chapter 3: Security Management Practices, pages 52-54.
A trusted system does NOT involve which of the following?
Enforcement of a security policy.
Sufficiency and effectiveness of mechanisms to be able to enforce a security policy.
Assurance that the security policy can be enforced in an efficient and reliable manner.
Independently-verifiable evidence that the security policy-enforcing mechanisms are sufficient and effective.
A trusted system is one that meets its intended security requirements. It involves sufficiency and effectiveness, not necessarily efficiency, in enforcing a security policy. Put succinctly, trusted systems have (1) policy, (2) mechanism, and (3) assurance.
Source: HARE, Chris, Security Architecture and Models, Area 6 CISSP Open Study Guide, January 2002.
What is the most secure way to dispose of information on a CD-ROM?
Sanitizing
Physical damage
Degaussing
Physical destruction
First, note that the question is specifically about a CD-ROM. The information stored on a CD-ROM is not held in electromagnetic form, so a degausser would be ineffective.
You cannot sanitize a CD-ROM, though you might be able to sanitize a CD-RW. A CD-ROM is a write-once device and cannot be overwritten like a hard disk or other magnetic media.
Physical damage would not be enough, as information could still be extracted in a lab from the undamaged portion of the media, or even from the pieces after the damage has been done.
Physical destruction, using a shredder, a microwave oven, or melting, would be very effective and is the best choice for non-magnetic media such as a CD-ROM.
Source: TIPTON, Hal, (ISC)2, Introduction to the CISSP Exam presentation.
Which of the following phases of a system development life-cycle is most concerned with establishing a good security policy as the foundation for design?
Development/acquisition
Implementation
Initiation
Maintenance
A security policy is an important document to develop while designing an information system. The security policy begins with the organization's basic commitment to information security formulated as a general policy statement.
The policy is then applied to all aspects of the system design or security solution. The policy identifies security goals (e.g., confidentiality, integrity, availability, accountability, and assurance) the system should support, and these goals guide the procedures, standards and controls used in the IT security architecture design.
The policy also should require definition of critical assets, the perceived threat, and security-related roles and responsibilities.
Source: STONEBURNER, Gary & al, National Institute of Standards and Technology (NIST), NIST Special Publication 800-27, Engineering Principles for Information Technology Security (A Baseline for Achieving Security), June 2001 (page 6).
Which of the following is NOT a proper component of Media Viability Controls?
Storage
Writing
Handling
Marking
Media Viability Controls include marking, handling and storage.
Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 231.
Which of the following is not a responsibility of an information (data) owner?
Determine what level of classification the information requires.
Periodically review the classification assignments against business needs.
Delegate the responsibility of data protection to data custodians.
Running regular backups and periodically testing the validity of the backup data.
This responsibility would be delegated to a data custodian rather than being performed directly by the information owner.
"Determine what level of classification the information requires" is incorrect. This is one of the major responsibilities of an information owner.
"Periodically review the classification assignments against business needs" is incorrect. This is one of the major responsibilities of an information owner.
"Delegates responsibility of maintenance of the data protection mechanisms to the data custodian" is incorrect. This is a responsibility of the information owner.
References:
CBK p. 105.
AIO3, p. 53-54, 960
Which of the following would provide the BEST stress testing environment, taking into consideration and avoiding possible exposure and leaks of sensitive data?
Test environment using test data.
Test environment using sanitized live workloads data.
Production environment using test data.
Production environment using sanitized live workloads data.
The best way to properly verify an application or system during a stress test is to expose it to "live" data that has been sanitized to avoid exposing any sensitive or Personally Identifiable Information (PII) while in a testing environment. Fabricated test data may not be as varied, complex, or computationally demanding as "live" data. A production environment should never be used to test a product, as a production environment is one where the application or system is being put to commercial or operational use. It is a best practice to perform testing in a non-production environment.
Stress testing is carried out to ensure a system can cope with production workloads, but as it may be tested to destruction, a test environment should always be used to avoid damaging the production environment. Hence, testing should never take place in a production environment. If only test data is used, there is no certainty that the system was adequately stress tested.
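Sanitizing a live workload before it enters the test environment can be sketched as follows. This is an illustrative example only: the field names and the choice of deterministic hashing are assumptions, and real anonymization of sensitive identifiers would need a stronger, policy-driven approach.

```python
# Hedged sketch: mask PII fields in a live record so the test environment
# receives data with realistic shape and volume but no sensitive values.
import hashlib

PII_FIELDS = {"name", "email", "ssn"}  # assumed field names

def sanitize(record):
    """Replace PII fields with a short deterministic pseudonym;
    pass non-PII workload data through unchanged."""
    clean = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            # deterministic hash keeps uniqueness and joins intact
            clean[field] = hashlib.sha256(str(value).encode()).hexdigest()[:12]
        else:
            clean[field] = value  # workload characteristics preserved
    return clean

live = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 99.5}
test_record = sanitize(live)  # safe to load into the test environment
```

Because the pseudonyms are deterministic, records that referred to the same customer still correlate after sanitization, so the stress test exercises realistic access patterns without exposing the underlying PII.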
Within the context of the CBK, which of the following provides a MINIMUM level of security ACCEPTABLE for an environment ?
A baseline
A standard
A procedure
A guideline
Baselines provide the minimum level of security necessary throughout the organization.
Standards specify how hardware and software products should be used throughout the organization.
Procedures are detailed step-by-step instructions on how to achieve certain tasks.
Guidelines are recommended actions and operational guides for personnel when a specific standard does not apply.
Source: HARRIS, Shon, All-In-One CISSP Certification Exam Guide, McGraw-Hill/Osborne, 2002, chapter 3: Security Management Practices (page 94).
TESTED 07 Dec 2024