
SSCP : Systems Security Certified Practitioner (SSCP) : Part 50

  1. Which of the following is the act of performing tests and evaluations to test a system’s security level to see if it complies with the design specifications and security requirements?

    • Validation
    • Verification
    • Assessment
    • Accuracy

    Explanation:

    Verification vs. Validation:

    Verification determines if the product accurately represents and meets the specifications. A product can be developed that does not match the original specifications. This step ensures that the specifications are properly met.

    Validation determines if the product provides the necessary solution for the intended real-world problem. In large projects, it is easy to lose sight of the overall goal. This exercise ensures that the main goal of the project is met.

    From DITSCAP:
    6.3.2. Phase 2, Verification. The Verification phase shall include activities to verify compliance of the system with previously agreed security requirements. For each life-cycle development activity, DoD Directive 5000.1 (reference (i)), there is a corresponding set of security activities, enclosure 3, that shall verify compliance with the security requirements and evaluate vulnerabilities.

    6.3.3. Phase 3, Validation. The Validation phase shall include activities to evaluate the fully integrated system to validate system operation in a specified computing environment with an acceptable level of residual risk. Validation shall culminate in an approval to operate.

    You must also be familiar with Verification and Validation for the purpose of the exam. A simple definition for Verification would be whether or not the developers followed the design specifications along with the security requirements. A simple definition for Validation would be whether or not the final product meets the end user’s needs and can be used for a specific purpose.

    Wikipedia has an informal description that is currently written as: Validation can be expressed by the query “Are you building the right thing?” and Verification by “Are you building it right?”

    NOTE:
    DITSCAP was replaced by DIACAP some time ago (2007). While DITSCAP defined both a verification and a validation phase, DIACAP has only a validation phase. It may not make a difference in the answer for the exam; however, DIACAP is the cornerstone policy of DoD C&A and IA efforts today. Be familiar with both terms in case the exam is updated to use the newer terminology.

    Reference(s) used for this question:

    Harris, Shon (2012-10-18). CISSP All-in-One Exam Guide, 6th Edition (p. 1106). McGraw-Hill. Kindle Edition.

    http://iase.disa.mil/ditscap/DITSCAP.html
    https://en.wikipedia.org/wiki/Verification_and_validation

  2. Which of the following refers to the data left on the media after the media has been erased?

    • remanence
    • recovery
    • sticky bits
    • semi-hidden
    Explanation:

    The term “remanence” comes from electromagnetism. It originally referred to (and in that field still refers to) the magnetic flux that remains in a magnetic circuit after an applied magnetomotive force has been removed. A candidate will never see anywhere near that much detail on a similar CISSP question, but having read this, a candidate is unlikely to forget it either.

    It is becoming increasingly commonplace for people to buy used computer equipment, such as a hard drive or router, and find information on the device left there by the previous owner; information they thought had been deleted. This is a classic example of data remanence: the remains of a partial or even the entire data set of digital information. Normally, this refers to the data that remain on media after they are written over or degaussed. Data remanence is most common in storage systems but can also occur in memory.

    Specialized hardware devices known as degaussers can be used to erase data saved to magnetic media. The measure of the amount of energy needed to reduce the magnetic field on the media to zero is known as coercivity.

    It is important to make sure that the coercivity of the degausser is of sufficient strength to meet object reuse requirements when erasing data. If a degausser is used with insufficient coercivity, then a remanence of the data will exist. Remanence is the measure of the existing magnetic field on the media; it is the residue that remains after an object is degaussed or written over.

    Data is still recoverable even when the remanence is small. While data remanence exists, there is no assurance of safe object reuse.
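
    As a rough, hedged illustration of what “writing over” means in practice, the short Python sketch below overwrites a file’s contents with random bytes before deleting it. This is only a conceptual sketch: on journaling file systems, SSDs with wear-leveling, RAID sets, and backed-up media, overwriting a single file gives no assurance against remanence, and the file name in the usage comment is purely hypothetical.

        import os

        def overwrite_and_delete(path, passes=3):
            """Overwrite a file's contents in place, then delete it.

            Conceptual sketch only: journaling file systems, SSD wear-leveling,
            RAID, and backups can all keep copies of the data elsewhere, so this
            does NOT guarantee that no remanence exists.
            """
            size = os.path.getsize(path)
            with open(path, "r+b") as f:
                for _ in range(passes):
                    f.seek(0)
                    f.write(os.urandom(size))   # replace the old contents with random bytes
                    f.flush()
                    os.fsync(f.fileno())        # push the new bytes down to the device
            os.remove(path)

        # Usage (hypothetical file name):
        # overwrite_and_delete("old-secrets.bin")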

    Reference(s) used for this question:

    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 4207-4210). Auerbach Publications. Kindle Edition.
    and
    Hernandez CISSP, Steven (2012-12-21). Official (ISC)2 Guide to the CISSP CBK, Third Edition ((ISC)2 Press) (Kindle Locations 19694-19699). Auerbach Publications. Kindle Edition.

  3. What is the appropriate role of the security analyst in the application system development or acquisition project?

    • policeman
    • control evaluator & consultant
    • data owner
    • application user
    Explanation:

    The correct answer is “control evaluator & consultant”. During any system development or acquisition, the security staff should evaluate security controls and advise (or consult) on the strengths and weaknesses with those responsible for making the final decisions on the project.

    The other answers are not correct because:

    policeman – It is never a good idea for the security staff to be placed into this type of role (though it is sometimes unavoidable). During system development or acquisition, there should be no need for anyone to fill the role of policeman.

    data owner – In this case, the data owner would be the person asking for the new system to manage, control, and secure information they are responsible for. While it is possible the security staff could also be the data owner for such a project if they happen to have responsibility for the information, it is also possible someone else would fill this role. Therefore, the best answer remains “control evaluator & consultant”.

    application user – Again, it is possible this could be the security staff, but it could also be many other people or groups. So this is not the best answer.

    Reference:
    Official ISC2 Guide page: 555 – 560
    All in One Third Edition page: 832 – 846

  4. The information security staff’s participation in which of the following system development life cycle phases provides maximum benefit to the organization?

    • project initiation and planning phase
    • system design specifications phase
    • development and documentation phase
    • in parallel with every phase throughout the project
    Explanation:

    The other answers are not correct because:

    You are always looking for the “best” answer. While each of the answers listed here could be considered correct, in that each of those phases requires input from the security staff, the best answer is for that input to happen in parallel with every phase throughout the project.

    Reference:
    Official ISC2 Guide page: 556
    All in One Third Edition page: 832 – 833

  5. A security evaluation report and an accreditation statement are produced in which of the following phases of the system development life cycle?

    • project initiation and planning phase
    • system design specification phase
    • development & documentation phase
    • acceptance phase
    Explanation:

    The answer is “acceptance phase”. Note that the question asks about an “evaluation report”, which details how the system was evaluated, and an “accreditation statement”, which describes the level at which the system is allowed to operate. Because those two activities are part of testing, and testing is part of the acceptance phase, the only answer above that can be correct is “acceptance phase”.

    The other answers are not correct because:

    The “project initiation and planning phase” is just the idea phase. Nothing has been developed yet to be evaluated, tested, accredited, etc.

    The “system design specification phase” is essentially where the initiation and planning phase is fleshed out. For example, in the initiation and planning phase, we might decide we want the system to have authentication. In the design specification phase, we decide that authentication will be accomplished via username/password. But there is still nothing actually developed at this point to evaluate or accredit.

    The “development & documentation phase” is where the system is created and documented. Part of the documentation includes specific evaluation and accreditation criteria. That is the criteria that will be used to evaluate and accredit the system during the “acceptance phase”.

    In other words – you cannot evaluate or accredit a system that has not been created yet. Of the four answers listed, only the acceptance phase is dealing with an existing system. The others deal with planning and creating the system, but the actual system isn’t there yet.

    Reference:
    Official ISC2 Guide Page: 558 – 559
    All in One Third Edition page: 832 – 833 (recommended reading)

  6. Which of the following is often the greatest challenge of distributed computing solutions?

    • scalability
    • security
    • heterogeneity
    • usability
    Explanation:

    The correct answer is “security”. It is a major factor in deciding if a centralized or decentralized environment is more appropriate.

    Example: In a centralized computing environment, you have a central server, and workstations (often “dumb terminals”) access applications, data, and everything else from that central server. Therefore, the vast majority of your security resides on a centrally managed server. In a decentralized (or distributed) environment, you have a collection of PCs, each with its own operating system to maintain, its own software to maintain, and local data storage requiring protection and backup. You may also have PDAs and “smart phones”, data watches, USB devices of all types able to store data… the list gets longer all the time.

    It is entirely possible to reach a reasonable and acceptable level of security in a distributed environment. But doing so is significantly more difficult, requiring more effort, more money, and more time.

    The other answers are not correct because:

    scalability – A distributed computing environment is almost infinitely scalable. Much more so than a centralized environment. This is therefore a bad answer.

    heterogeneity – Having products and systems from multiple vendors in a distributed environment is significantly easier than in a centralized environment. This would not be a “challenge of distributed computing solutions” and so is not a good answer.

    usability – This is potentially a challenge in either environment, but whether or not this is a problem has very little to do with whether it is a centralized or distributed environment. Therefore, this would not be a good answer.

    Reference:
    Official ISC2 Guide page: 313-314
    All in One Third Edition page: (unavailable at this time)

  7. It is a violation of the “separation of duties” principle when which of the following individuals access the software on systems implementing security?

    • security administrator
    • security analyst
    • systems auditor
    • systems programmer
    Explanation:

    Reason: The security administrator, security analyst, and systems auditor need access to portions of the security systems to accomplish their jobs. The systems programmer does not need access to the working (i.e., production) security systems.

    Programmers should not be allowed to have ongoing direct access to computers running production systems (systems used by the organization to operate its business). To maintain system integrity, any changes they make to production systems should be tracked by the organization’s change management control system.

    Because the security administrator’s job is to perform security functions, the performance of non-security tasks must be strictly limited. This separation of duties reduces the likelihood of loss that results from users abusing their authority by taking actions outside of their assigned functional responsibilities.
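
    As a toy sketch of how this separation might be enforced in an access-control policy, the example below simply refuses the systems programmer role any direct access to production security systems, while the roles that legitimately need such access are allowed. The role names, actions, and policy structure are hypothetical, not taken from any particular product.

        # Toy separation-of-duties check: who may touch production security systems?
        ALLOWED_ROLES = {
            "security_administrator": {"read", "configure"},
            "security_analyst":       {"read"},
            "systems_auditor":        {"read"},
            # "systems_programmer" is deliberately absent: programmers work in
            # development/test, and changes reach production only via change management.
        }

        def may_access_production_security(role, action):
            return action in ALLOWED_ROLES.get(role, set())

        assert may_access_production_security("security_administrator", "configure")
        assert not may_access_production_security("systems_programmer", "read")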

    References:
    OFFICIAL (ISC)2® GUIDE TO THE CISSP® EXAM (2003), Hansche, S., Berti, J., Hare, H., Auerbach Publication, FL, Chapter 5 – Operations Security, section 5.3,”Security Technology and Tools,” Personnel section (page 32).

    KRUTZ, R. & VINES, R. The CISSP Prep Guide: Gold Edition (2003), Wiley Publishing Inc., Chapter 6: Operations Security, Separations of Duties (page 303).

  8. When backing up an applications system’s data, which of the following is a key question to be answered first?

    • When to make backups
    • Where to keep backups
    • What records to backup
    • How to store backups
    Explanation:

    It is critical that a determination be made of WHAT data is important and should be retained and protected. Without determining the data to be backed up, the potential for error increases. A record or file could be vital and yet not included in a backup routine. Alternatively, temporary or insignificant files could be included in a backup routine unnecessarily.

    The following answers were incorrect:

    When to make backups – Although it is important to consider schedules for backups, this is decided after determining what should be included in the backup routine.

    Where to keep backups – The location for storing backup copies of data (such as tapes, on-line backups, etc.) should be decided after determining what should be included in the backup routine and how the backups will be stored.

    How to store backups – The backup methodology should be considered after determining what data should be included in the backup routine.
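
    A minimal sketch of putting “what” first: the backup routine below is driven entirely by a manifest of the records judged vital, and only afterwards worries about where the copy goes. The file paths and destination directory are hypothetical examples.

        import shutil
        from pathlib import Path

        # 1. WHAT to back up is decided first and captured explicitly.
        MANIFEST = [
            "data/customers.db",     # hypothetical vital records;
            "data/orders.db",        # temporary and scratch files are
            "config/app.yaml",       # deliberately left out
        ]

        def run_backup(destination="backups/latest"):
            """Copy only the records named in the manifest (the 'what')."""
            dest = Path(destination)           # 2. WHERE/HOW come later and may change
            dest.mkdir(parents=True, exist_ok=True)
            for item in MANIFEST:
                src = Path(item)
                if src.exists():
                    shutil.copy2(src, dest / src.name)
                else:
                    print(f"warning: vital record missing: {src}")

        # run_backup()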

  9. Which of the following is NOT an example of an operational control?

    • backup and recovery
    • Auditing
    • contingency planning
    • operations procedures
    Explanation:

    Operational controls are controls over the hardware, the media used and the operators using these resources.

    Operational controls are controls that are implemented and executed by people; they are most often procedures.

    Backup and recovery, contingency planning and operations procedures are operational controls.

    Auditing is considered an administrative / detective control. However, the actual auditing mechanisms in place on the systems would be considered operational controls.

  10. Degaussing is used to clear data from all of the following media except:

    • Floppy Disks
    • Read-Only Media
    • Video Tapes
    • Magnetic Hard Disks
    Explanation:

    Atoms and Data

    Shon Harris says: “A device that performs degaussing generates a coercive magnetic force that reduces the magnetic flux density of the storage media to zero. This magnetic force is what properly erases data from media. Data are stored on magnetic media by the representation of the polarization of the atoms. Degaussing changes”

    The latest ISC2 book says:
    “Degaussing can also be a form of media destruction. High-power degaussers are so strong in some cases that they can literally bend and warp the platters in a hard drive. Shredding and burning are effective destruction methods for non-rigid magnetic media. Indeed, some shredders are capable of shredding some rigid media such as an optical disk. This may be an effective alternative for any optical media containing nonsensitive information due to the residue size remaining after feeding the disk into the machine. However, the residue size might be too large for media containing sensitive information. Alternatively, grinding and pulverizing are acceptable choices for rigid and solid-state media. Specialized devices are available for grinding the face of optical media that either sufficiently scratches the surface to render the media unreadable or actually grinds off the data layer of the disk. Several services also exist which will collect drives, destroy them on site if requested and provide certification of completion. It will be the responsibility of the security professional to help, select, and maintain the most appropriate solutions for media cleansing and disposal.”

    Degaussing is achieved by passing the magnetic media through a powerful magnetic field to rearrange the metallic particles, completely removing any resemblance of the previously recorded signal (from the “all about degaussers” link below). Therefore, degaussing will work on magnetic storage media such as floppy disks or hard disks. However, “read-only media” includes items such as paper printouts and CD-ROMs, which do not store data magnetically; passing them through a magnetic field has no effect on them.

    Not all clearing/purging methods are applicable to all media; for example, optical media is not susceptible to degaussing, and overwriting may not be effective against flash devices. The degree to which information may be recoverable by a sufficiently motivated and capable adversary must not be underestimated or guessed at in ignorance. For the highest-value commercial data, and for all data regulated by government or military classification rules, read and follow the rules and standards.

    I will admit that this is a bit of a trick question. Determining the difference between “read-only media” and “read-only memory” is difficult for the test taker. However, I believe it is representative of the type of question you might one day see on an exam.

    The other answers are incorrect because:

    Floppy Disks, Video Tapes, and Magnetic Hard Disks are all examples of magnetic storage, and therefore are erased by degaussing.

    A videotape is a recording of images and sounds onto magnetic tape, as opposed to film stock used in filmmaking or random-access digital media. Videotapes are also used for storing scientific or medical data, such as the data produced by an electrocardiogram. In most cases, a helical-scan video head rotates against the moving tape to record the data in two dimensions, because video signals have a very high bandwidth and static heads would require extremely high tape speeds. Videotape is used in video tape recorders (VTRs) and, more commonly and more recently, videocassette recorders (VCRs) and camcorders. Tape uses a linear method of storing information, and since nearly all video recordings made nowadays are digital direct-to-disk recordings (DDR), videotape is expected to gradually lose importance as non-linear/random-access methods of storing digital video data become more common.

    Reference(s) used for this question:

    Harris, Shon (2012-10-25). CISSP All-in-One Exam Guide, 6th Edition (Kindle Locations 25627-25630). McGraw-Hill. Kindle Edition.
    Schneiter, Andrew (2013-04-15). Official (ISC)2 Guide to the CISSP CBK, Third Edition: Security Operations (Kindle Locations 580-588). Kindle Edition.

    All About Degaussers and Erasure of Magnetic Media:
    http://www.degausser.co.uk/degauss/degabout.htm
    http://www.degaussing.net/
    http://www.cerberussystems.com/INFOSEC/stds/ncsctg25.htm

  11. Which of the following describes a computer processing architecture in which a language compiler or pre-processor breaks program instructions down into basic operations that can be performed by the processor at the same time?

    • Very-Long Instruction-Word Processor (VLIW)
    • Complex-Instruction-Set-Computer (CISC)
    • Reduced-Instruction-Set-Computer (RISC)
    • Super Scalar Processor Architecture (SCPA)
    Explanation:

    Very long instruction word (VLIW) describes a computer processing architecture in which a language compiler or pre-processor breaks program instructions down into basic operations that can be performed by the processor in parallel (that is, at the same time). These operations are put into a very long instruction word which the processor can then take apart without further analysis, handing each operation to an appropriate functional unit.
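
    As a toy model of the grouping idea (not a real compiler), the sketch below packs independent operations into “very long instruction word” bundles, starting a new bundle whenever an operation needs a result produced earlier in the current bundle. The register names and the three-operation program are made up for illustration.

        def bundle(ops):
            """Greedily pack operations into VLIW-style bundles.

            Each op is (dest_register, source_registers). An op joins the current
            bundle only if none of its sources are written within that bundle, so
            every op in a bundle could execute at the same time on its own unit.
            """
            bundles, current, written = [], [], set()
            for dest, sources in ops:
                if any(s in written for s in sources):   # depends on this bundle: start a new one
                    bundles.append(current)
                    current, written = [], set()
                current.append((dest, sources))
                written.add(dest)
            if current:
                bundles.append(current)
            return bundles

        # r1 = a + b and r2 = c + d are independent; r3 = r1 * r2 must wait for both.
        program = [("r1", ("a", "b")), ("r2", ("c", "d")), ("r3", ("r1", "r2"))]
        print(bundle(program))   # two bundles: [r1, r2 together], then [r3]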

    The following answers are incorrect:

    The term “CISC” (complex instruction set computer or computing) refers to computers designed with a full set of computer instructions that were intended to provide needed capabilities in the most efficient way. Later, it was discovered that, by reducing the full set to only the most frequently used instructions, the computer would get more work done in a shorter amount of time for most applications. Intel’s Pentium microprocessors are CISC microprocessors.

    The PowerPC microprocessor, used in IBM’s RISC System/6000 workstation and Macintosh computers, is a RISC microprocessor. RISC takes each of the longer, more complex instructions from a CISC design and reduces it to multiple instructions that are shorter and faster to process. RISC technology has been a staple of mobile devices for decades, but it is now finally poised to take on a serious role in data center servers and server virtualization. The latest RISC processors support virtualization and will change the way computing resources scale to meet workload demands.

    A superscalar CPU architecture implements a form of parallelism called instruction level parallelism within a single processor. It therefore allows faster CPU throughput than would otherwise be possible at a given clock rate. A superscalar processor executes more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to redundant functional units on the processor. Each functional unit is not a separate CPU core but an execution resource within a single CPU such as an arithmetic logic unit, a bit shifter, or a multiplier.

    Reference(s) Used for this question:
    http://whatis.techtarget.com/definition/0,,sid9_gci214395,00.html
    and
    http://searchcio-midmarket.techtarget.com/definition/CISC
    and
    http://en.wikipedia.org/wiki/Superscalar

  12. Related to information security, integrity is the opposite of which of the following?

    • abstraction
    • alteration
    • accreditation
    • application
    Explanation:
    Integrity is the opposite of “alteration.”
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
  13. Making sure that the data is accessible when and where it is needed is which of the following?

    • confidentiality
    • integrity
    • acceptability
    • availability
    Explanation:
    Availability is making sure that the data is accessible when and where it is needed.
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
  14. Making sure that only those who are supposed to access the data can access is which of the following?

    • confidentiality.
    • capability.
    • integrity.
    • availability.
    Explanation:
    From the published (ISC)2 goals for the Certified Information Systems Security Professional candidate, domain definition. Confidentiality is making sure that only those who are supposed to access the data can access it.
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
  15. Related to information security, confidentiality is the opposite of which of the following?

    • closure
    • disclosure
    • disposal
    • disaster
    Explanation:
    Confidentiality is the opposite of disclosure.
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
  16. Related to information security, the guarantee that the message sent is the message received with the assurance that the message was not intentionally or unintentionally altered is an example of which of the following?

    • integrity
    • confidentiality
    • availability
    • identity
    Explanation:
    Integrity is the guarantee that the message sent is the message received, and that the message was not intentionally or unintentionally altered.
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 60.
  17. One of the following assertions is NOT a characteristic of Internet Protocol Security (IPsec)

    • Data cannot be read by unauthorized parties
    • The identity of all IPsec endpoints are confirmed by other endpoints
    • Data is delivered in the exact order in which it is sent
    • The number of packets being exchanged can be counted.
    Explanation:

    IPsec provides replay protection that ensures data is not delivered multiple times; however, IPsec does not ensure that data is delivered in the exact order in which it is sent. IPsec operates at the network (IP) layer, and IP packets may be delivered out of order to the receiving side depending on which route each packet took.

    Internet Protocol Security (IPsec) has emerged as the most commonly used network layer security control for protecting communications. IPsec is a framework of open standards for ensuring private communications over IP networks. Depending on how IPsec is implemented and configured, it can provide any combination of the following types of protection:

    Confidentiality. IPsec can ensure that data cannot be read by unauthorized parties. This is accomplished by encrypting data using a cryptographic algorithm and a secret key, a value known only to the two parties exchanging data. The data can only be decrypted by someone who has the secret key.

    Integrity. IPsec can determine if data has been changed (intentionally or unintentionally) during transit. The integrity of data can be assured by generating a message authentication code (MAC) value, which is a cryptographic checksum of the data. If the data is altered and the MAC is recalculated, the old and new MACs will differ.

    Peer Authentication. Each IPsec endpoint confirms the identity of the other IPsec endpoint with which it wishes to communicate, ensuring that the network traffic and data is being sent from the expected host.

    Replay Protection. The same data is not delivered multiple times, and data is not delivered grossly out of order. However, IPsec does not ensure that data is delivered in the exact order in which it is sent.

    Traffic Analysis Protection. A person monitoring network traffic does not know which parties are communicating, how often communications are occurring, or how much data is being exchanged. However, the number of packets being exchanged can be counted.

    Access Control. IPsec endpoints can perform filtering to ensure that only authorized IPsec users can access particular network resources. IPsec endpoints can also allow or block certain types of network traffic, such as allowing Web server access but denying file sharing.
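
    To make the replay-protection point concrete, here is a rough sketch of a sliding-window anti-replay check in the spirit of what an IPsec receiver does: duplicates and very old sequence numbers are rejected, but packets arriving moderately out of order are still accepted, which is exactly why exact ordering is not guaranteed. The window size and data structure are simplified assumptions, not the actual ESP implementation.

        class AntiReplayWindow:
            """Simplified sliding-window anti-replay check (IPsec-style).

            Duplicate or too-old sequence numbers are rejected, but packets that
            arrive moderately out of order are accepted, so replay is prevented
            without guaranteeing in-order delivery.
            """
            def __init__(self, size=64):
                self.size = size
                self.highest = 0
                self.seen = set()

            def accept(self, seq):
                if seq <= self.highest - self.size:   # too old: fell outside the window
                    return False
                if seq in self.seen:                  # duplicate: possible replay
                    return False
                self.seen.add(seq)
                self.highest = max(self.highest, seq)
                self.seen = {s for s in self.seen if s > self.highest - self.size}
                return True

        w = AntiReplayWindow()
        print([w.accept(s) for s in (1, 3, 2, 3, 5)])   # [True, True, True, False, True]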

    The following are incorrect answers because they are all features provided by IPSEC:

    “Data cannot be read by unauthorized parties” is wrong because IPsec provides confidentiality through the usage of the Encapsulating Security Protocol (ESP), once encrypted the data cannot be read by unauthorized parties because they have access only to the ciphertext. This is accomplished by encrypting data using a cryptographic algorithm and a session key, a value known only to the two parties exchanging data. The data can only be decrypted by someone who has a copy of the session key.

    “The identity of all IPsec endpoints are confirmed by other endpoints” is wrong because IPsec provides peer authentication: Each IPsec endpoint confirms the identity of the other IPsec endpoint with which it wishes to communicate, ensuring that the network traffic and data is being sent from the expected host.

    “The number of packets being exchanged can be counted” is wrong because although IPsec provides traffic protection where a person monitoring network traffic does not know which parties are communicating, how often communications are occurring, or how much data is being exchanged, the number of packets being exchanged still can be counted.

    Reference(s) used for this question:
    NIST SP 800-77, Guide to IPsec VPNs, pages 2-3 to 2-4.

  18. Related to information security, availability is the opposite of which of the following?

    • delegation
    • distribution
    • documentation
    • destruction
    Explanation:
    Availability is the opposite of “destruction.”
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 59.
  19. Related to information security, the prevention of the intentional or unintentional unauthorized disclosure of contents is which of the following?

    • Confidentiality
    • Integrity
    • Availability
    • capability
    Explanation:
    Confidentiality is the prevention of the intentional or unintentional unauthorized disclosure of contents.
    Source: KRUTZ, Ronald L. & VINES, Russel D., The CISSP Prep Guide: Mastering the Ten Domains of Computer Security, 2001, John Wiley & Sons, Page 60.
  20. Which of the following is NOT a countermeasure to traffic analysis?

    • Padding messages.
    • Eavesdropping.
    • Sending noise.
    • Faraday Cage
    Explanation:

    Eavesdropping is not a countermeasure; it is a type of attack in which you collect traffic and attempt to see what is being sent between the communicating entities.

    The following answers are incorrect:

    Padding messages – Incorrect because it is a countermeasure: messages are padded to a uniform size so that an observer cannot infer anything from message lengths, making it more difficult to uncover traffic patterns (see the sketch after this list).
    Sending noise – Incorrect because it is a countermeasure: decoy, non-informational data is transmitted over the network to disguise the real traffic.

    Faraday Cage – Incorrect because it is a countermeasure: a Faraday cage is a tool used to prevent the emanation of electromagnetic waves, and it is very effective against traffic analysis performed by monitoring those emanations.
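
    A minimal sketch of the message-padding countermeasure referenced above: every message is padded up to a fixed bucket size before transmission, so an observer watching traffic lengths cannot distinguish short messages from long ones. The 256-byte bucket size and the simple length-prefix framing are arbitrary choices made for illustration.

        import os

        BUCKET = 256   # hypothetical fixed size each padded message is rounded up to

        def pad(message: bytes) -> bytes:
            """Prefix the real length, then pad with random bytes to a bucket multiple."""
            framed = len(message).to_bytes(4, "big") + message
            return framed + os.urandom((-len(framed)) % BUCKET)

        def unpad(padded: bytes) -> bytes:
            real_len = int.from_bytes(padded[:4], "big")
            return padded[4:4 + real_len]

        short, long_ = pad(b"hi"), pad(b"x" * 200)
        print(len(short), len(long_))                       # both 256: lengths no longer leak
        assert unpad(short) == b"hi" and unpad(long_) == b"x" * 200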