Monday, December 26, 2005

How to classify software vulnerabilities

I was recently involved in an email discussion about the value of taxonomies for the classification of vulnerabilities and threats, inspired by a paper on the taxonomy of software change events: "Toward a Taxonomy of Software Evolution" (http://lampwww.epfl.ch/papers/use03.pdf).

My point was that a similar taxonomy can be applied to threats: when in the SDLC threats occur (i.e. during requirements, design, development, or deployment), where they occur (at which level: application, subsystem, component), how they manifest (i.e. through spoofing, tampering, repudiation, etc.), and which security attributes they affect (for example Confidentiality, Integrity, and Availability).
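To make the idea concrete, here is a minimal sketch in C of what such a multi-dimensional classification record might look like; every type and field name is my own illustrative assumption, not part of any published taxonomy:

```c
#include <stdio.h>

/* WHEN: phase of the SDLC in which the threat is introduced */
enum sdlc_phase   { REQUIREMENTS, DESIGN, DEVELOPMENT, DEPLOYMENT };
/* WHERE: level at which the threat manifests */
enum system_level { APPLICATION, SUBSYSTEM, COMPONENT };
/* HOW: mechanism of manifestation (the STRIDE categories) */
enum stride_class { SPOOFING, TAMPERING, REPUDIATION,
                    INFORMATION_DISCLOSURE, DENIAL_OF_SERVICE,
                    ELEVATION_OF_PRIVILEGE };
/* WHAT: security attributes affected, kept as a bitmask because a
 * single threat can affect more than one attribute */
enum cia_attr { CONFIDENTIALITY = 1, INTEGRITY = 2, AVAILABILITY = 4 };

struct threat {
    const char       *name;
    enum sdlc_phase   when;
    enum system_level where;
    enum stride_class how;
    unsigned          affects;   /* OR-ed cia_attr flags */
};

int main(void) {
    /* Example tuple: SQL injection introduced during development,
     * manifesting at the application level as tampering, and
     * affecting confidentiality and integrity. */
    struct threat t = { "SQL injection", DEVELOPMENT, APPLICATION,
                        TAMPERING, CONFIDENTIALITY | INTEGRITY };
    printf("%s: when=%d where=%d how=%d affects=0x%x\n",
           t.name, t.when, t.where, t.how, t.affects);
    return 0;
}
```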

To validate this approach I decided to survey the available papers and research on taxonomies for vulnerabilities.

In a paper from September 1996, Matt Bishop (UC Davis) and David Bailey (http://seclab.cs.ucdavis.edu/projects/vulnerabilities/scriv/ucd-ecs-96-11.pdf) surveyed different taxonomies for vulnerabilities and concluded that some classifications are flawed. As a reference, a good taxonomy could be the classification of biological systems: plants and animals belong to the same "kingdom" of biological creatures but can be differentiated into distinct groups, and two animals of the same kind can be "uniquely" classified as part of the same group. Ideally, the classification of a plant or an animal belongs uniquely to a six-tuple (kingdom, phylum, class, order, family, genus).

Bishop tested some vulnerability classifications against this criterion. Some of them, such as RISOS (Research Into Secure Operating Systems), Aslam's taxonomy of UNIX faults, and Bisbey's Protection Analysis project, are flawed because of ambiguities (lack of uniqueness) in the classification: depending on the point of view, for example, the same buffer overflow can land in different categories. Classifications were also found to depend on the discriminatory criteria adopted at different levels of the tree. The problem also lies in how vulnerabilities are grouped together: when deciding on a level and a group, the same discriminatory question can be asked at different levels, so it is not clear at which level the classification is correct. Bishop does not actually propose a better classification for security taxonomies; he points out the limitations of existing ones, critiques how they can be flawed, and suggests some criteria for new research on the topic.

Gary McGraw et al., in the paper "Seven Pernicious Kingdoms: A Taxonomy of Software Security Errors" (http://vulncat.fortifysoftware.com/docs/tcm_taxonomy_submission.pdf), take a more pragmatic approach: they research a taxonomy of coding errors that can be fixed by a set of security rules, usable in both manual and automated code reviews (i.e. static code parsers). In reviewing existing taxonomies, McGraw, like Bishop, notes the ambiguity and the lack of coverage in the vulnerability classifications of some previous projects such as RISOS. The main limitations of RISOS, according to McGraw, lie in its high level of abstraction and in its objective of allowing classification of vulnerabilities from a security-knowledge-agnostic point of view. One of the main goals of the RISOS project was also to discover vulnerabilities through common patterns usable by automation tools, by building a catalogue of vulnerabilities to be published.

According to McGraw, the RISOS project actually failed in its objective of building a repository of vulnerabilities, since the database was never published.
Based on research on existing taxonomies and an understanding of their limitations, McGraw's classification does not aim to be rigorous from the standpoint of classification theory but rather to be useful.
McGraw's taxonomy (which will also appear in his next book, Building Security In, due in 2006) is based on specific types of coding errors (i.e. phyla, such as an illegal pointer value) and on collections of phyla sharing a common theme, called kingdoms (for example, input validation). McGraw recognizes that the classification is not theoretically complete and can change, but also points out that it can serve well as a classification of errors (i.e. security flaws) for both manual and automated code review.
McGraw's taxonomy has the following 7+1 kingdoms:
  • Input Validation and Representation
  • API Abuse
  • Security Features
  • Time and State
  • Errors
  • Code Quality
  • Encapsulation
  • Environment
For example, under Input Validation and Representation you might find buffer overflows, XSS, SQL injection, and many others; under API Abuse, unsafe string manipulation APIs; under Time and State, TOCTOU (time-of-check, time-of-use) issues, etc.
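As an illustration of two of these phyla, here is a short, generic C sketch (my own example, not code from the paper) of the unsafe string manipulation that a static parser built on such a taxonomy would flag, together with a bounded alternative:

```c
#include <stdio.h>
#include <string.h>

/* Kingdoms: Input Validation and Representation / API Abuse.
 * strcpy() copies with no bounds check, so any 'input' longer than
 * 15 bytes plus the NUL terminator overflows 'buf'. */
void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);                 /* unsafe string API */
    printf("%s\n", buf);
}

/* Bounded alternative: snprintf() never writes past sizeof buf and
 * always NUL-terminates, eliminating the overflow. */
void fixed(const char *input) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);
    printf("%s\n", buf);
}

int main(void) {
    vulnerable("short");   /* safe only because the input is short */
    fixed("a string comfortably longer than sixteen bytes");
    return 0;
}
```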
Other authors, such as Viega, have also developed a taxonomy as part of a methodology for building security into the SDLC, namely CLASP. CLASP's taxonomy of vulnerabilities is based on a root-cause classification. According to McGraw, Viega's CLASP taxonomy extends the one researched in Landwehr's "A Taxonomy of Computer Program Security Flaws" (http://www.cs.virginia.edu/~soffa/cs851/p211-landwehr.pdf).
Landwehr's taxonomy classifies flaws from three perspectives: HOW the problem entered the system, WHEN it entered the system, and WHERE it manifests. How is broken down into intentional (further divided into malicious and non-malicious) and inadvertent; when is broken down by the stage of the SDLC at which the flaw entered, such as design, development, or operation; and where is broken down into hardware versus software.
According to McGraw, Landwehr's taxonomy has the advantage of pointing toward remedies (i.e. countermeasures): for example, if most vulnerabilities occur during development, you could strategically focus on code reviews. Its disadvantage, according to McGraw, is its limited ability to handle new vulnerabilities: since flaws are classified by their genesis, a vulnerability for which it is not yet known how it enters the system cannot be classified.
According to McGraw, the CLASP taxonomy expands Landwehr's How, Where, and When further by adding a risk-based classification: the effect of the error (i.e. consequence), the likelihood of exploit, the severity, and other parameters. The purpose is to build a root-cause taxonomy; for example, according to CLASP the root cause of a security flaw can be classified by its point of introduction into the SDLC, in a hierarchical view:
  • Level 1, Identify Range and Type Errors: buffer overflows (introduced during requirements, design, and implementation), command injection (design and implementation), double free (implementation)
  • Level 2, Identify Environment Problems: resource exhaustion (design and implementation)
  • Level 3, Identify Synchronization and Timing Errors: TOCTOU, race conditions (design and implementation; see the C sketch after this list)
  • Level 4, Identify Protocol Errors: misuse of cryptography (design)
  • Level 5, Identify Generic Logic Errors: performing a chroot without a chdir (implementation; see the C sketch after this list)
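Two of the CLASP examples above are easy to show in a few lines of C. The sketch below is my own generic illustration (the jail path is hypothetical), not code taken from CLASP:

```c
#include <fcntl.h>
#include <unistd.h>

/* Synchronization and timing error: TOCTOU (time-of-check, time-of-use).
 * Between access() and open() an attacker can replace the file, e.g.
 * with a symlink, so the check and the use see different objects. */
int write_if_allowed(const char *path) {
    if (access(path, W_OK) != 0)    /* time of check */
        return -1;
    return open(path, O_WRONLY);    /* time of use: file may have changed */
}

/* Generic logic error: chroot() without chdir(). The current working
 * directory still points outside the new root, so relative paths can
 * escape the jail; chdir("/") right after chroot() closes the hole. */
int enter_jail(void) {
    if (chroot("/var/jail") != 0)   /* hypothetical jail directory */
        return -1;
    return chdir("/");
}
```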
The approach is effective in determining the root causes of security flaws, with emphasis on when in the SDLC they might actually originate, helping architects and developers build security into the SDLC. McGraw's main criticism of Viega's CLASP taxonomy is that it struggles to provide a reliable lexicon for classifying security flaws, since some of the issues in the taxonomy cannot actually be classified as security problems.
Notably, all taxonomies are "living and changing entities". Referring to Mike Kass's presentation at OWASP on taxonomies for software assurance tools and the security bugs they catch (http://www.owasp.org/docroot/owasp/misc/OWASP_DC_2005_Presentations/Track_2-Day1/AppSec2005DC-Mike_Kass-Tools_Taxonomy.ppt), taxonomies have the following limitations in common:
  • Vulnerabilities often do not map to a single security flaw but to a combination of security flaws
  • Categorization of security flaws is not always mutually exclusive
  • Taxonomies can categorize flaws that tools cannot identify
  • Some flaws are not in the code
  • Some flaws can be introduced at different stages of the SDLC
More to be researched...


