Saturday, September 15, 2012

Process for Attack Simulation and Threat Analysis (PASTA): Risk-Centric Threat Modeling

Castle under siege
(Source: Wikipedia)
Information security is about protecting digital assets from threats; software security is about designing and implementing software that is not vulnerable to threat agents seeking to exploit design flaws and bugs to compromise digital assets. Traditionally, software security has been driven by the need to identify vulnerabilities with specific tests, such as static source code analysis, and fix them before releasing software products into production. Today this traditional, defensive approach to software security cannot cope with the increasing sophistication and impact of cyber threats such as financial fraud and massive compromises of confidential data. I therefore advocate a new approach to software security that considers the attacker's perspective while designing and implementing software. Let's start this new approach by considering threats and attacks while designing and implementing security controls and setting security requirements. Let's design, implement and security-test new countermeasures so that the software is both threat resilient and attack proof. This blog is about educating people on how to write secure software and how to manage the different risks of insecure software. Security engineering and risk management are part of the solution, and they are not the responsibility of software developers alone but of the software organization as a whole, including application architects, information security officers, chief technology officers, risk managers and, last but not least, business owners. Software security requires collaboration between engineering and security teams; it requires business and risk managers to work together to improve engineering processes and minimize risks. Software security is not an end goal but a process that reduces risks to a level the business is willing to accept.
Software security is more a journey than a destination: it is an ongoing mission and an opportunity to reduce risks to the business through continuous process improvement. We have indeed made improvements in software security. For example, the average piece of software developed today has fewer vulnerabilities than it had, say, six or ten years ago. This is due to the availability of better tools for testing software vulnerabilities and to the effort of security vendors and of organizations, such as OWASP, whose mission is to improve the security of web applications. Nevertheless, despite this progress, we are far from writing and building software that can be considered resilient to today's threats and attacks. There is still a lot of work to do. To gauge how much, think of software security through the metaphor of car safety. In automobile industry terms, the state of the art of the countermeasures built into today's software is like an air bag that inflates only after the crash has occurred. Consider, for example, that it takes months on average for a company to detect a data breach (based on Verizon's data breach reports) from the time the incident occurred. Most data breaches today are detected after the data has already been lost, like an air bag that deploys only after the passengers are already dead or injured. Unfortunately, software today has no real equivalent of the air bag, and no equivalent of the crash test to probe its security measures.
Car Air Bag

Also consider the inherent risks due to the high value of the data assets and critical business functions that software stores and processes today: software that runs critical industrial systems such as SCADA; that runs oil, gas, water and electric utilities; that controls manufacturing, traffic and mission-critical military systems. In the financial industry, this is the critical software that handles payments and allows the trading of stocks and bonds, often for millions of dollars per transaction. A little closer to our everyday experience as consumers, consider the software for online purchases that processes and stores credit card data. Software that is critical to business functions and to the operation of critical business services is today the focus of persistent attackers and needs adequate countermeasures. Let me extend the car analogy to highly sought-after targets: think of the limousine carrying the president of the United States on a state visit. Because of the threats the presidential car might face, it needs high-grade security built in, such as bulletproof glass and doors, and other cars with Secret Service agents escort it to provide layered defense. The presidential car is not built with the protection of an average car and is not given average security protection, because the president is a high-value asset who needs an extra level of protection. Similarly, business-critical software is a high-value asset that needs a level of security higher than commercial off-the-shelf software: at a minimum, additional layers of preventive and detective security. Yet business-critical software today is engineered with more or less the same countermeasure design as average software, which is 20 years behind today's car safety technology such as air bags.
So I hope the car metaphor makes my point. Today's software security is not adequate because it is not resilient enough to cope with the new threat landscape. The software applications that protect critical company and government digital assets are under siege from motivated threat agents and persistent attacks. In today's threat landscape, business-critical software needs the equivalent security of a tank or a bulletproof car. So how can we catch up with the threats? We need to work toward more resilient, attack-proof software. We need to design and implement countermeasures that are more costly for attackers to bypass. We need preventive and detective controls that evolve to effectively detect and prevent fraud and identity theft. We need to move on from infrastructure and perimeter security: network firewalls and intrusion detection systems were good measures against the cyber attacks of the late '90s, but they are not adequate against today's threats. Because of this, cybercrime today is an industry that thrives, with profits of several millions of dollars for cyber criminals who sell malware designed to hack into consumers' bank accounts and steal credit card data. Some cybercrime tool vendors even offer fraudsters a money-back guarantee in case a tool fails to deliver the financial gain that was sought (e.g. stealing money from bank accounts). While we worked to build more secure software, the cybercrime industry did not waste time, and our effort to secure software is not keeping up with the threats we face. This is not to discount the progress we have made: if you read the 2006 DHS Security in the SDLC (S-SDLC) guidelines, you can see that after six years most software organizations conduct penetration tests, and some have even deployed static source code analysis tools that automate the identification of vulnerabilities in source code.
This means there are fewer vulnerabilities for attackers to exploit. We also have software security maturity models such as BSIMM that help software development organizations compare their security practices with their peers' and focus their efforts on the security domains and activities that need them most. This is all good, but it is not enough, because the threat landscape has changed and the exposure of software to cyber threats has increased dramatically. Consider the widespread use of software for mobile applications and the millions of people storing personal data on social networking sites. Consider the corporate data stored and processed by software in the cloud, and the software that processes and stores personally identifiable information such as voice prints for authentication and users' images for identification. Today there is a disconnect between the escalation of cyber threats, the increased exposure of software to those threats, and the effectiveness of the countermeasures for protecting against and detecting them. Software security needs to evolve and bake in new countermeasures that work like a car's air bag. Since Microsoft released its threat modeling methodology ten years ago, we have had a software-centric approach to designing secure software that considers threats against software components, including data assets. That methodology is based on a simplified view of threats, STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service and Elevation of Privilege). This type of threat modeling is no longer adequate for designing secure software, because threats and attacks have evolved beyond the basics. Consider the example of an attacker who uses an interface that takes credit card information not to steal credit card data but to enumerate which credit card numbers are valid, so they can be used for online purchases or counterfeit cards.
This is a type of threat that STRIDE does not categorize, because it is tied to business impact rather than technical impact. Today's attacks against application software seek not only to compromise data assets but also to abuse critical application functionality. In a modern threat model, the analysis of use and abuse cases, and of the business impacts caused by vulnerability exploits, is essential for identifying countermeasures and mitigating business risks. The attack surface of today's applications has also grown wider, including every application interface and channel exposed to a potential attacker. In enterprise-wide software and applications, the targets are not just one software component or library but the whole set of services provided to customers and partners. An attacker will seek to compromise the different channels that lead to the data assets, such as the online, mobile and B2B channels, and the cloud where data is stored or processed.
Tony UV gives a talk on P.A.S.T.A. Threat Modeling
ATL BSides Conference in Atlanta, 2011
A comprehensive threat model today needs to analyze how an attacker can abuse software and application functionality, in order to determine the possible business impacts. Today's software needs to be tested with the equivalent of car crash tests, probing the security measures in place under the assumption that the compromise of one measure must not result in a catastrophic loss of assets such as data and critical business functions. Seeing the need for a new way to look at threats, vulnerabilities and attacks, I embarked with my friend Tony UcedaVelez, CEO of VerSprite, on a passionate effort to develop a new process for the analysis of cyber threats, one that focuses on business impacts with the ultimate objective of protecting the company's digital assets, namely data and critical business functions. This is not a stand-alone threat model for software developers but a risk framework that organizations can use to analyze the impacts to assets and critical business functions under the assumption that these can be attacked and compromised; that is, it considers the attack as a means to the attacker's goals. The foundation of this application threat modeling methodology is a new risk framework and process: the Process for Attack Simulation and Threat Analysis (P.A.S.T.A.). Pasta is a food metaphor for threats and attacks, used to teach security people threat and attack analysis: just as pasta is the basic ingredient of a quality meal, threat modeling is the basic ingredient of a secure application. Since an attack describes how a threat is realized, the methodology outlines the steps for analyzing threats and attacks and building countermeasures, like a recipe for cooking good pasta. By modeling threats and attacks, threat modeling drives the design of protective and detective measures that minimize business impacts.
For example, the correlation of attacks with possible exploits of vulnerabilities can be used to design preventive and detective measures. Since tools are needed to conduct this process and to correlate threats, attacks and vulnerabilities, we convinced a company, MyAppSecurity Inc., to develop a threat modeling tool to support it. The tool, ThreatModeler (TM), helps software developers carry out the steps of the methodology and produce threat models of their applications. In the meantime, Tony UV and I started giving talks about threat modeling at several security venues (e.g. universities, OWASP, BSides). We spent the last three years learning what works and what does not. Educating software engineers and software security professionals in threat modeling is key to success. In most software development organizations today, threat modeling is misunderstood as a software security methodology: it is either missing as an S-SDLC activity or considered complementary to other security consulting engagements such as pen testing and secure code reviews. Instead, threat modeling is central to the application security risk mitigation strategy, since it allows mapping threats to attacks and attacks to vulnerabilities, and it highlights the exposure of data and critical business functions to threats. Threat modeling allows the business to understand the risk of exposing data assets through vulnerabilities and to determine the effectiveness of the security measures in place. It allows a defense-in-depth analysis, determining how defenses can be bypassed by an attacker and identifying where layered controls need to be implemented. It allows modeling the abuse cases of critical business functions so these can be used to crash-test security measures and determine how effective they are at protecting against and detecting attacks.
Ultimately, application threat modeling allows the business to decide which security measures are the most effective at mitigating the risks of attacks, and to implement the measures that minimize both the risks and the cost of implementing them. As security, engineering and business teams work together and follow the steps of P.A.S.T.A., they learn how to develop resilient software and translate software security into business value, so that the business can make informed risk decisions. Finally, on the topic of application threat modeling, we have a book coming up in which we have collected our ideas and experiences in eight monumental chapters. The intent is to share our experience in the field and to educate the new generation of security professionals on how to design and implement resilient, attack-proof application software for today's and tomorrow's cyber threats. So be prepared to reboot your security program and start a new journey toward a destination where software and applications are resilient and attack proof, just as cars are safe in accidents because they are designed with air bags and probed with crash tests.

Friday, August 05, 2011

Application Security Guide for CISOs

To make OWASP more visible to Chief Information Security Officers (CISOs), I put together an initial draft of an application security guide that can be downloaded from here. I believe the time is ripe for an organization like OWASP to reach CISOs directly with a targeted guide. The first part of this OWASP guide needs to document the business cases and the risk/cost criteria for budgeting application security processes, tools/technologies and training. This is not an easy task: the current economic recession requires organizations to operate with tight information technology budgets, including for application security, while confronting the need to mitigate the risk of a growing number of attacks and security incidents. CISOs today therefore need to be able to articulate the business cases for application security and justify the application security budget on both risk mitigation and cost efficiency criteria. From the risk mitigation perspective, this means being able to factor in how much security incidents cost the organization, specifically when such incidents are caused by exploiting application vulnerabilities. Security incidents caused by malware and hacking threat agents that exploit application vulnerabilities such as SQL injection can cost businesses a lot of money; for a business-critical web application such as online banking, that can mean several million dollars of potential losses. By adopting criteria such as quantitative risk analysis, it is possible to calculate how much money should be spent on application security measures and justify the spend by comparing it with the cost of potential losses. When losses are only potential, their cost needs to be estimated; when they are the consequence of an actual security incident, they can be calculated from real operational costs, such as the cost of recovering from the incident.
From the cost efficiency perspective, criteria such as return on investment can help a CISO decide how to spend the application security budget effectively, for example on which SDLC activities (e.g. pen tests, source code analysis, threat modeling). To validate the assumptions of the guide, it would also be necessary to gather CISO feedback, for example through a survey, on risk mitigation needs (exploits of vulnerabilities by hacking and malware) as well as other needs such as compliance, so that the guide can document them.
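The quantitative criteria mentioned above can be sketched with the classic ALE (annualized loss expectancy) and ROSI (return on security investment) formulas. A minimal sketch follows; all figures are hypothetical and for illustration only:

```typescript
// Annualized Loss Expectancy: ALE = SLE (single loss expectancy, the cost
// of one incident) x ARO (annualized rate of occurrence).
function ale(sle: number, aro: number): number {
  return sle * aro;
}

// Return on Security Investment: expected loss avoided by a countermeasure,
// net of the countermeasure's own cost, relative to that cost.
function rosi(aleBefore: number, aleAfter: number, cost: number): number {
  return (aleBefore - aleAfter - cost) / cost;
}

// Hypothetical example: a SQL injection breach costing $300,000 per
// incident, expected once every two years (ARO = 0.5).
const aleBefore = ale(300_000, 0.5);  // $150,000/year expected loss
// Assume countermeasures cut the likelihood tenfold, at a cost of $50,000.
const aleAfter = ale(300_000, 0.05);  // $15,000/year expected loss
console.log(rosi(aleBefore, aleAfter, 50_000)); // positive => spend justified
```

A positive ROSI gives the CISO a defensible, if rough, argument that the application security spend costs less than the losses it prevents.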

Sunday, June 19, 2011

Attack Simulation and Threat Analysis of Banking Malware-Based Attacks

I presented on the topic of threat modeling of banking malware attacks at the Security Summit conference in Rome, Italy, and at the OWASP AppSec EU conference in Dublin, Ireland. The talks feature a new application threat modeling methodology called P.A.S.T.A. (Process for Attack Simulation and Threat Analysis), which can be used as a risk framework for analyzing malware-based threats and their impact on online banking applications.
P.A.S.T.A. has a provisional patent from the US Patent Office and will be featured in a book on application threat modeling, co-authored by myself and Tony UV, to be published this year. There is also a new threat modeling tool, ThreatModeler, developed by MyAppSecurity Inc., that supports the methodology. So far the presentation has had a good reception; you can follow the comments on the OWASP LinkedIn group. Some companies have also posted comments here.
The business impact of banking malware attacks on financial institutions can no longer be neglected: it consists of several millions of dollars in fraudulent transactions, the cost of replacing compromised bank accounts, and potential legal costs for lawsuits when the compromised accounts are business accounts. The impact on banks is also increasing worldwide. In the U.S.A. alone, according to FDIC (Federal Deposit Insurance Corporation) data presented by David Nelson at the RSA Conference in San Francisco last February, malware-based online banking fraud rose to over $120 million during the third quarter of 2009. In the UK, according to data from the UK Cards Association, losses in the online banking sector due to credit card theft totaled 60 million pounds during 2009. The aggregate losses suffered by banks because of banking malware attacks are very significant: according to Gary Warner, director of research in computer forensics at the University of Alabama at Birmingham, "Just one of the Zeus controllers steals about $10 million a week from the United States." The targets are web applications, financial data and authentication data: according to Verizon's 2010 data breach investigations report, the top types of data sought by attackers are credit card and authentication data, and web applications are the primary target for these attacks, constituting the attack path for the highest percentage of data records breached (38% overall).

To mitigate banking malware threats, online banking applications need to be resilient, even bulletproof, against banking malware attacks and to implement new countermeasures. But the first step in threat mitigation is to understand the threat and the threat agents to protect against. Today, banking malware attacks come from fraudsters and cybercrime threat actors: they are financially motivated, part of organized cybercrime groups, and they use sophisticated crimeware tools specifically designed to attack online banking sites. To mitigate these threats, businesses, and financial institutions in particular, need to adopt a new risk mitigation strategy and a risk analysis process that allows them to understand the banking malware threat scenario and analyze the attack vectors. In the typical banking malware attack, the malware is initially dropped onto the victim's PC either by social-engineering the victim with phishing or by infecting the victim's browser with a drive-by download. Once it has infected the PC, the malware, undetected by most antivirus products, remains transparent to the user and waits for the user to log into the online banking site. At that point the banking trojan injects HTML directly into the user's browser (outside the security controls of the site), presenting extra data fields that seek to harvest the victim's PII, such as CCNs, CVVs, PINs and SSNs. Later, when the user performs a high-risk transaction such as a wire transfer, the malware transfers money from the victim's account to a fraudulent account controlled by the fraudster. The transaction appears authentic, since it is performed by the fraudster on behalf of the user, using the user's session.
Stages of P.A.S.T.A. (Process For Attack
Simulation and Threat Analysis)
Understanding the banking malware threat scenario is the first step; the next is to adopt an effective risk mitigation strategy. That strategy includes people prepared to learn about, deal with and respond to new threats and attacks; processes that identify security design flaws in applications and gaps in current security controls; and innovative tools and countermeasures that mitigate the risk posed by banking malware and by the attacks these threats realize, such as Man-in-the-Middle and Man-in-the-Browser attacks.
As an application risk mitigation process, we are promoting P.A.S.T.A. (Process for Attack Simulation and Threat Analysis). This process is designed to mitigate the risk that cyber threats, including banking malware, represent for online applications in general. It is conducted in seven stages, each with specific objectives. For the banking malware use case, the focus and objectives of each of the seven stages are outlined below:

The first stage focuses on understanding malware-based threat mitigation as a business problem. The objective is to understand the business impact, determine the risk mitigation objectives, and derive the security and compliance requirements to achieve those objectives.

The second stage is the definition of the technical scope for the analysis, which consists of the online banking application and its production environment. This stage documents the application profile and gathers all the application "design blueprints," such as architecture design documents, sequence diagrams and transaction flow diagrams for all use cases and transactions of the application.

The third stage analyzes the online banking site from the perspective of secure architecture. This consists of identifying the application's existing security controls and the dependencies of application functions and transactions on them. The goal is to support the threat analysis of how effective the security controls are in mitigating the threats.

The fourth stage gathers threat and attack information from threat intelligence and from internal sources. The objective is to learn from the attack scenarios and the attack vectors used by different banking malware. Internal incidents and security events are then correlated with banking malware attacks and are also used to qualify the likelihood and impact of banking malware threats.

In the fifth stage, the threat analyst looks at the potential application vulnerabilities and the design flaws identified by other assessments, such as black-box (e.g. pen test) and white-box (e.g. source code analysis) security testing. These are the vulnerabilities that banking malware can possibly exploit. The vulnerability analysis in this case ought to be end to end, that is, from the client/browser to the servers (e.g. web servers, app servers) and the back-end systems (e.g. middleware and mainframes) used by the online banking application. A generic correlation framework mapping vulnerabilities to threats can also be used to identify which vulnerabilities (e.g. browser vulnerabilities, session management vulnerabilities) can potentially be exploited by banking malware.

The sixth stage analyzes and simulates the attack scenarios as the attackers would, using the same attack vectors the malware uses. The purpose of this exercise is to identify if, and which, vulnerabilities and weaknesses, such as design flaws in the application, can be exploited. This stage includes the analysis of banking malware attacks using attack trees, the analysis of how attacks exploit the vulnerabilities using attack libraries, and the analysis of the abuse of security controls for hacking financial transactions using use-and-abuse-case techniques. At this stage, design flaws and gaps in the application's security controls are identified both at the application-architecture level and at the function/transaction level.

Finally, in the last stage, risk managers analyze the risks and impacts and formulate the strategy for mitigating the risks of banking malware. The basis of the risk analysis is the categorization and calculation of the risk factors (e.g. threats, attacks, vulnerabilities, technical and business impact) and the calculation of the risk of each exploit with qualitative and quantitative risk models. The risk mitigation strategy includes both preventive and detective controls; defense-in-depth criteria for applying countermeasures at the different layers of the application (browser, web application and infrastructure); and new governance processes: risk-based testing, improved fraud detection, threat analysis and cyber intelligence.
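As a minimal sketch of the qualitative side of this last stage, risk can be modeled as likelihood times impact and bucketed into ratings. The levels, thresholds and field names below are illustrative assumptions, not part of the P.A.S.T.A. specification:

```typescript
type Level = 1 | 2 | 3; // 1 = low, 2 = medium, 3 = high

interface RiskFactors {
  threatLikelihood: Level; // qualified from threat/attack analysis (stages 4 and 6)
  businessImpact: Level;   // qualified from business objectives (stages 1 and 7)
}

// Minimal qualitative model: risk score = likelihood x impact.
function riskScore(f: RiskFactors): number {
  return f.threatLikelihood * f.businessImpact;
}

// Bucket the 1..9 score into a rating (thresholds are illustrative).
function riskRating(score: number): string {
  if (score >= 6) return "HIGH";
  if (score >= 3) return "MEDIUM";
  return "LOW";
}

// A banking-malware exploit of a session management flaw: likely and costly.
const rating = riskRating(riskScore({ threatLikelihood: 3, businessImpact: 3 }));
console.log(rating); // "HIGH" => prioritize preventive and detective controls
```

In practice a risk manager would replace the levels with calibrated scales and, where loss data exists, complement this with a quantitative model.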

The ultimate goal is to provide application security practitioners in different roles (e.g. appsec/infosec risk managers and application security architects) with a use-case example of P.A.S.T.A.™ threat modeling: modeling banking malware attacks, identifying gaps in security controls and vulnerabilities, and identifying protective and detective countermeasures that can be rolled out by following a risk mitigation strategy. The application risk framework seeks to empower risk management to make informed decisions to protect online banking applications from banking malware.

Sunday, February 06, 2011

7 Security tips for secure coding your HTML 5 applications

Since the release of the HTML 5 standard is expected in 2011, it is important to prepare for its potential security impacts. We can already review the working draft from the W3C and start looking at the standard from the secure coding perspective, specifically at how to write secure HTML 5 software. Since this blog is dedicated to software security, I thought I should put out a list of the top security concerns to address when coding applications in HTML 5. Here is my list of the top 7 software security best practices for HTML 5 applications:

1) Be careful when using cross-domain messaging features
HTML 5 APIs allow processing messages from an origin different from that of the application processing the message. You should check the origin of the message to validate that it can be trusted, for example by whitelisting domains (accept only requests from trusted domains and reject all others). Specifically, when using HTML 5 APIs such as postMessage(), check the MessageEvent.origin attribute before accepting the request.
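A minimal sketch of the whitelist check described above; the trusted-domain list and the helper name are hypothetical, and the browser event-handler wiring is shown in comments:

```typescript
// Hypothetical whitelist: replace with the origins your application trusts.
const TRUSTED_ORIGINS = new Set([
  "https://www.example-bank.com",
  "https://partner.example.com",
]);

// Exact-match check on the full origin (scheme + host + port).
// Substring or prefix checks can be bypassed (e.g. evil-example-bank.com).
function isTrustedOrigin(origin: string): boolean {
  return TRUSTED_ORIGINS.has(origin);
}

// In the browser, the message handler would use it like this (sketch):
// window.addEventListener("message", (event: MessageEvent) => {
//   if (!isTrustedOrigin(event.origin)) return; // silently drop untrusted messages
//   handleMessage(event.data);                  // then validate the data itself
// });
```

Note that checking the origin only tells you who sent the message; the message data still needs input validation (see the next tip).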

2) Always validate input and filter malicious input before using HTML 5 APIs
You should validate input data before processing any messages from HTML 5 APIs such as postMessage(). Input validation should be done on the server side at a minimum, since client-side validation can be bypassed with a web proxy. If you are using client-side SQL such as Web SQL (like Google Gears, for example), you should filter data for SQL injection attack vectors and use prepared SQL statements.
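A minimal sketch of whitelist input validation; the account-number format is an illustrative assumption, and the Web SQL call shown in the comments is a sketch of the parameterized-statement pattern:

```typescript
// Whitelist validation: accept only what the field is expected to contain.
// Hypothetical example: an account-number field of exactly 10 digits.
const ACCOUNT_RE = /^[0-9]{10}$/;

function isValidAccountNumber(input: string): boolean {
  return ACCOUNT_RE.test(input);
}

// With client-side SQL (e.g. Web SQL), bind the validated value as a
// parameter so it is treated as data, never concatenated into the query:
// db.transaction((tx) => {
//   tx.executeSql(
//     "SELECT * FROM tx_history WHERE account = ?", // placeholder, not concat
//     [accountNumber]
//   );
// });
```

The same whitelist-then-parameterize pattern applies on the server side, which remains the authoritative place for validation.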

3) Make sure that any use of offline/local storage is secure
Whenever possible, do not store any confidential or sensitive data in offline/local storage; if you must, make sure you encrypt the data. If you do encrypt data in offline/local storage, do not store the encryption keys on the client; instead, use the server to encrypt the data on demand. Make sure the encryption key is tied to the user's session and to the device storing it. Be aware that HTML 5 offline applications are vulnerable to cache poisoning, so validate data before putting anything into offline/local storage. You should also consider restricting the use of offline/local storage as a requirement of your HTML 5 secure coding standards where possible. Note that, as of this writing (January 2011), offline/local storage is not supported by IE, only by Google Chrome, Safari, Firefox and the Opera beta.
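One way to enforce the "no sensitive data in local storage" rule in code is a guarded setter that refuses to persist fields whose names look sensitive. The deny-list pattern and function name below are illustrative assumptions, and a name-based check is only a safety net, not a substitute for classifying your data:

```typescript
// Hypothetical deny-list of field names that must never reach
// offline/local storage in cleartext.
const SENSITIVE_KEY = /(password|ssn|cvv|card|pin|token|secret|key)/i;

interface KeyValueStore {
  setItem(key: string, value: string): void;
}

// Guarded setter: throws rather than persisting a sensitive-looking field.
// In the browser, pass window.localStorage as the store.
function safeSetItem(store: KeyValueStore, key: string, value: string): void {
  if (SENSITIVE_KEY.test(key)) {
    throw new Error(`refusing to store sensitive field "${key}" client-side`);
  }
  store.setItem(key, value);
}
```

Non-sensitive preferences (themes, layout choices) pass through; anything matching the deny-list fails loudly during development instead of silently leaking to disk.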

4) Security code review HTML 5 code, including HTML 5 tags, attributes and CSS
You should update your secure code analysis rules to include security checks for HTML 5 tags and attributes. Some HTML 5 tag attributes, for example, can potentially be injected with JavaScript (JS). You should make it a requirement to code review these new HTML 5 tags for security and ensure any JS input is validated. The new HTML 5 CSS might also allow an attacker to control display elements via JS injection. HTML 5 source code with new tags and attributes, and HTML 5 CSS files, should be considered in scope for source code reviews before deployment.

5) Consider restricting or banning the use of the HTML 5 WebSocket API
The HTML 5 WebSocket API provides a network communication stack to the browser that can be used for backdoors. You should check with your security team whether the use of WebSockets is allowed by your organization's information security policies and application security standards.

6) Make sure your company's legal team approves any use of the geolocation API
Consider the privacy impact of using geolocation APIs, and make sure their use is allowed by and compliant with the privacy laws and regulations that apply to your company. Because of the privacy impact, the use of geolocation should be reviewed for compliance with privacy policies, which might include notifying users when these APIs are deployed as part of your application.

7) Leverage the security of the iFrame sandbox attribute
One of the HTML 5 features is the sandbox attribute for iFrames, which enables a set of extra restrictions on any content hosted by the iFrame. When this attribute is set, the content is treated as being from a unique origin, forms and scripts are disabled, links are prevented from targeting other browsing contexts and plug-ins are disabled. Ian Hickson, the editor of the HTML 5 specification, has a post on what the sandbox is good for. You should consider updating your organization's secure coding standards to cover how to securely code applications that leverage the HTML 5.0 sandbox attribute for iFrames.
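The sandbox behavior described above can be sketched as a small tag builder: a bare `sandbox` attribute applies all restrictions, while tokens such as `allow-scripts` selectively relax them. The helper function is hypothetical; only the `sandbox` attribute and its tokens come from HTML 5.

```javascript
// Generate an iFrame tag using the HTML 5 sandbox attribute.
// With no tokens, the framed content is treated as a unique origin
// and scripts, forms and plug-ins are disabled.
function sandboxedIframe(src, tokens = []) {
  const sandbox = tokens.length
    ? ' sandbox="' + tokens.join(' ') + '"'
    : ' sandbox';
  return '<iframe src="' + src + '"' + sandbox + '></iframe>';
}

console.log(sandboxedIframe('https://thirdparty.example/widget'));
// <iframe src="https://thirdparty.example/widget" sandbox></iframe>
console.log(sandboxedIframe('https://thirdparty.example/widget', ['allow-scripts']));
// <iframe src="https://thirdparty.example/widget" sandbox="allow-scripts"></iframe>
```

A coding standard could then require that third-party content is only ever framed through such a helper, with the token list reviewed for each use.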

Monday, November 15, 2010

Tribute to Software Security Guru Roman Hustad

Roman Hustad, OWASP chapter leader in Sacramento, CA, died suddenly on November 4th at the age of 39, the result of a fatal heart rhythm caused by an enlargement of his heart, the cause of which is still unknown. He collapsed after arriving at the Las Vegas airport that evening. Roman suffered virtually no pain and was surrounded by others.

Roman is survived by his wife of 6+ years, Tanya (Burgdorf) Hustad, and his sons Lucas (4 yrs old) and Wyatt (2 yrs old), his sister Holly (Fail) Hoeksema, and brother, Andrew James. The whole family is being supported and cared for by loving family and friends in Davis, CA at the moment.

This is also a big loss for OWASP and the appsec community. I knew Roman as a former colleague at Foundstone, and I worked with him on a four-month software security engagement for a financial client in Orange County, CA in 2006.

Roman was a person of high professional standards, strong integrity, generosity and ethical values. Professionally, he was a top-notch principal software security consultant and one of the best, if not the best, Java security trainers I have ever known. After I left Foundstone in 2007, I regret that I did not keep in touch with him. I will always remember him as one of the best software security consultants I had the pleasure to work with.

As a tribute to Roman's published work, I have provided some references herein.

Hacme Books vs 2.0 Strategic Secure Software Training Application

Papers such as:
 "Implementing a Software Security Training Program"
"Holistic Approach for Secure Software"

Roman also published a paper for the ISSA Journal, "How Virtualization Affects PCI-DSS: A Review of the Top 5 Issues".

Friday, September 10, 2010

Recent Acquisitions In The Security Industry And What It Means For Software Security Professionals

The recent news of the acquisitions of McAfee by Intel and of Fortify by HP can be interpreted as a future trend for the security industry: build security into hardware and engineering processes instead of bolting security onto products. Intel's acquisition of McAfee, for example, can be interpreted as a move by Intel to integrate application security with the hardware (e.g. microchips) that Intel currently develops. Similarly, the acquisition of Fortify Software by HP can be interpreted as a move by HP to integrate software security within HP's suite of tools for software testing. Moreover, the news of the McAfee acquisition by Intel can also be interpreted to mean that the age of companies as pure providers of antivirus tools has come to an end. This was also predicted by John Kula in his book, Hacking Wall St.: Attacks and Countermeasures: "By the end of 2010, conventional pattern matching anti-virus systems will be completely dead. Their effectiveness will have fallen below 50%."

To understand how signature-based Anti-Virus (AV) detection and eradication tools have become outdated, we need to look at the evolution of security threats in the last two decades and how this affected the effectiveness of AV tools in mitigating current threats such as cybercrime. This is mostly due to the fact that the security threats that consumers and businesses have to protect against today are very different from the ones they had to protect against ten years ago. In the 90s the main targets for viruses were users' PCs; typical attack vectors included opening unknown email attachments that infected PCs and spread to company servers. In 2001 we witnessed the appearance of the first malicious rootkit for Windows NT: such a rootkit had the capability to sneak under the radar of anti-virus software and evade detection. In 2003 denial of service attacks took advantage of the spreading of worms for infrastructure-wide exploitation of buffer overflows, such as the SQL Slammer worm that caused denial of service to several ATMs at banks such as Bank of America and Washington Mutual. As new signatures were developed to detect and eradicate viruses and worms, the effectiveness of anti-virus tools rested on the capacity to identify viruses and worms by the unique signature of the attacks, as well as on the capability to eradicate viruses and worms after the infection by patching the infected system. But in 2005, we witnessed email phishing attacks spreading Trojan programs embedded in apparently harmless files, eluding anti-virus software and firewalls, with the purpose of data exfiltration such as stealing passwords and sensitive data. In 2007, we had evidence of botnet-controlled trojans used as crimeware tools to rob online bank customers, spreading either through targeted phishing attacks or through drive-by download infections.
More recently, in 2009, Trusteer, a security company providing anti-malware solutions, published an advisory entitled "Measuring the in-the-wild effectiveness of Antivirus against Zeus", according to which the most popular banker malware, Zeus, is successfully bypassing up-to-date antivirus software: "The effectiveness of an up to date anti virus against Zeus is thus not 100%, not 90%, not even 50% - it's just 23%".

It is therefore clear, in my opinion, that the defenses against malware infection, whether from viruses, trojans or worms, have to be expanded to include other layers of the technology stack that are now targets for rootkit and malware attacks. These expanded layers might include, for example, besides the O.S. and the application, also the hardware, kernel and firmware layers that are currently below the radar of AV detection tools.
Expanding security protection to the hardware layer is beneficial not only for detection controls such as malware intrusion detection, but also for preventive security risk controls such as data protection. In the case of cybercrime malware rootkits such as ZeuS, for example, which seek to compromise the communication channel between the PC and banking sites, the malware attacks the client by hooking either into the kernel to do Man In The Middle (MiTM) attacks or into the browser APIs to do Man in The Browser (MiTB) attacks. In both cases, there is a lot of security to gain at the application layer by protecting the data at the hardware layer. One way to defeat MiTM attacks, for example, is to secure the communication channel through 2-way mutual authentication and PKI, using client identities that are protected by so-called "ID vaults" embedded in hardware chips and secured at the firmware layer. An example of these "ID vaults" is the Broadcom USH Unified Security Hub, which is included in several PCs today and is leveraged by data protection tools such as Verdasys's Digital Guardian data protection solution. You might also consider the benefit of developing applications with hardware defenses, such as enforcing firmware controls by digitally signing your application at the firmware layer. For those of you who attended Barnaby Jack's talk about jackpotting ATMs at BlackHat this year, signing the application at the firmware layer was one of the mitigations recommended against rootkit infections.

The other big opportunity for security companies is the integration of software security with hardware, such as in the case of applications for mobile phones. As software is built for a specific mobile O.S. (e.g. Android or iPhone O.S.), it can also be built out of the box to leverage security controls deep in the technology stack, including kernel APIs, firmware and hardware. In the case of detecting attack vectors, intrusion detection events triggered at the different layers of the technology stack can drive defenses at the application layer, such as blocking the application from running or from transferring data to the server. These are just a few examples of security synergies across layers of the technology stack.

In summary, I think Intel's acquisition of McAfee could give Intel the opportunity to design hardware chips that tightly integrate security detection and prevention controls with firmware and software, and provide additional layers of security to applications.

The other industry M&A news was the acquisition of the software security company Fortify by HP: this follows a trend of big software companies such as IBM and HP acquiring security tools companies such as Watchfire and Fortify. Previously, HP grew its security assessment suite of tools through the acquisition of SPI Dynamics' WebInspect, integrating it into HP's software quality assurance suite of tools, QAInspect. Since IBM previously acquired the application scanning tool WatchFire's AppScan and static analysis tool provider Ounce Labs, the acquisition of Fortify's static analysis tool by HP fits the scenario of HP competing head to head with IBM in the software security space. For the sake of competition, the acquisition of Fortify by HP makes a lot of sense, but it also fits the trend in the industry of running software security either as a service or as an assessment integrated into the Software Development Life Cycle (SDLC) process.

For example, application and source code vulnerability scanning assessments, referred to as dynamic and static testing, can be performed as Software Security as a Service (SSaaS) for software development stakeholders such as application architects, developers and testers. These services can also include automated security tools that can be rolled out as part of the overall software development and testing suite of tools, such as Integrated Development Environments (IDEs) and Q/A testing tools. Obviously, security tool integration with IDEs and Q/A testing tools is just one part of the software security equation, as besides tools you also need to roll out secure coding training and secure coding standards. The holistic nature of software security, which includes people, process and technology, is often misunderstood by those who have to manage software security initiatives for organizations, as software security tools or services alone are misinterpreted as sufficient to produce secure software.

To produce secure software with a level of software security assurance that is both risk-mitigating and cost-effective, organizations need to roll out, besides static and dynamic analysis tools and services, also software security training for developers and software security engineering processes/methodologies such as SAMM, BSIMM, MS SDL-Agile, Securosis SSDL and OWASP CLASP.

Obviously, the increased adoption of static and dynamic analysis tools by the enterprise follows the application and software security tool adoption trend. If you refer to a survey from Errata Security, "Integrating Security Into the SDLC", it is shown for example that static analysis is the most popular activity (57%), followed by manual secure code reviews (51%) and manual testing (47%). The trend of adoption of application and software security tools usually follows the enterprise's awareness of the application security problem as a software security problem. At the beginning of rolling out an application security initiative, companies start from the far right of the SDLC by rolling out application scanning tools and ethical hacking web assessments, and then move toward the left of the SDLC with source code analysis. Eventually the awareness of the software security problem moves to the design stage, by trying to identify security design flaws earlier in the SDLC with Application Threat Modeling (ATM). Right now, according to the Errata Security survey, only 37% of organizations have adopted ATM as part of the SDLC. I believe the trend will lead in the direction of adopting ATM because of the efficiencies and the larger security coverage that ATM provides. This low ATM adoption can probably be explained by not enough security awareness yet of the benefits of ATM, as well as by the maturity levels required to seek adoption of ATM within the SDLC.

Software security training for developers is also a trend: 86% of the participants in the survey sent one or more members of the software development team to security training. But again according to the Errata Security survey, software security is not yet part of the top list of information security management concerns, as only about 1/6 of participants (16%) send their project managers and InfoSec and AppSec directors to software security process management training.
As static and dynamic security testing adoption grows in the industry, there will also be a need for software security services such as software security training and the development of engineering processes and standards. This trend follows the integration of the organization's SDLCs, as well as InfoSec/AppSec and risk management processes, with formal software assurance methodologies and activities such as vulnerability assessments, secure code reviews and secure design reviews/application threat modeling.
These trends in the M&A of the software security industry will also create new career opportunities. In the case of information security managers, for example, there will be a need to hire managers with the right experience and skills in managing software security processes for organizations. In the case of software engineers and security consultants, it will create a need for software engineers and consultants abreast of software security formal methods, static and dynamic analysis tools, as well as security assessments such as secure code reviews and application architecture risk analysis and design, or application threat modeling. In the case of electrical, software or computer system engineers, knowledge of hardware and software security could also be leveraged to become an expert in hardware-software security integration, such as in the design of hardware-embedded application security products/solutions.

In conclusion, as a software security practitioner, in your current professional role of information security manager, software security architect, software security consultant or software security trainer/instructor, you might look at these industry trends to set your career goals and cultivate the necessary skills and experience that could lead you to the new career opportunities being created as a result of these security industry trends.

Monday, July 26, 2010

BlackHat, Defcon, BSides, Here We Come..

It is time to attend the BlackHat U.S.A. conference again and join the crowd (or herd?) of hackers (white and black hats), security researchers, consultants, security managers and information security officers. Since the conference is held in Las Vegas at the Caesars Palace casino, it is kind of interesting to watch the scene of the geeky crowd mingling with the gamblers and the nicely dressed people ready for the night shows.
I attended BlackHat for the first time in 2006, when I presented at a turbo talk session on Building Security In the SDLC; not quite the hacker's topic, I remember. It was quite stressful to be a speaker and I was rather scared to confront the very knowledgeable crowd of security folks that attends BH... Overall my presentation went OK, but I remember I enjoyed more the stress-free sunbathing at the cabana/booth that Foundstone Inc. set up at the Venus/European style pool at the Caesars Palace casino :).

I attended BH and also Defcon in 2008 and 2009, but no longer as a speaker. I actually think Defcon is a lot of fun: you can learn from the real hackers (including the ones that got caught hacking the Riviera Casino ATMs) and you can learn from thought leaders and stars of security like Bruce Schneier, Dan Kaminsky and others. You also get the most for your money attending Defcon instead of BlackHat, since the conference fee costs only a small fraction (10%) of what the BH conference fee costs: compare $140 for Defcon vs. $1,800 for BlackHat... The value of attending BH nowadays, in my opinion, is mostly being able to get first-hand information on exploits/hacks. As a zero-day vulnerability is announced, you can get your company to act promptly and remediate as soon as vulnerabilities are released to the public. The other value of attending BH is the opportunity to network with other security professionals, promote your research/books and, for me, to find good speakers for our local OWASP chapter.

Regarding the scheduled presentations of this year's BH conference, there are several good ones that I would recommend attending, such as Barnaby Jack's "Jackpotting the ATM" (this is the talk that was pulled last year but can now be released), Robert Hansen's "HTTPs can beat me", Jeremiah Grossman's "Breaking Browsers: Hacking Autocomplete" and Gunter Ollmann's "Becoming the six-million-dollar man". There are also several presentations on mobile security that look very interesting to me, among them David Kane-Perry's "More Bugs in More Places: Secure Development on Mobile Platforms". I usually tend to select talks based upon relevance for my work, such as web application security, as well as the reputation/bio of the presenter. I shared my selections on
Since I am staying in Las Vegas till Sunday to attend Defcon (the sister security conference that runs from Thursday till Sunday at the Riviera Hotel), I also plan to attend the few talks that were presented at BH but that I could not attend over there.

There is also a new conference this year: BSides. BSides is an open security conference that combines structured events with grassroots security talks. I have heard good things about BSides; it was previously held during the RSA conference in San Francisco. My friend Tony UcedaVelez (co-author with me of the upcoming Application Threat Modeling book) and his company VerSprite are among the sponsors of the BSides Las Vegas conference. If you are in Las Vegas and you read this post, I hope to meet you over there at one of these conferences. I also kindly recommend my favorite place for breakfast, which for me is cappuccino and croissants: Payard Patisserie and Bistro @ Caesars Palace...

Sunday, March 21, 2010

How a process model can help bring security into software development

This is a very good article about the SSDLC (Security Enhanced Software Development LifeCycle). It should be mandatory reading for promoters of SSDLC initiatives within organizations. This article (third in the series on the secure software lifecycle) captures some of my previous work around the concept of the Software Security Framework (SSF). The SSF was conceived as a framework to integrate security within the Software Development Lifecycle (SDLC) as well as with existing information security and risk management processes. The idea of the SSF originated in 2005 while working with clients of Foundstone (the security consulting company that was acquired by McAfee in 2004), mostly financial institutions and telcos, and was presented at the BlackHat USA Conference in 2006.

Software Security Framework
In general, I have to give credit for the idea of the SSF to the CISOs that I worked for back then as a consultant, such as Mr. Denis Verdon. I also have to thank Mr. Joe Jarzombek, PMP, Director of Software Assurance at the National Cyber Security Division of the Department Of Homeland Security (DHS), for capturing my contributions in the first SSDLC DHS document, as well as SMEs such as Mrs. Karen Mercedes Goertzel at the IATAC (Information Assurance Technology Analysis Center) for documenting the SSF in the 2007 State of the Art Report on Software Assurance. More recently, the idea of the SSF evolved thanks to the work of Dr. Gary McGraw, CTO of Cigital, in the context of software security maturity models as a framework of software assurance best practices within software maturity model domains.

Friday, March 19, 2010

Perceived Security vs. Real Security

M.C. Escher (1898 - 1972), Bond Of Union, 1956.
Risk mitigation is about making a more or less objective assessment of the possible circumstances and events that might determine an impact. The perception of risk is an important factor in determining how humans make decisions on how to mitigate risks. Human perception of risk is biased by facts and assumptions that might prevent objective and factual judgment of risk mitigation. Some of these perception factors are not risk factors but are driven by human emotion and experience.

One important factor is fear; consider for example these data on how fear relates to perception of risk: the fear of earthquakes has been reported to be more common than the fear of slipping on the bathroom floor, although the latter kills many more people than the former; the fear of flying is still widespread despite the chances of being involved in an aircraft accident being about 1 in 11 million, while your chances of being killed in an automobile accident are 1 in 5,000. Bruce Schneier has actually posted on his blog some other interesting examples of human perception of risk. Why does perception matter for security risk professionals? Well, assume you would like to drive security decisions; then understanding human reaction to risk is a critical factor to consider in risk mitigation decision making.

Understanding cognitive science basics is very important. Consider for example security awareness. Studies show that awareness shifts the perception of risk. In general you are aware of a risk that is close to you or of an event that you have experienced before, and this drives risk mitigation decisions and investment in security. Statistics from OWASP, for example, show that organizations that have experienced a public data breach spend more on security in the development process than those that have not.

Basically a breach or an event that has occurred drives risk awareness and is an important factor in risk mitigation decisions and security spending. The relationship of bad events to risk perception is also confirmed by cognitive science: events that have been experienced before are easily brought to mind, are imagined, and are judged to be more likely than events that could not easily be imagined and never occurred.

Another important aspect of risk is what is referred to as risk appetite, or being risk averse because of a potential gain. In general humans are risk averse with respect to gains, such as preferring a sure thing over a gamble with a potential loss, and taking a risk when the loss is small compared with the potential gain. Consider for example risk perception biased by human greed. Sometimes risk decisions are blind to potential losses because of a lack of due diligence on what the losses could be. This is what some refer to as taking the risk as the chicken or as the hawk. Another way to think about risk vs. gain is to rationalize what residual risk is left if an event were to occur, where the probability of the event can be estimated based upon real incident/event data. In essence it is the "what I could lose" factor weighed against the business gain of taking the risk. This requires being able to visualize and articulate the risk event and simulate the losses that would occur if the event were to materialize. In my day-to-day job, for example, I would use threat scenarios and simulate the event of a loss to make the point to the business of the potential loss due to the exploit of a vulnerability.
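The "what I could lose" reasoning above can be sketched with a standard annualized loss expectancy (ALE) calculation, where the event rate is estimated from incident data. All figures here are made-up examples for illustration, not data from any real assessment.

```javascript
// Annualized loss expectancy: expected yearly loss from an event,
// given the cost of one occurrence and how often it occurs per year.
function annualizedLossExpectancy(singleLossExpectancy, annualRateOfOccurrence) {
  return singleLossExpectancy * annualRateOfOccurrence;
}

// Suppose incident data suggests a fraud event costing $250,000 that
// occurs roughly twice a year; compare the expected loss against the
// yearly cost of a countermeasure.
const ale = annualizedLossExpectancy(250000, 2);
const countermeasureCost = 100000;
console.log(ale); // 500000
console.log(ale > countermeasureCost ? 'mitigate' : 'accept'); // mitigate
```

Putting a number next to the threat scenario is exactly what helps counter the perception biases discussed above: the business sees a simulated loss rather than an abstract vulnerability.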

Threat and risk modeling can be a useful way to visualize an attack: which threats might materialize into an attack, the vulnerabilities that can be exploited, and how these vulnerabilities can cause an impact. Nevertheless, even if the threat scenario is visualized, the decision of whether to deploy a countermeasure or not is a risk judgment that is biased by business factors such as usability and customer impact, and even with a visualized threat scenario showing the risk potential, perception could still be such that the risk would be acceptable. If the threat scenario applies directly to a real event or incident that occurred before, most likely the associated risk won't be accepted, and the same holds if the threat scenario applies to a compliance risk event that could be found by an incoming audit.

In essence, for certain organizations, previous incidents and audit findings can drive security decisions more than threat assessments such as risk analysis and threat modeling.

Another important factor in the perception of risk is whether the risk impacts an organization's or an individual's responsibility directly or indirectly, independently of whether the event has occurred or not. If the impact is direct, such as in the case of assuming the liability for the loss from a bad event occurring, risk awareness will be higher than if it is indirect and, happening to a third party, would be considered a non-liability.

In essence, to make the case for risk you need to consider how risk can be differently perceived by the business, factoring in fear as it relates to loss, and rationalize residual risk as it relates to business gains. If the organization is fear-driven in risk decision making, including data from previous incidents and fraud that the company experienced before can help to drive security awareness as a factor of risk mitigation. If the organization is audit-driven, use the audit findings and non-compliance liabilities to make the case for mitigation.

Ultimately, the adoption of security initiatives and security spending can be driven by informed risk decisions using threat models and risk factors such as likelihood and impact, but also by factoring in perceived security and risk vs. actual/real security and risk.

Sunday, January 24, 2010

OWASP Italy Day 4 Software Security Initiatives Conference Presentation Videos

OWASP Italy has published the videos of the conference on Software Security Initiatives held in Milan and Rome, Italy last November.
The videos for the Milan conference can be reached at the following OWASP Day 4 Italy Page
From the OWASP page you can also download my two video webcasts (Italian language over English slides) of the related conference presentations: (1) Guidance for starting software security initiatives within your organization, and (2) Business cases for software security initiatives.
A heartfelt thank-you to the OWASP Italy organization for putting this together, especially to Matteo Meucci, OWASP Italy Chair, and Giorgio Fedon, Chief Operating Officer of Minded Security.

If there is interest in having these webcasts in English as well, please contact me directly. Thanks!