
IEEE-USA | feature | 11.11
Risk-Based Metrics for Software System Design, Development, and Test

By Dr. Carolyn Turbyfill
 


 

A variety of factors, if left unaddressed, increase the risk of undesirable system behavior and undermine desirable attributes such as reliability, security, privacy, performance, fault tolerance, resilience, availability, sustainability and maintainability. In the context of these attributes, we will also discuss how vulnerabilities, threats, weaknesses, defects and exploits greatly reduce the ability to ensure acceptable system behavior. We will do so by examining the financial and technical trade-offs that must be considered in order to move from unacceptable risk to tolerable risk.

Figure 1, from Software Carpentry, is a classic graph that describes the increasing cost of fixing problems later in the product lifecycle.

Figure 1:  Cost of Fixing Software in Different Stages of the Product Lifecycle[1]


"Thirty-five years ago, Barry Boehm and others discovered that the later a bug is found, the more expensive fixing it is. What’s more, the cost curve is exponential: as we move from requirements to design to implementation to testing to deployment, the cost of fixing a problem increases by a factor of 3 to 10 at each stage, and those increases multiply."
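The compounding described in that quote can be sketched numerically. The following is an illustrative calculation, not from the article; the per-stage factor of 3 is an assumed value at the low end of the quoted 3-to-10 range.

```python
# Illustrative sketch of Boehm-style compounding cost-to-fix multipliers.
# FACTOR_PER_STAGE = 3 is an assumption (low end of the quoted 3-10x range).
STAGES = ["requirements", "design", "implementation", "testing", "deployment"]
FACTOR_PER_STAGE = 3

def relative_fix_cost(stage: str) -> int:
    """Cost of fixing a defect at `stage`, relative to fixing it during
    requirements (cost 1); the per-stage factors multiply."""
    return FACTOR_PER_STAGE ** STAGES.index(stage)

for stage in STAGES:
    print(f"{stage:>14}: {relative_fix_cost(stage)}x")
```

Even at the conservative factor of 3, a defect that survives to deployment costs 81 times what it would have cost to fix during requirements, which is why the curve is described as exponential.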

Figure 1 does not imply that you need to know everything you wish to accomplish when you initially specify software.  It does mean that you need to make your code modular and extensible.  A good architect knows how to leave flexibility in code for expected extensions and unexpected requirements.
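One way to leave that kind of flexibility in code is to expose an explicit extension point, so that new behavior is registered rather than patched into existing logic. The sketch below is illustrative only; all names (`register_check`, `validate`) are invented for this example, not taken from the article.

```python
# Minimal sketch of an extension point: new validation rules plug in via
# registration, so existing code need not change to support them.
from typing import Callable, Dict

_checks: Dict[str, Callable[[str], bool]] = {}

def register_check(name: str):
    """Decorator that registers a new validation rule under `name`."""
    def wrap(fn: Callable[[str], bool]):
        _checks[name] = fn
        return fn
    return wrap

@register_check("nonempty")
def nonempty(value: str) -> bool:
    # An initial rule; future rules are added the same way, elsewhere.
    return bool(value.strip())

def validate(value: str) -> Dict[str, bool]:
    """Run every registered rule; this function never needs editing
    when new rules are registered."""
    return {name: fn(value) for name, fn in _checks.items()}
```

The design cost of the registry is small up front, but it converts "unexpected requirement" changes from edits of tested code into additions alongside it, which is where the curve in Figure 1 rewards you.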

In deployment, software is secure when it does exactly what it is supposed to do and nothing else. Software security includes reliability and availability, which must be assured against bugs, malicious attacks and unintentional misuse, as well as natural and unnatural disasters. One all-too-frequent cause of software failures is the application of patches or upgrades that have not been sufficiently tested in a customer’s environment. You must also consider multiple simultaneous failures: a natural disaster could affect the power grid, phone services and transportation, disabling services assumed to be available in a disaster recovery plan, while terrorists or criminals could launch opportunistic cyber attacks that take advantage of the distraction.

In terms of threat assessment, another disturbing trend is illustrated in Figure 2, which charts the growing availability of scripts for launching cyber attacks. Increasingly sophisticated attacks can now be acquired ready-made on the Internet and launched by relatively unsophisticated intruders.

Figure 2: Attack Sophistication versus Intruder Knowledge [2]

Managing risk in an increasingly complex and hostile Internet environment requires forethought and planning across the Software Development Lifecycle. Understanding and managing risk is a cross-functional issue that is frequently considered an unwelcome expense and distraction until a problem occurs, at which point it is far too late to start addressing it. An excellent source of IT security guidelines is the 800 series of NIST Special Publications, available at http://csrc.nist.gov/publications/PubsSPs.html, including but not limited to:

- SP 800-86 - Aug 2006 - Guide to Integrating Forensic Techniques into Incident Response - SP800-86.pdf

- SP 800-84 - Sep 2006 - Guide to Test, Training, and Exercise Programs for IT Plans and Capabilities - SP800-84.pdf

- SP 800-83 - Nov 2005 - Guide to Malware Incident Prevention and Handling - SP800-83.pdf

- SP 800-70 Rev. 2 - Feb 2011 - National Checklist Program for IT Products: Guidelines for Checklist Users and Developers - SP800-70-rev2.pdf

- SP 800-65 Rev. 1 - Jul 2009 - DRAFT Recommendations for Integrating Information Security into the Capital Planning and Investment Control Process (CPIC) - draft-sp800-65rev1.pdf

- SP 800-61 Rev. 1 - Mar 2008 - Computer Security Incident Handling Guide - SP800-61rev1.pdf

- SP 800-30 - Jul 2002 - Risk Management Guide for Information Technology Systems - sp800-30.pdf
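As a rough illustration of the kind of qualitative risk rating SP 800-30 describes, a likelihood-times-impact score can be computed as below. The numeric scales and level thresholds here are assumptions made for this sketch; consult SP 800-30 itself for the actual methodology and tables.

```python
# Hedged sketch of a qualitative risk rating in the spirit of NIST SP 800-30
# (likelihood x impact). The scale values and thresholds are illustrative
# assumptions, not authoritative NIST figures.
LIKELIHOOD = {"low": 0.1, "medium": 0.5, "high": 1.0}
IMPACT = {"low": 10, "medium": 50, "high": 100}

def risk_score(likelihood: str, impact: str) -> float:
    """Numeric risk score: likelihood weight times impact magnitude."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: float) -> str:
    """Bucket a score into a qualitative level (thresholds assumed)."""
    if score > 50:
        return "high"
    if score >= 10:
        return "medium"
    return "low"
```

A scoring scheme like this is most useful for ranking mitigation work across a portfolio of risks, which is exactly the cross-functional planning activity the guidelines above support.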

On 14 December, a related IEEE-USA webinar will cover processes and guidelines to enable stakeholders from both technical and business areas of an organization to create a Software Risk Management Plan.

References

1)  Figure 1 is from ‘Study Development’, in the fourth lecture on software engineering, at http://softwarecarpentry.org/4_0/softeng/sturdy/. License: http://creativecommons.org/licenses/by/3.0/legalcode.

2)  From European Network and Information Security Agency, ‘A step-by-step approach on how to setup a CSIRT’:  http://www.enisa.europa.eu/act/cert/support/guide/files/07.jpg

 
IEEE-USA's six-part webinar series on risk management continues on 14 December with a presentation by Dr. Carolyn Turbyfill on "Risk-based Metrics for Software System Design, Development and Test."

Speaker: Dr. Carolyn Turbyfill
When: 14 Dec. 2011, 1-2 PM ET

About:
This webinar discusses the role that software can play in compounding the risks of a physical system unless the software-induced risks are mitigated. Physical systems that rely heavily on software for functionality, control or automation all require that specific categories of software-borne risk be reduced. To accomplish this, however, a clearer understanding is needed of the interplay between the physical and non-physical (software) components of a completed system in a particular environment. The webinar also provides hints on how to sell and implement software risk mitigation at work.
 
Rates
IEEE Members $19 for individual webinar; $89 for series
Non-Members $38 for individual webinar; $189 for series

 


 


Dr. Carolyn Turbyfill is a vice president of engineering at StackSafe, Inc., and Revive Systems, Inc. She has more than 20 years of experience in security; project management; the SDLC; enterprise products and services; compliance; databases; strategy and roadmaps; management of multiple groups in domestic and international locations; and startups and turnarounds. She has a Ph.D. in Computer Science from Cornell University, an M.Sc. in Computer Science from the University of Wisconsin-Madison, and a B.A. in Psychology from the University of North Carolina at Chapel Hill.

Comments may be submitted to todaysengineer@ieee.org.


Copyright © 2011 IEEE
