Sunday, November 28, 2010

Course Notes -- Survey of Software Engineering (CS5391)

This week I'm revisiting some old course notes that I originally published on www.big-oh.net. I'm moving away from self-hosting content on that domain, migrating blog-style content into the cloud ... Blogger in this case.

Following are notes that I used to study for the Survey of Software Engineering section of the Texas State University Computer Science graduate comprehensive examination.


Software Development Lifecycle Models

Waterfall Model
  • This model is document-driven and is also known as the "linear sequential" model.
  • Activities in this model are structured in a linear cascade of phases in which the output of the previous phase is the input to the next phase.
  • Phases are: Feasibility Study -> Requirements Analysis -> Technical Design -> Code and Unit Test -> Integration and System Test -> Deployment and Maintenance.
  • The linear structure of this model provides no feedback to prior phases. This is often impractical since, for example, it is not always possible to identify all requirements at the beginning of a project.
Incremental Model
  • The stages of this model consist of expanding increments of an operational software product.
  • Increments are delivered to customers as they are completed.
  • Each delivered unit is a self-contained functional unit.
  • This methodology allows for continuous validation (as opposed to a final validation phase).
  • This model retains all steps defined in the waterfall model, but performs those steps multiple times in an iterative fashion.
Spiral Model

This model iteratively utilizes the steps below to identify and mitigate high-risk issues in the development of high-quality software.

  • Step 1 - Identify quality objectives for a portion of the product.
  • Step 2 - Identify risks in achieving objectives and evaluate alternative mitigation strategies.
  • Step 3 - Develop and verify a unit of functionality.
  • Step 4 - Review development results and plan for next iteration.
Capability Maturity Model

The Capability Maturity Model (CMM) has the following characteristics:

  • The CMM is an application of process management and quality improvement concepts to software development and maintenance.
  • The CMM doesn't describe how to create an effective software organization. Rather, it simply describes the features of such an organization.
  • The CMM defines an ordinal scale of five levels used to measure the capabilities of an organization's software process.

Each level of the CMM is composed of a set of key process areas for which related activities are defined:

  • Level 1 - Chaotic/Initial. Processes are unpredictable and poorly controlled. Success results from individual efforts.
  • Level 2 - Repeatable. Projects can repeat previously mastered tasks. Basic project management processes are established to track cost, schedule and functionality.
  • Level 3 - Defined. Processes are defined and understood by staff. The software process activities are standardized and documented.
  • Level 4 - Managed. Processes are measured and controlled (since you can only manage what you can measure).
  • Level 5 - Optimizing. The organization is focused on continuous process improvement.

Software Design Methodologies

Structured Analysis and Design
  • Structured analysis and design attempts to harness complexity through a hierarchical design, created by iteratively decomposing program segments until each is simple enough to implement directly (a sketch follows this list).
  • The algorithm is the fundamental building block of this design methodology.
  • Abstraction is used to focus on subproblems, ignoring the overall strategy of the algorithm.
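
As a minimal sketch of this kind of decomposition (the payroll problem and all function names are hypothetical, not from the notes), the top-level routine expresses the overall algorithm while each subproblem is refined into its own smaller routine:

# Hypothetical top-down decomposition of a payroll calculation.
# Each level of the hierarchy delegates to smaller, more concrete routines.

def compute_paycheck(hours_worked, hourly_rate):
    """Top-level routine: the overall algorithm reads as two sub-steps."""
    gross = compute_gross_pay(hours_worked, hourly_rate)
    deductions = compute_deductions(gross)
    return gross - deductions

def compute_gross_pay(hours_worked, hourly_rate):
    """Subproblem: regular pay plus time-and-a-half for overtime."""
    regular = min(hours_worked, 40) * hourly_rate
    overtime = max(hours_worked - 40, 0) * hourly_rate * 1.5
    return regular + overtime

def compute_deductions(gross):
    """Subproblem: a flat withholding rate stands in for real tax rules."""
    return gross * 0.2

if __name__ == "__main__":
    print(compute_paycheck(45, 20.0))  # 950.0 gross - 190.0 withheld = 760.0
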
Object-Oriented Analysis and Design
  • Classes and objects are the fundamental building blocks of this design methodology; they are extracted from the vocabulary of the problem domain.
  • Abstraction is applied to reuse the features of related classes that are already part of the object model.
  • Reuse is enabled by language features: inheritance and polymorphism (both illustrated in the sketch after this list).
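
A minimal sketch of these ideas in Python (the billing domain and class names are hypothetical): Document provides the shared abstraction, Invoice and Receipt reuse it through inheritance, and client code treats them polymorphically.

# Hypothetical classes drawn from a billing problem domain.

class Document:
    """Base-class abstraction shared by all printable documents."""

    def __init__(self, title):
        self.title = title

    def render(self):
        return f"Document: {self.title}"


class Invoice(Document):
    """Inherits common behavior from Document and specializes render()."""

    def __init__(self, title, amount_due):
        super().__init__(title)
        self.amount_due = amount_due

    def render(self):
        return f"Invoice: {self.title}, amount due {self.amount_due:.2f}"


class Receipt(Document):
    def __init__(self, title, amount_paid):
        super().__init__(title)
        self.amount_paid = amount_paid

    def render(self):
        return f"Receipt: {self.title}, amount paid {self.amount_paid:.2f}"


# Polymorphism: client code calls render() without knowing the concrete class.
for doc in [Invoice("March services", 1200.0), Receipt("March services", 1200.0)]:
    print(doc.render())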

Software Testing Strategies

Program Inspection
  • Goal: Ensure that the program element follows its specification and supports stated non-functional requirements.
  • Method: A review group (comprising the roles of Moderator, Author, Reviewer and Recorder) uses a checklist to review a program element against the goals above.
  • Phase: Implementation.
Black-Box Testing
  • Goal: Verify that a system or sub-system behaves as expected.
  • Method: Derive test cases from the public interface specification. May use equivalence partitioning and boundary value analysis strategies (see the sketch after this list).
  • Phase: System and Integration Testing.
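
A minimal sketch using Python's unittest (the classify_grade function and its 0-100 / pass-fail contract are hypothetical): one test takes a representative value from each equivalence partition, the other exercises values on and around the partition boundaries.

import unittest

def classify_grade(score):
    """Hypothetical unit under test: its public contract accepts scores 0-100."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"


class ClassifyGradeBlackBoxTest(unittest.TestCase):
    def test_equivalence_partitions(self):
        # One representative value from each equivalence class.
        self.assertEqual(classify_grade(30), "fail")   # failing partition
        self.assertEqual(classify_grade(80), "pass")   # passing partition

    def test_boundary_values(self):
        # Values on and around the partition boundaries.
        self.assertEqual(classify_grade(0), "fail")
        self.assertEqual(classify_grade(59), "fail")
        self.assertEqual(classify_grade(60), "pass")
        self.assertEqual(classify_grade(100), "pass")
        self.assertRaises(ValueError, classify_grade, -1)
        self.assertRaises(ValueError, classify_grade, 101)


if __name__ == "__main__":
    unittest.main()
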
White-Box Testing
  • Goal: Verify that a software unit behaves as expected.
  • Method: Derive test cases from the internal structure of the program, e.g. its branches and paths (see the sketch after this list).
  • Phase: Implementation.
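
A minimal white-box sketch (again with a hypothetical function, apply_discount): the test cases are chosen by inspecting the implementation so that every branch is executed at least once.

import unittest

def apply_discount(total, is_member):
    """Hypothetical unit under test with three internal paths."""
    if is_member and total > 100:   # branch 1: member discount
        return total * 0.9
    if total > 500:                 # branch 2: bulk discount
        return total * 0.95
    return total                    # branch 3: no discount


class ApplyDiscountWhiteBoxTest(unittest.TestCase):
    """Each test drives execution down a different path in the code."""

    def test_member_discount_branch(self):
        self.assertAlmostEqual(apply_discount(200, True), 180.0)

    def test_bulk_discount_branch(self):
        self.assertAlmostEqual(apply_discount(600, False), 570.0)

    def test_no_discount_branch(self):
        self.assertAlmostEqual(apply_discount(50, False), 50.0)


if __name__ == "__main__":
    unittest.main()
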
Load or Stress Testing
  • Goal: Gain confidence that software meets non-functional performance requirements (a rough sketch of such a check follows this list).
  • Method: Find mean time to failure using an input set that is appropriately varied.
  • Phase: Integration Testing.
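
A rough sketch of one way to check a performance requirement (not a mean-time-to-failure measurement; the operation, input sizes, and the 50 ms threshold are all made-up assumptions): drive the unit with a large set of varied inputs, then compare the observed mean timing against the stated requirement.

import random
import time

def operation_under_test(payload):
    """Stand-in for the real unit; sorting a list simulates some work."""
    return sorted(payload)

# Drive the unit with many varied inputs and record how long each call takes.
timings = []
for _ in range(1000):
    payload = [random.randint(0, 10_000) for _ in range(random.randint(10, 1_000))]
    start = time.perf_counter()
    operation_under_test(payload)
    timings.append(time.perf_counter() - start)

mean_seconds = sum(timings) / len(timings)
print(f"mean response time: {mean_seconds * 1000:.3f} ms")
# Made-up non-functional requirement: mean response time under 50 ms.
assert mean_seconds < 0.050, "performance requirement not met"
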
Regression Testing
  • Goal: Ensure that previously released features have not been broken by new software updates.
  • Method: Re-execute existing functional test cases to gain confidence that key software features have not been broken (see the sketch after this list).
  • Phase: Maintenance.
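
A minimal sketch using Python's unittest discovery (assuming, purely for illustration, that the project keeps its functional test modules under a tests/ directory): the regression run simply re-executes the accumulated suite after each change.

import unittest

# Discovery picks up every test_*.py module under the tests/ directory
# (the directory name is an assumption for this sketch).
suite = unittest.defaultTestLoader.discover("tests")
result = unittest.TextTestRunner(verbosity=2).run(suite)

# Fail the build if any previously passing behavior has regressed.
raise SystemExit(0 if result.wasSuccessful() else 1)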
