List & Explain Various Components of an SRS

By Dinesh Thakur

Completeness of specifications is difficult to achieve and even more difficult to verify. Having guidelines about the different things an SRS should specify helps in specifying the requirements completely. Here we describe some of the system properties that an SRS should specify.

The basic issues an SRS must address

1.       Functionality

2.       Performance

3.       Design constraints imposed on an implementation

4.       External interfaces

Functional Requirements

1.       Which outputs should be produced from the given inputs?

2.       The relationship between the inputs and outputs.

3.       A detailed description of all data inputs and their sources, including the units of measure.

4.       The range of valid inputs.

Design Constraints

1.       Standards that must be followed.

2.       Resource limits and the operating environment.

3.       Reliability.

4.       Security requirements.

5.       Policies that may have an impact on the design of the system.

Standards Compliance:

This specifies the requirements for the standards that the system must follow.

Hardware Limitations: 

 The software may have to operate on some existing or predetermined hardware thus imposing restrictions on the design.

Reliability and Fault Tolerance:

Fault tolerance requirements can place a major constraint on how the system is to be designed. Fault tolerance requirements often make the system more complex and expensive.

Security: 

Security requirements are particularly significant in defense systems and many database systems. Security requirements place restrictions on the use of certain commands, control access to data, provide different kinds of access for different people, require the use of passwords and cryptographic techniques, and call for maintaining a log of activities in the system.

External Interface Requirements:

All the possible interactions of the software with people, hardware, and other software should be clearly specified. For the user interface, the characteristics of each user interface of the software product should be specified. The user interface is becoming increasingly important and must be given proper attention. A preliminary user manual should be created with all user commands, screen formats, an explanation of how the system will appear to the user, and feedback and error messages.

Like other specifications, these requirements should be precise and verifiable. So statements like "command names should be no longer than six characters" or "command names should reflect the function they perform" can be used. If the software is to execute on existing or predetermined hardware, all the characteristics of the hardware, including memory restrictions, should be specified. In addition, the current use and load characteristics of the hardware should be given.

Explain Object Oriented Analysis and Design Tools

By Dinesh Thakur

Pure object-oriented development requires that object-oriented techniques be used during the analysis, design, and implementation of the system. Various methods have been proposed for OOA and OOD, many of which propose a combined analysis and design technique.

 

Classes and objects:

Classes and objects are the basic building blocks of an OOD, just like functions are for function-oriented design.

 

Encapsulation

 

The basic property of an object is encapsulation: it encapsulates the data and information it contains and supports a well-defined abstraction. This encapsulation of information, along with the implementation of the operations performed on it, such that from outside an object can be characterized by the set of services it provides, is a key concept in object orientation.

 

State, behavior, and identity

 

An object has state, behavior and identity. The encapsulated data for an object defines the state of the object. The state and services of an object together define its behavior. The behavior of an object is how an object reacts in terms of changes when it is acted on, and how it acts upon other objects by requesting services and operations.

 

Classes

 

Objects represent the basic run-time entities in the OO system; they occupy space in memory that keeps their state and is operated on by the operations defined on the objects. A class, on the other hand, defines a possible set of objects. A class can be considered a template that specifies the properties for objects of the class. Classes have:

 

1.    An interface that defines which parts of an object of a class can be accessed from outside and how.

2.    A class body that implements the operations in the interface.

3.    Instance variables that contain the state of an object of that class.
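To make these three parts concrete, here is a minimal Python sketch; the BankAccount class and its operations are hypothetical illustrations, not taken from the text.

class BankAccount:
    """A class: a template for objects that encapsulate state and services."""

    def __init__(self, owner):
        # Instance variables: the encapsulated state of each object.
        self._owner = owner
        self._balance = 0

    # Interface: the services an object of this class offers to the outside.
    def deposit(self, amount):
        # Class body: the implementation of the operations in the interface.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self):
        return self._balance

# Two objects (run-time entities) of the same class, each with its own state.
a = BankAccount("alice")
b = BankAccount("bob")
a.deposit(100)
print(a.balance(), b.balance())   # 100 0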

 

Relationship among objects: If an object invokes some services in other objects, we can say that the two objects are related in some way to each other. All objects in a system are not related to all other objects. If an object uses some services of another object, there is an association between the two objects.

 

This association is also called a link – a link exists from one object to another if the first object uses some services of the other. A link captures the fact that a message flows from one object to another.

 

Inheritance and polymorphism:

 

Inheritance is a concept unique to object orientation. It is a relation between classes that allows for the definition and implementation of one class based on the definition of existing classes. When a class B inherits from another class A, B is referred to as the subclass or the derived class, and A is referred to as the superclass or the base class.

 

With polymorphism, an entity has a static type and a dynamic type. The static type of an object is the type in which the object is declared in the program text, and it remains unchanged. The dynamic type of an entity, on the other hand, can change from time to time and is known only at reference time.
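A small, hypothetical Python sketch of these ideas; the Shape, Circle, and Square names are illustrative. Python is dynamically typed, so the "static type" here is only what a reader or type checker would infer from the declaration.

class Shape:                        # superclass (base class)
    def area(self):
        raise NotImplementedError

class Circle(Shape):                # subclass: inherits from Shape
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r * self.r

class Square(Shape):                # another subclass of Shape
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side * self.side

# The declared (static) type of the list elements is Shape; the dynamic type
# of each element differs at run time, and the area() that actually executes
# is selected by the dynamic type -- polymorphism.
shapes = [Circle(1.0), Square(2.0)]
for s in shapes:
    print(s.area())                 # 3.14159, then 4.0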

What do you Mean by Structured Analysis.

By Dinesh Thakur

The structured analysis technique uses function-based decomposition while modeling the problem. It focuses on the functions performed in the problem domain and the data consumed and produced by these functions.

The structured analysis method helps the analyst decide what type of information to obtain at different points in analysis, and it helps organize information so that the analyst is not overwhelmed by the complexity of the problem.

 

It is a top-down refinement approach, which was originally called structured analysis and specification and was proposed for producing the specifications. However, we will limit our attention to the analysis aspect of the approach. Before we describe the approach, let us describe the data flow diagram and data dictionary on which the technique relies heavily.

Data Flow Diagrams and Data Dictionary

Data flow diagrams (also called data flow graphs) are commonly used during problem analysis. Data flow diagrams (DFDs) are quite general and are not limited to problem analysis in the software engineering discipline. DFDs are very useful in understanding a system and can be effectively used during analysis.

 

A DFD shows the flow of data through a system. It views a system as a function that transforms the inputs into desired outputs. Any complex system will not perform this transformation in a “single step”, and data will typically undergo a series of transformations before it becomes the output.

 

The DFD aims to capture the transformations that take place within a system to the input data so that eventually the output data is produced. The agent that performs the transformation of data from one state to another is called a process (or a bubble). So, a DFD shows the movement of data through the different transformations or processes in the system.

 

The processes are shown by named circles, and data flows are represented by named arrows entering or leaving the bubbles. A rectangle represents a source or sink and is a net originator or consumer of data. A source or sink is typically outside the main system of study. Consider, as an example, the DFD for a system that pays workers.

 

In this DFD there is one basic input data flow, the weekly timesheet, which originates from the source worker. The basic output is the paycheck, the sink for which is also the worker. In this system, first the employee's record is retrieved, using the employee ID contained in the timesheet.

 

From the employee record, the rate of payment and the overtime rate are obtained. These rates and the regular and overtime hours (from the timesheet) are used to compute the pay. After the total pay is determined, taxes are deducted using the tax-rate file. The amount of tax deducted is recorded in the employee and company records. Finally, the paycheck is issued for the net pay. The amount paid is also recorded in company records.
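A DFD is a model, not code, but the chain of transformations it describes can be sketched as a pipeline of functions. The sketch below is purely illustrative: the function names, record fields, and flat tax rate are hypothetical stand-ins for the bubbles and data flows described above.

# Hypothetical sketch: the payroll DFD's bubbles as a pipeline of functions.
EMPLOYEE_RECORDS = {"E42": {"rate": 20.0, "overtime_rate": 30.0}}
TAX_RATE = 0.2   # stand-in for the tax-rate file

def retrieve_record(timesheet):
    return EMPLOYEE_RECORDS[timesheet["employee_id"]]

def compute_pay(timesheet, record):
    return (timesheet["regular_hours"] * record["rate"]
            + timesheet["overtime_hours"] * record["overtime_rate"])

def deduct_tax(total_pay):
    return total_pay * (1 - TAX_RATE)

def issue_paycheck(timesheet):
    record = retrieve_record(timesheet)               # retrieve employee record
    net = deduct_tax(compute_pay(timesheet, record))  # compute pay, deduct tax
    return {"employee_id": timesheet["employee_id"], "net_pay": net}

print(issue_paycheck({"employee_id": "E42",
                      "regular_hours": 40, "overtime_hours": 5}))  # net_pay: 760.0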

 

This DFD is an abstract description of the system for handling payment. It does not matter whether the system is automated or manual; this diagram could very well be for a manual system where the computations are all done with calculators.

 

Some details are deliberately not represented in this DFD. For example, what happens if there are errors in the weekly timesheet is not shown. This is done to avoid getting bogged down with details while constructing a DFD for the overall system. If more details are desired, the DFD can be further refined.

 

It should be pointed out that a DFD is not a flowchart. A DFD represents the flow of data. While a flowchart shows the flow of control, a DFD does not represent procedural information. So, while drawing a DFD, one must not get involved in procedural details, and procedural thinking must be consciously avoided.

 

For example, considerations of loops and decisions must be ignored. In drawing the DFD, the designer has to specify the major transforms in the path of the data flowing through the system.

 

While analyzing the problem domain, the problem can be partitioned with respect to its functionality or with respect to objects. Object-oriented modeling (or object-oriented analysis) uses the latter approach. During analysis, an object represents some entity or some concept in the problem domain.

 

An object contains some state information and provides some services to entities outside the object. The state of an object can be accessed or modified only through the services it provides. Some objects also interact with the users through their services so that the users get the desired services.

 

Hence the goal of modeling is to identify the objects that exist in the problem domain, define the objects by specifying what state information they encapsulate and what services they provide, and identify the relationships that exist between the objects, so that the overall model supports the desired user services. Such a model of a system is called its object model.

Discuss Briefly the Validation of SRS

By Dinesh Thakur

The development of software starts with the requirements document, which is also used to determine eventually whether or not the delivered software system is acceptable. It is therefore important that the requirements specification contains no errors and specifies the client's requirements correctly.

Furthermore, due to the nature of the requirements specification phase, there is a lot of room for misunderstanding and committing errors, and it is quite possible that the requirements specification does not accurately represent the client's needs. The basic objective of the requirements validation activity is to ensure that the SRS reflects the actual requirements accurately and clearly. A related objective is to check that the SRS document is itself of "good quality" (some desirable quality objectives are given later).

 

Many different types of errors are possible, but the most common errors that occur can be classified into four types: omission, inconsistency, incorrect fact, and ambiguity. Omission is a common error in requirements. In this type of error, some user requirement is simply not included in the SRS; the omitted requirement may be related to the behavior of the system, its performance, constraints, or any other factor.

 

Omission directly affects the external completeness of the SRS. Another common form of error in requirements is inconsistency. Inconsistency can be due to contradictions within the requirements themselves or due to incompatibility of the stated requirements with the actual requirements of the client or with the environment in which the system will operate.

 

The third common requirement error is incorrect fact. Errors of this type occur when some facts recorded in the SRS are incorrect. The fourth common error type is ambiguity. Errors of this type occur when some requirements have multiple meanings, that is, their interpretation is not unique.

 

Omission    Incorrect Fact    Inconsistency    Ambiguity
  26%            10%              38%             26%

 

In one study, the errors detected in the requirements specification of the A-7 project (which deals with real-time flight control software) were reported. A total of about 80 errors were detected, out of which about 23% were clerical in nature. Of the remaining, the distribution by error type was:

 

Omission    Incorrect Fact    Inconsistency    Ambiguity
  32%            49%              13%              5%

What are Requirement Reviews

By Dinesh Thakur

Because the requirements specification formalizes what originally exists informally in people's minds, requirements validation must necessarily involve the clients and the users. Requirement reviews, in which the SRS is carefully reviewed by a group of people including representatives of the clients and the users, are the most common method of validation.

Reviews can be used throughout software development for quality assurance and data collection. A requirements review is a review by a group of people to find errors and point out other matters of concern in the requirements specification of a system. The review group should include the author of the requirements document, someone who understands the needs of the client, a person from the design team, and the person(s) responsible for maintaining the requirements document. It is also good practice to include some people not directly involved with product development, like a software quality engineer.

One way to organize the review meeting is to have each participant go over the requirements before the meeting and mark the items he has doubts about or feels need further clarification. Checklists can be quite useful in identifying such items. In the meeting, each participant goes through the list of potential defects he has uncovered.

As the members ask questions, the requirements analyst (the author of the requirements specification document) provides clarifications if there are no errors, or agrees to the presence of errors. Alternatively, the meeting can start with the analyst explaining each of the requirements in the document. The participants ask questions, share doubts, or seek clarification. Checklists are frequently used in reviews to focus the review effort and to ensure that no major source of error is overlooked by the reviewers. A good checklist will usually depend on the project.

    Are all hardware resources defined?
    Have the response times of functions been specified?
    Have all the hardware, external software, and data interfaces been defined?
    Have all the functions required by the client been specified?
    Is each requirement testable?
    Is the initial state of the system defined?
    Are the responses to exceptional conditions specified?
    Does the requirement contain restrictions that can be controlled by the designer?
    Are possible future modifications specified?

Apart from Requirement Reviews what are the other Methods Used for the Validation of SRS

By Dinesh Thakur

Requirement reviews remain the most commonly used and viable means for requirements validation. However, there are other approaches that may be applicable for some systems, for parts of systems, or for systems that have been specified formally.

Automated Cross Referencing

 

Automated cross-referencing uses processors to verify some properties of requirements. Any automated processing of requirements is possible if the requirements are written in a formal specification language or a language specifically designed for machine processing.

 

We saw examples of such languages earlier. These tools typically focus on checks for internal consistency and completeness, which sometimes leads to checking of external completeness. However, these tools cannot directly check for external completeness. For this reason, requirement reviews are needed even if the requirements are specified through a tool or in a formal notation.

 

If the requirements are in machine-processable form, they can be analyzed for internal consistency among the different elements of the requirements.

Reading

The goal in reading is to have someone other than the author of the requirements read the requirements specification document to identify potential problems. By having the requirements read by another person, who may have a different interpretation of them, many of the problems caused by misinterpretations or ambiguities can be identified. Furthermore, if the reader is a person interested in the project (like a person from the quality assurance group that will eventually test the system), he can also identify issues that could cause problems later.

 

For example, if a tester reads the requirements, it is likely that their testability will be well examined.

Constructing Scenarios

Scenarios describe different situations of how the system will work once it is operational. The most common area for constructing scenarios is that of system-user interaction. Constructing scenarios is good for clarifying misunderstandings in the human-computer interaction area. They are of limited value for verifying the consistency and completeness of requirements.

Prototyping

Though prototypes are generally built to ascertain requirements, a prototype can also be built to verify requirements. Prototypes can be quite useful in verifying the feasibility of some of the requirements (for example, by answering the question, "Can this be done?"). A prototype that has been built during problem analysis can also aid validation. For example, if the prototype has a user interface and the client has approved it after use, then the user interface, as specified by the prototype, can be considered validated. No further validation need be performed for the user interface.

Metrics

The basic purpose of metrics at any point during a development project is to provide quantitative information to the management process so that the information can be used effectively to control the development process.

What are Function Points? How are they Computed? Explain

By Dinesh Thakur

Function points are one of the most widely used measures of software size. The basis of function points is that the "functionality" of the system, that is, what the system performs, is the measure of the system size. In function points, the system functionality is calculated in terms of the number of functions it implements, the number of inputs, the number of outputs, and so on – parameters that can be obtained after requirements analysis and that are independent of the specification (and implementation) language.

 

The original formulation for computing the function points uses the count of five different parameters, namely, external input types, external output types, logical internal file types, external interface file types, and external inquiry types. According to the function point approach, these five parameters capture the entire functionality of a system.

 

However, two elements of the same type may differ in their complexity and hence should not contribute the same amount to the "functionality" of the system. To account for complexity, each parameter is classified as simple, average, or complex. Each unique input (data or control) type that is given as input to the application from outside is considered an external input type and is counted. The source of an external input can be the user, some other application, or files.

 

An external input type is considered simple if it has a few data elements and affects only a few internal files of the application. It is considered complex if it has many data items and many internal logical files are needed for processing them. The complexity is average if it is in between.

 

Similarly, each unique output that leaves the system boundary is counted as an external output type. Reports or messages to the users or other applications are counted as external output types. The complexity criteria are similar to those of the external input type. For a report, if it contains a few columns it is considered simple, if it has multiple columns it is considered average, and if it contains a complex structure of data and references many files for its production, it is considered complex.

 

Each application maintains information internally for performing its functions. Each logical group of data or control information that is generated, used, and maintained by the application is counted as a logical internal file type. A logical internal file is simple if it contains a few record types, complex if it has many record types, and average if it is in between.

 

Once the counts for all five types are known for all three complexity classes, the raw or unadjusted function points (UFP) can be computed as the weighted sum:

 

UFP = Σ (i=1..5) Σ (j=1..3) w_ij * C_ij

 

where i reflects the row and j the column of the table of weights given below; w_ij is the entry in the ith row and jth column (i.e., it represents the contribution of an element of type i and complexity j); and C_ij is the count of the number of elements of type i that have been classified as having the complexity corresponding to column j.

 

Once the UFP is obtained, it is adjusted for the environment complexity. For this, 14 different characteristics of the system are considered. These are data communications, distributed processing, performance objectives, operation configuration load, transaction rate, online data entry, end-user efficiency, online update, complex processing logic, reusability, installation ease, operational ease, multiple sites, and desire to facilitate change. The degree of influence of each of these factors is taken to be from 0 to 5, representing six different levels: not present (0), insignificant influence (1), moderate influence (2), average influence (3), significant influence (4), and strong influence (5). The weights w_ij for the five function types are given in the table below.

 

Function type              Simple    Average    Complex
External input               3          4          6
External output              4          5          7
Logical internal file        7         10         15
External interface file      5          7         10
External inquiry             3          4          6

 

The 14 degrees of influence for the system are then summed, giving a total N (N ranges from 0 to 14*5 = 70). This N is used to obtain the complexity adjustment factor (CAF) as follows:

CAF = 0.65 + 0.01 N.

With this equation, the value of the CAF ranges between 0.65 and 1.35. The delivered function points (DFP) are computed simply by multiplying the UFP by the CAF. That is,

Delivered Function Points = CAF * Unadjusted Function Points.
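Putting the pieces together, the following Python sketch computes the UFP, CAF, and DFP using the weights from the table above; the element counts and degrees of influence in the usage example are made up.

# Weights w_ij from the table: one row per function type,
# columns are (simple, average, complex).
WEIGHTS = {
    "external input":          (3, 4, 6),
    "external output":         (4, 5, 7),
    "logical internal file":   (7, 10, 15),
    "external interface file": (5, 7, 10),
    "external inquiry":        (3, 4, 6),
}

def delivered_function_points(counts, degrees_of_influence):
    # counts: function type -> (n_simple, n_average, n_complex)
    # degrees_of_influence: 14 values, each between 0 and 5
    ufp = sum(w * c
              for ftype, ws in WEIGHTS.items()
              for w, c in zip(ws, counts.get(ftype, (0, 0, 0))))
    n = sum(degrees_of_influence)   # 0 <= N <= 70
    caf = 0.65 + 0.01 * n           # 0.65 <= CAF <= 1.35
    return caf * ufp

# Hypothetical counts for a small system:
counts = {"external input": (5, 2, 0),
          "external output": (4, 1, 0),
          "logical internal file": (2, 1, 0)}
print(delivered_function_points(counts, [3] * 14))  # UFP = 68, CAF = 1.07 -> 72.76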

 

What do you Mean by Staffing

By Dinesh Thakur

When tasks are defined and schedules are estimated, the planning effort has sufficient information to begin staffing plans and organizing a team into units to address the development problem. The comprehensive staffing plan identifies the required skills and schedules the right people to be brought onto the project at appropriate times and released from the project when their tasks are complete.

Selection of individuals to fill position in the staffing plan is a very important step. Errors in staffing can lead to cost increases and schedule slips just as readily as errors in requirements, design, or coding.

Write a Short Note on Project Control Termination Analysis

By Dinesh Thakur

Termination analysis is performed when the development process is over. The basic reason for performing termination analysis is to provide information about the development process. Remember that a project is an instantiation of the process.

To understand the properties of the process, data from many projects that used the process can be used to make predictions and estimations about future projects. The data about the project is also needed to analyze the process.

What is Design Review? How Automated Cross- Checking Determines Review of System

By Dinesh Thakur

Design Reviews: The purpose of design reviews is to ensure that the design satisfies the requirements and is of “good quality.” If errors are made during the design process, they will ultimately reflect themselves in the code and the final system. Detecting errors in design is the aim of design reviews.

Automated Crosschecking: One of the important issues during system design verification is whether the design is internally consistent. For example, all modules used within a module defined in the system design must themselves be defined in the design.

One should also check whether the interface of a module is consistent with the way in which other modules use it. Other internal consistency issues include consistent use of data structures and whether data usage is consistent with declarations. If no automated help is available, the design review is usually the place where these consistency issues are checked.

When is Cost Estimation Done? Discuss the COCOMO Model along with the Parameters Defined in it

By Dinesh Thakur

Any cost estimation model can be viewed as a function that outputs the cost estimate. The basic idea of having a model or procedure for cost estimation is that it reduces the problem of estimation to determining the value of the "key parameters" that characterize the project, based on which the cost can be estimated.

The primary factor that controls the cost is the size of the project: the larger the project, the greater the cost and resource requirements. Other factors that affect the cost include programmer ability, experience of the developers, complexity of the project, and reliability requirements.

The goal of a cost model is to determine which of these many parameters have a significant effect on cost and then to discover the relationships between them and the cost. The most common approach for estimating effort is to make it a function of a single variable. Often this variable is the project size, and the effort equation is:

EFFORT = a * (SIZE)^b

where a and b are constants.

 

If the size estimate is in KDLOC, the total effort, E, in person-months can be given by the equation.

          E = 5.2 (KDLOC)^0.91

On Size Estimation

Though single-variable cost models with size as the independent variable are simple and easily obtained, applying them for estimation is not simple. The reason is that these models require size as the input, and the size of the project is not known early in development and has to be estimated.

For estimating the size, the system is generally partitioned into the components it is likely to have. Once size estimates for the components are available, they can be added up to get the overall size estimate for the system. A similar property does not hold for cost estimation, as the cost of developing a system is not the sum of the costs of developing its components. With size-based models, if the size estimate is inaccurate, the cost estimates produced by the models will also be inaccurate.

COCOMO Model

The Constructive Cost Model (COCOMO) was developed by Boehm. This model also estimates the total effort in terms of person-months of the technical project staff. The effort estimate includes development, management, and support tasks but does not include the cost of the secretarial and other staff that might be needed in an organization. The basic steps in this model are:

1. Obtain an initial estimate of the development effort from the estimate of thousands of delivered lines of source code (KDLOC).

2. Determine a set of 15 multiplying factors from different attributes of the project.

3. Adjust the effort estimate by multiplying the initial estimate with all the multiplying factors.

The initial estimate is determined by an equation of the form used in the static single-variable models, using KDLOC as the measure of size. To determine the initial effort Ei in person-months, the equation used is of the type

Ei = a * (KDLOC)^b

The values of the constants a and b depend on the project type. In COCOMO, projects are categorized into three types – organic, semidetached, and embedded.

Organic projects are in an area in which the organization has considerable experience and requirements are less stringent. A small team usually develops such systems. Examples of this type of project are simple business systems, simple inventory management systems, and data processing systems.
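A minimal Python sketch of the three steps. The (a, b) constants below are the commonly cited intermediate-COCOMO values for the three project types; the two cost-driver multipliers in the usage example are hypothetical.

# Commonly cited intermediate-COCOMO constants (a, b) per project type.
CONSTANTS = {
    "organic":      (3.2, 1.05),
    "semidetached": (3.0, 1.12),
    "embedded":     (2.8, 1.20),
}

def cocomo_effort(kdloc, project_type, multipliers=()):
    a, b = CONSTANTS[project_type]
    initial = a * kdloc ** b        # step 1: initial estimate (person-months)
    eaf = 1.0
    for m in multipliers:           # steps 2-3: multiply in the cost-driver
        eaf *= m                    # factors (effort adjustment)
    return initial * eaf

# A 30-KDLOC organic project with two illustrative cost drivers:
print(cocomo_effort(30, "organic", [1.15, 0.9]))   # roughly 118 person-months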

What is Quality Assurance Plans? Discuss the Different Approaches Used

By Dinesh Thakur

The purpose of the software quality assurance plan (SQAP) is to specify all the work products that need to be produced during the project, the activities that need to be performed for checking the quality of each work product, and the tools and methods that may be used for the SQA activities.

The SQAP is concerned with the quality of not only the final product but also the intermediate products. It specifies the tasks that need to be undertaken at different times in the life cycle to improve software quality, and how they are to be managed.

 

These tasks will generally include reviews and audits. The documents that should be produced during software development to enhance software quality should also be specified by the SQAP. It should identify all documents that govern the development, verification, validation, use and maintenance of the software and how these documents are to be checked for adequacy.

Verification and Validation

In verification and validation we are mostly concerned with the correctness of the product. Verification is the process of determining whether or not the products of a given phase of software development fulfill the specifications established during the previous phase. Verification activities include proving, testing, and review. Validation is the process of evaluating software at the end of the software development to ensure compliance with the software requirements.

 

The major V&V activities for software development are inspection, review, and testing. Inspection is "a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or a group other than the author to detect faults, violations of development standards, and other problems". It is a formal, peer evaluation of a software element whose objective is to verify that the software element satisfies its specifications and conforms to standards.

Inspections and Reviews

The software inspection process was started by IBM in 1972 to improve software quality and increase productivity. Much of the earlier interest was focused on inspecting code. It was soon discovered that mistakes occur not only during coding but also during design, and this realization led to design inspections.

 

IEEE defines inspection as "a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or a group other than the author to detect faults, violations of development standards, and other problems."

 

As is clear from the definitions, the purpose of an inspection is to perform a careful scrutiny of the product by peers. It is different from a walkthrough, which is generally informal and whose purpose is to train or inform someone about a product. In a walkthrough, the author describes the work product in an informal meeting to his peers or superiors to get feedback or to inform or explain the work product to them.

 

In an inspection, in contrast to a walkthrough, the meeting and the procedure are much more formal. There are three reasons for having reviews or inspections:

 

    Defect removal
    Productivity increases
    Provide information for project monitoring.

 

The primary purpose of an inspection is to detect defects at different levels during a project.

What are the Different Methods Used for Monitoring a Project

By Dinesh Thakur

Methods for Monitoring a Project

Time Sheets

Once project development commences, the management has to track the progress of the project and the expenditure incurred on the project. Progress can be monitored by using the schedule and milestones laid down in the plan.

The earned value method, discussed later, can also be used. Time sheets record how much time different project members are spending on the different identified activities in the project. They are a mechanism for collecting raw data and can be used to obtain information regarding the overall expenditure and its breakup among different tasks and different phases at any given time.

 

Reviews

 

The purpose of reviews is to provide information for project control. A review is a definite and clearly defined milestone; it forces the author of a product to complete the product before the review. Having this goal gives some impetus and motivation to complete the product.

 

Cost-Schedule-Milestone Graph

 

A cost-schedule-milestone graph represents the planned cost of different milestones, along with the actual cost of achieving the milestones achieved so far. By plotting both the planned cost versus milestones and the actual cost versus milestones on the same graph, the progress of the project can be grasped easily.

 

Earned Value Method.

 

The system design usually involves a small group of (senior) people. Having a large number of people at the system design stage is likely to result in a not-very-cohesive design. After the system design is completed, a large number of programmers, whose job is to do the detailed design, coding, and testing, may enter the project. During these activities, proper monitoring of people, of the progress of the different components, and of the overall project is important.
Unit Development Folder

 

The project plan produced after the requirements is a macro-level plan. Even if this plan is prepared meticulously and accurately, if proper control is not exercised at the micro level (at the level of each programmer and each module), it will be impossible to implement the project plan.

What is Risk Management? Give Brief Ideas for Risk Assessment and Control

By Dinesh Thakur

Any large project involves certain risks, and that is true for software projects. Risk management is an emerging area that aims to address the problem of identifying and managing the risks associated with a software project.

Risk denotes the possibility that the defined goals are not met. The basic motivation for having risk management is to avoid disasters and heavy losses. The current interest in risk management is due to the fact that the history of software development projects is full of major and minor failures. A large percentage of projects have run considerably over budget and behind schedule, and many of them have been abandoned midway. It is now argued that many of these failures were due to the fact that the risks were not identified and managed properly.

 

Risk management is an important area, particularly for large projects. Like any management activity, proper planning of that activity is central to success. Here we discuss various aspects of risk management and planning.

Risk Management Overview

Risk is defined as an exposure to the chance of injury or loss [Kon94]. That is, risk implies that there is a possibility that something negative may happen. In the context of software projects, negative implies that there is an adverse effect on cost, quality, or schedule. Risk management is the area that tries to ensure that the impact of risks on cost, quality, and schedule is minimal.

 

Like configuration management, which minimizes the impact of change, risk management minimizes the impact of risks. However, risk management is generally done by the project management. For this reason we have not considered risk management as a separate process (though it can validly be considered one) but have treated such activities as part of project management.

 

Risk management can be considered as dealing with the possibility and actual occurrence of those events that are not "regular" or commonly expected. Normally, project management handles the commonly expected events, such as people going on leave or some requirements changing. Risk management deals with events that are infrequent, somewhat outside the control of the project management, and large enough (i.e., they can have a major impact on the project) to justify special attention.

Write a Note on Software Design Phases

By Dinesh Thakur

Software Design: It is the first step in moving from the problem domain to the solution domain. The purpose of the design phase is to plan a solution to the problem specified by the requirements document. Starting with what is needed, design takes us toward how to satisfy the needs.

The design of a system is perhaps the most critical factor affecting the quality of the software. It has a major impact on the project during later phases, particularly during testing and maintenance. The output of this phase is the design document. This document is similar to a blueprint or plan for the solution and is used later during implementation, testing, and maintenance.

 

Design is further of two types:

1. System Design or Top-level Design: It identifies the various modules that should be in the system, the specifications of these modules, and the interconnections between them. At the end of system design, all the major data structures, file formats, output formats, and the major modules in the system, together with their specifications, are decided.

 

2. Detailed Design: It identifies the internal logic of the various modules. During this phase, further details of the data structures and the algorithmic design of each of the modules are specified. Once the design is complete, most of the major decisions about the system have been made. However, many of the details about coding the design, which often depend on the programming language chosen, are not specified during design.

Discuss the Objectives of the Design Phase

By Dinesh Thakur

Design is essentially the bridge between requirements specification and the final solution for satisfying the requirements.  The goal of the design process is to produce a model or representation of a system, which can be used later to build that system. The produced model is called the design of the system. The design of a system is essentially a blueprint or a plan for a solution for the system. The design process for software systems often has two levels.

 At the first level the focus is on deciding which modules are needed for the system, the specifications of these modules, and how the modules should be interconnected. This is what is called the system design or top-level design. In the second level, the internal design of the modules, or how the specifications of the module can be satisfied, is decided.

 

This design level is often called detailed design or logic design. Detailed design essentially expands the system design to contain a more detailed description of the processing logic and data structures, so that the design is sufficiently complete for coding. A design methodology is a systematic approach to creating a design by applying a set of techniques and guidelines. Most design methodologies essentially offer a set of guidelines that the developer can use to design a system.

 

The input to the design phase is the specifications for the system to be designed. Hence a reasonable entry criterion is that the specifications are stable and have been approved, the hope being that the approval mechanism ensures that the specifications are complete, consistent, unambiguous, etc. The output of the top-level design phase is the architectural design, or the system design, for the software system to be built.

 

A design can be object-oriented or function-oriented. In function-oriented design, the design consists of module definitions, with each module supporting a functional abstraction. In object-oriented design, the modules in the design represent data abstractions.

 

A software system can itself be viewed as a transformation function that transforms inputs into outputs. The purpose of the design phase is to specify the components of this transformation function, so that each component is also a transformation function. Hence, the basic output of the system design phase, when a function-oriented design approach is followed, is the definition of all the major data structures in the system, all the major modules of the system, and how the modules interact with each other.

Describe Difference Between Top-Down & Bottom up Coding Techniques for Programming

By Dinesh Thakur

In a top-down implementation, the implementation starts from the top of the hierarchy and proceeds to the lower levels. First the main module is implemented, then its subordinates are implemented, and their subordinates, and so on.

In a bottom-up implementation, the process is the reverse. The development starts with implementing the modules at the bottom of the hierarchy and proceeds through the higher levels until it reaches the top.

Top-down and bottom-up implementation should not be confused with top-down and bottom-up design. When we proceed top-down, to test a set of modules at the top of the hierarchy, stubs have to be written for the lower-level modules that the modules under testing invoke. On the other hand, when we proceed bottom-up, all modules that are lower in the hierarchy have already been developed, and driver modules are needed to invoke the modules under testing.
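A tiny, hypothetical Python sketch of both situations: a stub standing in for an unwritten lower-level module during top-down testing, and a driver exercising a finished lower-level module during bottom-up testing.

# Top-down: the higher-level module exists; the lower-level one is stubbed.
def compute_tax_stub(gross):
    return 0.0                      # stub: stands in for the unwritten module

def net_pay(gross):                 # higher-level module under test
    return gross - compute_tax_stub(gross)

assert net_pay(100.0) == 100.0

# Bottom-up: the lower-level module exists; a driver invokes it for testing.
def compute_tax(gross):             # lower-level module under test
    return gross * 0.2

def driver():                       # driver: exercises the module directly
    assert compute_tax(100.0) == 20.0

driver()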

Top-down versus bottom-up is also a pertinent issue when the design is not detailed enough. In such cases, some of the design decisions have to be made during development.

Discuss in Detail Coupling and Cohesion

By Dinesh Thakur

Coupling: Two modules are considered independent if one can function completely without the presence of the other. Obviously, if two modules are independent, they are solvable and modifiable separately. However, all the modules in a system cannot be independent of each other, as they must interact so that together they produce the desired external behavior of the system.

The more connections between modules, the more dependent they are in the sense that more knowledge about one module is required to understand or solve the other module. Hence, the fewer and simpler the connections between modules, the easier it is to understand one without understanding the other. Coupling between modules is the strength of interconnection between modules or a measure of independence among modules.

To solve and modify a module separately, we would like the module to be loosely coupled with other modules. The choice of modules decides the coupling between modules. Coupling is an abstract concept and is not easily quantifiable. So, no formulas can be given to determine the coupling between two modules. However, some major factors can be identified as influencing coupling between modules.

Among them, the most important are the type of connection between modules, the complexity of the interface, and the type of information flow between modules. Coupling increases with the complexity and obscurity of the interface between modules. To keep coupling low, we would like to minimize the number of interfaces per module and the complexity of each interface. An interface of a module is used to pass information to and from other modules. The complexity of the interface is another factor affecting coupling.

The more complex each interface is, the higher the degree of coupling. The type of information flow along the interfaces is the third major factor affecting coupling. There are two kinds of information that can flow along an interface: data and control. Passing or receiving control information means that the action of the module will depend on this control information, which makes it more difficult to understand the module and provide its abstraction. Transfer of data information means that a module passes some data as input to another module and gets some data back as output.
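A hypothetical Python sketch contrasting the two kinds of flow: a control-coupled interface, where the caller passes a flag that steers what the module does, versus data-coupled modules that exchange only data.

# Control coupling: the caller must know about the module's internal modes.
def process(record, mode):
    if mode == "validate":
        return record != ""
    elif mode == "format":
        return record.upper()

# Data coupling: each module takes data in and hands data back; the
# interfaces are simpler and each module can be understood on its own.
def validate(record):
    return record != ""

def format_record(record):
    return record.upper()

print(process("pay", "format"), format_record("pay"))   # PAY PAY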

Cohesion: Cohesion is the concept that tries to capture the intra-module bond. With cohesion, we are interested in determining how closely the elements of a module are related to each other. The cohesion of a module represents how tightly bound the internal elements of the module are to one another, and it gives the designer an idea about whether the different elements of a module belong together in the same module. Cohesion and coupling are clearly related: usually, the greater the cohesion of each module in the system, the lower the coupling between modules. There are several levels of cohesion:

 Coincidental

 Logical

 Temporal

 Procedural

 Communicational

 Sequential

 Functional

Coincidental is the lowest level, and functional is the highest. Coincidental cohesion occurs when there is no meaningful relationship among the elements of a module. It can occur if an existing program is modularized by chopping it into pieces and making the different pieces modules.

A module has logical cohesion if there is some logical relationship between its elements, and the elements perform functions that fall in the same logical class. A typical example of this kind of cohesion is a module that performs all the inputs or all the outputs. Temporal cohesion is the same as logical cohesion, except that the elements are also related in time and are executed together. Modules that perform activities like "initialization", "clean-up", and "termination" are usually temporally bound.

A procedurally cohesive module contains elements that belong to a common procedural unit. For example, a loop or a sequence of decision statements in a module may be combined to form a separate module. A module with communicational cohesion has elements that are related by a reference to the same input or output data. That is, in a communicationally bound module, the elements are together because they operate on the same input or output data. 

When the elements are together in a module because the output of one forms the input to another, we get sequential cohesion. Functional cohesion is the strongest cohesion. In a functionally bound module, all the elements of the module are related to performing a single function. By function, we do not mean simply mathematical functions; modules accomplishing a single goal are also included.

 

What are the Different Approaches Used for the Verification of a Design Document

By Dinesh Thakur

The output of the system design phase, like the output of other phases in the development process, should be verified before proceeding with the activities of the next phase. Unless the design is specified in a formal executable language, the design cannot be executed for verification. Other means for verification have to be used. The most common approach for verification is design reviews or inspections.

 

Design Reviews

 

The purpose of design reviews is to ensure that the design satisfies the requirements and is of "good quality". If errors are made during the design process, they will ultimately reflect themselves in the code and the final system.

 

The system design review process is similar to the other review processes. In a system design review, a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group must include a member of both the system design team and the detailed design team, the author of the requirements document, the person responsible for maintaining the design document, and an independent software quality engineer.

 

The review can be held in the same manner as the requirements review. The aim of the meeting is to uncover design errors, not to fix them; fixing is done later. The meeting ends with a list of action items, which are later acted on by the design team. The number of ways in which errors can creep into a design is limited only by the creativity of the designer.

 

A sample checklist: The use of checklists can be extremely useful for any review. The checklist can be used by each member during private study of the design and during the review meeting. Here we list a few general items that can be used to construct a checklist for a design review.

    Are all of the functional requirements taken into account?
    Are there analyses to demonstrate that the performance requirements can be met?
    Are all assumptions explicitly stated, and are they acceptable?
    Are there any limitations or constraints on the design beyond those in the requirements?
    Is the external specification of each module completely specified?
    Have exceptional conditions been handled?
    Are all the data formats consistent with the requirements?
    Are the operator and user interfaces properly addressed?
    Is the design modular, and does it conform to local standards?
    Are the sizes of data structures estimated? Are provisions made to guard against overflow?

 

Automated Cross-checking

 

One of the important issues during system design verification is whether the design is internally consistent. For example, all modules used within a module defined in the system design must themselves be defined in the design. One should also check whether the interface of a module is consistent with the way in which other modules use it. If no automated help is available, the design review is usually the place where these consistency issues are checked.

 

However, if the design is expressed in a language like PDL, the design can be "compiled" to check for consistency.

Explain Various DESIGN TECHNIQUES

By Dinesh Thakur

The design process involves developing a conceptual view of the system, establishing system structure, identifying data streams and data stores, decomposing high level functions into sub functions, establishing relationships and interconnections among components, developing concrete data representations, and specifying algorithmic details. Software design is a creative activity.

As with all creative processes, design proceeds iteratively. In the top-down approach, the system is decomposed into subsystems, and more consideration is given to specific issues as the design progresses; backtracking is fundamental to top-down design. In the bottom-up approach to software design, the designer first attempts to identify a set of primitive objects, actions, and relationships that will provide a basis for the problem solution.

 

Higher-level concepts are then formulated in terms of the primitives. The bottom-up strategy requires the designer to combine features provided by the implementation language into more sophisticated entities.


Stepwise Refinement: Stepwise refinement is a top-down technique for decomposing a system from high-level specifications into more elementary levels. Stepwise refinement involves the following activities:

 

1.       Decomposing design decisions to elementary levels.

2.       Isolating design aspects that are not truly interdependent.

3.       Postponing decisions concerning representation details as long as possible.

4.       Carefully demonstrating that each successive step in the refinement process is a faithful expansion of previous steps.

The major benefits of stepwise refinement as a design technique are:

1.       Top-down decomposition.

2.       Incremental addition of detail.

3.       Postponement of design decisions.

4.       Continual verification of consistency.
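As an illustration of the technique, here is a hypothetical Python sketch of stepwise refinement: each level restates the one above it in more elementary terms, and representation details (how pay and tax are computed) are postponed to the lowest level.

# Level 0: the high-level specification.
def pay_workers(timesheets):
    return [pay_one(t) for t in timesheets]

# Level 1: one refinement step; how gross pay and tax are computed is
# still postponed.
def pay_one(timesheet):
    gross = compute_gross(timesheet)
    return gross - compute_tax(gross)

# Level 2: elementary, directly codable steps.
def compute_gross(timesheet):
    return timesheet["hours"] * timesheet["rate"]

def compute_tax(gross):
    return gross * 0.2

print(pay_workers([{"hours": 40, "rate": 10.0}]))   # [320.0]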

Levels of Abstraction: Levels of abstraction was originally described by Dijkstra as a bottom-up design technique. In Dijkstra’s system each level of abstraction is composed of a group of related functions, some of which are externally visible and some of which are internal to the level. Internal functions are hidden from other levels; they can only be invoked by functions on the same level. The internal functions are used to perform tasks common to the work being performed on that level of abstraction. Each level of abstraction performs a set of services for the functions on the next higher level of abstraction.

 

Structured Design: Structured design was developed by Constantine as a top-down technique for the architectural design of software systems. The basic approach in structured design is the systematic conversion of data flow diagrams into structure charts. Design heuristics such as coupling and cohesion are used to guide the design process.

 

The first step in structured design is review and refinement of the data flow diagram developed during requirements definition and external design. The second step is to determine whether the system is transform-centered or transaction-driven, and to derive a high-level structure chart based on this determination.

 

The third step in structured design is decomposition of each subsystem using guidelines such as coupling, cohesion, information hiding, levels of abstraction, data abstraction, and the other decomposition criteria. The primary strength of structured design is the provision of a systematic method for converting data flow diagrams into top-level structure charts.

 

Integrated Top-Down Development: Integrated top-down development integrates design, implementation, and testing. Using integrated top-down development, design proceeds top-down from the highest-level routines, which have the primary function of coordinating and sequencing the lower-level routines.

 

Lower-level routines may be implementation of elementary functions or they may in turn invoke more primitive routines. There is thus a hierarchical structure to a top-down system in which routines can invoke lower-level routines but cannot invoke routines on a higher level.

What are the Different Methods Used to Specify the Modules in Detailed Design

By Dinesh Thakur

Formal methods of specification can ensure that the specifications are precise and not open to multiple interpretations. There are some desirable properties that module specifications should have. First, the specifications should be complete; that is, the given specification should specify the entire behavior of the module.

 

A related property is that the specifications should be unambiguous. The specifications should be easily understandable, and the specification language should be such that specifications can be easily written. An important property of specifications is that they should be implementation independent; that is, they should be given in an abstract manner.

Specifying Functional Modules

 

The most abstract view of a functional module is to treat it as a black box that takes in some inputs and produces some outputs such that the outputs have a specified relationship with the inputs. Most modules are designed to operate only on inputs that satisfy some constraints. The constraints may be on the type of input and the range of inputs. For example, a function that finds the square root of a number may be designed to operate only on the non-negative real numbers. One method for specifying modules, proposed by Hoare, is based on pre- and post-conditions. In this method, constraints on the input of a module are specified by a logical assertion on the input state, called the pre-condition. The output is specified as a logical assertion on the output state, called the post-condition. As an example, consider a module sort to be written to sort a list L of integers in ascending order. The pre- and post-conditions of this module are:

          Pre-condition:  non-null L
          Post-condition: for all i, 1 <= i < size(L) : L[i] <= L[i+1]

The specification states that if the input to the module sort is a non-null list L, the output state should be such that the elements of L are in ascending order.
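As a minimal sketch (the executable form below is my own illustration, not part of Hoare's notation), the pre- and post-conditions can be expressed directly as assertions:

def sort(L):
    # Pre-condition: L is a non-null (non-empty) list of integers.
    assert L is not None and len(L) > 0, "pre-condition violated"
    result = sorted(L)
    # Post-condition: for all i, 1 <= i < size(L): L[i-1] <= L[i]
    assert all(result[i - 1] <= result[i] for i in range(1, len(result)))
    return result

print(sort([3, 1, 2]))   # [1, 2, 3]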


Specifying Classes

Data abstraction is considered one of the most important language concepts of recent times. Various specification techniques have evolved for specifying abstract data types. One of them is the axiomatic specification technique. In this technique, the operations are not specified directly. Instead, axioms are used that specify the behavior of the different operations.

 

Let us use the axiomatic method to write specifications for a stack of integers. We define a stack that has four operations:

 

1) Create: to create a new stack.
2) Push: to push an element onto a stack.
3) Pop: to pop the top element from the stack.
4) Top: to return the element on top of the stack.

 

Based on our understanding of these words, we may derive the proper semantics of the operations in this simple case. However, for absolutely new data types, this assumption may not hold. The syntactic part of the specification of the stack is shown below:

 

1. stack [integer] declare
2.    create( ) → stack;
3.    push(stack, integer) → stack;
4.    pop(stack) → stack;
5.    top(stack) → integer ∪ undefined;
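The listing above gives only the operation signatures. As a hedged illustration (these axioms are the standard stack axioms, not quoted from this article), the behavior would be pinned down by axioms such as top(push(s, i)) = i and pop(push(s, i)) = s, which can be checked against a simple implementation:

# A minimal sketch, assuming a conventional list-based stack;
# None plays the role of "undefined".

def create():       return []
def push(s, i):     return s + [i]
def pop(s):         return s[:-1]
def top(s):         return s[-1] if s else None

# Axioms specify behavior through interactions of the operations:
s = push(push(create(), 1), 2)
assert top(push(s, 7)) == 7        # top(push(s, i)) = i
assert pop(push(s, 7)) == s        # pop(push(s, i)) = s
assert top(create()) is None       # top(create()) = undefined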

 


What are the Different Verification Methods Used for Detailed Design

By Dinesh Thakur

There are a few techniques available to verify that the detailed design is consistent with the system design. The focus of verification in the detailed design phase is on showing that the detailed design meets the specifications laid down in the system design. The three verification methods we consider are design walkthrough, critical design review, and consistency checkers.

Design Walkthrough

A design walkthrough is a manual method of verification. The definition and use of walkthroughs vary from organization to organization. A design walkthrough is done in an informal meeting called by the designer or the leader of the designer’s group. The walkthrough group is usually small and contains, along with the designer, the group leader and/or another designer of the group.

Critical Design Review

The purpose of the critical design review is to ensure that the detailed design satisfies the specifications laid down during system design. It is very desirable to detect and remove design errors early, as the cost of removing them later can be considerably more than the cost of removing them at design time. Detecting errors in the detailed design is the aim of the critical design review.

 

The critical design review process is similar to other reviews, in that a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group includes, besides the author of the detailed design, a member of the system design team, the programmer responsible for ultimately coding the module(s) under review, and an independent software quality engineer.

 

It should be kept in mind that the aim of the meeting is to uncover design errors, not to fix them. Fixing is done later. Also, the psychological frame of mind should be healthy, and the designer should not be put in a defensive position. The meeting should end with a list of action items, to be acted on later by the designer.

A Sample Checklist

    Does each of the modules in the system design exist in detailed design?
    Are there analyses to demonstrate that the performance requirements can be met?
    Are all the assumptions explicitly stated and are they acceptable?
    Are all relevant aspects of system design reflected in detailed design?
    Are all the data formats consistent with the system design?

Consistency Checkers

Design reviews and walkthroughs are manual processes; the people involved in the review or walkthrough detect the errors in the design. If the design is specified in PDL or some other formally defined design language, it is possible to detect some design defects by using consistency checkers.

 

Consistency checkers are essentially compilers that take as input the design specified in a design language (PDL in our case). Clearly, they cannot produce executable code, because the inner syntax of PDL allows natural language and many activities are specified in natural language. A consistency checker can ensure that any modules invoked or used by a given module actually exist in the design and that the interface used by the caller is consistent with the interface definition of the called module.
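As a hedged sketch (a toy, not a real PDL tool; the design table and module names are invented), the two checks just described can be illustrated over a design extracted into a simple table:

# module name -> (number of declared parameters,
#                 list of (callee, number of args passed))
design = {
    "main":       (0, [("read_input", 0), ("process", 2)]),
    "read_input": (0, []),
    "process":    (1, []),        # declared with 1 parameter
}

def check(design):
    errors = []
    for module, (_, calls) in design.items():
        for callee, n_args in calls:
            if callee not in design:
                errors.append(f"{module}: calls undefined module {callee}")
            elif design[callee][0] != n_args:
                errors.append(f"{module}: calls {callee} with {n_args} "
                              f"args, declared with {design[callee][0]}")
    return errors

for e in check(design):
    print(e)   # main: calls process with 2 args, declared with 1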

Differentiate Between Error, Fault and Failure

By Dinesh Thakur

The term error is used in two different ways. It refers to the discrepancy between a computed, observed, or measured value and the true, specified, or theoretically correct value. That is, error refers to the difference between the actual output of the software and the correct output.

Fault is a condition that causes a system to fail in performing its required function. A fault is the basic reason for software malfunction and is synonymous with the commonly used term bug. Failure is the inability of a system or component to perform a required function according to its specifications. A software failure occurs if the behavior of the software is different from the specified behavior.

What are Test Oracles

By Dinesh Thakur

A test oracle is a mechanism, different from the program itself, that can be used to check the correctness of the program’s output for the test cases. Conceptually, we can consider testing as a process in which the test cases are given to the test oracle and to the program under test.

 

The output of the two is then compared to determine whether the program behaved correctly for the test cases. To help the oracle determine the correct behavior, it is important that the behavior of the system or component be unambiguously specified and that the specification itself be error free.

 

There are some systems where oracles are automatically generated from specifications of programs or modules. With such oracles, we are assured that the output of the oracle is consistent with the specifications.
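As a hedged illustration (the buggy implementation and reference below are hypothetical), an oracle can be as simple as a trusted reference implementation whose output is compared against the program under test:

def program_under_test(values):
    # Hypothetical fast-but-suspect implementation being tested.
    return sorted(set(values))     # BUG: silently drops duplicates

def oracle(values):
    # Independent mechanism defining correct behavior (here, a
    # trusted reference implementation).
    return sorted(values)

test_cases = [[3, 1, 2], [5, 5, 1], []]
for case in test_cases:
    got, expected = program_under_test(case), oracle(case)
    verdict = "PASS" if got == expected else "FAIL"
    print(f"{case}: {verdict}")    # [5, 5, 1] fails: duplicate dropped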

What Do You Mean by Knot Count

By Dinesh Thakur

To understand a given program, a programmer typically draws arrows from each point of control transfer to its destination, helping him create a mental picture of the program and the control transfers in it.

 

According to this metric, the more intertwined these arrows become, the more complex the program is. This notion is captured in the concept of a “knot”.


A knot is essentially the intersection of two such control-transfer arrows. If each statement in the program is written on a separate line, this notion can be formalized as follows. A jump from line a to line b is represented by the pair (a, b).

 

Two jumps (a, b) and (p, q) give rise to a knot if either min(a, b) < min(p, q) < max(a, b) and max(p, q) > max(a, b), or min(a, b) < max(p, q) < max(a, b) and min(p, q) < min(a, b).
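As a minimal sketch (my own illustration of the definition above), the knot count of a set of jumps can be computed directly from this condition:

def is_knot(j1, j2):
    lo1, hi1 = min(j1), max(j1)
    lo2, hi2 = min(j2), max(j2)
    # The two cases of the definition: the arrows cross each other.
    return (lo1 < lo2 < hi1 and hi2 > hi1) or \
           (lo1 < hi2 < hi1 and lo2 < lo1)

def knot_count(jumps):
    return sum(is_knot(jumps[i], jumps[j])
               for i in range(len(jumps))
               for j in range(i + 1, len(jumps)))

# Jumps (2, 8) and (5, 12) cross, so they contribute one knot;
# (9, 11) lies outside both and contributes none.
print(knot_count([(2, 8), (5, 12), (9, 11)]))   # 1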

Differentiate Between Top-Down and Bottom-Up Approaches

By Dinesh Thakur

In the top-down strategy, we start by testing the top of the hierarchy, then incrementally add the modules it calls and test the new combined system. This approach to testing requires stubs to be written. A stub is a dummy routine that simulates a module.

In the top-down approach, a module cannot be tested in isolation, because it invokes other modules. To allow a module to be tested before its subordinates have been coded, stubs simulate the behavior of the subordinates.

The bottom-up approach starts from the bottom of the hierarchy. First the modules at the very bottom, which have no subordinates, are tested. Then these modules are combined with higher-level modules for testing. At any stage of testing all the subordinate modules exist and have been tested earlier.

To perform bottom-up testing, drivers are needed to set up the appropriate environment and invoke the module. It is the job of the driver to invoke the module under test with different sets of test cases.
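As a hedged sketch (the module and test data are hypothetical), a driver for bottom-up testing looks like this: the bottom-level module is real, and the driver plays the role of its not-yet-tested caller:

def compute_tax(amount):           # bottom-level module under test
    return round(amount * 0.18, 2)

def driver():
    # The driver sets up the environment and feeds the test cases.
    cases = [(100.0, 18.0), (0.0, 0.0), (19.99, 3.6)]
    for amount, expected in cases:
        got = compute_tax(amount)
        print(f"compute_tax({amount}) = {got}, expected {expected}:",
              "PASS" if got == expected else "FAIL")

driver()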

What is the Psychology of Testing

By Dinesh Thakur

Testing is often approached as an attempt to demonstrate that a program works by showing that it has no errors. This is the opposite of how testing should be viewed.

The basic purpose of the testing phase is to detect the errors that may be present in the program. Hence, one should not start testing with the intent of showing that a program works; the intent should be to show that a program does not work. With this in mind, we define testing as follows:

 

Testing is a process of executing a program with the intent of finding errors.

What are Test Case Specifications

By Dinesh Thakur

The test plan focuses on how the testing for the project will proceed, which units will be tested, and what approaches (and tools) are to be used during the various stages of testing. However, it does not deal with the details of testing a unit, nor does it specify which test cases are to be used.

Test case specification has to be done separately for each unit. Based on the approach specified in the test plan, the features to be tested for each unit must first be determined. The overall approach stated in the plan is refined into specific test techniques that should be followed and into the criteria to be used for evaluation. Based on these, the test cases are specified for testing the unit.

 

There are two basic reasons test cases are specified before they are used for testing. It is known that testing has severe limitations and that the effectiveness of testing depends very heavily on the exact nature of the test cases. Even for a given criterion, the exact nature of the test cases affects the effectiveness of testing.

 

Constructing good test cases that will reveal errors in programs is still a very creative activity that depends a great deal on the tester. Clearly, it is important to ensure that the set of test cases used is of high quality. As with many other verification methods, evaluation of the quality of test cases is done through test case review, and any review requires a formal document or work product. This is the primary reason for having the test case specification in the form of a document.
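As a hedged sketch (the fields and values are my own illustration, not a prescribed format), a test case specification can be recorded as a structured, reviewable work product:

from dataclasses import dataclass

@dataclass
class TestCaseSpec:
    case_id: str       # unique identifier for review and traceability
    feature: str       # feature of the unit being exercised
    inputs: list       # input values for the unit
    expected: object   # expected output per the specification
    criterion: str     # selection criterion the case satisfies

specs = [
    TestCaseSpec("TC-01", "sort ascending", [3, 1, 2], [1, 2, 3],
                 "typical input"),
    TestCaseSpec("TC-02", "sort ascending", [7], [7],
                 "boundary: single element"),
]
for s in specs:
    print(s.case_id, s.feature, s.inputs, "->", s.expected)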

What is Black Box Testing

By Dinesh Thakur

There are two basic approaches to testing: functional and structural. In functional testing, the structure of the program is not considered. Test cases are decided solely on the basis of the requirements or specifications of the program or module, and the internal structure of the program is not considered when selecting test cases.

Due to its nature, functional testing is often called “black box testing.” In the structural approach, test cases are generated based on the actual code of the program or module to be tested. This structural approach is sometimes called “glass box testing.”
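As a hedged illustration (the unit and its specification are hypothetical), black box test cases are derived purely from the stated requirement, with no reference to the code’s internals:

# Requirement: absolute_value(x) returns x if x >= 0, else -x.

def absolute_value(x):             # unit under test; internals unseen
    return x if x >= 0 else -x

# Cases chosen only from the specification: positive, negative, zero.
spec_cases = [(5, 5), (-5, 5), (0, 0), (-2.5, 2.5)]
for x, expected in spec_cases:
    assert absolute_value(x) == expected
print("all black box cases pass")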

What is Exhaustive Testing

By Dinesh Thakur

While selecting test cases, the primary objective is to ensure that if there is an error or fault in the program, it is exercised by one of the test cases. An ideal test case set is one that succeeds (meaning that its execution reveals no errors) only if there are no errors in the program.

One possible ideal set of test cases is one that includes all the possible inputs to the program. This is often called exhaustive testing.
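As a minimal sketch (my own illustration), exhaustive testing is feasible only when the input domain is small enough to enumerate completely, as for this 8-bit operation:

def inc8(x):
    # Increment with wrap-around on an 8-bit value.
    return (x + 1) % 256

# Exhaustive test: every one of the 256 possible inputs is exercised.
for x in range(256):
    assert inc8(x) == (0 if x == 255 else x + 1)
print("exhaustively tested all 256 inputs")

# For realistic programs the input domain is astronomically large
# (two 32-bit integers already give 2**64 combinations), which is
# why exhaustive testing is impractical in general.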
