
What is Static Analysis? How is it Performed? What are its Uses

By Dinesh Thakur

Analysis of programs by methodically examining the program text is called static analysis. Static analysis is usually performed mechanically with the aid of software tools. During static analysis the program itself is not executed; the program text is the input to the tools.

The aim of static analysis tools is to detect errors or potential errors, or to generate information about the structure of the program that can be useful for documentation or for understanding the program.

Static analysis can be very useful for exposing errors that may escape other techniques. As the analysis is performed with the help of software tools, static analysis is a very cost-effective way of discovering errors. Data flow analysis is one form of static analysis that concentrates on the uses of data by programs and detects data flow anomalies.

 

An example of a data flow anomaly is an unused definition: a variable is assigned some value, but the variable is not used in any later computation. Such anomalies are typically detected with live-variable analysis.
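
As a minimal sketch (an assumed, illustrative fragment, not taken from the article), the following C function contains exactly this kind of anomaly, which a data flow analysis tool would flag:

#include <stdio.h>

/* 'discount' is assigned a value but never used in any later         */
/* computation: the kind of data flow anomaly a static analyzer flags. */
int total_price(int unit_price, int quantity)
{
    int discount = 10;
    int total = unit_price * quantity;
    return total;
}

int main(void)
{
    printf("%d\n", total_price(5, 3));
    return 0;
}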

Uses of static analysis:

 

1)   It can provide valuable information for the documentation of programs.
2)   It can help reduce the processing time of algorithms by exposing inefficiencies.
3)   It can analyze different parts of a program written by different people to detect errors.
4)   It can be useful for maintenance.
5)   It can also produce structure charts of a program.

Explain Various Programming Practices used in Coding. What is meant by Information Hiding

By Dinesh Thakur

The primary goal of the coding phase is to translate the given design into source code in a given programming language, so that code is simple, easy to test, and easy to understand and modify. Simplicity and clarity are the properties that a programmer should strive for.

All designs contain hierarchies, as creating a hierarchy is a natural way to manage complexity. Most design methodologies for software also produce hierarchies.

 

In a top down implementation, the implementation starts from the top of the hierarchy and proceeds to the lower levels. First the main module is implemented, then its subordinates are implemented, and their subordinates, and so on. In a bottom up implementation, the process is the reverse.

 

The development starts with implementing the modules at the bottom of the hierarchy and proceeds through the higher levels until it reaches the top. We want to build the system in parts, even though the design of the entire system has been done. This is necessitated by the fact that for large systems it is simply not feasible or desirable to build the whole system and then test it.

When we proceed top down, to test a set of modules at the top of the hierarchy, stubs have to be written for the lower-level modules that the modules under test invoke. On the other hand, when we proceed bottom up, all modules lower in the hierarchy have already been developed, and driver modules are needed to invoke the modules under test. In practice, in large systems, a combination of the two approaches is used during coding.

 

The top modules of the system generally contain the overall view of the system and may even contain the user interfaces. On the other hand, the bottom-level modules typically form the service routines that provide the basic operations used by the higher-level modules. A program has a static structure as well as a dynamic structure. The static structure is the structure of the text of the program, which is usually just a linear organization of the statements of the program. The dynamic structure of the program is the sequence of statements executed during the execution of the program.

The goal of structured programming is to ensure that the static structure and the dynamic structure are the same: that is, the sequence of statements executed during the execution of a program is the same as the sequence of statements in the text of that program. As the statements in a program text are linearly organized, the objective of structured programming becomes developing programs whose control flow during execution is linearized and follows the linear organization of the program text.

 

In structured programming, a statement is not just a simple assignment statement; it is a structured statement. The key property of a structured statement is that it has a single entry and a single exit. That is, during execution, the execution of the (structured) statement starts from one defined point and terminates at one defined point. With single-entry and single-exit statements, we can view a program as a sequence of statements.

And if all statements are structured statements, then during execution the sequence of execution of these statements will be the same as the sequence in the program text. Hence, by using single-entry and single-exit statements, the correspondence between the static and dynamic structures can be obtained.
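
As an illustrative sketch (not from the original article), each of the structured statements below has a single entry and a single exit, so the textual order of the statements matches the order in which they execute:

/* A sequence of single-entry, single-exit structured statements:      */
/* the dynamic (execution) order follows the static (textual) order.   */
int sum_of_squares(int n)
{
    int sum = 0;                    /* simple statement                */
    for (int i = 1; i <= n; i++) {  /* iteration: one entry, one exit  */
        sum = sum + i * i;
    }
    if (sum < 0) {                  /* selection: one entry, one exit  */
        sum = 0;
    }
    return sum;
}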

 

Structured programming practice forms a good basis and guideline for writing programs clearly. A software solution to a problem always contains data structures that are the means to represent information in the problem domain. That is, when software is developed to solve a problem, the software uses some data structures to capture the information in the problem domain.

 

With the problem information represented internally as data structures, the required functionality of the problem domain, which is in terms of information in that domain, can be implemented as software operations on the data structures. Hence, any software solution to a problem contains data structures that represent information in the problem domain.

 

When the information is represented as data structures, the same principle should be applied, and only some defined operations should be performed on the data structures. This, essentially, is the principle of information hiding: the information captured in the data structures should be hidden, and only the operations that can be performed on that information should be visible. Information hiding can reduce the coupling between modules and make the system more maintainable.
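
A minimal sketch of this idea in C (an assumed illustration, not code from the article): the counter's representation is hidden inside one file, and other modules may act on it only through the declared operations.

/* counter.h -- only the permitted operations are visible to clients.  */
void counter_reset(void);
void counter_increment(void);
int  counter_value(void);

/* counter.c -- the data structure itself is hidden (static linkage),  */
/* so no other module can touch it except through the operations.      */
static int count = 0;

void counter_reset(void)     { count = 0; }
void counter_increment(void) { count = count + 1; }
int  counter_value(void)     { return count; }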

What are the Different Techniques Used for Proving the Correctness of a Program

By Dinesh Thakur

Many techniques for verification aim to reveal errors in programs, because the ultimate goal is to make programs correct by removing the errors. In proof of correctness, the aim is to prove a program correct. So correctness is directly established, unlike the other techniques, in which correctness is never really established but is implied by the absence of detected errors.

Any proof technique must begin with a formal specification of the program. Here we will briefly describe a technique for proving correctness called the axiomatic method.

 

The Axiomatic Approach

 

In principle, all the properties of a program can be determined statically from the text of the program, without actually executing the program. The first requirement in reasoning about programs is to state formally the properties of the elementary operations and statements that the program uses.

In the axiomatic model of Hoare, the goal is to take the program and construct a sequence of assertions, each of which can be inferred from previously proved assertions and from the rules and axioms about the statements and operations in the program. For this, we need a mathematical model of a program and of all the constructs in the programming language. Using Hoare’s notation, the basic assertion about a program segment is of the form:

 

P{S}Q
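
Here P is the precondition, S is the program segment, and Q is the postcondition: the assertion states that if P holds before S is executed and S terminates, then Q holds after the execution of S. A simple illustrative instance in the same notation (an assumed example, not taken from the article) is:

x = 5 { x := x + 1 } x = 6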

 

Code Inspection or Reviews

 

The review process was started with the purpose of detecting defects in the code. Though design reviews substantially reduce defects in the code, code reviews are still very useful; they can considerably enhance reliability and reduce the effort needed during testing. Code reviews are designed to detect errors that originate during the coding process, although they can also detect defects in the detailed design.

Code inspections or reviews are usually held after the code has been successfully completed and other static tools have been applied, but before any testing has been performed. Therefore, activities like code reading, symbolic execution, and static analysis should be performed, and the defects found by these techniques corrected, before code reviews are held.

 

The aim of the reviews is to detect defects in the code. One obvious coding defect is that the code fails to implement the design. This can occur in many ways. The function implemented by a module may be different from the function actually defined in the design, or the interface of a module may not be the same as the interface specified in the design.

In addition, the input-output format assumed by a module may be inconsistent with the format specified in the design.

In addition to defects, there are quality issues that the review also addresses. A module may be implemented in an obviously inefficient manner and could be wasteful of memory or computer time. The code could also be violating the local coding standards.

 

A sample Checklist: The following are some of the items that can be included in a checklist for code reviews.

 

    Do data definitions exploit the typing capabilities of the language?
    Do all pointers point to some object? (Are there any 'dangling pointers'?)
    Is the pointer set to NULL where needed?
    Are all array indexes within bounds?
    Are indexes properly initialized?
    Are all branch conditions correct?
    Will each loop always terminate?
    Is the loop termination condition correct?
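
As an assumed illustration (not from the article), a fragment like the one below deliberately contains the kinds of defects such a checklist is meant to catch:

#include <stdlib.h>

void review_example(void)
{
    int a[10];
    int i;                  /* index never initialized                  */
    int *p = malloc(sizeof(int));

    free(p);
    *p = 7;                 /* dangling pointer: p was not set to NULL  */

    while (i <= 10) {       /* off-by-one: a[10] is out of bounds; the  */
        a[i] = 0;           /* condition should be i < 10, and i should */
        i++;                /* be initialized before the loop           */
    }
}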

Discuss Briefly Test Cases and Test Criteria

By Dinesh Thakur

Having test cases that are good at revealing the presence of faults is central to successful testing. Ideally, we would like to determine a set of test cases such that successful execution of all of them implies that there are no errors in the program. This ideal goal cannot usually be achieved due to practical and theoretical constraints.

Each test case costs money: effort is needed to generate the test case, machine time is needed to execute the program for that test case, and more effort is needed to evaluate the results.

Therefore, we would also like to minimize the number of test cases needed to detect errors. These are the two fundamental goals of a practical testing activity – maximize the number of errors detected and minimize the number of test cases. In selecting test cases, the primary objective is to ensure that if there is an error or fault in the program, it is exercised by one of the test cases.

An ideal test case set is one that succeeds (meaning that its execution reveals no errors) only if there are no errors in the program. For this, a test selection criterion can be used. There are two aspects of test case selection – specifying a criterion for evaluating a set of test cases, and generating a set of test cases that satisfies a given criterion.

 

There are two fundamental properties for a testing criterion: reliability and validity. A criterion is reliable if all the sets that satisfy the criterion detect the same errors. A criterion is valid if for any error in the program there is some set satisfying the criterion that will reveal the error. Some axioms capturing some of the desirable properties of test criteria have been proposed. The first axiom is the applicability axiom, which states that for every program there exists a test set T that satisfies the criterion.

 

This is clearly desirable for a general-purpose criterion: a criterion that can be satisfied only for some types of programs is of limited use in testing. The anti-extensionality axiom states that there are programs P and Q, both of which implement the same specification, such that a test set T satisfies the criterion for P but does not satisfy the criterion for Q. This axiom ensures that the program structure has an important role to play in deciding the test cases.

The anti-decomposition axiom states that there exist a program P and a component Q of P such that a test case set T satisfies the criterion for P, T1 is the set of values that variables can assume on entering Q for some test case in T, and T1 does not satisfy the criterion for Q. Essentially, the axiom says that just because the criterion is satisfied for the entire program, it does not mean that the criterion has been satisfied for its components.

The anti-composition axiom states that there exist programs P and Q such that T satisfies the criterion for P and the outputs of P for T satisfy the criterion for Q, but T does not satisfy the criterion for the program obtained by composing P and Q. In other words, satisfying the criterion for the parts P and Q does not imply that the criterion has been satisfied by the program comprising them. It is very difficult to obtain a criterion that satisfies even these axioms. This is largely because a program may have paths that are infeasible, and one cannot determine these infeasible paths algorithmically, as the problem is undecidable.

What is Functional Testing? What are the Different Techniques used in it

By Dinesh Thakur

In functional testing the structure of the program is not considered. Test cases are decided solely on the basis of requirements or specifications of the program or module, and the internals of the module or the program are not considered for selection of test cases.

Due to its nature, functional testing is often called black-box testing. The basis for deciding test cases in functional testing is the requirements or specifications of the system or module.

For the entire system, the test cases are designed from the requirements specification document for the system. For modules created during design, test cases for functional testing are decided from the module specifications produced during design. There are no formal rules for designing test cases for functional testing. In fact, there are no precise criteria for selecting test cases. However, there are a number of techniques or heuristics for selecting test cases that have been found to be very successful in detecting errors.

Equivalence Class Partitioning

Because we cannot do exhaustive testing, the next natural approach is to divide the domain of all the inputs into a set of equivalence classes, so that if any test in an equivalence class succeeds, then every test in that class will succeed. That is, we want to identify classes of test cases such that the success of one test case in a class implies the success of the others. However, without looking at the internal structure of the program, it is impossible to determine such ideal equivalence classes.

The equivalence class partitioning method tries to approximate this ideal. The different equivalence classes are formed by putting inputs for which the behavior pattern of the module is specified to be different into different groups, and then regarding these groups as forming the equivalence classes. The rationale of forming equivalence classes like this is the assumption that if the specification requires exactly the same behavior for each element in a class of values, then the program is likely to be constructed so that it either succeeds or fails for each of the values in that class.
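
As an assumed illustration (not from the article), consider a routine that accepts an exam score between 0 and 100 and reports pass (score >= 40) or fail. The specification suggests three behavior classes, and one test case can be drawn from each:

    Invalid input : score < 0 or score > 100   -> e.g. test case score = -5
    Valid, fail   : 0 <= score < 40            -> e.g. test case score = 25
    Valid, pass   : 40 <= score <= 100         -> e.g. test case score = 70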

Boundary value analysis

Test cases that have values on the boundaries of equivalence classes are therefore likely to be high-yield test cases, and selecting such test cases is the aim of boundary value analysis. In boundary value analysis, we choose an input for a test case from an equivalence class such that the input lies at the edge of the equivalence class.

Boundary values for each equivalence class, including the equivalence classes of the output, should be covered. Boundary value test cases are also called extreme cases. Hence, we can say that a boundary value test case is a set of input data that lies on the edge or boundary of a class of input data, or that generates output that lies at the boundary of a class of output data. One way to exercise combinations of different input conditions is to consider all valid combinations of the equivalence classes of input conditions. This simple approach will usually result in a large number of test cases, many of which will not be useful for revealing any new errors.
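
Continuing the assumed exam-score illustration above, boundary value analysis would add test cases at and around the edges of each class:

    score = -1, 0, 1      (lower boundary of the valid range)
    score = 39, 40, 41    (boundary between fail and pass)
    score = 99, 100, 101  (upper boundary of the valid range)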

Cause-Effect Graphing

Cause-effect graphing is a technique that aids in selecting combinations of input conditions in a systematic way, such that the number of test cases does not become unmanageably large. The technique starts with identifying the causes and effects of the system under test. A cause is a distinct input condition, and an effect is a distinct output condition. Each condition forms a node in the cause-effect graph.

The conditions should be stated such that they can be set to either true or false. After identifying the causes and effects, for each effect we identify the causes that can produce that effect and how the conditions have to be combined to make the effect true. Conditions are combined using the Boolean operators and, or, and not, which are represented in the graph by &, !, and ~.

Then, for each effect, all combinations of the causes on which that effect depends that will make the effect true are generated. By doing this, we identify the combinations of conditions that make the different effects true. A test case is then generated for each combination of conditions that makes some effect true.
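
As an assumed illustration (not from the article), consider a simple login check with causes C1 = "user name is valid" and C2 = "password is correct", and effects E1 = "access granted" and E2 = "error message displayed". The graph records E1 = C1 and C2, and E2 = not (C1 and C2). Generating the cause combinations that make each effect true yields test cases such as (C1 true, C2 true) for E1, and (C1 true, C2 false) and (C1 false) for E2.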

What is Structural Testing? Explain any Two Techniques used in it

By Dinesh Thakur

Structural testing, on the other hand, is concerned with testing the implementation of the program. The intent of structural testing is not to exercise all the different input or output conditions, but to exercise the different programming structures and data structures used in the program.

To test the structure of a program, structural testing aims to devise test cases that will force the desired coverage of different structures. Various criteria have been proposed for this. Unlike the criteria for functional testing, which are frequently imprecise, the criteria for structural testing are generally quite precise, as they are based on the program structure, which is formal and precise.

Control Flow Based Criteria

The most common structure-based criteria are based on the control flow of the program. In these criteria, the control flow graph of a program is considered, and coverage of various aspects of the graph is specified as the criterion. Hence, before we consider the criteria, let us precisely define a control flow graph for a program.

Let the control flow graph (or simply flow graph) of a program P be G. A node in this graph represents a block of statements that is always executed together, i.e., whenever the first statement is executed, all the other statements are also executed. An edge (i, j) (from node i to node j) represents a possible transfer of control after executing the last statement of the block represented by node i to the first statement of the block represented by node j.

A node corresponding to a block whose first statement is the start statement of P is called the start node of G, and a node corresponding to a block whose last statement is an exit statement is called an exit node. A path is a finite sequence of nodes (n1, n2, ..., nk), k > 1, such that there is an edge from ni to ni+1 for each node ni in the sequence (except the last node nk). A complete path is a path whose first node is the start node and whose last node is an exit node.

The simplest coverage criterion is statement coverage, which requires that each statement of the program be executed at least once during testing. In other words, it requires that the paths executed during testing include all the nodes in the graph. This is also called the all-nodes criterion. This coverage criterion is not very strong and can leave errors undetected.

For example, if there is an if statement in the program without an else clause, the statement coverage criterion for this statement will be satisfied by a test case that evaluates the condition to true. No test case is needed that ensures that the condition in the if statement evaluates to false. This is a serious shortcoming, because decisions in programs are potential sources of errors. As an example, consider the following function to compute the absolute value of a number.

int abs(int x)
{
    /* Deliberately faulty: the condition should be (x < 0).        */
    /* The single test case {x = 0} satisfies statement coverage    */
    /* (every statement is executed), yet the error is never        */
    /* revealed.                                                     */
    if (x >= 0)
        x = 0 - x;

    return (x);
}

A slightly more general coverage criterion is branch coverage, which requires that each edge in the control flow graph be traversed at least once during testing. In other words, branch coverage requires that each decision in the program be evaluated to true and to false at least once during testing. Testing based on branch coverage is often called branch testing.

The trouble with branch coverage comes when a decision has many conditions in it. For example, consider the following function that checks the validity of a data item. The data item is valid if it lies between 0 and 100, but the code checks for x < 200 instead of 100 (perhaps a typing error made by the programmer).
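
The function itself does not appear in the article; the following is an assumed reconstruction of the kind of faulty check being described:

/* Intended: the item is valid if 0 <= x <= 100, but the programmer    */
/* mistyped 200. A test set such as {x = 5, x = -5} evaluates the      */
/* whole decision to both true and false, so branch coverage is        */
/* satisfied, yet the faulty condition (x < 200) is never exposed.     */
int check(int x)
{
    int valid;

    if ((x >= 0) && (x < 200))   /* should be (x <= 100) */
        valid = 1;               /* valid item           */
    else
        valid = 0;               /* invalid item         */

    return valid;
}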

Data Flow-Based Testing

The basic idea behind data flow-based testing is to make sure that during testing the definitions of variables and their subsequent uses are exercised. Just as the all-nodes and all-edges criteria try to generate confidence in testing by making sure that at least all statements and all branches have been tested, data flow-based testing tries to ensure some coverage of the definitions and uses of variables.

For data flow-based criteria, a definition-use graph (def/use graph, for short) for the program is first constructed from its control flow graph. A statement in a node of the flow graph representing a block of code has variable occurrences in it. A variable occurrence can be one of the following three types (RW85):

• Def represents the definition of a variable. The variable on the left-hand side of an assignment statement is the one getting defined.

• C-use represents a computational use of a variable. Any statement (e.g., a read, a write, or an assignment) that uses the value of a variable for computational purposes is said to make a c-use of the variable. In an assignment statement, all the variables on the right-hand side have a c-use occurrence. In read and write statements, all variable occurrences are of this type.

• P-use represents a predicate use. These are all the occurrences of variables in a predicate (i.e., variables whose values are used for computing the value of the predicate), which is used for transfer of control.
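
As an assumed illustration (not from the article), the occurrences of the variables in the small fragment below show all three types:

int classify_example(int a, int b)
{
    int x, y;

    x = a + b;      /* def of x; c-uses of a and b                   */
    y = x * x;      /* c-use of x; def of y                          */
    if (x > 0)      /* p-use of x: it decides the transfer of control */
        y = y + 1;  /* c-use and def of y                            */

    return y;       /* c-use of y                                    */
}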


What is a Test Plan? What should a Test Plan Include

By Dinesh Thakur

In general, testing commences with a test plan and terminates with acceptance testing. A test plan is a general document for the entire project that defines the scope, the approach to be taken, and the schedule of testing, and that identifies the test items for the entire testing process and the personnel responsible for the different activities of testing.

Test planning can be done well before the actual testing commences and can proceed in parallel with the coding and design phases. The inputs for forming the test plan are: (1) the project plan, (2) the requirements document, and (3) the system design document. The project plan is needed to make sure that the test plan is consistent with the overall plan for the project and that the testing schedule matches that of the project plan.

The requirements document and the design document are the basic documents used for selecting the test units and deciding the approaches to be used during testing. A test plan should contain the following:

    Test unit specification
    Features to be tested
    Approach for testing
    Test deliverables
    Schedule
    Personnel allocation

One of the most important activities of the test plan is to identify the test units. A test unit is a set of one or more modules, together with associated data, that are from a single computer program and that are the object of testing. A test unit can occur at any level and can contain anything from a single module to the entire system.

Thus, a test unit may be a module, a few modules, or a complete system. The levels of testing are specified in the test plan by identifying the test units for the project. Different units are usually specified for unit, integration, and system testing. The identification of test units establishes the different levels of testing that will be performed in the project. The basic idea behind forming test units is to make sure that testing is performed incrementally, with each increment including only a few aspects that need to be tested. A unit should be such that it can be tested easily.

In other words, it should be possible to form meaningful test cases for the unit and to execute it with these test cases without much effort. The features to be tested include all software features and combinations of features that should be tested. A software feature is a software characteristic specified or implied by the requirements or design documents. These may include functionality, performance, design constraints, and attributes.

The approach for testing specifies the overall approach to be followed in the current project. The technique that will be used to judge the testing effort should also be specified; this is sometimes called the testing criterion. Test deliverables should be specified in the test plan before the actual testing begins. Deliverables could include a list of the test cases that were used, detailed results of testing, a test summary report, a test log, and data about code coverage. In general, a test case specification report, a test summary report, and a test log should always be specified as deliverables.

Discuss the Different Levels of Testing

By Dinesh Thakur

The first level of testing is called unit testing. In this, different modules are tested against the specifications produced during design for the modules. Unit testing is essentially for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. It is typically done by the programmer of the module.

A module is considered for integration and use by others only after it has been unit tested satisfactorily. Due to its close association with coding, the coding phase is frequently called “coding and unit testing”. As the focus of this testing level is on testing the code, structural testing is best suited for this level. In fact, as structural testing is not very suitable for large programs, it is used mostly at the unit testing level.

The next level of testing is often called integration testing. In this, many unit tested modules are combined into subsystems, which are then tested. The goal here is to see if the modules can be integrated properly. Hence, the emphasis is on testing interfaces between modules. This testing activity can be considered testing the design.

The next level is system testing and acceptance testing. Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements. This is essentially a validation exercise, and in many situations it is the only validation activity.

Acceptance testing is sometimes performed with realistic data of the client to demonstrate that the software is working satisfactorily. Testing here focuses on the external behavior of the system; the internal logic of the program is not emphasized. Consequently, mostly functional testing is performed at these levels.

What is Mutation Testing

By Dinesh Thakur

In control flow-based and data flow-based testing, the focus is on which paths to execute during testing. Mutation testing does not take a path-based approach. Instead, it takes the program and creates many mutants of it by making simple changes to the program.

The goal of testing is to make sure that, during the course of testing, each mutant produces an output different from the output of the original program. In other words, the mutation testing criterion does not require that certain paths be executed; instead, it requires the set of test cases to be such that they can distinguish the original program from each of its mutants.

For a program under test P, mutation testing prepares a set of mutants by applying mutation operators to the text of P. The set of mutation operators depends on the language in which P is written. In general, a mutation operator makes a small change in the program to produce a mutant.

Examples of mutation operators are: replacing an arithmetic operator with some other arithmetic operator; changing an array reference (say, from A to B); replacing a constant with another constant of the same type (e.g., changing a constant to 1); changing the label of a goto statement; and replacing a variable by some special value (e.g., an integer or a real variable with 0). Each application of a mutation operator results in one mutant.
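
As an assumed illustration (not from the article), a single application of the "replace an arithmetic operator" mutation operator to a small function produces one mutant:

/* Original program */
int sum2(int a, int b)
{
    return a + b;
}

/* Mutant: '+' replaced by '-' by the mutation operator.               */
/* The test case (a = 2, b = 3) distinguishes the mutant (output -1)   */
/* from the original (output 5) and so kills it; the test case         */
/* (a = 2, b = 0) would not, since both programs output 2.             */
int sum2_mutant(int a, int b)
{
    return a - b;
}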
