Wednesday, April 3, 2013

Testing types

It is an approach to testing a product, release, or piece of software. Depending on the objective of testing, it can be classified into the different types below.

Alpha Testing

The first test of a newly developed product is called alpha testing. Alpha testing is carried out while the product is still incomplete but has already been handed over to a testing team. The main goal of alpha testing is to gather the customer's comments and feedback while the product is being developed.

At this stage, testers start to track feedback as bugs and document the fixes.
In this way, we can plan future work and direction better, and we gain the tools and practice to look at the application closely and critically.

It also gives us more clarity on any misunderstandings of the requirements or flow of the application.

 Automated Testing

Automated testing is running test cases without manual participation. It usually takes the form of test suites containing multiple test cases, an xUnit library, and a command-line tool that runs the suites. The process can be automated and re-run from time to time to ensure that no problems are introduced when the code changes.

Different types of tools have also been developed for automated testing. These tools help people execute test cases. Moreover, automated testing not only saves time and money but also enhances the accuracy of the tests.

For automated testing to work, it must be ensured that the correct inputs are entered, which in turn produce the expected outputs.
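A minimal sketch of such a suite, using Python's built-in unittest (an xUnit-style library); the add function is an invented stand-in for real code under test:

```python
import unittest

# Hypothetical function under test.
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """An xUnit-style suite containing multiple test cases."""

    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# The command-line equivalent is `python -m unittest`; here the suite
# is built and run explicitly so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all tests passed:", result.wasSuccessful())
```

Re-running this suite after every code change is exactly the "looked over from time to time" check described above.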

 Black Box Testing

Black-box testing enables the software engineer to derive sets of input conditions that fully exercise all functional requirements of a program. It is a complementary approach intended to uncover errors: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and termination errors.

Black-box testing is also called behavioral testing. It is usually described as focusing on testing functional requirements.
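As an illustration (the grading specification here is invented for this sketch), black-box tests derive one representative input per equivalence class, plus boundary values, purely from the specification, never from the code:

```python
# Specification (the only thing the black-box tester sees):
# grade(score) returns "fail" for 0-39, "pass" for 40-74,
# and "distinction" for 75-100.
def grade(score):
    if score < 40:
        return "fail"
    if score < 75:
        return "pass"
    return "distinction"

# One representative input per equivalence class, plus the boundaries.
cases = {0: "fail", 39: "fail", 40: "pass", 74: "pass",
         75: "distinction", 100: "distinction"}
for score, expected in cases.items():
    assert grade(score) == expected
print("all input classes exercised")
```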

 Beta Testing

Beta testing is the last stage of testing and helps identify flaws in the system. The first test, conducted by the system developer outside the production environment, is called the alpha test; the second is called the beta test and requires participation by the user. If the results are not good, a third test, the gamma test, is conducted. Beta testing is a test of new or revised hardware or software performed by users at their own facilities under normal operating conditions, in which the project's high points are verified. Beta testing follows alpha testing.

 Gray Box Testing

Gray-box testing is a technique that combines black-box testing and white-box testing.

Gray-box testing = black-box testing + white-box testing.

Gray-box testing is used to find defects caused by bad design or bad implementation of the system.

In gray-box testing, the test engineer is equipped with knowledge of the system and designs test cases and test data based on that knowledge.

 GUI Testing:-

GUI testing is the process of testing an application's user interface and detecting whether the application is functionally correct.
GUI testing covers how the application handles keyboard and mouse events, how different GUI components such as menu bars, toolbars, dialogs, buttons, edit fields, list controls, and images react to user input, and whether they behave in the desired manner. GUI testing can be performed manually by a human tester or automatically with a software program.
Automated GUI testing is a more accurate, efficient, reliable, and cost-effective complement to manual testing.

In GUI testing the important aspects to be tested are:-
1)Windows:-
User interactions with other applications through different windows like
a)Primary windows.
b)Secondary windows.

2)Menus:-
There are different forms and styles of menus.
a)Action menus (push button, radio button).
b)Pull-down menus.
c)Pop-up menus.
d)Option menus.
e)Cascading menus.

3)Forms:-
Forms make up the screen; aspects to check include:
a)Font.
b)Background.
c)Size.

4)Icons:-
These are "visual push buttons," through which user can navigate through the application. These are easily recognizable & easy to learn.

5)Controls:-
There are many types of controls that appear on the screen.
Through these controls the user interacts with the application according to the action prescribed for each control.
Controls include menu bars, pull-down menus, cascading menus, pop-up menus, push buttons, check boxes, radio buttons, list boxes, and drop-down list boxes.
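GUI frameworks differ, but the idea behind automated GUI testing can be sketched without one. The Button class below is a toy stand-in for a real widget; real tools drive events programmatically in the same way and assert on the resulting state:

```python
# Toy widget model: a clickable button with an enabled flag, standing
# in for a real GUI control.
class Button:
    def __init__(self, label, on_click):
        self.label = label
        self.on_click = on_click
        self.enabled = True

    def click(self):
        # A disabled control must ignore user input.
        if self.enabled:
            self.on_click()

clicks = []
save = Button("Save", on_click=lambda: clicks.append("saved"))

save.click()                 # simulate a user mouse event
assert clicks == ["saved"]   # the control reacted in the desired manner

save.enabled = False
save.click()                 # input on a disabled control does nothing
assert clicks == ["saved"]
print("GUI behavior verified")
```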

 Load Testing

Load testing helps identify problems before the application is deployed to end users. With load testing, testers can design and simulate usage traffic to test the application infrastructure for performance, reliability, and scalability.

Load testing can uncover:

Software design issues:-

    Incorrect concurrency/pooling mechanism.
    Poor optimization.
    Memory build-up.

Server configuration issues :-

    Web server.
    Application server.
    Database server.
    Load balancer.

Hardware limitation issues :-

    Excessive disk I/O.
    CPU maximization.
    Memory limitations.
    Network bottleneck.
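As a minimal sketch of simulated usage traffic (the 10 ms handle_request is an invented stand-in for a real service call), concurrent load can be generated with threads while recording per-request latency:

```python
import threading
import time

# Hypothetical request handler standing in for a real endpoint.
def handle_request():
    time.sleep(0.01)  # simulate 10 ms of server-side processing

def worker(latencies, n_requests):
    for _ in range(n_requests):
        start = time.perf_counter()
        handle_request()
        latencies.append(time.perf_counter() - start)

# 20 concurrent "users", 5 requests each.
latencies = []
threads = [threading.Thread(target=worker, args=(latencies, 5))
           for _ in range(20)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(f"{len(latencies)} requests in {elapsed:.2f}s "
      f"({len(latencies) / elapsed:.0f} req/s), "
      f"max latency {max(latencies) * 1000:.1f} ms")
```

Scaling up the thread count and watching throughput and latency is how the server, database, and hardware issues listed above surface.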

 Performance Testing

Performance testing is designed to test the run-time performance of software within the context of an integrated system. Performance testing occurs throughout all steps of the testing process; even at the unit level, the performance of an individual module may be assessed as white-box tests are conducted.

However, it is not until all system elements are fully integrated that the true performance of a system can be ascertained. Software performance testing is used to determine the speed and effectiveness of a product. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis.
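As a small unit-level illustration (the two string-joining functions are invented for this sketch), such measurement can be done with Python's timeit module:

```python
import timeit

# Two hypothetical implementations of the same operation.
def join_concat(items):
    out = ""
    for s in items:
        out += s          # repeated concatenation
    return out

def join_builtin(items):
    return "".join(items)  # single join call

items = ["x"] * 1000

# Measure each implementation over many repetitions.
t_concat = timeit.timeit(lambda: join_concat(items), number=1000)
t_join = timeit.timeit(lambda: join_builtin(items), number=1000)
print(f"concat: {t_concat:.3f}s, join: {t_join:.3f}s")
```

The same measure-and-compare loop, applied to the integrated system, is what system-level performance testing does at larger scale.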

 Regression Testing

Regression testing is used to check that changes made to a program have not introduced new faults into the system. In principle, during regression testing, all tests should be repeated after every defect fix. Regression testing is an important strategy for reducing “side effects”; it helps ensure that changes do not introduce unintended behavior or additional errors.

Adequate coverage without wasting time should be a primary consideration when conducting regression tests. Regression testing is initiated after a programmer has attempted to fix a recognized problem or has added source code to a program that may have inadvertently introduced errors.

It is a quality control measure to ensure that the newly modified code still complies with its specified requirements and that unmodified code has not been affected by the maintenance activity.
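As a sketch (the average function and its defect are invented for illustration), a regression suite re-runs the pre-existing tests alongside the test for the new fix:

```python
# Hypothetical function that had a reported defect: it crashed on an
# empty list. The fix must not change behavior for existing inputs.
def average(values):
    if not values:           # the fix for the reported defect
        return 0.0
    return sum(values) / len(values)

def run_regression_suite():
    # New test guarding the specific defect that was just fixed.
    assert average([]) == 0.0
    # Pre-existing tests, re-run to catch unintended side effects.
    assert average([2, 4]) == 3.0
    assert average([5]) == 5.0
    return "all regression tests passed"

print(run_regression_suite())
```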

 Sanity Testing

Sanity testing is usually narrow and deep.
A sanity test is usually unscripted.
A sanity test determines whether a small section of the application still works after a minor change.
Sanity testing verifies that the relevant requirements are met, without exhaustively checking every feature.
A sanity test is a narrow regression test that focuses on one or a few areas of functionality.

Sanity testing is also known as cursory testing.
It is performed whenever a cursory check is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.

 Smoke Testing

Smoke testing is the process of validating code changes before they are checked into the product's source code. After code reviews, smoke testing is the most cost-effective method for identifying and fixing defects. Smoke tests are designed to confirm that changes in the code function as expected and do not destabilize the entire build.

Smoke tests ensure that the primary critical or weak areas identified by code review or risk assessment are validated first, because if they fail, testing cannot continue.
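A minimal sketch, assuming a toy application object with illustrative health checks; a smoke test rejects the build outright if any critical check fails:

```python
# Smoke check: verify the build's critical paths before any deeper
# testing. The app dict and its checks are illustrative stand-ins.
def smoke_test(app):
    checks = {
        "starts": lambda: app["status"] == "running",
        "responds": lambda: app["ping"]() == "pong",
    }
    failures = [name for name, check in checks.items() if not check()]
    if failures:
        # A failed smoke test blocks the build from further testing.
        raise RuntimeError(f"build rejected, failed checks: {failures}")
    return "smoke passed, build accepted for further testing"

app = {"status": "running", "ping": lambda: "pong"}
print(smoke_test(app))
```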

 Stress Testing

Stress tests are designed to break a software module. This type of testing determines the strengths and limitations of the software. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.

A related variation, sensitivity testing, attempts to uncover data combinations within valid input classes that may cause instability or improper processing.
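As an illustrative sketch (the parse_csv_line routine and its field limit are invented), a stress test demands resources in abnormal quantity and frequency and checks that the module degrades cleanly:

```python
# Routine under stress: an invented stand-in for a real parser.
def parse_csv_line(line, max_fields=10_000):
    fields = line.split(",")
    if len(fields) > max_fields:
        raise ValueError(f"too many fields: {len(fields)}")
    return fields

# Abnormal quantity: a single line with a million fields should be
# rejected cleanly rather than crashing or hanging.
limit_hit = False
try:
    parse_csv_line("x," * 1_000_000)
except ValueError:
    limit_hit = True
print("oversized input rejected:", limit_hit)

# Abnormal frequency: many rapid calls in a tight loop.
for _ in range(10_000):
    parse_csv_line("a,b,c")
print("survived 10,000 rapid calls")
```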

 Usability Testing

Usability testing focuses on determining whether the product is easy to learn, satisfying to use, and contains the functionality that users desire.

Usability testing is the process by which the requirements of the user are measured, and weaknesses are identified for correction.

The International Organization for Standardization (ISO) defines usability as the "effectiveness, efficiency and satisfaction with which a specified set of users can achieve a specified set of tasks in particular environments".

User Acceptance Testing

The final stage in the testing process, before the system is accepted for operational use, is known as acceptance testing. The system is tested with data supplied by the system procurer rather than simulated data. Acceptance testing may reveal errors or omissions in the system requirements. This testing is also called user acceptance testing.

In the case of software, acceptance testing performed by the customer is known as user acceptance testing (UAT) or end-user testing. Once the application is ready to be released, the essential step is user acceptance testing. This type of testing gives the end users confidence that the application being delivered to them meets their requirements.

Volume Testing

Volume testing is a type of non-functional testing.
It means testing the software with a large volume of data in the database, or more generally with a specified amount of data.

For example, to run a volume test against a specific database size, you expand the database to that size and then test the application's performance against it.

You can also create a sample file of the required size and test the application's functionality and performance with that file. If no problems arise, the application handles that volume acceptably. Volume testing can also be used in component testing.
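As a runnable sketch using an in-memory SQLite database (the table, row count, and query are arbitrary choices for illustration), a volume test loads a large number of rows and checks that a typical query still completes quickly:

```python
import sqlite3
import time

# Load a large volume of rows into the database under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany(
    "INSERT INTO orders (amount) VALUES (?)",
    ((i % 100 + 0.5,) for i in range(200_000)),
)
conn.commit()

# Time a typical query against the fully loaded table.
start = time.perf_counter()
(total,) = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount > 50"
).fetchone()
elapsed = time.perf_counter() - start
print(f"matched {total} of 200000 rows in {elapsed * 1000:.1f} ms")
```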

 White Box Testing

White-box testing is concerned with testing the implementation of the program. The intent of this testing is not to exercise all the different input or output conditions but to exercise the different programming structures and data structures used in the program.

White-box testing is also called structural testing or glass-box testing. It is good practice to perform white-box testing during the unit-testing phase.
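As a small illustration (the classify function is invented for this sketch), white-box tests are chosen by reading the implementation so that every branch is exercised:

```python
# Implementation visible to the white-box tester.
def classify(n):
    if n < 0:
        return "negative"        # branch 1
    elif n == 0:
        return "zero"            # branch 2
    elif n % 2 == 0:
        return "positive even"   # branch 3
    else:
        return "positive odd"    # branch 4

# One test per branch gives full branch coverage of classify().
assert classify(-3) == "negative"
assert classify(0) == "zero"
assert classify(4) == "positive even"
assert classify(7) == "positive odd"
print("all four branches exercised")
```

Note the contrast with the black-box approach: the inputs here come from the code's structure, not from the specification's input classes.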

Tuesday, April 2, 2013

Status report template


This template is used to report the status of your project to all the stakeholders on a periodic basis.

It can be customized to report on a daily or weekly basis.

Status report template

Minutes of meeting template


1. Enter the details of the meetings conducted
2. Enter the action items to be tracked
3. Action items can be tracked to closure


Download MOM template here

Test case template


Steps to be followed to use the test case template:

1. Please don't add or delete columns in the "testcases" sheet and "data" sheet of the template
2. Add the list of features in the "data" sheet; the list will be populated in the "feature" column of the "testcases" sheet. Fill in all the required columns in the "testcases" sheet
3. Run the macro
4. Results will be populated in the top table of the "testcases" sheet, and the graph will be updated in the "Graphs" sheet. These details can be used in reporting to customers or management
5. Fill in the summary sheet for maintenance purposes

Download Test case template here

Monday, April 1, 2013

Defect Report Best Practices


1. Defect report should have all the below items

   a. Summary of the defect: It should be a high-level description of the defect.
    Example: Installation failed on Win 7 OS

   b. Description: Describe all the steps to be followed to reproduce the defect in detail along with the Expected result and Actual result.

    i. Steps to be followed: These would describe the exact sequence of steps to reproduce the defect
    ii. Expected result: The outcome that is expected by following the above steps
    iii. Actual result: The outcome of the steps while running the test case or following the steps at that point in time

   c. Environment details: Specify the test environment details including the OS, Browsers or any other specific hardware or software used.

    d. Reporter: Your email id or name should be mentioned. If any tool is being used, this field is automatically filled in with the logged in user details

    e. Version: Mention the exact version of the product in which the defect arises

    f. Component: This is the name of the module or feature.

    g. Priority: When should the bug be fixed? Priority is generally set from P1 to P3, with P1 being the highest priority and P3 the lowest.

    h. Severity: This depicts the impact of the defect on the application. Types of Severity include the below:

    Blocker: No further testing work can be done.
    Critical: Application crash, Loss of data.
    Major: Major loss of function.
    Minor: Minor loss of function.
    Trivial: Some UI enhancements.
    Enhancement: Request for new feature or some enhancement in existing one.

i. Status: The status of the defect which can be one of the below

    a. New
    b. Assigned
    c. Fixed
    d. Invalid
    e. Deferred
    f. Closed
    g. Reopen
    h. Duplicate

j. Screen shots: Attach all the required screenshots so that the developers can understand the behavior.

k. Assigned to: Mention the name of the person to whom this defect is to be assigned for further processing.
 

2. Do not use any abusive language.

3. Reproduce the defect more than twice before reporting it.

4. Check if the defect is already reported to avoid duplicate defects.

5. Write a good summary.

6. Last but not least, report the defect immediately.
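The fields listed above can be modeled as a simple record. This dataclass is an illustrative sketch, not the schema of any real defect tracker; the description text is an invented example:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    summary: str
    description: str        # steps, expected result, actual result
    environment: str
    reporter: str
    version: str
    component: str
    priority: str           # P1 (highest) to P3 (lowest)
    severity: str           # Blocker, Critical, Major, Minor, Trivial, Enhancement
    status: str = "New"     # New, Assigned, Fixed, Invalid, Deferred, ...
    assigned_to: str = ""
    screenshots: list = field(default_factory=list)

bug = DefectReport(
    summary="Installation failed on Win 7 OS",
    description="Steps: run the installer. Expected: installs. Actual: fails.",
    environment="Windows 7 SP1, 64-bit",
    reporter="tester@example.com",
    version="2.1.0",
    component="Installer",
    priority="P1",
    severity="Critical",
)
print(bug.summary, "-", bug.status)
```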

Test case authoring best practices


1. Every test case should be accurate and tests what it is intended to test with clearly mentioned preconditions, steps to be followed and the expected result.

2. Language should be simple, and any person should be able to understand it.

3. There should be no ambiguities as these would be distributed across other testers who are unaware of the functionality of the product and can only understand by going through the test cases.

4. There should be no hidden information; environment details must be mentioned explicitly. Example: "The website is to be tested on all browsers." Here there should be a reference to the specific browsers to be considered.

5. It should be reusable. As the product goes through different versions, the test cases written for the first version should be in a format that can be reused for later versions as well.

6. It should be traceable to requirements. This gives the coverage details as to what requirements were covered.

7. It should be compliant with regulations.

8. Each test case should be independent.

9. Every condition/flow should have a separate test case.

10. Always write small test cases.

Test Case

Definition

A test case is a set of steps with input values, expected output and the preconditions for testing a functionality.

IEEE Standard 610 (1990) defines test case as follows:

A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

Objectives behind writing and executing the test cases:

    1. To maintain the knowledge on the product in one place.
    2. To maintain the track of features tested.
    3. Understand the coverage of testing.
    4. Analyze the buggiest features.
    5. Maintain traceability across requirements, test cases and defects.
    6. Compliance with processes.
    7. Help management to make software delivery decisions.
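The definition and objectives above can be modeled as a simple record; the field names in this sketch are illustrative, not taken from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement_id: str   # traceability back to the requirement
    preconditions: list
    steps: list           # exact sequence of steps to follow
    input_values: dict
    expected_result: str
    status: str = "Not Run"

tc = TestCase(
    case_id="TC-001",
    requirement_id="REQ-12",
    preconditions=["User account exists"],
    steps=["Open login page", "Enter credentials", "Click Login"],
    input_values={"username": "alice", "password": "secret"},
    expected_result="User lands on the dashboard",
)
# The requirement_id link is what makes coverage reporting possible.
print(f"{tc.case_id} traces to {tc.requirement_id}")
```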
