Friday, May 23, 2008

Flight reservation Master Test plan

Follow the link below for a Master Test Plan; it's just an example:

http://docs.google.com/Doc?id=dgtthv5g_76g8rm3ngm

Thursday, May 22, 2008

Concepts: Test Strategy

A strategy for the testing portion of a project describes the general approach and objectives of the test activities. It includes which stages of testing (unit, integration and system) are to be addressed and which kinds of testing (function, performance, load, stress, etc.) are to be performed.

The strategy defines:

  • Testing techniques and tools to be employed.

  • What test completion and success criteria are to be used. For example, the criteria might allow the software to progress to acceptance testing when 95 percent of the test cases have been successfully executed. Another criterion is code coverage. This criterion may, in a safety-critical system, be that 100% of the code should be covered by tests.

  • Special considerations that affect resource requirements or have schedule implications, such as:

  • The testing of interfaces to external systems.

  • Simulating physical damage or security threat.

Some organizations have corporate test strategies defined; in that case, you work to apply those strategies to your specific project.

The most important dimensions you should plan your test activities around are:

  • Which iteration you are in, and what the goals of that iteration are.

  • What stage of test (unit test, integration test, system test) you are performing. You may work all stages of test in one iteration.

Now take a look at how the characteristics of your test activities can change depending on where you are in the above-mentioned "test dimensions". There are of course many characteristics you could look at, such as resources needed and time spent, but at this point, focus on what is important to defining your test strategy:

  • Types of test (functional, stress, volume, performance, usability, distribution, and so on).

  • Evaluation criteria used (code-based test coverage, requirements-based test coverage, number of defects, mean time between failure, and so on).

  • Testing techniques used (manual and automated).

There is no general pattern for how the types of tests are distributed over the test cycles. Depending on the number of iterations, the size of the iteration, and what kind of project this is, you will focus on different types of tests.

You will find that the system test stage has a strong focus on making sure you are covering all testable requirements, expressed in terms of a set of test cases. This means your completion criteria will focus on requirements-based test coverage. In the integration and unit test stages, you will find code-based test coverage is a more appropriate completion criterion. The emphasis between these two types of coverage measures typically shifts as you develop new iterations of your software.

  • The test plan should define sets of completion criteria for unit test, integration test and system test.

  • You may have different sets of completion criteria defined for individual iterations.

In your project you should consider automating your tests as much as possible, especially the kinds of tests you repeat several times (regression tests). But keep in mind that it costs time and resources to create and maintain automated tests. There will always be some amount of manual testing in each project.

Example:

The following tables show when the different types of tests are identified, and provide an example of the completion criteria to define. The first table shows a "typical" MIS project:

Iteration 1

  • System test: Automated performance testing for all use cases. Completion criteria: all planned tests have been executed; all severity 1 defects have been addressed; all planned tests have been re-executed and no new severity 1 defects identified.
  • Integration test: None
  • Unit test: Informal testing

Iteration 2

  • System test: Automated performance and functionality testing for all new use cases, with the previous tests run as regression tests. Completion criteria: all planned tests have been executed; all severity 1 and 2 defects have been addressed; all planned tests have been re-executed and no new severity 1 or 2 defects identified.
  • Integration test: None
  • Unit test: Informal testing

Iteration 3

  • System test: Automated functionality and negative testing for all new use cases, with all previous tests run as regression tests; 95% of test cases have to pass. Completion criteria: all planned tests have been executed; all severity 1, 2, and 3 defects have been addressed.
  • Integration test: Automated testing, 70% code coverage.
  • Unit test: Informal testing

Iteration 4

  • System test: Automated functionality and negative testing for all use cases, manual testing for all parts that are not automated, and all previous tests run as regression tests; 100% of test cases have to pass. Completion criteria: all planned tests have been executed; all severity 1, 2, and 3 defects have been addressed; all planned tests have been re-executed and no new severity 1 or 2 defects identified.
  • Integration test: Automated testing, 80% code coverage.
  • Unit test: Informal testing

The second table shows types of test and completion criteria applied for a "typical" safety-critical system:

Iteration 1

  • System test: Automated performance testing for all use cases, 100% test-case coverage. Completion criteria: all planned tests have been executed; all severity 1 defects have been addressed; all planned tests have been re-executed and no new defects identified.
  • Integration test: None
  • Unit test: None

Iteration 2

  • System test: Automated performance, functionality, and negative testing for all use cases, 100% test-case coverage. Completion criteria: all planned tests have been executed; all severity 1 and 2 defects have been addressed; all planned tests have been re-executed and no new defects identified.
  • Integration test: Automated performance testing
  • Unit test: Informal testing

Iteration 3

  • System test: Automated performance, functionality, negative, usability, and documentation testing for all use cases, 100% test-case coverage. Completion criteria: all planned tests have been executed; all severity 1, 2, and 3 defects have been addressed; all planned tests have been re-executed and no new defects identified.
  • Integration test: Automated performance testing, with the previous tests run as regression tests
  • Unit test: Automated testing, 70% code coverage

Iteration 4

  • System test: Automated performance, functionality, negative, usability, and documentation testing for all use cases, 100% test-case coverage. Completion criteria: all planned tests have been executed; all severity 1, 2, and 3 defects have been addressed; all planned tests have been re-executed and no defects identified.
  • Integration test: Automated performance testing, with the previous tests run as regression tests
  • Unit test: Automated testing, 80% code coverage



Select appropriate implementation technique

Purpose:

To determine the appropriate technique to implement the test.


Select the most appropriate technique to implement the test. For each test that you want to conduct, consider implementing at least one Test Script. In some instances, the implementation for a given test will span multiple Test Scripts. In others, a single Test Script will provide the implementation for multiple tests.

Typical methods for implementing Test Scripts include manual execution, programming, recording, and generation. Each method is discussed in the following sections.

As with most approaches, you'll get more useful results if you use a mixture of the following techniques. While you don't need to use them all, you shouldn't confine yourself to a single technique either.

Manual Test Scripts

Many tests are best conducted manually. Usability tests are an area where manual testing is in most cases a better solution than an automated one. Tests that require validation of the accuracy and quality of the physical outputs from a software system also generally require manual validation. As a general heuristic, it's a good idea to begin the first tests of a particular Target Test Item with a manual implementation; this approach allows the tester to learn about the target item, adapt to unexpected behavior from it, and apply human judgment to determine the next appropriate action to be taken.

Sometimes manually conducted tests will be subsequently automated and reused as part of a regression testing strategy. Note however that it isn't necessary or desirable—or even possible—to automate every test that you could otherwise conduct manually. Automation brings certain advantages in speed and accuracy of test execution, visibility and collation of detailed test outcomes and in efficiency of creating and maintaining complex tests, but like all useful tools, it isn't the solution to all your needs.

Automation comes with certain disadvantages: these basically amount to an absence of human judgment and reasoning during test execution. Current automation solutions don't have the cognitive abilities that a human does—and it's arguably unlikely that they ever will. During implementation of a manual test, human reasoning can be applied to the observed system responses to stimulus. Automated test techniques and their supporting tools typically have minimal ability to notice the implications of certain system behaviors, and currently no ability to infer possible problems through deductive reasoning.

Programmed Test Scripts

This is arguably the method of choice for most testers who practice test automation. In its purest form, this practice is performed in the same manner and using the same general principles as software programming. As such, most methods and tools used for software programming are generally applicable and useful to test automation programming.

Using either a standard software development environment (such as Microsoft Visual Studio or IBM Visual Age) or a specialized test automation development environment (such as the IDE provided with Rational Robot), the tester is free to harness the features and power of the development environment to best effect.
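
To make this concrete, here is a minimal sketch of a programmed Test Script in plain VBScript (runnable with cscript on Windows). The function under test and the expected values are invented for the example; the point is the structure: a small reusable verification helper plus a set of test-case calls.

' Hypothetical function under test: computes an order total with a 10% volume discount.
Function CalculateTotal(unitPrice, quantity)
    Dim total
    total = unitPrice * quantity
    If quantity >= 10 Then total = total - (total / 10)
    CalculateTotal = total
End Function

' Reusable verification helper: logs a pass or fail result for one check.
Sub Verify(description, expected, actual)
    If expected = actual Then
        WScript.Echo "PASS: " & description
    Else
        WScript.Echo "FAIL: " & description & " (expected " & expected & ", got " & actual & ")"
    End If
End Sub

' The test cases themselves.
Verify "No discount below 10 units", 45, CalculateTotal(5, 9)
Verify "Discount applied at 10 units", 45, CalculateTotal(5, 10)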

The negative aspects of programming automated tests are related to the negative aspects of programming itself as a general technique. For programming to be effective, some consideration should be given to appropriate design: without this, the implementation will likely fail. If the developed software will likely be modified by different people over time—the usual situation—then some consideration must be given to adopting a common style and form to be used in program development, and to ensuring its correct use. Arguably the two most important concerns relate to the misuse of this technique.

First, there is a risk that a tester will become engrossed in the features of the programming environment, and spend too much time crafting elegant and sophisticated solutions to problems that could be achieved by simpler means. The result is that the tester wastes precious time on what are essentially programming tasks to the detriment of time that could be spent actually testing and evaluating the Target Test Items. It requires both discipline and experience to avoid this pitfall.

Secondly, there is the risk that the program code used to implement the test will itself have bugs introduced through human error or omission. Some of these bugs will be easy to debug and correct in the natural course of implementing the automated test; others won't. Just as errors can be elusive to detect in the Target Test Item, it can be equally difficult to detect errors in test automation software. Furthermore, errors may be introduced where algorithms used in the automated test implementation are based on the same faulty algorithms used by the software implementation itself. This results in errors going undetected, hidden by the false security of automated tests that apparently execute successfully. Mitigate this risk by using different algorithms in the automated tests wherever possible.
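
One practical way to apply that advice is sketched below in plain VBScript: the test computes its expected value with a deliberately different algorithm from the one the application code uses. The tax calculation and the rate are made up for the illustration.

' Hypothetical application routine, computing tax item by item.
Function AppInvoiceTax(prices, taxRate)
    Dim i, tax
    tax = 0
    For i = 0 To UBound(prices)
        tax = tax + prices(i) * taxRate
    Next
    AppInvoiceTax = tax
End Function

' The test computes the expected value a different way: total the prices first, then apply the rate once.
Dim invoicePrices, expectedTax, actualTax
invoicePrices = Array(10, 20, 30)
expectedTax = (10 + 20 + 30) * 0.05
actualTax = AppInvoiceTax(invoicePrices, 0.05)

If Abs(expectedTax - actualTax) < 0.005 Then
    WScript.Echo "PASS: tax matches the independently computed value"
Else
    WScript.Echo "FAIL: expected " & expectedTax & " but the application computed " & actualTax
End If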

Recorded or captured Test Scripts

There are a number of test automation tools that provide the ability to record or capture human interaction with a software application and produce a basic Test Script. Most of these tools produce a Test Script implemented in some form of high-level, normally editable, programming language. The most common designs work in one of the following ways:

  • by capturing the interaction with the client UI of an application, based on intercepting the inputs sent from the client hardware peripheral input devices (mouse, keyboard, and so forth) to the client operating system. In some solutions, this is done by intercepting high-level messages exchanged between the operating system and the device driver that describe the interactions in a somewhat meaningful way; in other solutions it is done by capturing low-level messages, often at the level of time-based movements in mouse coordinates or key-up and key-down events.

  • by intercepting the messages sent and received across the network between the client application and one or more server applications. The successful interpretation of those messages typically relies on the use of standard, recognized messaging protocols, such as HTTP, SQL, Tuxedo, and so forth. Some tools also allow the capture of "base" communications protocols such as TCP/IP; however, it can be more complex to work with Test Scripts of this nature.

While these techniques are generally useful to include as part of your approach to automated testing, some practitioners feel these techniques have limitations. One of the main concerns is that some tools simply capture application interaction and do nothing else. Without the additional inclusion of observation points that capture and compare system state during subsequent script execution, the basic Test Script cannot be considered to be a fully-formed test. Where this is the case, the initial recording will need to be subsequently augmented with additional custom program code to implement observation points within the Test Script.
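
For example, a raw recording often amounts to little more than the first two lines below (QTP-style test objects; the application, page, and field names are hypothetical). The remaining lines are the kind of observation point you would add afterwards so the script actually verifies something.

' Recorded interaction: navigation only, no verification yet.
Browser("OrderApp").Page("New Order").WebEdit("Quantity").Set "3"
Browser("OrderApp").Page("New Order").WebButton("Add Item").Click

' Added observation point: capture the displayed order total and compare it with the expected value.
Dim displayedTotal
displayedTotal = Browser("OrderApp").Page("New Order").WebElement("OrderTotal").GetROProperty("innertext")
If Trim(displayedTotal) = "30.00" Then
    Reporter.ReportEvent micPass, "Order total", "Total is 30.00 as expected"
Else
    Reporter.ReportEvent micFail, "Order total", "Expected 30.00 but found " & displayedTotal
End If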

Various authors have published books and essays on this and other concerns related to using test procedure record or capture as a test automation technique. To gain a more in-depth understanding of these issues, we recommend reviewing the work available on the Internet by the following authors: James Bach, Cem Kaner, Brian Marick, and Bret Pettichord, and the relevant content in the book Lessons Learned in Software Testing [KAN99].

Generated Tests

Some of the more sophisticated test automation software enables the actual generation of various aspects of the test—either the procedural aspects or the Test Data aspects of the Test Script—based on generation algorithms. This type of automation can play a useful part in your test effort, but shouldn't be considered as the only approach used. Both Rational TestFactory and the Rational TestManager datapool generation feature are examples of implementations of this type of technology.
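
The following plain-VBScript sketch is not any particular tool's generator; it simply illustrates the idea behind this kind of automation by generating a datapool-style Test Data file from combinations of input values rather than writing every row by hand. The output path and field values are assumptions.

Dim fso, pool, country, amount
Set fso = CreateObject("Scripting.FileSystemObject")
Set pool = fso.CreateTextFile("C:\TestData\payment_pool.csv", True)   ' hypothetical output location

pool.WriteLine "Country,Amount,Currency"

' Generate one data row for every combination of the input values.
For Each country In Array("US", "DE", "IN")
    For Each amount In Array(0, 1, 999999)
        pool.WriteLine country & "," & amount & "," & CurrencyFor(country)
    Next
Next
pool.Close

Function CurrencyFor(countryCode)
    Select Case countryCode
        Case "US": CurrencyFor = "USD"
        Case "DE": CurrencyFor = "EUR"
        Case "IN": CurrencyFor = "INR"
    End Select
End Function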

Set up test environment preconditions

Purpose:

To bring the environment to the correct starting state.


Set up the test environment to ensure that all the needed components (hardware, software, tools, data, etc.) have been implemented and are in the test environment, ready in the correct state to enable the tests to be conducted. Typically this will involve some form of basic environment reset (e.g. Registry and other configuration files), restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some tasks can be performed automatically, some aspects typically require human attention.
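
As an illustration only, a small reset script might look like the plain-VBScript sketch below. The file paths, database name, and the use of sqlcmd to restore a SQL Server backup are assumptions about the environment, not requirements.

Dim fso, shell
Set fso   = CreateObject("Scripting.FileSystemObject")
Set shell = CreateObject("WScript.Shell")

' 1. Put the application configuration back to a known baseline copy (hypothetical paths).
fso.CopyFile "C:\Baseline\app.config", "C:\Program Files\OrderApp\app.config", True

' 2. Restore the test database from a known backup (assumes SQL Server and sqlcmd are available).
shell.Run "sqlcmd -S localhost -Q ""RESTORE DATABASE OrderDb FROM DISK='C:\Baseline\OrderDb.bak' WITH REPLACE""", 1, True

' 3. Clear out any files left over from earlier runs.
If fso.FolderExists("C:\OrderApp\Output") Then fso.DeleteFolder "C:\OrderApp\Output", True
fso.CreateFolder "C:\OrderApp\Output"

WScript.Echo "Test environment reset complete."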

(Optional) Manual walk-through of the test

Purpose:

To verify all the required elements of the test are present and allow the test to be successfully implemented.


Especially applicable to automated Test Scripts, it can be beneficial to first walk through the test manually to confirm the expected prerequisites are present. During the walk-through, you should verify the integrity of the environment, the software, and the test design. The walk-through is most relevant where you are using an interactive recording technique, and least relevant where you are programming the Test Script.

Where the software is known to be sufficiently stable or mature, you may elect to skip this step if you deem the risk of problems occurring in the areas the manual walk-through addresses to be relatively low.

Identify and confirm appropriateness of Test Oracles

During this walk-through, confirm that the Test Oracles you plan to use are appropriate. Where they have not already been identified, now is the time for you to do so.

You should try to confirm through alternative means that the chosen Test Oracle(s) will provide accurate and reliable results. For example, if you plan to validate test results using a field displayed via the application's GUI that indicates a database update has occurred, consider independently querying the back-end database to verify the state of the corresponding records in the database. Alternatively, you might ignore the results presented in an update confirmation dialog, and instead confirm the update by querying for the record through an alternative front-end function or operation.
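
A sketch of such an independent database check, written in plain VBScript with ADO, is shown below; the connection string, table, and column names are invented for the example.

Dim conn, rs, orderId
orderId = 10042   ' hypothetical order created through the GUI earlier in the test

Set conn = CreateObject("ADODB.Connection")
conn.Open "Provider=SQLOLEDB;Data Source=localhost;Initial Catalog=OrderDb;Integrated Security=SSPI"

' Independent oracle: ignore what the GUI reported and inspect the stored record directly.
Set rs = conn.Execute("SELECT Status FROM Orders WHERE OrderId = " & orderId)

If rs.EOF Then
    WScript.Echo "FAIL: order " & orderId & " was not written to the database"
ElseIf rs.Fields("Status").Value = "Confirmed" Then
    WScript.Echo "PASS: order " & orderId & " stored with status Confirmed"
Else
    WScript.Echo "FAIL: order " & orderId & " stored with status " & rs.Fields("Status").Value
End If

rs.Close
conn.Close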

Reset test environment and tools

Purpose:

To bring the environment and the supporting tools to the correct starting state.


Next you should restore the environment back to its original state. Typically this will involve some form of basic operating environment reset (e.g. Registry and other configuration files), restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some reset tasks can be performed automatically, some aspects typically require human attention.

Set the implementation options of the supporting tools. Depending on the sophistication of the tool, there may be many options to consider. Failing to set these options appropriately may reduce the usefulness and value of the resulting test assets. Where possible, you should try to store these tool options and settings so that they can be reloaded easily based on one or more predetermined profiles.

In the case of automated test implementation tools, there may be many different settings to be considered. In the case of manual testing, it may be a simple matter of creating a new entry in a support system for logging results and signing in to the issue and change request logging systems.

Implement the test

Purpose:

To successfully implement a reusable Test Script and identify any necessary Change Requests.


Using the Test-Ideas List, one or more selected Test Cases or the Workload Analysis Model, begin to implement the test. Start by giving the test a uniquely identifiable name (if it does not already have one) and prepare the IDE, capture tool or document to begin recording the specific steps of the test. Work through the following sections as many times as are required to implement the test.

Implement navigation actions

Program, record, or generate the required navigation actions. Start by selecting the appropriate navigation method. For most classes of system these days, a mouse or other pointing device is the preferred and primary medium for navigation. For example, the pointing and scribing device used with a Personal Digital Assistant (PDA) is conceptually equivalent to a mouse.

The secondary navigation means is generally that of keyboard interaction. In most cases, navigation will be made up of a combination of mouse-driven and keyboard-driven actions.

In some cases, you will need to consider voice-activated, light, visual and other forms of recognition. These can be more troublesome to automate tests against, and may require the addition of special test extensions to the application to allow audio and visual elements to be loaded and processed from file rather than captured dynamically.

In some situations, you may want to—or need to—perform the same test using multiple navigation methods. There are different approaches you can take to achieve this, for example: automate all the tests using one method and manually perform all or some subset of the tests using others; separate the navigation aspects of the tests from the Test Data that characterize the specific test, providing and building a logical navigation interface that allows either method to be selected to drive the test; simply mix and match navigation methods.
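
The plain-VBScript sketch below shows one way to shape such a logical navigation interface: the test states its intent through a single routine, and a parameter decides whether the step is driven by mouse-style or keyboard-style actions. The routine bodies are placeholders; in a real automated test they would call the appropriate test-object methods.

' Logical navigation interface: the test names the intent, the chosen method decides how it is performed.
Sub OpenOrderForm(navigationMethod)
    Select Case LCase(navigationMethod)
        Case "mouse"
            WScript.Echo "Clicking the Orders menu, then the New Order toolbar button"
        Case "keyboard"
            WScript.Echo "Sending Alt+O followed by Ctrl+N to open the order form"
        Case Else
            WScript.Echo "Unknown navigation method: " & navigationMethod
    End Select
End Sub

' The same logical test step can now be executed with either method.
OpenOrderForm "mouse"
OpenOrderForm "keyboard"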

Implement observation points

At each point in the Test Script where an observation should be taken, use the appropriate Test Oracle to capture the desired information. In many cases, the information gained from the observation point will need to be recorded and retained to be referenced during subsequent control points.

Where this is an automated test, decide how the observed information should be reported from the Test Script. In most cases it is usually appropriate simply to record the observation in a central Test Log relative to its delta-time from the start of the Test Script; in other cases specific observations might be output separately to a spreadsheet or data file for more sophisticated uses.
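
A minimal plain-VBScript illustration follows: a helper that writes each observation to a central log file together with its delta-time from the start of the Test Script. The log path and observation names are assumptions.

Dim fso, testLog, scriptStart
Set fso = CreateObject("Scripting.FileSystemObject")
Set testLog = fso.OpenTextFile("C:\TestLogs\run.log", 8, True)   ' 8 = ForAppending; hypothetical path
scriptStart = Timer   ' seconds since midnight when the Test Script starts

Sub RecordObservation(name, value)
    ' Record the observation with its delta-time from the start of the script.
    testLog.WriteLine Round(Timer - scriptStart, 2) & "s  " & name & " = " & value
End Sub

' Example observation points taken during the test.
RecordObservation "LoginPageTitle", "Welcome"
RecordObservation "OrderTotal", "30.00"

testLog.Close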

Implement control points

At each point in the Test Script where a control decision should be taken, obtain and assess the appropriate information to determine the correct branch for the flow of control to follow. The data retrieved from prior observation points are usually input to control points.

Where a control point occurs and a decision is made about the next action in the flow of control, we recommend you record in the Test Log both the input values to the control point and the resulting flow that is selected.
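
Here is a small plain-VBScript sketch of a control point: a value captured at an earlier observation point decides which branch the script follows, and both the input value and the selected flow are recorded (echoed here in place of a real Test Log entry).

Dim observedStatus
observedStatus = "BACKORDERED"   ' value captured at a prior observation point (hypothetical)

' Control point: record the input value and the branch that is selected, then follow it.
WScript.Echo "Control point 'order status': input = " & observedStatus

Select Case observedStatus
    Case "IN STOCK"
        WScript.Echo "Selected flow: continue with the standard checkout steps"
    Case "BACKORDERED"
        WScript.Echo "Selected flow: verify the back-order notification instead"
    Case Else
        WScript.Echo "Selected flow: abort the test and flag the unexpected status"
End Select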

Resolve implementation errors

During test implementation, you'll encounter errors, omissions and other issues that need to be resolved before the test can be implemented completely. Identify each error you encounter and work through addressing them.

In the case of test automation, this might include compilation errors due to undeclared variables and functions, or invalid use of those functions. Work your way through the error listings from the compiler and other sources until the Test Script is free of syntactical and other obvious errors.

Establish external data sets

Purpose:

To create and maintain data, stored externally to the test scripts, that are used by the test scripts during test execution.


In many cases it's more appropriate to maintain your Test Data external to the Test Script. This provides flexibility, simplicity, and security in Test Script and Test Data maintenance. External data sets provide value to testing in the following ways (a short sketch follows the list):

  • Test Data is external to the Test Script, eliminating hard-coded references in the Test Script.

  • External Test Data can be modified easily, usually with minimal Test Script impact.

  • Additional Test Cases can easily be supported by the Test Data with little or no Test Script modification.

  • External Test Data can be shared with many Test Scripts.

  • Test Scripts can be developed to use external Test Data to control the conditional branching logic within the Test Script.
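
As a sketch of the idea in plain VBScript, the loop below drives one Test Script from the rows of an external CSV file; the file name and column layout are assumptions. Adding Test Cases then means adding rows to the data file, not editing the script.

Dim fso, dataFile, dataLine, fields
Set fso = CreateObject("Scripting.FileSystemObject")
Set dataFile = fso.OpenTextFile("C:\TestData\logins.csv", 1)   ' 1 = ForReading; hypothetical file
dataFile.ReadLine   ' skip the header row: UserName,Password,ExpectedResult

Do Until dataFile.AtEndOfStream
    dataLine = dataFile.ReadLine
    If Trim(dataLine) <> "" Then
        fields = Split(dataLine, ",")
        ' One test per data row: the script logic stays the same, only the data changes.
        WScript.Echo "Logging in as " & fields(0) & ", expecting " & fields(2)
    End If
Loop

dataFile.Close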

Recover test environment to known state

Purpose:

To ensure the environment is properly cleaned up after Test Script development.


Again, you should restore the environment back to its original state. Typically this will involve some form of basic environment reset (e.g. Registry and other configuration files), restoration of underlying databases to a known state, and so forth, in addition to tasks such as loading paper into printers. While some tasks can be performed automatically, some aspects typically require human attention.

Setup tools and initiate test execution

Purpose:

To verify the correct workings of the Test Script by executing the Test Script.


When you have completed the basic implementation of the Test Script, it should be tested to ensure it implements the individual tests appropriately and that they execute properly.

It's a good idea to perform this step using the same build version of the software that was used to implement the Test Scripts. This eliminates the possibility of problems due to errors introduced in subsequent builds.

Resolve execution errors

Purpose:

To stabilize the workings of the test when executed.


It's pretty common that things done and approaches used during implementation will need some degree of "tweaking" to adjust the test to run in one or more Test Environment Configurations.

Be prepared to spend some time checking that the tests "function within tolerances" and adjusting them until they do before you declare the test implemented. While you can delay this step until later, we recommend that you don't; otherwise you could end up with a significant backlog of failures to be addressed later.

Restore test environment to known state

Purpose:

To leave the environment either the way you found it, or in the required state to implement the next test.


This step might seem trivial, but it's an important habit to form in order to work effectively with the other testers on the team—especially where the implementation environment is shared. It's also important to establish a routine that makes thinking about the system state second nature.

In a primarily manual testing effort it's often simple to identify and fix environment restore problems, but remember that test automation has much less ability to tolerate unanticipated problems with environment state.

Maintain traceability relationships

Purpose:

To enable impact analysis and assessment reporting to be performed on the traced items.


Using the Traceability requirements outlined in the Test Plan, update the traceability relationships as required.

Evaluate and verify your results

Purpose:

To verify that the activity has been completed appropriately and that the resulting artifacts are acceptable.


Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and that it is complete enough to be useful to those team members who will make subsequent use of it as input to their work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".

Have the people performing the downstream activities that rely on your work as input take part in reviewing your interim work. Do this while you still have time available to take action to address their concerns. You should also evaluate your work against the key input artifacts to make sure you have represented them accurately and sufficiently. It may be useful to have the author of the input artifact review your work on this basis.

Try to remember that RUP is an iterative process and that in many cases artifacts evolve over time. As such, it is not usually necessary—and is often counterproductive—to fully form an artifact that will only be partially used or will not be used at all in immediately subsequent work. This is because there is a high probability that the situation surrounding the artifact will change—and the assumptions made when the artifact was created will be proven incorrect—before the artifact is used, resulting in wasted effort and costly rework. Also avoid the trap of spending too many cycles on presentation to the detriment of content value. In project environments where presentation has importance and economic value as a project deliverable, you might want to consider using an administrative resource to perform presentation tasks.


Monday, January 7, 2008

File System Filter Driver Test Strategy

The purpose of this post is to define the software testing scope and purpose for the File System Filter driver. This post will not include any test steps; rather, it will focus on the strategy and objectives of the testing tasks.


Filter Driver Functional Requirements:

  • To provide file mangling/unmangling for untrusted files.

Filter Driver Behavioral Pre-Conditions:

  • A file is indicated as untrusted by having a GreenFrame around the file icon.

  • Applications launched in the untrusted environment will have a GreenFrame around the application window.

  • Accessing a trusted document from an untrusted "file open" window will open a read-only copy of the file inside the untrusted environment.

  • Trusted files will not be able to open untrusted files; they will receive access denied.


Client Configurations/Environment:

  • Microsoft Windows versions
    • Windows XP, Windows XP SP1
    • Windows 2000 SP3, Windows 2000 SP4

  • Microsoft Office suite
    • Office XP
    • Office 2000

  • Third-party applications
    • WinZip 9.0
    • Adobe Acrobat, Acrobat Reader 6.0


Test Strategy:

  • Verify that opening an untrusted file launches the application in the untrusted environment.
    • The application window will have a GreenFrame around it, indicating the application is running in the untrusted environment.

  • Verify the user is able to perform the following functions on both local and network drives:
    • Edit and save an untrusted file.
      • The file will remain untrusted after changes are made to it.
    • Cut and paste untrusted files from one location to another.
      • The file will remain untrusted in its new location.
    • Copy and paste an untrusted file from one location to another.
      • The file will remain untrusted in both locations.
    • Drag and drop untrusted files from one location to another.
      • The file will remain untrusted in its new location.
    • Rename an untrusted file.
      • The file will remain untrusted after it is renamed.
    • Delete an untrusted file.
      • The untrusted file will be placed in the Recycle Bin.

  • Verify that guarded files no longer exist.
    • Files with !!gb in the name should never appear as untrusted files.
    • Selecting "Remove Access Restriction" from the context menu immediately makes a file trusted.

  • Verify that after removing GreenBorder protection from an untrusted file, all access to the file is handled in the trusted environment.
    • The file icon no longer has a GB frame around it.
    • The "Remove Access Restriction" verb in the context menu is grayed out.
    • The content can be accessed by non-GB clients.

  • Verify an untrusted file cannot be viewed correctly with GB disabled.
    • The user will see mangled content when GB is disabled.


Wednesday, August 8, 2007

Programming Tutorials and Info Resources

This is a veritable link farm of learning material for folks wanting to beef up on their programming skills.

Be sure to work your way through the entire list to make sure you find all the materials related to the programming language that you are interested in.

Here is the link

http://www.infosyssec.net/infosyssec/prog1.htm

Wednesday, July 18, 2007

What's your Automation Framework?

I have seen umpteen posts in every QTP group I have visited asking about test automation frameworks. Here I would like to collate the useful inputs I received from various QTP groups and discussion forums.

The Keyword Driven framework consists of the basic components given below:

  1. Control File
  2. Test Case File
  3. Startup Script
  4. Driver Script
  5. Utility Script

1. Control File :

  • Contains details of all the test scenarios to be automated.
  • User will be able to select a specific scenario to execute based on turning on or off a flag in the Control File.
  • The Control File is in the form of an Excel worksheet and contains columns for Scenario ID, Execute (Y/N), Object Repository Path, and Test Case File Path.

2. Test Case File :

  • Contains the detailed steps to be carried out for the execution of a test case.
  • It is also in the form of an excel sheet and contains columns for Keyword, Object Name, Parameter.

3. Startup Script :

  • The Startup script performs initialization and reads the Control File.
  • It then calls the driver script to execute all the scenarios marked for execution in the control file.

4. Driver Script :

  • Reads the Test Case files, checks the keywords, and calls the appropriate utility script functions based on each keyword.
  • Error Handling is taken care of in the driver script.

5. Utility Scripts :

  • Perform generic tasks that can be used across applications. Utility scripts should not be application dependent.

Advantage of the Framework

The main advantage of this framework is the low cost of maintenance. If there is a change to any test case, then only the Test Case File needs to be updated; the Driver Script and Startup Script remain the same. There is no need to update the scripts for most changes to the application. A minimal sketch of such a driver script is given below.
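
Here is a minimal sketch of such a Driver Script in plain VBScript, driving an Excel Test Case File through COM automation (runnable with cscript when Excel is installed). The file path, sheet layout, and keyword names are assumptions; the WScript.Echo lines stand in for calls to the Utility Script functions.

Dim excel, book, sheet, row, keyword, objectName, parameter

Set excel = CreateObject("Excel.Application")
Set book  = excel.Workbooks.Open("C:\Framework\TestCase_Login.xls")   ' hypothetical Test Case File
Set sheet = book.Worksheets(1)                                        ' columns: Keyword | Object Name | Parameter

row = 2   ' row 1 holds the column headers
Do While sheet.Cells(row, 1).Value <> ""
    keyword    = sheet.Cells(row, 1).Value
    objectName = sheet.Cells(row, 2).Value
    parameter  = sheet.Cells(row, 3).Value

    ' Dispatch each keyword to the matching utility function (echoed here as placeholders).
    Select Case UCase(keyword)
        Case "LAUNCH"
            WScript.Echo "Launch application: " & parameter
        Case "SETVALUE"
            WScript.Echo "Set " & objectName & " to " & parameter
        Case "CLICK"
            WScript.Echo "Click " & objectName
        Case Else
            WScript.Echo "Unknown keyword '" & keyword & "' at row " & row
    End Select
    row = row + 1
Loop

book.Close False
excel.Quit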

QTP Framework

The framework requires the creation of Repository files, Recovery files, Library files (reusable functions or user-defined functions), and XLS files.

1. Repository files are created by the test manager, test lead, or a senior software engineer.
We then add, load, or execute these repository files either from within a script or through the QTP tool.

2. After loading the repository file, the library files are loaded dynamically into the scripts.
A library file contains functions, sub procedures, verification checks, and reporting code.
These files can be brought into a test in two ways:
[a] Before creating a library file, we create the functions: record the functionality of the application, put the function name at the top, copy the script into Notepad, and save it with a .vbs extension in a folder created for this purpose (for example, Re_Usable_Functions). To make the file reusable, go to the Test menu, select Test Settings, open the Resources tab, click the + button, and browse to the location of the VBScript file stored in the Re_Usable_Functions folder. The function can then be reused by calling it by name, for example: FunctionName().
[b] To load a library file at run time, use the statement ExecuteFile "full path of the file which you stored in a folder".

3. XLS files: the functionality (the data within fields) of the application under test (AUT) is documented in a Data Table.
The Data Table contains test data for the fields of a particular screen, and these data tables are loaded into scripts manually.
Advantage: whenever the application data changes, we don't have to change the script; we only change the data within the data table.
Here is a short sketch of how a Data Table can be driven from within a script.
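
This sketch assumes it runs inside QTP (the DataTable object and the Web test objects are QTP's); the workbook path, sheet name, column names, and test objects are placeholders for illustration.

' Import an external sheet into the action's Data Table, then run one iteration per data row.
Dim i, rowCount
DataTable.ImportSheet "C:\Framework\LoginData.xls", "LoginData", "Action1"
rowCount = DataTable.GetSheet("Action1").GetRowCount

For i = 1 To rowCount
    DataTable.GetSheet("Action1").SetCurrentRow i
    ' Hypothetical test objects; replace them with objects from your own repository.
    Browser("App").Page("Login").WebEdit("UserName").Set DataTable("UserName", "Action1")
    Browser("App").Page("Login").WebEdit("Password").Set DataTable("Password", "Action1")
    Browser("App").Page("Login").WebButton("Sign In").Click
Next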

4. Recovery files: recovery files are created using the Recovery Scenario Manager.
These files are used when an unexpected event or error occurs within a run session and a recovery operation is necessary.
First we study the application's functionality and anticipate what types of errors may occur during script execution.
For example, if misspelled data is entered in a field, a pop-up window may open saying "enter appropriate data". We create a recovery file for this using the Recovery Scenario Manager, defining 1] the trigger event, 2] the recovery operation, and 3] the post-recovery test run option.
After this we give the scenario a name and save it in a "Recovery_files" folder, created earlier in the same way as the Re_Usable_Functions folder.
Then select the Test menu, Test Settings, and the Recovery tab, and add the recovery file saved in the "Recovery_files" folder with its scenario name.
Next, activate the scenario from within a script whenever you want, using the Recovery object, for example:

Dim pos
pos = Recovery.GetScenarioPosition(ScenarioFile, ScenarioName)   ' the .qrs file path and the scenario name
Recovery.SetScenarioStatus pos, Status                           ' Status = True to enable, False to disable
Recovery.Activate

The scenario then runs whenever that particular error occurs.
The On Error Resume Next statement can also be used; see the VBScript built-in statements for details.

This is one type of framework; we call it the Master Framework or Action Framework. If you need any more information about QTP frameworks and QTP automation, just mail me.
