
Automated Testing Detail Test Plan

Feb 22 Posted by kuldeep kumar


Automated Testing DTP Overview

This Automated Testing Detail Test Plan (ADTP) identifies the specific tests that are to be performed to ensure the quality of the delivered product. System/Integration Test ensures the product functions as designed and all parts work together. This ADTP covers information for Automated testing during the System/Integration Phase of the project and maps to the specification or requirements documentation for the project. This mapping is done in conjunction with the Traceability Matrix document, which should be completed along with the ADTP and is referenced in this document. This ADTP refers to the specific portion of the product known as PRODUCT NAME. It provides clear entry and exit criteria, and the roles and responsibilities of the Automated Test Team are identified so that the team can execute the test.

The objectives of this ADTP are to:
- Describe the tests to be executed.
- Identify and assign a unique number for each specific test.
- Describe the scope of the testing: list what is and is not to be tested.
- Describe the test approach, detailing methods, techniques, and tools.
- Outline the Test Design, including the functionality to be tested, Test Case definition, and test data requirements.
- Identify all specifications for preparation.
- Identify issues and risks.
- Identify actual test cases.
- Document the design point.

Test Identification

This ADTP is intended to provide information for System/Integration Testing for the PRODUCT NAME module of the PROJECT NAME. The test effort may be referred to by its PROJECT REQUEST (PR) number and its project title for tracking and monitoring of the testing progress.

Test Purpose and Objectives

Automated testing during the System/Integration Phase, as referenced in this document, is intended to ensure that the product functions as designed, directly from customer requirements. The testing goal is to identify the quality of the structure, content, accuracy and consistency, some response times and latency, and performance of the application as defined in the project documentation.

Assumptions, Constraints, and Exclusions

Factors which may affect the automated testing effort, and may increase the risk associated with the success of the test, include:
- Completion of development of front-end processes
- Completion of design and construction of new processes
- Completion of modifications to the local database
- Movement or implementation of the solution to the appropriate testing or production environment
- Stability of the testing or production environment
- Load discipline
- Maintaining recording standards and automated processes for the project
- Completion of manual testing through all applicable paths to ensure that reusable automated scripts are valid

Entry Criteria

The following must be in place before Automated testing begins:
- The ADTP is complete, excluding actual test results.
- The ADTP has been signed off by appropriate sponsor representatives, indicating consent to the plan for testing.
- The Problem Tracking and Reporting tool is ready for use.
- The Change Management and Configuration Management rules are in place.
- The environment for testing, including databases, application programs, and connectivity, has been defined, constructed, and verified.

Exit Criteria

In establishing the exit/acceptance criteria for Automated Testing during the System/Integration Phase of the test, the Project Completion Criteria defined in the Project Definition Document (PDD) should provide a starting point.
- All automated test cases have been executed as documented.
- The percentage of successfully executed test cases meets the defined criteria. Recommended criteria: no Critical or High severity problem logs remain open, all Medium problem logs have agreed-upon action plans, and the application executes successfully to validate accuracy of data, interfaces, and connectivity.

Pass/Fail Criteria

The results for each test must be compared to the pre-defined expected test results, as documented in the ADTP (and DTP where applicable). The actual results are logged in the Test Case detail within the Detail Test Plan if those results differ from the expected results. If the actual results match the expected results, the Test Case can be marked as a passed item without logging the duplicated results.

A test case passes if it produces the expected results as documented in the ADTP or Detail Test Plan (manual test plan). A test case fails if the actual results produced by its execution do not match the expected results. The source of failure may be the application under test, the test case, the expected results, or the data in the test environment. Test case failures must be logged regardless of the source of the failure.

Any bugs or problems will be logged in the DEFECT TRACKING TOOL. The responsible application resource corrects the problem and tests the repair. Once this is complete, the tester who generated the problem log is notified and the item is re-tested. If the retest is successful, the status is updated and the problem log is closed. If the retest is unsuccessful, or if another problem has been identified, the problem log status is updated and the problem description is updated with the new findings. It is then returned to the responsible application personnel for correction and test.
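To make the pass/fail rule above concrete, here is a minimal, hypothetical sketch (plain Python, not part of the Winrunner tooling described in this plan; the record layout and function names are invented for illustration) that compares actual results to the pre-defined expected results and records detail only for mismatches, mirroring the rule that passing cases need not duplicate their results:

```python
# Illustrative sketch only: compare actual vs. expected results per test case
# and keep details only for mismatches, per the pass/fail criteria above.
from dataclasses import dataclass

@dataclass
class TestCaseResult:
    case_id: str   # e.g. "AB1.1.1"
    expected: str  # expected result documented in the ADTP/DTP
    actual: str    # result observed during execution

def evaluate(results):
    """Return (passed ids, failed details); only failures carry the mismatch."""
    passed, failed = [], []
    for r in results:
        if r.actual == r.expected:
            passed.append(r.case_id)  # pass: no need to log duplicated results
        else:
            failed.append((r.case_id, r.expected, r.actual))  # log the difference
    return passed, failed

if __name__ == "__main__":
    run = [
        TestCaseResult("AB1.1.1", "Pond drop-downs populated", "Pond drop-downs populated"),
        TestCaseResult("AB1.1.2", "Remarks rejects special characters", "Remarks accepted '#'"),
    ]
    ok, bad = evaluate(run)
    print("Passed:", ok)
    for case_id, exp, act in bad:
        print(f"FAILED {case_id}: expected '{exp}', actual '{act}'")
```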

Severity Codes are used to prioritize work in the test phase. They are assigned by the test group and are not modifiable by any other group. The standard Severity Codes used for identifying defects are:

Table 1: Severity Codes

1. Critical: Automated tests cannot proceed further within the applicable test case (no workaround).
2. High: The test case or procedure can be completed, but produces incorrect output when valid information is input.
3. Medium: The test case or procedure can be completed and produces correct output when valid information is input, but produces incorrect output when invalid information is input (for example, if the specifications allow no special characters but the system lets the user continue after entering one, this is a Medium severity).
4. Low: All test cases and procedures pass as written, but there could be minor revisions, cosmetic changes, etc. These defects do not impact functional execution of the system.
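As an illustration only, the four codes in Table 1 could be captured in a small enumeration so that every logged defect carries exactly one of them; the snippet is a hypothetical Python sketch, not part of the project's DEFECT TRACKING TOOL:

```python
from enum import IntEnum

class Severity(IntEnum):
    """Standard Severity Codes from Table 1 of this ADTP."""
    CRITICAL = 1  # automated tests cannot proceed within the test case (no workaround)
    HIGH = 2      # test completes but produces incorrect output for valid input
    MEDIUM = 3    # correct output for valid input, incorrect output for invalid input
    LOW = 4       # minor/cosmetic; no impact on functional execution

def blocks_exit(severity: Severity) -> bool:
    # Exit criteria above: no Critical or High problem logs may remain open.
    return severity in (Severity.CRITICAL, Severity.HIGH)

print(blocks_exit(Severity.MEDIUM))  # False: Medium needs an agreed action plan instead
```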

The use of the standard Severity Codes produces four major benefits:
- Standard Severity Codes are objective and can be easily and accurately assigned by those executing the test, so time spent debating the appropriate priority of a problem is minimized.
- Standard Severity Code definitions allow an independent assessment of the risk to the on-schedule delivery of a product that functions as documented in the requirements and design documents.
- Use of the standard Severity Codes works to ensure consistency in the requirements, design, and test documentation, with an appropriate level of detail throughout.
- Use of the standard Severity Codes promotes effective escalation procedures.

Test Scope

The scope of testing identifies the items which will be tested and the items which will not be tested within the System/Integration Phase of testing.

Items to be tested by Automation (PRODUCT NAME):
Items not to be tested by Automation (PRODUCT NAME):

Test Approach

Description of Approach

The mission of Automated Testing is to identify recordable test cases through all appropriate paths of a website, create repeatable scripts, interpret test results, and report to project management. For the Generic Project, the automation test team will focus on positive testing and will complement the manual testing performed on the system. Automated test results will be generated, formatted into reports, and provided on a consistent basis to Generic project management.

System testing is the process of testing an integrated hardware and software system to verify that the system meets its specified requirements. It verifies proper execution of the entire set of application components, including interfaces to other applications. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

Integration testing is conducted to determine whether or not all components of the system are working together properly. This testing focuses on how well all parts of the web site hold together, whether the inside and outside of the website are working, and whether all parts of the website are connected. Project teams of developers and test analysts are responsible for ensuring that this level of testing is performed.

For this project, the System and Integration ADTP and Detail Test Plan complement each other. Since the goal of the System and Integration phase testing is to identify the quality of the structure, content, accuracy and consistency, response time and latency, and performance of the application, test cases are included which focus on determining how well this quality goal is accomplished.

Content testing focuses on whether the content of the pages matches what is supposed to be there, whether key phrases continue to exist in changeable pages, and whether the pages maintain quality content from version to version.

Accuracy and consistency testing focuses on whether today's copies of the pages download the same as yesterday's, and whether the data presented to the user is accurate enough.

Response time and latency testing focuses on whether the web site server responds to a browser request within certain performance parameters, whether response time after a SUBMIT is acceptable, and whether parts of a site are so slow that the user stops working. Although Loadrunner provides the full measure of this test, there will be various ad hoc time measurements within certain Winrunner scripts as needed.

Performance testing (Loadrunner) focuses on whether performance varies by time of day or by load and usage, and whether performance is adequate for the application.

Completion of automated test cases is denoted in the test cases with an indication of pass/fail and follow-up action.

Test Definition

This section addresses the development of the components required for the specific test. Included are identification of the functionality to be tested by automation and the associated automated test cases and scenarios. The development of the test components parallels, with a slight lag, the development of the associated product components.

Test Functionality Definition (Requirements Testing)

The functionality to be tested by automation is listed in the Traceability Matrix, attached as an appendix. For each function to undergo testing by automation, the Test Case is identified. Automated Test Cases are given unique identifiers to enable cross-referencing between related test documentation and to facilitate tracking and monitoring of the test progress. As much information as is available is entered into the Traceability Matrix in order to complete the scope of automation during the System/Integration Phase of the test.

Test Case Definition (Test Design)

Each Automated Test Case is designed to validate the associated functionality of a stated requirement. Automated Test Cases include unambiguous input and output specifications.
This information is documented within the Automated Test Cases in Appendix 8.5 of this ADTP.

Test Data Requirements

The automated test data required for the test is described below. The test data will be used to populate the databases and/or files used by the application/system during the System/Integration Phase of the test. In most cases, the automated test data will be built by the OTS Database Analyst or the OTS Automation Test Analyst.
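The test approach above also calls for ad hoc response time measurements inside certain scripts. As a rough, tool-neutral illustration (plain Python rather than a Winrunner or Loadrunner measurement; the URL and the 3-second threshold are made-up placeholders), such a check might look like this:

```python
import time
import urllib.request

def measure_response_seconds(url: str) -> float:
    """Time one page request: an ad hoc latency probe, not a load test."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as response:
        response.read()  # pull the full page body before stopping the clock
    return time.perf_counter() - start

if __name__ == "__main__":
    elapsed = measure_response_seconds("http://www.example.com/")
    threshold = 3.0  # placeholder acceptance threshold, in seconds
    print(f"{elapsed:.2f}s", "PASS" if elapsed <= threshold else "FAIL")
```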

Automation Recording Standards

Initial Automation Testing Rules for the Generic Project:
1. Ability to move through all paths within the applicable system.
2. Ability to identify and record the GUI Maps for all associated test items in each path.
3. Specific times for loading into the automation test environment.
4. Code frozen between loads into the automation test environment.
5. Minimum acceptable system stability.

Winrunner Menu Settings
1. Default recording mode is CONTEXT SENSITIVE.
2. Record owner-drawn buttons as OBJECT.
3. Maximum length of list item to record is 253 characters.
4. Delay for Window Synchronization is 1000 milliseconds (unless Loadrunner is operating in the same environment, in which case this must be increased appropriately).
5. Timeout for checkpoints and CS statements is 1000 milliseconds.
6. Timeout for Text Recognition is 500 milliseconds.
7. All scripts will stop and start on the main menu page.
8. All recorded scripts will remain short, since debugging is easier. However, the entire script, or portions of scripts, can be added together for long runs once the environment has greater stability.

Winrunner Script Naming Conventions
1. All automated scripts will begin with the GE abbreviation, representing the Generic Project, and be filed under the Winrunner on LAB11 W Drive/Generic/Scripts folder.
2. GE will be followed by the product path name in lower case: air, htl, car.
3. After the automated scripts have been debugged, a date for the script will be attached: 0710 for July 10. When significant improvements have been made to the same script, the date will be changed.
4. As incremental improvements are made to an automated script, version numbers will be attached signifying the script with the latest improvements, e.g., XX0710.1, XX0710.2; the .2 version is the most up-to-date.

Winrunner GUI Map Naming Conventions
1. All Generic GUI Maps will begin with XX followed by the area of test, e.g., the XXpond GUI Map represents all Pond paths, the XXEmemmainmenu GUI Map represents all membership and main menu concerns, and the XXlogin GUI Map represents all XX login concerns.
2. As there can only be one GUI Map for each object on the site, GUI Maps are under constant revision when the site is undergoing frequent program loads.

Winrunner Result Naming Conventions
1. When beginning a script, allow the default res## name to be filed.
2. After a successful run of a script whose results will be used toward a report, move the file to Results and rename it: XX for the project name, res for Test Results, 0718 for the date the script was run, your initials, and the original default number for the script, e.g., XXres0718jr.1.
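Because names like these are easy to get wrong by hand, a small helper could assemble them mechanically. The sketch below is hypothetical Python, not part of the Winrunner environment; it simply mirrors the patterns described above (project abbreviation plus product path plus MMDD date plus version for scripts, XX plus res plus date plus initials plus run number for results), and the prefixes are the same placeholders used in this template:

```python
from datetime import date

def script_name(path: str, when: date, version: int = 0) -> str:
    """GE + product path (air/htl/car) + MMDD date, plus .n for incremental versions."""
    name = f"GE{path.lower()}{when:%m%d}"
    return f"{name}.{version}" if version else name

def result_name(when: date, initials: str, run_no: int) -> str:
    """XX + 'res' + MMDD date + tester initials + the original default run number."""
    return f"XXres{when:%m%d}{initials.lower()}.{run_no}"

print(script_name("air", date(2024, 7, 10), version=2))  # GEair0710.2
print(result_name(date(2024, 7, 18), "jr", 1))           # XXres0718jr.1
```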

Winrunner Report Naming Conventions
1. When the test result files for the day are accumulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The daily report file will be named as follows: XXdaily0718, where XX is the project name, daily denotes the daily report, and 0718 is the date the report was issued.
2. When the test result files for the week are accumulated and the statistics are confirmed, a report will be filed that is accessible by upper management. The weekly report file will be named as follows: XXweek0718, where XX is the project name, week denotes the weekly report, and 0718 is the date the report was issued.

Winrunner Script, Result and Report Repository
1. LAB 11, located within the XX Test Lab, will house the original Winrunner script, result and report repository for automated testing within the Generic Project. WRITE access is granted to Winrunner technicians, and READ ONLY access is granted to those who are authorized to run scripts but not make any improvements. This is meant to maintain the purity of each script version.
2. Winrunner on LAB11 W Drive houses all Winrunner related documents for XX automated testing.
3. Project file folders for the Generic Project represent the initial structure of project folders utilizing automated testing. As our automation becomes more advanced, the structure will spread to other appropriate areas.
4. Under each project file folder, a folder for SCRIPT, RESULT and REPORT can be found.
5. All automated scripts generated for each project will be filed under the Winrunner on LAB11 W Drive/Generic/Scripts folder and moved to the ARCHIVE SCRIPTS folder as necessary.
6. All GUI Maps generated will be filed under the Winrunner on LAB11 W Drive/Generic/Scripts/gui_files folder.
7. All automated test results are filed under the individual script folder after each script run. Results will be referred to and reports generated utilizing applicable statistics. Automated test results referenced by reports sent to management will be kept under the Winrunner on LAB11 W Drive/Generic/Results folder. Before work on evaluating a new set of test results begins, all prior results are placed into the Winrunner on LAB11 W Drive/Generic/Results/Archived Results folder. This will ensure all reported statistics are available for closer scrutiny when required.
8. All reports generated from automated scripts and sent to upper management will be filed under the Winrunner on LAB11 W Drive/Generic/Reports folder.

Test Preparation Specifications

Test Environment

Environment for Automated Test

The automated test environment is indicated below. Existing dependencies are entered in the comments.

Environment | Test System | Comments
Production | Production | Access via http://www.xxxxxx.xxx
Test | System/Integration Test (SIT), Cert | Access via http://xxxxx/xxxxx
Development | Individual Test Environments |
Other (specify) | |

Hardware for Automated Test

The following is a list of the hardware needed to create a production-like environment:

Manufacturer | Device Type
Various | Personal Computer (486 or higher) with monitor and required peripherals, with connectivity to internet test/production environments. Must be enabled to ADDITIONAL REQUIREMENTS.

Software

The following is a list of the software needed to create a production-like environment:

Software | Version (if applicable) | Programmer Support
Internet Explorer | ZZZ or higher |
Netscape Navigator | ZZZ or higher |

Test Team Roles and Responsibilities

Role | Responsibilities | Name
COMPANY NAME Sponsor | Approve project development, handle major issues related to project development, and approve development resources | Name, Phone
XXX Sponsor | Signature approval of the project, handle major issues | Name, Phone
XXX Project Manager | Ensures all aspects of the project are being addressed from the CUSTOMER's point of view | Name, Phone
COMPANY NAME Development Manager | Manage the overall development of the project, including obtaining resources, handling major issues, approving technical design and overall timeline, and delivering the overall product according to the Partner Requirements | Name, Phone
COMPANY NAME Project Manager | Provide the PDD (Project Definition Document), project plan and status reports; track project development status; manage changes and issues | Name, Phone
COMPANY NAME Technical Lead | Provide technical guidance to the Development Team and ensure that overall development is proceeding in the best technical direction | Name, Phone
COMPANY NAME Back End Services Manager | Develop and deliver the necessary Business Services to support the PROJECT NAME | Name, Phone
COMPANY NAME Infrastructure Manager | Provide PROJECT NAME development certification, production infrastructure, service level agreement, and testing resources | Name, Phone
COMPANY NAME Test Coordinator | Develops the ADTP and Detail Test Plans, tests changes, logs incidents identified during testing, and coordinates the testing effort of the test team for the project | Name, Phone
COMPANY NAME Tracker Coordinator/Tester | Tracks XXXs in the DEFECT TRACKING TOOL; reviews new XXXs for duplicates and completeness and assigns them to Module Tech Leads for fix; produces status documents as needed; tests changes and logs incidents identified during testing | Name, Phone
COMPANY NAME Automation Engineer | Tests changes, logs incidents identified during testing | Name, Phone

Test Team Training Requirements

Automation Training Requirements

Training Requirement | Training Approach | Target Date for Completion | Roles/Resources to be Trained

Automation Test Preparation

1. Write and receive approval of the ADTP from Generic Project management.
2. Manually test the cases in the plan to make sure they actually work before recording repeatable scripts.
3. Record appropriate scripts and file them according to the naming conventions described within this document.
4. The initial order of automated script runs will be to load GUI Maps through a STARTUP script. After the successful run of this script, scripts testing all paths will be kicked off. Once an appropriate number of PNRs are generated, GenericCancel scripts will be used to automatically take the inventory out of the test profile and system environment (as sketched below, after this list).
5. During the automation test period, requests for testing of certain functions can be accommodated as necessary, as long as those functions have the ability to be tested by automation.
6. The ability to use Generic Automation will be READ ONLY for anyone outside of the test group. This is required to maintain the pristine condition of the master scripts in our data repository.
7. The Generic Test Group will conduct automated tests under the rules specified in our agreement for use of the Winrunner tool marketed by Mercury Interactive. Results filed for each run will be analyzed as necessary, reports generated, and provided to upper management.
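A minimal sketch of the run order described in step 4 above, assuming a hypothetical run_script() helper that invokes a recorded script and reports success (in practice the invocation would happen inside Winrunner, and the script names here are placeholders):

```python
def run_script(name: str) -> bool:
    """Placeholder: in practice this would invoke the recorded Winrunner script."""
    print(f"running {name} ...")
    return True

def nightly_run(path_scripts, cancel_scripts):
    # 1. Load the GUI Maps via the STARTUP script before anything else.
    if not run_script("STARTUP"):
        raise RuntimeError("STARTUP failed; GUI Maps not loaded, aborting the run")
    # 2. Kick off the scripts covering every product path.
    for script in path_scripts:
        run_script(script)
    # 3. Back out the generated inventory (PNRs) with the cancel scripts.
    for script in cancel_scripts:
        run_script(script)

nightly_run(["GEair0710.2", "GEhtl0710.1"], ["GenericCancel"])
```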

Test Issues and Risks

Issues

The table below lists known project testing issues to date. Upon sign-off of the ADTP and Detail Test Plan, this table will not be maintained; these issues and all new issues will be tracked through the Issue Management System, as indicated in the project's approved Issue Management Process.

Issue | Impact | Target Date for Resolution | Owner
COMPANY NAME test team is not in possession of market data regarding what browsers are most in use in the CUSTOMER target market. | Testing may not cover some browsers used by CLIENT customers. | Beginning of Automated Testing during the System and Integration Test Phase | CUSTOMER TO PROVIDE
OTHER | | |

Risks

The table below identifies any high impact or highly probable risks that may impact the success of the Automated testing process.

Risk Assessment Matrix

Risk Area | Potential Impact | Likelihood of Occurrence | Overall Threat (H, M, L) | Difficulty of Timely Detection
1. Unstable Environment | Delayed Start | HISTORY OF PROJECT | | Immediately
2. Quality of Unit Testing | Greater delays taken by automated scripts | Dependent upon quality standards of development group | | Immediately
3. Browser Issues | Intermittent Delays | Dependent upon browser version | | Immediately

Risk Management Plan

Risk Area | Preventative Action | Contingency Plan | Action Trigger | Owner
1. Unstable Environment | Meet with Environment Group | | |
2. Quality of Unit Testing | Meet with Development Group | | |
3. Browser Issues | | | |

Traceability Matrix

The purpose of the Traceability Matrix is to identify all business requirements and to trace each requirement through the project's completion. Each business requirement must have an established priority, as outlined in the Business Requirements Document. They are:
- Essential: Must satisfy the requirement to be accepted by the customer.
- Useful: Value-added requirement influencing the customer's decision.
- Nice-to-have: Cosmetic, non-essential condition that makes the product more appealing.

The Traceability Matrix will change and evolve throughout the entire project life cycle. The requirement definitions, priority, functional requirements, and automated test cases are subject to change, and new requirements can be added. However, if new requirements are added or existing requirements are modified after the Business Requirements document and this document have been approved, the changes will be subject to the change management process. The Traceability Matrix for this project will be developed and maintained by the test coordinator. At the completion of the matrix definition and the project, a copy will be added to the project notebook.

Functional Areas of Traceability Matrix

# | Functional Area | Priority
B1 | Pond | E
B2 | River | E
B3 | Lake | U
B4 | Sea | E
B5 | Ocean | E
B6 | Misc | U
B7 | Modify | E
L1 | Language | E
EE1 | End-to-End Testing |

Legend: B = Order Engine, L = Language, EE = End-to-End, E = Essential, U = Useful, N = Nice to have

Definitions for Use in Testing

Test Requirement: A scenario is a prose statement of requirements for the test. Just as there are high level and detailed requirements in application development, there is a need to provide detailed requirements in the test development area.

Test Case: A test case is a transaction or list of transactions that will satisfy the requirements statement in a test scenario. The test case must contain the actual entries to be executed as well as the expected results, i.e., what a user entering the commands would see as a system response.

Test Procedure: Test procedures define the activities necessary to execute a test case or set of cases. Test procedures may contain information regarding the loading of data and executables into the test system, directions regarding sign-in procedures, instructions regarding the handling of test results, and anything else required to successfully conduct the test.
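As a purely hypothetical illustration of how the Functional Areas table above could also be kept in machine-readable form (the field names and structure are assumptions, not a mandated format), each business requirement would carry its priority and the automated test cases that trace to it, which makes coverage gaps easy to spot:

```python
# Hypothetical machine-readable view of the Functional Areas table above.
traceability = {
    "B1": {"area": "Pond", "priority": "E", "test_cases": ["AB1.1.1"]},
    "B2": {"area": "River", "priority": "E", "test_cases": []},
    "L1": {"area": "Language", "priority": "E", "test_cases": []},
}

# Any Essential requirement with no automated test case yet is a coverage gap.
gaps = [req for req, row in traceability.items()
        if row["priority"] == "E" and not row["test_cases"]]
print("Essential requirements without automated coverage:", gaps)
```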

Automated Test Cases

NAME OF FUNCTION Test Case

Project Name/Number: Generic Project / Project Request #
Date:
Test Case Description: Check that all drop down boxes, fill-in boxes and pop-up windows on the main Pond web page operate according to requirements.
Build #:
Function / Module Under Test: B1.1
Run #:
Execution:
Retry #:
Test Requirement #:
Case #: AB1.1.1 (A for Automated)
Written by:
Goals: Verify that the Pond module functions as required.
Setup for Test: Access browser, go to ...
Pre-conditions: Log in with name and password and arrive at the Generic Main Menu.

Step | Action | Expected Results | Pass/Fail | Actual Results if Step Fails
1 | Go to Pond | From the Generic Main Menu, click on the Pond gif and go to the Pond web page. Once on the Pond web page, check all drop down boxes for appropriate information (e.g., Time: 7a, 8a in 1-hour increments), fill-in boxes (Remarks allows alpha and numeric characters but no other special characters), and pop-up windows (e.g., Privacy: ensure it is retrieved, has the correct verbiage, and closes). | |

Each automation project team needs to write up an automation standards document stating the following:
- The installation configuration of the automation tool.
- How the client machine's environment will be set up.
- Where the network repositories and manual test plan documents are located.
- The drive letter that all client machines must map to.
- How the automation tool will be configured.
- The servers and databases the automation will run against.
- Any naming standards that the test procedures, test cases and test plans will follow.
- Any recording standards and scripting standards that all scripts must follow.
- The components of the product that will be tested.

Installation Configuration

Install Step | Selection | Completed
Installation Components | Full |
Destination Directory | C:\sqa6 |
Type of Repository | Microsoft Access |
Scripting Language | SQA Basic only |
Test Station Name | Your PC Name |
DLL messages | Overlay all DLLs the system prompts for. Robot will not run without its own DLLs. |

Client Machines Configuration

Configuration Item | Setting | Notes
Lotus Notes | Shut down Lotus Notes before using Robot. | This will prevent mail notification messages from interrupting your scripts and will allow Robot to have more memory.
Close all applications | Close down all applications (except the SQA Robot recorder and the application you are testing). | This will free up memory on the PC.
Shut down printing | Select the printer window from the Start menu, select File -> Server Properties, select the Advanced tab, and un-check the Notify check box. |
Shut down printing | Bring up a DOS prompt, select the Z drive, and type Network CASTOFF. |
Turn off screensavers | Select NONE or change it to 90 minutes. | Set in the Control Panel display application.
Display settings | Colors: 256; Font size: small; Desktop: 800 x 600 pixels. |
Map a network drive to {LETTER} | Bring up Explorer and map a network drive. |

Repository Creation

Item | Information
Repository Name |
Location |
Mapped Drive Letter |
Project Name |
Users set up for Project | Admin, no password
Sbh files used in project's scripts |

Client Setup Options for the SQA Robot Tool

Recording Options
- ID: list selections by ID; menu selections by Contents/Text.
- Record unsupported mouse drags as: mouse click if within object; window positions.
- While recording: record object as text; auto record window size; put Robot in background.

Playback Options
- Test Procedure Control: delay between commands 5000 milliseconds.
- Caption matching options: partial window caption; on each window search check Match reverse captions, Ignore file extensions, Ignore parenthesis.
- Output: playback results to test log (all details); view test log after playback; specify Test Log info at playback.
- Unexpected Window: detect; capture; playback response: select pushbutton with focus; on failure to remove: abort playback.
- Wait States: wait positive/negative region; automatic wait; keystroke option; retry 4, timeout after 90; retry 2, timeout after 120; playback delay 100 milliseconds; check record delay after Enter key.
- Error Recovery: on script command failure, abort playback; on test case failure, continue execution; SQA trap: check all but last 2.

Test Log
- Test Log Management: update SQA repository; Test Log Data.

Other Settings
- Object Recognition: do not change.
- Object Data: do not change.
- Test Definitions: leave with defaults.
- Editor Preferences: leave with defaults.

Identify what Servers and Databases the automation will run against

This {Project name} will use the following servers: {Add servers}
On these servers it will be using the following databases: {Add databases}

Naming standards for test procedures, cases and plans

The naming standards for this project are:

Recording standards and scripting standards

In order to ensure that scripts are compatible on the various clients and run with minimum maintenance, the following recording standards have been set for all scripts recorded:
1. Use assisting scripts to open and close applications and activity windows.
2. Use global constants to pass data into scripts and between scripts.
3. Make use of main menu selections over double clicks, toolbar items and pop-up menus whenever possible.
4. Each test procedure should have a manual test plan associated with it.
5. Do not Save in the test procedure unless it is absolutely necessary; this will prevent the need to write numerous clean-up scripts.
6. Do a window existence test for every window you open; this will prevent scripts dying from slow client/server calls (see the sketch at the end of this section).
7. Do not use the mouse for drop-down selections; whenever possible use hotkeys and the arrow keys.
8. When navigating through a window, use the tab and arrow keys instead of the mouse; this will make maintenance of scripts due to UI changes easier in the future.
9. Create a template header file called testproc.tpl. This file will insert template header information at the top of all recorded scripts. This template area can be used for modification tracking and commenting on the script.
10. Comment all major selections or events in the script. This will make debugging easier.
11. Make sure that you maximize all MDI main windows in login initial scripts.
12. When recording, make sure you begin and end your scripts in the same position. For example, on the platform browser always start your script by opening the browser tree and selecting your activity (this will ensure that the activity window will always be in the same position); likewise, always end your scripts by collapsing the browser tree.

Describe what components of the product will be tested

This project will test the following components:

The objective is to:
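Referring back to recording standard 6 above (check that a window exists before driving it, so slow client/server calls do not kill the script), the sketch below shows the idea as a generic polling wait. It is illustrative Python, not Winrunner TSL or SQA Basic, and window_exists() is a stand-in for whatever lookup the automation tool actually provides:

```python
import time

def window_exists(title: str) -> bool:
    """Stand-in for the automation tool's own window lookup."""
    return False  # placeholder; replace with the real check

def wait_for_window(title: str, timeout_s: float = 30.0, poll_s: float = 0.5) -> bool:
    """Poll until the window appears instead of assuming it is already there."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if window_exists(title):
            return True
        time.sleep(poll_s)  # give slow client/server calls time to finish
    return False

# Usage pattern: guard every window interaction with the existence check.
# if not wait_for_window("Generic Main Menu"):
#     raise RuntimeError("Generic Main Menu never appeared; aborting script")
```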

Category Archives: Priority and Severity of Defects in Real Time App.

High priority and low Severity defects


Mar 1 Posted by kuldeep kumar

Suppose a banking application has an ATM Facility module. Whenever we deposit or withdraw money through the ATM facility, no confirmation message is shown, but at the back end the transaction is processed correctly without any mistake; only the message is missing. In this case, nothing is functionally wrong with the application, but because the end user does not receive any confirmation message, he or she will be confused. So we can consider this issue a HIGH priority but LOW severity defect.

Category Archives: Test Cases Types

Types of Test Cases


Mar 1

Posted by kuldeep kumar


The purpose of this article is to define the term test case, present three types of test cases, and advise on the optimal timing for writing each type. I bet you have heard different definitions of a test case. I believe a test case has only one clear definition: a test case is a set of activities which allows the tester or developer to detect defects. That's it! Frankly, I don't care if one of those activities is painting the office wall, as long as all high and critical defects are eventually reported. Different groups use different types of test cases.

Unit test case

Unit test cases are used by the development team. When a developer has completed writing the code, he will start running unit test cases. These cases are more technical and usually contain specific activities which execute and check queries, loops, compilations, etc. A unit test case mostly runs on a local developer environment.

Optimal timing of writing unit test cases: The best time to write unit test cases is during development. Although each code line is a potential defect, not every code line should be written up as a test case. Selectively choose the lines which can cause severe defects.

Subsystem test case

Subsystem test cases are used by both developers and testers. These test cases are necessary to perform, but some organizations skip directly to system test due to effort constraints. The test cases of subsystem test focus on the correctness of applications or outputs such as GUI windows, invoices, web sites, generation of files, etc. The cases check the end results of the software. There are two main tasks of subsystem cases: 1. Testing that the back end and front end of the same functionality work correctly. 2. Testing that two integrated front-end applications or two integrated back-end applications work correctly.

As an example of task number one, the set of subsystem test case activities would be the following: testing the retrieval of the correct data for parameters in a window application or web site application, where the parameters are defined dynamically in tables or even hard-coded in the code. The window or web site is the front end; the dynamic tables or hard-coded lines are the back end. I can agree that to some extent system test also performs end-output tests and integrated application tests, but the difference is huge; see both below and the system test article.

Optimal timing of writing subsystem test cases: The most efficient time to write subsystem test cases is during the code review activity. Reviewing the code as one package provides the ability to understand what the integration constraints of this code can be. It is no longer a local, isolated query running stand-alone but rather a wide and correlated one.
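To make the unit test case idea above concrete, here is a small, hypothetical example using Python's built-in unittest framework; the function under test is invented for illustration, and the two cases deliberately cover only the lines most likely to hide a severe defect rather than every line of code:

```python
import unittest

def late_fee(days_overdue: int) -> float:
    """Invented example: $0 if on time, otherwise $1.50 per day, capped at $30."""
    if days_overdue <= 0:
        return 0.0
    return min(days_overdue * 1.5, 30.0)

class LateFeeUnitTest(unittest.TestCase):
    def test_on_time_has_no_fee(self):
        self.assertEqual(late_fee(0), 0.0)

    def test_fee_is_capped(self):
        self.assertEqual(late_fee(100), 30.0)

if __name__ == "__main__":
    unittest.main()
```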

System test case

System test cases are used by testers. The cases are detailed and cover the entire system functionality; therefore the total number of cases is much bigger than for unit or subsystem tests. The effort of writing system test cases is almost the same as the effort of executing them, because the analysis of many important cases requires time and resources.

In software, several applications are integrated. Some of them are internal and others are external. A complex system will have many different interfaces; some of them are GUI applications, others receive flat files, etc. An output of one application is an input of another one, and so on. Following are four important elements of system test cases: 1. Each test case will simulate a real-life scenario. 2. The cases will execute with priorities according to the real-life volume of each scenario. 3. Cross-application scenarios will be tested carefully. 4. Test execution will run on environments which are close to the production platform from a data and infrastructure point of view.

Optimal timing of writing system test cases: In order to gain both time and knowledge, system test cases should be prepared from the moment development finishes the design. In other words, once the development team knows how and what is going to be changed, system test should be involved and cases should be written.

My name is Kuldeep Sharma. I am a professional QA. My profession and experience are all related to the testing of software. From being a tester to managing big groups of testers, I have experienced all roles and responsibilities of testing in the software life cycle. Now I would like to share it with you.

Posted in Manual Testing, Test Cases Types

Category Archives: Manual Testing

Life Cycle of Testing Process


Dec 11 Posted by kuldeep kumar

This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once the project is confirmed to start, the development of the project can be divided into the following phases:

- Software requirements
- Software design
- Implementation
- Testing
- Maintenance

In the whole development process, testing consumes the highest amount of time, but most developers overlook that, and the testing phase is generally neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself. The various phases involved in testing, with regard to the software development life cycle, are:
1. Requirements stage
2. Test plan
3. Test design
4. Design reviews
5. Code reviews
6. Test cases preparation
7. Test execution
8. Test reports
9. Bugs reporting
10. Reworking on patches
11. Release to production
Requirements Stage

Normally, in many companies, only developers take part in the requirements stage. Especially in product-based companies, a tester should also be involved in this stage, since a tester thinks from the user's side whereas a developer cannot. A separate panel should be formed for each module, comprising a developer, a tester and a user. Panel meetings should be scheduled in order to gather everyone's views. All the requirements should be documented properly for further use, and this document is called the Software Requirements Specification.
Test Plan

Without a good plan, no work succeeds; successful work always starts from a good plan. The software testing process likewise requires a good plan. The test plan document is the most important document for bringing a process-oriented approach to testing, and it should be prepared after the requirements of the project are confirmed. The test plan document must contain the following information:
- Total number of features to be tested
- Testing approaches to be followed
- The testing methodologies
- Number of man-hours required
- Resources required for the whole testing process

- The testing tools that are to be used
- The test cases, etc.
Test Design

Test design is done based on the requirements of the project. The tests have to be designed based on whether manual or automated testing will be done. For automation testing, the different paths for testing are to be identified first, and an end-to-end checklist has to be prepared covering all the features of the project. The test design is represented pictorially and involves various stages, which can be summarized as follows: the different modules of the software are identified first; next, the paths connecting all the modules are identified; then the design is drawn. The test design is the most critical stage, as it decides the test case preparation; thus the test design determines the quality of the testing process.
Test Cases Preparation

Test cases should be prepared based on the following scenarios (a brief illustration follows):
- Positive scenarios
- Negative scenarios
- Boundary conditions
- Real-world scenarios
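As a brief, hypothetical illustration of those scenario types against an invented rule (a password length that must be between 8 and 16 characters), the same field yields positive, negative, boundary, and real-world cases:

```python
def password_length_ok(password: str) -> bool:
    """Invented rule for illustration: length must be between 8 and 16 characters."""
    return 8 <= len(password) <= 16

cases = [
    ("positive", "sunny1234", True),         # typical valid value
    ("negative", "short", False),            # clearly invalid value
    ("boundary", "a" * 8, True),             # lower boundary, just allowed
    ("boundary", "a" * 17, False),           # just past the upper boundary
    ("real world", "P@ssw0rd 2024!", True),  # something a real user might type
]

for kind, value, expected in cases:
    actual = password_length_ok(value)
    print(kind, repr(value), "PASS" if actual == expected else "FAIL")
```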
Design Reviews

The software design is done in a systematic manner or using the UML language. The tester can review the design and suggest ideas and any modifications needed.
Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to do unit testing of the code and must have his own unit test cases prepared. Though a developer does the unit testing, a tester must also do it. The developers may overlook some of the minute mistakes in the code, which a tester may find.
Test Execution and Bugs Reporting

Once unit testing is completed and the code is released to QA, functional testing is done. A top-level test is done at the beginning of testing to find top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately to get the required workaround.

The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.
Release to Production

Once the bugs are fixed, another release with the modified changes is given to QA, and regression testing is executed. Once QA approves the software, it is released to production. Before the release to production, another round of top-level testing is done. The testing process is iterative: once bugs are fixed, the testing has to be done again. Thus the testing process is an unending process.
