
What is 'Software Quality Assurance'?

Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

What is 'Software Testing'?

Testing involves operating a system or application under controlled conditions and evaluating the results (e.g. 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or don't happen when they should. It is oriented to 'detection'. (See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)

What is the 'software life cycle'?

The life cycle begins when an application is first conceived and ends when it is no longer in use. It includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, document preparation, integration, testing, maintenance, updates, retesting, and phase-out. (See the Bookstore section's 'Software QA', 'Software Engineering', and 'Project Management' categories for useful books with more information.)

What's a 'test plan'?

A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The items included in a test plan depend on the particular project.

What's a 'test case'?

A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly. A test case should contain particulars such as a test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle if possible.

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing also provides an objective, independent view of the software that allows the business to appreciate and understand the risks of implementing the software. Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs. Software testing can also be stated as the process of validating and verifying that a software program/application/product:

1. Meets the business and technical requirements that guided its design and development;
2. Works as expected; and
3. Can be implemented with the same characteristics.

Software testing, depending on the testing method employed, can be implemented at any time in the development process. However, most of the test effort occurs after the requirements have been defined and the coding process has been completed. As such, the methodology of the test is governed by the software development methodology adopted. Different software development models will focus the test effort at different points in the development process. Newer development models, such as Agile, often employ test-driven development and place an increased portion of the testing in the hands of the developer, before it reaches a formal team of testers. In a more traditional model, most of the test execution occurs after the requirements have been defined and the coding process has been completed.

Software bug

A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that produces an incorrect or unexpected result, or causes it to behave in unintended ways. Most bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code. A program that contains a large number of bugs, and/or bugs that seriously interfere with its functionality, is said to be buggy. Reports detailing bugs in a program are commonly known as bug reports, fault reports, problem reports, trouble reports, change requests, and so forth.

Overview

Testing can never completely identify all the defects within software. Instead, it furnishes a criticism or comparison that compares the state and behavior of the product against oracles: principles or mechanisms by which someone might recognize a problem. These oracles may include (but are not limited to) specifications, contracts, comparable products, past versions of the same product, inferences about intended or expected purpose, user or customer expectations, relevant standards, applicable laws, or other criteria.

Every software product has a target audience. For example, the audience for video game software is completely different from that for banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers, and other stakeholders. Software testing is the process of attempting to make this assessment. A study conducted by NIST in 2002 reports that software bugs cost the U.S. economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed.

Software testing topics

A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. This is a non-trivial pursuit. Testing cannot establish that a product functions properly under all conditions, but can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of code as well as execution of that code in various environments and conditions, and examination of the aspects of code: does it do what it is supposed to do and do what it needs to do. In the current culture of software development, a testing organization may be separate from the development team.
There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
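To make the test case particulars listed earlier (identifier, name, objective, setup, input data, steps, expected results) concrete, here is a minimal sketch of one documented test case executed as an automated check. The `add_to_cart` function, the field names, and the sample values are assumptions invented purely for illustration, not part of any real system described in this text.

```python
# A minimal sketch: one written test case, recorded as a structured record
# and executed with Python's standard unittest module. All names here are
# hypothetical, invented only for illustration.
import unittest
from dataclasses import dataclass, field


def add_to_cart(cart: dict, item: str, qty: int) -> dict:
    """Toy system under test: add qty of item to a cart dictionary."""
    if qty <= 0:
        raise ValueError("quantity must be positive")
    cart[item] = cart.get(item, 0) + qty
    return cart


@dataclass
class WrittenTestCase:
    """The particulars a written test case usually records."""
    identifier: str
    name: str
    objective: str
    setup: dict = field(default_factory=dict)        # test conditions / setup
    input_data: dict = field(default_factory=dict)   # input data requirements
    expected_result: dict = field(default_factory=dict)


TC_001 = WrittenTestCase(
    identifier="TC-001",
    name="Add item to empty cart",
    objective="Verify that adding an item stores the requested quantity",
    setup={"cart": {}},
    input_data={"item": "book", "qty": 2},
    expected_result={"book": 2},
)


class CartTests(unittest.TestCase):
    def test_tc_001(self):
        actual = add_to_cart(TC_001.setup["cart"].copy(), **TC_001.input_data)
        self.assertEqual(actual, TC_001.expected_result)


if __name__ == "__main__":
    unittest.main()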

Software verification and validation

Software testing is used in association with verification and validation. Verification: have we built the software right? (i.e., does it match the specification). Validation: have we built the right software? (i.e., is this what the customer wants). The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology: verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase; validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.

The software testing team

Software testing can be done by software testers. Until the 1980s the term "software tester" was used generally, but later it was also seen as a separate profession. Reflecting the different phases and goals of software testing, different roles have been established: manager, test lead, test designer, tester, automation developer, and test administrator.

Software quality assurance (SQA)

Though controversial, software testing may be viewed as an important part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors take a broader view of software and its development. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called defect rate. What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and there may be no SQA function in some companies. Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

Testing methods

The box approach: software testing methods are traditionally divided into white-box and black-box testing. These two approaches describe the point of view that a test engineer takes when designing test cases.

White box testing

White box testing is when the tester has access to the internal data structures and algorithms, including the code that implements them. The following types of white box testing exist:
- API testing (application programming interface): testing of the application using public and private APIs.
- Code coverage: creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once).
- Fault injection methods: improving the coverage of a test by introducing faults to test code paths.
- Mutation testing methods.
- Static testing: white box testing includes all static testing.

Test coverage: white box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested. Two common forms of code coverage are function coverage, which reports on functions executed, and statement coverage, which reports on the number of lines executed to complete the test. Both return a code coverage metric, measured as a percentage.

Black box testing

Black box testing treats the software as a "black box", without any knowledge of the internal implementation. Black box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, fuzz testing, model-based testing, traceability matrix, exploratory testing, and specification-based testing.

Specification-based testing: specification-based testing aims to test the functionality of software according to the applicable requirements. Thus, the tester inputs data into, and only sees the output from, the test object. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case.
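As a rough sketch of the specification-based approach just described, the test below only supplies inputs and compares outputs against values the specification promises; nothing about the implementation is inspected. The shipping-fee rule, its 50.00 threshold, and the function name are invented assumptions used only to show equivalence partitions and boundary values.

```python
# Black-box / specification-based sketch. Hypothetical specification:
# orders under 50.00 pay a 5.00 fee, orders of 50.00 or more ship free.
import unittest


def shipping_fee(order_total: float) -> float:
    """Implementation under test (hidden from a black-box tester)."""
    return 0.0 if order_total >= 50.0 else 5.0


class ShippingFeeSpecTests(unittest.TestCase):
    # Each row: (input, expected output) - equivalence partitions plus the
    # boundary values 49.99 / 50.00 around the threshold in the spec.
    CASES = [
        (10.00, 5.0),    # partition: below threshold
        (49.99, 5.0),    # boundary: just below
        (50.00, 0.0),    # boundary: exactly at threshold
        (120.00, 0.0),   # partition: above threshold
    ]

    def test_fee_matches_specification(self):
        for order_total, expected in self.CASES:
            with self.subTest(order_total=order_total):
                self.assertEqual(shipping_fee(order_total), expected)


if __name__ == "__main__":
    unittest.main()
```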

Specification-based testing is necessary, but it is insufficient to guard against certain risks.

Advantages and disadvantages: the black box tester has no "bonds" with the code, and a tester's perception is very simple: the code must have bugs. Using the principle "ask and you shall receive," black box testers find bugs where programmers do not. On the other hand, black box testing has been said to be "like a walk in a dark labyrinth without a flashlight," because the tester does not know how the software being tested was actually constructed. As a result, there are situations where (1) a tester writes many test cases to check something that could have been tested by only one test case, and/or (2) some parts of the back end are not tested at all. Therefore, black box testing has the advantage of "an unaffiliated opinion" on the one hand, and the disadvantage of "blind exploring" on the other.

Grey box testing

Grey box testing involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level. Manipulating input data and formatting output do not qualify as grey box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey box, as the user would not normally be able to change the data outside of the system under test. Grey box testing may also include reverse engineering to determine, for instance, boundary values or error messages.

Testing levels

Tests are frequently grouped by where they are added in the software development process, or by the level of specificity of the test.

Unit testing: unit testing refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors. These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to assure that the building blocks the software uses work independently of each other. Unit testing is also called component testing.

Integration testing: integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice, since it allows interface issues to be localized more quickly and fixed. Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components

corresponding to elements of the architectural design are integrated and tested until the software works as a system.

System testing: system testing tests a completely integrated system to verify that it meets its requirements.

System integration testing: system integration testing verifies that a system is integrated with any external or third-party systems defined in the system requirements.

Regression testing: regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, or old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with the previously existing code. Common methods of regression testing include re-running previously run tests and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features. It can range from complete, for changes added late in the release or deemed to be risky, to very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed to be of low risk.

Acceptance testing: acceptance testing can mean one of two things: (1) a smoke test used as an acceptance test prior to introducing a new build to the main testing process, i.e. before integration or regression; (2) acceptance testing performed by the customer, often in their lab environment on their own hardware, known as user acceptance testing (UAT). Acceptance testing may be performed as part of the hand-off process between any two phases of development.

Alpha testing: alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers' site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing.

Beta testing: beta testing comes after alpha testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users.
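To illustrate the unit and integration levels described above, here is a minimal sketch: each building block is first tested in isolation, and a further test then exercises the interface between them. The two components and their interface are hypothetical, invented only for this example.

```python
# Unit vs. integration sketch with Python's standard unittest module.
# PriceCalculator and InvoiceService are hypothetical components.
import unittest


class PriceCalculator:
    """Component A: pure price logic."""
    def total(self, unit_price: float, qty: int) -> float:
        return round(unit_price * qty, 2)


class InvoiceService:
    """Component B: depends on component A through a narrow interface."""
    def __init__(self, calculator: PriceCalculator):
        self.calculator = calculator

    def invoice_line(self, name: str, unit_price: float, qty: int) -> str:
        return f"{name}: {self.calculator.total(unit_price, qty):.2f}"


class PriceCalculatorUnitTests(unittest.TestCase):
    """Unit level: one building block, tested independently."""
    def test_total(self):
        self.assertEqual(PriceCalculator().total(2.5, 4), 10.0)


class InvoiceIntegrationTests(unittest.TestCase):
    """Integration level: verifies the interface between the two components."""
    def test_invoice_line_uses_calculator(self):
        service = InvoiceService(PriceCalculator())
        self.assertEqual(service.invoice_line("pens", 1.2, 3), "pens: 3.60")


if __name__ == "__main__":
    unittest.main()
```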

Introduction to Test Plan

A test plan is a systematic approach to testing a system such as a machine or software. The plan typically contains a detailed understanding of what the eventual workflow will be. A formal test plan is a document that provides and records important information about a test project, for example:
- Project and quality assumptions
- Project background information
- Resources
- Schedule & timeline
- Entry and exit criteria
- Test milestones
- Tests to be performed
- Reference to related test plans
- Use cases and/or test cases

A quality test team must be able to test a product or system quickly and constructively in order to provide some value to the project. The role of a test plan is to guide all testing activities. It defines what is to be tested and what is not, how the testing is to be performed (described on a general level), and by whom. It is therefore a managerial document, not a technical one; in essence, it is a project plan for testing. Therefore, the target audience of the plan should be a manager with a decent grasp of the technical issues involved.

1.1 Purpose of Test Plan

The purpose of preparing a test plan is to:
- Ensure that the product is usable.
- Achieve accuracy in coding.
- Ensure all functional and design requirements are implemented as specified in the documentation.
- Build confidence in the software.
- Provide a procedure for unit testing and system testing.
- Identify the documentation process for unit and system testing.
- Identify the test methods for unit testing and system testing.

1.2 Characteristics of Test Plan

Each test plan item should have the following specific characteristics:
- Uniquely identifiable.
- Unambiguous.
- Well-defined objective (what is being tested and what is not being tested).
- It should not involve any in-depth knowledge of the actual system for the person performing the test (monkey testing).
- It should have well-defined test data (or data patterns).
- Well-defined pass/fail criteria for each sub-item and an overall criterion for the pass/fail of the entire test itself.
- It should be easy to record.
- It should be easy to demonstrate repeatedly.

1.3 Creating a Test Plan

While designing a test plan, one should follow the approach below:
1. Identify the requirements to be tested. All test cases shall be derived using the current Design Specification.
2. Identify which particular test(s) you're going to use to test each module.
3. Review the test data and test cases to ensure that the unit has been thoroughly verified and that the test data and test cases are adequate to verify proper operation of the unit.
4. Identify the expected results for each test.
5. Document the test case configuration, test data, and expected results.
6. Perform the test(s).
7. Document the test data, test cases, and test configuration used during the testing process. This information shall be submitted via the Unit/System Test Report (STR).
8. Successful unit testing is required before the unit is eligible for component integration/system testing.
9. Unsuccessful testing requires a Bug Report Form to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis (see the sketch after this list).
10. Test documents and reports shall be submitted. Any specifications to be reviewed, revised, or updated shall be handled immediately.
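Since step 9 calls for a Bug Report Form describing the test case, the problem encountered, its possible cause, and the sequence of events, the sketch below shows one way such a report could be captured as a structured record. Every field name and sample value here is an assumption for illustration, not a mandated format.

```python
# A minimal, hypothetical structure for the Bug Report Form of step 9.
from dataclasses import dataclass, field
from typing import List


@dataclass
class BugReport:
    test_case_id: str                 # test case that exposed the problem
    summary: str                      # the problem encountered
    possible_cause: str               # suspected cause, if known
    steps_to_reproduce: List[str] = field(default_factory=list)
    actual_result: str = ""
    expected_result: str = ""
    severity: str = "medium"


report = BugReport(
    test_case_id="TC-014",
    summary="Order total is wrong when quantity is zero",
    possible_cause="Missing validation of the quantity field",
    steps_to_reproduce=[
        "Open the order form",
        "Enter quantity 0 and submit",
    ],
    actual_result="Total shown as -5.00",
    expected_result="Form rejects a zero quantity",
)

print(report)   # would accompany the Unit/System Test Report (STR)
```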

1.4 Test Plan Components

Test requirements are based on new features or functions.

Specific testing based on features defined as Development Priority 1: there must be a plan in place for these features and they must be scheduled for testing. A product release date will be slipped in order to complete adequate testing of the Priority 1 features.

Specific testing based on new features or functions defined as Development Priority 2: there must be a plan in place for these features and they must be scheduled for testing. If testing of the Priority 1 features impacts adequate testing of these, they may be dropped from the product.

Specific testing based on new features or functions defined as Development Priority 3: Software Quality Assurance will not schedule or plan for these features. However, Priority 3 features completed prior to Functional Freeze will be added to the SQA Priority 2 list for testing, and appropriate risk assessment will be taken with respect to their inclusion in the released product.

SQA has its own set of Priority 1, Priority 2, and Priority 3 items, which include not only the development activities but also the testing required as due diligence for product verification prior to shipment. Priority 1 items include the testing of new features and functions, but also a defined set of base installations, program and data integrity checks, regression testing, documentation (printed, HTML, and on-line Help) review, and final "confidence" checks (high-level manual or automated tests exercising the most frequently used features of the product) on all media to be released to the public. Products being distributed over the Web also have their Web download and installation verified. Priority 2 items include a greater spectrum of installation combinations, boundary checking, advanced test creation, and more in-depth "creative" ad hoc testing. Priority 3 items usually reflect attempts to bring greater organization to the SQA effort in documentation of test scripts, creation of Flashboards for metric tracking, or expanded load testing.

1.5 Checklist for Testing

You should test the following things while performing testing.

1.1.1 Testing a Website
- Interface
- Headers and tags
- Secured pages
- Field validations and boundaries (see the boundary-value sketch after this checklist)
- Mandatory fields
- Links, logos, buttons, background color, text fields and combo boxes
- Dynamic links, objects or buttons
- Page navigation
- Browser compatibility
- How are CGI programs, applets, JavaScript, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
- Performance and stress testing
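The "field validations and boundaries" item above can be exercised with simple boundary-value checks. The sketch below assumes a hypothetical validation rule (user names of 3 to 20 characters) rather than any real site; it tests both sides of each boundary.

```python
# Boundary-value sketch for field validation. The 3-20 character username
# rule is a hypothetical requirement used only for illustration.
import unittest


def is_valid_username(name: str) -> bool:
    """Validation rule under test (assumed): 3 to 20 characters, inclusive."""
    return 3 <= len(name) <= 20


class UsernameBoundaryTests(unittest.TestCase):
    def test_boundaries(self):
        cases = [
            ("ab", False),          # 2 chars: just below the lower boundary
            ("abc", True),          # 3 chars: lower boundary
            ("a" * 20, True),       # 20 chars: upper boundary
            ("a" * 21, False),      # 21 chars: just above the upper boundary
        ]
        for value, expected in cases:
            with self.subTest(value=value):
                self.assertEqual(is_valid_username(value), expected)


if __name__ == "__main__":
    unittest.main()
```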

1.1.2 Testing a Desktop Application
- Installation and un-installation
- Basic functionality
- Interface
- Field validations and boundaries
- Logos, buttons, background color
- Compatibility with different operating systems

1.1.3 Testing an Application
- Interface (if provided)
- Data validation
- Check for handling of improper data (see the error-handling sketch after this checklist)
- Check for performance
- Check for load
- Check for data flow
- Check for database and queue connection fluctuations
- Check stored procedures, initial scripts, table design
- Check the complete functionality
- Check for error handling and error logging
- White box testing must be done
- Integration testing must be done
- End-to-end testing must be done (if required)
- Compatibility with 3rd party applications must be checked
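For the "handling of improper data" and "error handling and error logging" items above, here is a minimal sketch. The parsing function, its rules, and the logger name are assumptions for illustration only.

```python
# Sketch: rejecting improper data and verifying that the error is logged.
import logging
import unittest

logger = logging.getLogger("app")


def parse_amount(raw: str) -> float:
    """Convert user input to a positive amount; log and reject bad data."""
    try:
        value = float(raw)
    except ValueError:
        logger.error("non-numeric amount received: %r", raw)
        raise
    if value <= 0:
        logger.error("non-positive amount received: %r", raw)
        raise ValueError("amount must be positive")
    return value


class ImproperDataTests(unittest.TestCase):
    def test_valid_amount(self):
        self.assertEqual(parse_amount("12.50"), 12.5)

    def test_non_numeric_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_amount("twelve")

    def test_negative_amount_is_rejected_and_logged(self):
        with self.assertLogs("app", level="ERROR"):
            with self.assertRaises(ValueError):
                parse_amount("-3")


if __name__ == "__main__":
    unittest.main()
```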

Testing Methodologies

The following is an overview of the quality practices of the Software Quality Assurance team. The iterative approach to software development presents a significant challenge for SQA. The iterative, rapid deployment process is characterized by a lack of strict adherence to a traditional waterfall development methodology (marketing first specs the feature set, then engineering refines the marketing requests into more detailed specifications and a schedule, then engineering starts building to specification and SQA starts building tests, then a formal testing cycle, and finally product release). As progress is made toward a release, the first priority features are done to a significant level of completion before much progress is made on the second priority features. A similar approach is taken for the second and third priority features. The first priority feature list is all that has to be completed before a product is feature complete, even though there has been time built into the schedule to complete the second priority features as well. Other than the initial OK from the executive team that they want a particular product built, there is not a rigorous set of phases that each feature must pass. Developers (designers, coders, testers, writers, managers) are expected to interact aggressively and exchange ideas and status. By not going heavily into complete specifications, the impact of a better idea along the way need not invalidate a great deal of work. One prototype is worth a pound of specification. However, this does not mean that large-scale changes should not be specified in writing. Often, the effort to do paper-based design is significantly cheaper than investing in a working prototype. The right balance is sought here.

Complementing the strategy of iterative software development, the SQA testing assessment is accomplished through personal interaction between SQA engineers and development engineers. Lead SQA engineers meet with the development team to assess the scope of the project, whether new features for an existing product or the development of a new product. Feature, function, GUI, and cross-tool interaction are defined to the level of known attributes. When development documentation is provided, the understanding of the SQA engineer is greatly enhanced. The lead SQA engineer then meets with the test team to scope the level and complexity of testing required. An estimate of test cases and testing time is arrived at and published, based upon the previous discussions. Working with the development team, the SQA team takes the builds, from the first functioning integration, and works with the features as they mature, to determine their interaction and the level of testing required to validate the functionality throughout the product. The SQA engineers, working with existing test plans and development notations on new functionality, as well as their notes on how new features function, develop significant guidelines for actual test cases and strategies to be employed in the testing. The SQA engineers actively seek the input of the development engineers in the definition and review of these tests.


Testing is composed of intertwined layers of manual ad hoc and structured testing, supplemented by automated regression testing which is enhanced as the product matures.

2.1 Testing Strategies

A strategy outlines what to plan and how to plan it. A test strategy is a vital enabler of the creative process of software development, keeping focus on core values and consistent decision-making to help achieve the desired goals with the best use of resources. A good strategy stands as a clear counter to reactive, counter-productive test approaches. There are various strategies which are followed in testing; here is a brief overview of the most common ones:

1.1.4 Black Box Testing

Black box testing is a testing strategy which is carried out without any specific knowledge of the code or internal logic. In black box testing of a software design, the tester is only required to know the inputs and the outputs. The tester does not need to know how the program is constructed. The tester compares the product with the specifications and ensures that they match. For implementing the black box testing strategy, the tester should have thorough knowledge of the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. The testing types that fall under the black box testing strategy are:

Functional Testing: the software is tested for the functional requirements. The tests are written in order to check whether the application behaves as expected. The user is not required for this type of testing.

Stress Testing: the application is tested against heavy load, such as complex numerical values, a large number of inputs, a large number of queries, etc., which checks the stress/load the application can withstand. The user is not required for this type of testing.

Recovery Testing: recovery testing is done in order to check how fast and how well the application can recover from any type of crash or hardware failure. The type or extent of recovery is specified in the requirement specifications. The user is not required for this type of testing.

Volume Testing: volume testing is done against the efficiency of the application. A huge amount of data is processed through the application (which is being tested) in order to check the extreme limitations of the system. The user is not required for this type of testing.


User Acceptance Testing (also known as UAT): the user plays a major role in it. In this type of testing, the software is handed over to the user in order to find out whether the software meets the user's expectations and works as expected.

Sanity or Smoke Testing: this type of testing is done in order to check whether the application is ready for further major testing and is working properly without failing up to the least expected level. It is performed to check some of the very basic functionality of the build before it is accepted into QA from release engineering. A build that does not pass the smoke test is not brought into QA for further testing. A smoke test is a quick (basic) test to see that the test object is at least in good enough condition to continue testing (or whatever else you intend to do with it). The user is not required for this type of testing.

Load Testing: the application is tested against heavy loads or inputs, such as the testing of web sites, in order to find out at what point the web site/application fails or at what point its performance degrades (a minimal sketch follows this list). The user is not required for this type of testing.

Usability Testing: this testing is also called testing for user-friendliness. It is done when the user interface of the application is an important consideration and needs to be specific to a specific type of user. The user is not required for this type of testing.

Ad-hoc Testing: this type of testing is done without any formal test plan or test case creation. Ad-hoc testing helps in deciding the scope and duration of the various other types of testing, and it also helps testers learn the application prior to starting any other testing. The user is not required for this type of testing.

Exploratory Testing: this testing is similar to ad-hoc testing and is done in order to learn/explore the application. The user is not required for this type of testing.

Alpha Testing: in this type of testing, users are invited to the development center, where they use the application and the developers note every particular input or action carried out by the user. Any type of abnormal behavior of the system is noted and rectified by the developers.

Beta Testing: in this type of testing, the software is distributed as a beta version to users, and users test the application at their sites. As the users explore the software, any exception/defect that occurs is reported to the developers.
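To give one deliberately simple shape to the load and stress ideas above, the sketch below drives an operation with a large number of inputs and checks an overall time budget. The search function, the request volume, and the 5-second budget are all arbitrary assumptions; real load testing normally relies on dedicated tools and realistic traffic profiles.

```python
# A deliberately simple load-test sketch: push many inputs through the
# operation under test and check a coarse performance budget.
import time


def search(catalogue: list, term: str) -> list:
    """Hypothetical operation under load."""
    return [item for item in catalogue if term in item]


def run_load_test(requests: int = 5_000, budget_seconds: float = 5.0) -> None:
    catalogue = [f"product-{i}" for i in range(1_000)]
    start = time.perf_counter()
    for i in range(requests):
        search(catalogue, f"product-{i % 1_000}")
    elapsed = time.perf_counter() - start
    print(f"{requests} requests in {elapsed:.2f}s "
          f"({requests / elapsed:.0f} req/s)")
    assert elapsed < budget_seconds, "performance degraded beyond the budget"


if __name__ == "__main__":
    run_load_test()
```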

1.1.4.1 Advantages of Black Box Testing
- The test is unbiased because the designer and the tester are independent of each other.
- The tester does not need to acquire knowledge of any specific programming languages.
- The test is done from the point of view of the user, not the designer.
- Test cases can be designed as soon as the specifications are complete.

1.1.4.2 Disadvantages of Black Box Testing
- The test can be redundant if the software designer has already run a test case.
- The test cases are difficult to design.
- Testing every possible input stream is unrealistic because it would take an inordinate amount of time; therefore, many program paths will go untested.

1.1.5 White Box Testing

The white box testing strategy deals with the internal logic and structure of the code. It is also called glass box, structural, open box, or clear box testing. It refers to testing the smallest unit of an application. It covers the code, branches, paths, statements, and internal logic of the code. The tester is required to possess knowledge of coding and logic, i.e. the internal working of the code. Bugs left undetected at this basic level may spawn additional bugs, as bugs build upon and interact with one another; this may lead to an increase in the number of bugs and in the effort of fixing them. Hence white-box testing enables detection of errors at the stage where it is easiest and most cost-effective to find and fix them. White-box testing not only verifies that a class behaves properly when appropriate input is given but also validates that unexpected inputs to a class will not cause the program to crash. The testing types that fall under the white box testing strategy are:

Unit Testing: the developer carries out unit testing in order to check whether a particular module or unit of code is working correctly. Unit testing comes at the very basic level, as it is carried out as and when a unit of the code is developed or a particular functionality is built. Typically it is done by the programmer and not by testers, as it requires detailed knowledge of internal program design and code. The goal here is to test the internal logic of the modules. Different approaches based on the structure of the test code are:
- Statement testing
- Conditional loops
- Branch/decision testing
- Branch condition testing
- Branch condition combination testing


Static and Dynamic Analysis: static analysis involves going through the code in order to find any possible defects in it. Dynamic analysis involves executing the code and analyzing the output.

Statement Coverage: in this type of testing the code is executed in such a manner that every statement of the application is executed at least once. It helps in assuring that all the statements execute without any side effect.

Branch Coverage: no software application can be written in a continuous mode of coding; at some point the code needs to branch in order to perform a particular functionality. Branch coverage testing helps in validating all the branches in the code and making sure that no branch leads to abnormal behavior of the application. 100% branch coverage automatically ensures 100% statement coverage (illustrated in the sketch below).

Security Testing: security testing is carried out in order to find out how well the system can protect itself from unauthorized access, hacking, cracking, any code damage, etc. that deals with the code of the application. This type of testing needs sophisticated testing techniques.

Mutation Testing: a kind of testing in which small changes (mutations) are deliberately introduced into the code to check whether the existing tests detect them. It helps in finding out how effective the test cases are and which coding strategy helps in developing the functionality effectively.
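A minimal sketch of the statement versus branch coverage distinction, using an invented discount function: running only the first test already executes every statement (100% statement coverage), yet it never exercises the "no discount" outcome; both tests are needed for full branch coverage. Tools such as coverage.py can report these percentages.

```python
# Statement vs. branch coverage sketch. apply_discount is hypothetical.
import unittest


def apply_discount(price: float, is_member: bool) -> float:
    if is_member:            # branch point: True and False both need a test
        price = price * 0.9
    return price


class DiscountBranchTests(unittest.TestCase):
    def test_member_discount(self):               # covers the True branch,
        self.assertAlmostEqual(apply_discount(100.0, True), 90.0)   # and every statement

    def test_non_member_pays_full_price(self):    # covers the False branch
        self.assertAlmostEqual(apply_discount(100.0, False), 100.0)


if __name__ == "__main__":
    unittest.main()
```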

1.1.5.1 Advantages of White Box Testing
- Helps in optimizing the code.
- As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
- It helps in removing the extra lines of code, which can bring in hidden defects.
- Reduction in the amount of debugging.
- Improvement in the quality of the released software.

1.1.5.2 Disadvantages of White Box Testing
- As knowledge of the code and internal structure is a prerequisite, a skilled tester is needed to carry out this type of testing, which increases the cost.
- It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.


Documentation and Communication Skills

Effective communication is a core competency for every part of every business. Success in sales and service depends upon it. Managers who are leaders should be competent in it. Teams, which are the basis of an organization, are formed through it. Effective communication is a life skill upon which sound relationships are established. Here is a brief overview of some common skills which every tester should possess:

Effective Questioning Skills: a tester should be good at questioning. He or she should be able to:
- Gather relevant data
- Understand the problem's context
- Identify issues that we need to solve as testers
- Discover and resolve hidden issues/problems
- Identify gaps and risks
- Build consensus amongst relevant stakeholders (developers and project managers)

While questioning, you should make sure that the questions are organized, clear, and concise. Questions should be relevant to the specific topic and to the interviewee's role in the issue. Make sure to ask one question at a time. A tester should:
- Prepare a list of key contact points, for example the test lead, development lead, PM, and client contact point.
- Try to clarify issues and queries through email. If required, follow this up with a meeting or conference call.
- Before scheduling a meeting or conference call, get all the questions together. Ideally this should be the list of questions mailed earlier.
- Become familiar with any acronyms or business terminology encountered in the documentation or application.
- If a question is not answered, get pointers on who has information about it.

Preparing Good Test Reports: while preparing a test report, a tester should:
- Understand the expected results.
- Log failures with the actual results described concisely, correctly, and completely.
- Provide all relevant information regarding the defect so that effective future resolution is possible without referring back to the tester who raised it.
- Prepare the test report.
- Prepare the associated metrics.
- Communicate reports in a timely manner.


Communicating with Developers: while communicating with developers, a tester should keep the following things in mind:
- Read the requirements well and understand them before raising bugs.
- Report bugs and have the developers see them before you approach them.
- Try to email them your queries instead of interrupting them every now and then. Prepare your list so that it can be sent at the end of the day or the end of the week.
- Avoid emailing bugs to developers; report every finding in the bug reporting tool.

Handling Telephone Calls and Conference Calls: testers should take note of the following while handling calls:
- Be specific.
- Speak slowly and clearly. Avoid artificial accents.
- Be sensitive to cultural issues.
- If something is unclear, seek immediate clarification.
- Present the agenda in brief. Do not digress. Avoid cross conversations.
- Always create minutes of the meeting (MOM) after a conference call to ensure all the participants have the same understanding. Specify action items, if any, and the person assigned to each.

Escalating Issues: a tester should escalate an issue:
- When, after repeated emails/phone calls, your questions have not been answered.
- When you need to get immediate attention to something.
- When something is taking too long and this in turn is hampering work.

When in doubt, the best way to handle an escalation is to consult your test lead. He or she will offer the best advice on when an issue needs escalation. The job of a tester is to be patient in listening, careful in questioning, thoughtful in evaluating, and precise in checking and extending responses (docs, prototypes, etc.).


3.1 Documentation

Documents serve as useful artifacts. By preparing accurate documents, a tester establishes a reliable source of information. If, however, the document is sloppy or incomplete, chances are the tester will have to spend more time clarifying the doubts of others. Good writing skills are mandatory, for example for writing test cases and error reports. Writing understandable and repeatable test scripts, which also contain information about the idea and intention of the test, the expected results, and the technical background, isn't easy. There is also a fine line between test scripts that are too detailed, which are difficult to maintain, and ones that are too superficial. Follow a simple process for writing effective documents:
- Understand: be clear about the idea and concept.
- Decide: take prompt decisions.
- Analyze: analyze the situation clearly.
- Write: write down the important details. Try to be correct, accurate, clear, and concise.
- Review: give a thorough inspection to what you have written. Make use of available tools such as spell check, grammar review, reference books, etc.
- Submit: finally, submit the document to the relevant person involved.

Unit Testing: unit testing is testing the smallest executable part of a system to determine whether it is fit for use. In procedural programming languages a unit is an individual function or procedure. It is normally performed by programmers. By performing unit tests we can find problems early in the development cycle.

Acceptance Testing: a type of black box testing performed on a system prior to its delivery. A smoke test is used as an acceptance test prior to introducing a build to the main testing process.

Integration Testing: here individual modules of a system are combined and tested as a group. Integration comes after unit testing and before system testing.


It takes modules which have been unit tested as input, combines them, performs integration testing by executing test cases, and delivers as output an integrated system which is ready for system testing.

System Testing: system testing of hardware or software is testing conducted on the system as a whole to verify its complete behavior. It covers the functional, behavioral, and performance aspects of the system.

Smoke Testing: a smoke test refers to the first test made after repairs or assembly of a system, to provide some assurance to the people who will test it further. It is a subset of test cases which cover the main functionality of the system. A smoke test performed on a particular build is called a "build verification test" (see the sketch at the end of this section).

Sanity Testing: a build acceptance test; before accepting a build, some basic functionality is exercised on it, such as the login functionality and navigation flow. Only after the sanity test passes successfully are further test cases executed on that build.

Compatibility Testing: this test is useful to verify the system's compatibility with its computing environment. The computing environment includes the operating system, web server, browser, hardware peripherals, etc.

Usability Testing: a technique used to evaluate a product by testing it on users. Usability testing involves testing how well users can use the system under specific conditions.
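Tying together the smoke, sanity, and build verification definitions above, the sketch below shows a tiny "confidence" suite that only exercises the most basic functions of a build before deeper testing begins. The `login` and `home_page_loads` stubs stand in for a hypothetical application and are assumptions for illustration only.

```python
# A tiny smoke / build verification sketch: a handful of fast checks on the
# most basic functionality, run before a build is accepted for deeper testing.
import unittest


def login(user: str, password: str) -> bool:
    """Stub for the build's login function."""
    return user == "demo" and password == "demo"


def home_page_loads() -> str:
    """Stub for fetching the landing page of the build."""
    return "<html><title>Home</title></html>"


class SmokeSuite(unittest.TestCase):
    """Build verification: if any of these fail, reject the build for QA."""

    def test_login_with_known_good_account(self):
        self.assertTrue(login("demo", "demo"))

    def test_home_page_is_served(self):
        self.assertIn("<title>Home</title>", home_page_loads())


if __name__ == "__main__":
    # Run only the smoke suite; deeper regression suites come later.
    unittest.main(defaultTest="SmokeSuite", verbosity=2)
```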

