
Testing Circus

Vol 2 - Issue 2

February 2011
Common Sense Testing Methodology
Practical Steps to Increase Your Testing Process
Don't worry if you don't have a Defect database yet
A Fake Tester's Diary
Software Performance Testing
Has Your Product Passed Security Tests?
QTP Code Corner
Test Case Writing Practice
NCR Testers Monthly Meet
Bug deBug Testing Conference
Interview with Trish Khoo
YOUR MONTHLY MAGAZINE ON SOFTWARE TESTING

www.TestingCircus.com


Help Chandrasekhar B N, a Software Tester, beat Acute Lymphoblastic Leukemia.
Chandrasekhar B N is a 26-year-old Software Tester working in Bangalore. He was diagnosed with blood cancer (Acute Lymphoblastic Leukemia, Ph+ve) in October. We urge all readers of Testing Circus to donate generously for this purpose. No donation is small. You can donate online with your VISA/MasterCard (donors from outside India can also do it online): https://donations.cpaaindia.org/

You can also write a cheque or send a demand draft in the name of "CANCER PATIENTS AID ASSOCIATION" and mail it to:

Dr. Shubha Maudgal, Executive Director
Cancer Patients Aid Association
Smt. Panadevi Dalmia Cancer Management Centre
King George V Memorial, Dr. E. Moses Road, Mahalakshmi, Mumbai - 400 011
Tel: +91 22 2492 4000 / 2492 8775, Fax: +91 22 2497 3599

Please Note: Write Chandrasekhar B N on the back of the cheque.

http://helpchandru.com/


Where is What?
Editorial (Ajoy Kumar Singha) - Page 4
Letters to the Editor - Page 5
Common Sense Testing Methodology (Gil Bloom) - Page 6
Practical Steps to Increase Your Testing Process (Namratha Prabhu) - Page 8
Don't have a Defect database yet? (Rob van Steenbergen) - Page 13
A Fake Tester's Diary (A Fake Tester) - Page 16
Software Performance Testing - Page 19
In Lighter Moods - Page 27
Has Your Product Passed Security Tests? Part II (Santhosh Tuppad) - Page 28
QTP Code Corner - Page 32
Software Testers @ Twitter - Page 33
Test Case Writing Practice - Page 35
Software Testing News - Page 37
Interview with Trish Khoo - Page 40
Tool Watch - BB TestAssistant - Page 42

Other contributors credited in this issue: Naresh Bisht, Jaijeet Pandey.


From the Keyboard of Editor


There is a popular punch line: "Shut off Google and developers will not be able to write code." Developers do not commit a crime because they search for code in Google. They simply don't want to reinvent the wheel. Developers, system administrators and other IT professionals share their knowledge online, and many developers get help from these sites, blogs and forums to write their code or to make their code better.

Do we have a similar system for testing? How many of us write blogs and share our knowledge with the online world? I am not saying there are no tester bloggers or good forums from which we can get some knowledge on testing. But there aren't enough of them. Can I do a Google search and get a solution for a problem I am facing in my testing project? Probably the answer is no.

There are testers who blog. Some teach testing. A few run workshops on testing. We need more of them. Testers that I have seen over the last 6-7 years working in organizations follow a career graph which somehow resembles this: take some training from an institute where the teachers do not have a testing job themselves (they have never tested a real product), get hired as a trainee, work (execute pre-written test cases, write some test cases, report some bugs), get promoted to senior tester, get into a management position, and finally become someone who is not into testing anymore. In between, there will be job hopping and onsite travel. This seems to be a pre-scripted career path.

How many of us are ready to teach testing to somebody who wants to learn testing? Nobody can become a good tester by reading the definition-based books readily available in the market today. What we need are more testing teachers who are experienced in testing real projects, more testing bloggers and more good testing forums where real knowledge is shared and discussed. Testing professionals should come forward and spare some time (weekends?) to teach testing to interested folks. I would also request students and enthusiasts who are reading this, and those who want to move into testing, to learn testing from somebody who is already working in the software testing field. Do not learn from crap teachers who just teach definitions and wrong concepts, and who have actually never worked on a testing project.

Let us build a testing universe where there is ample scope for people to share and learn testing. Enjoy the 5th issue of Testing Circus. Jai Ho Testing!

- Ajoy Kumar Singha
http://twitter.com/ajoysingha


Letters to the Editor


Hello, Thanks for starting this magazine for testers; I found it very helpful. But it would be really nice if you could give us more examples of test cases, as that is very helpful for testers who are new to software testing. Thanks once again for the magazine. I would also like to take an active part in this magazine. Ujash Kadakia

Hi, I have gone through the Jan 2011 issue. Seriously, its a type of New Year big bang for us. I like the part "Importance of communication in testing". However, still I am having a query regarding the same. Query: How would we formally communicate to PM/Lead/Client when a bug is missed out at a tester's end? Please try to add some mail communication skills if possible in the magazine. Agrta Kansal

Hi, I am currently working as Test Lead in TCS, Chennai. I would like to represent TCS in Testing Circus. Mahesh

Hello All, I am a fellow tester and a new member to this forum. I found all the articles/writeup in the magazine interesting & informative; they tend to help me analyse the gaps in the existing process. Wish you post many more amazing articles and keep up the good work. Ganesh

Follow Testing Circus at Twitter http://twitter.com/testingcircus

Hi Testing Circus Team, My name is Bhawin Joshi. I am working as an Associate Test Engineer in SATVIK Inc. I am regular follower of Testing Circus Magazine. Great Work Guys..!!! Three Cheers to you..!!! Bhawin Joshi

Testing Circus
Write to editor@testingcircus.com

Common Sense Testing Methodology


By Gil Bloom
"Common sense is not so common." - Voltaire

We often receive requests from our customers, most of whom are professional testers, to work with them and show them how to use Testuff with a specific testing methodology. Usually, after a short dialog, they find the best way for them to use Testuff. We're quite proud that Testuff does indeed fit many different methodologies. However, the question I always wonder about is: why do most people (i.e. testers) think that somewhere, over the rainbow, there's a magic methodology which flies?

In my mind, it is almost obvious that no methodology can replace common sense. Keeping the testing process efficient, simple and manageable is sometimes more important than finding and adopting a methodology which doesn't necessarily fit the company, its processes, people and attitude. This post is not intended to be philosophical and does not attempt to create yet another testing methodology. The intention is simply to describe my common sense testing approach for those who want a simple 5-step guideline.

5 Steps to follow

The first two steps involve some preparation:

1. Get your requirements! No matter if you are a one-man band or a global multinational organization, if you don't have any requirements, you can't test. You can even have "The site should not crash, and it should be accessible using FF and IE" as your only requirement. But if you have nothing, you may be able to do great development work, but not when it comes to testing.

2. Write a simple test script for each requirement. The test script is an instruction on how to ensure the requirement is fulfilled. You can write it in simple language: "Check that all links work", "Check that all links have a proper title when the mouse hovers over them". You could also use any step/expected-result format. Just write something easy to understand. I don't have any particular advice on how to organize your test scripts. Some prefer to group them by requirement (each test under its requirement), some by product area, some by type (GUI, DB, etc). I prefer to put them in a place where I can find them.
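A script as simple as "Check that all links work" can be run by hand, and later automated if you like. As a rough illustration (not part of Gil's article), a minimal Python sketch using the requests and beautifulsoup4 packages might look like this - the page URL is a placeholder:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def check_links(page_url):
        """Fetch a page and report links that do not answer with a 2xx/3xx status."""
        html = requests.get(page_url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(page_url, anchor["href"])
            try:
                status = requests.head(link, allow_redirects=True, timeout=10).status_code
            except requests.RequestException as exc:
                print(f"BROKEN  {link}  ({exc})")
                continue
            print(("FAIL" if status >= 400 else "OK  ") + f"    {link}  (HTTP {status})")

    if __name__ == "__main__":
        check_links("http://example.com/")  # hypothetical page under test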


The third step is a mental step, just to bring you into a testing mode and make sure it is done correctly. Think zen.

3. Freeze the test environment. No matter if you have separate environments for development, testing, release and production, or if you use only one environment for everything (which is probably a bad idea), the golden rule is to freeze everything before and during testing. If someone changes your test environment whilst you are testing, all your effort goes down the drain and you might as well not do any testing at all.

Now for the actual testing process. There's a little twist that will make your testing better on every cycle:

4. Start testing (and improve your scripts). Just run the application you are testing and follow the test scripts you wrote in step 2. If the application fails on a particular step of the test script, simply open a defect to be fixed later by the development team. If you test something that was not specified in the script and find a defect, then (this is the continuous improvement part) add it to the script and fail the test. If you find the script irrelevant or not up to date, update it. This sub-step even enables you to start with a very basic test script, even an empty one, and fill it in as you progress with testing.

That's it. All done. Another step, to be carried out outside of the testing cycle, which will help you get better coverage of your application, is:

5. Prepare for the next testing cycle. Whilst running the application in production, you will no doubt find some defects. Don't worry! Everybody does. The key, however, is to avoid these defects in the next release of your application. You can make sure you improve by updating the relevant scripts for the processes in which the defect was found in the first place. This way, the next time you test the application, you make sure to check that the defect was fixed. It also serves well for sanity/regression testing purposes.

That's all. The testing world has some great methodologies, and I'm sure you can improve this common sense five-step process. The main reason for writing this post was to encourage you to start testing your application! Instead of hiding behind "it's too complex", "I don't know how to", or "we need an expert to drive our strategy" excuses, simply jump in and get started. This is what Testuff is all about.

Gil Bloom is a long-time business development expert with years of experience in the software development industry. He has been a co-founder of Testuff, a test management SaaS solution, since 2007, with a worldwide customer base and great success in the software testing industry. Gil can be reached at http://twitter.com/testuff


Practical Steps to Increase Your Testing Process

By Namratha Prabhu

A tester's life is hard. Whether or not your manager or your development counterparts agree, it remains a fact that a tester's life is hard. You are told to test everything and certify that everything is working, but the deadline set for it is a joke.

You are running test cases and/or test scripts that were mostly written by someone else, and the scripts were most probably written for an older version of the product. And most often, we have a developer who says "Hey, that works on my system!" We have all been through this, haven't we? Can we do something to ease our work life? The answer is YES :) - else I would not have written this article, eh? ;) So go ahead and check out these pointers.

1. Do you share a centralized repository of project requirements accessible to testers, developers and managers alike?
2. Do you have a centralized test case database and test script repository?
3. Does the entire team (including developers) have access to the test cases?
4. Do you have a schedule?
5. Do you have a centralized bug tracker?
6. Do you write clear, concise test reports with enough proof (screenshots, input data files, etc)?
7. Do you communicate regularly with the team members?

If you answered YES to all the above questions, you are probably better off than most testers. But for those of you who answered NO to any or all of these, please read on.

1. You don't have access to a centralized location for requirement documents?
You should have a centralized location where everyone in the team can access the requirement documents, ideally captured in a wiki. This document should always be up to date with all the new requirements/enhancements. Circulating the documents by email or


shared file is risky, since there is a chance that someone is missed and ends up working with an outdated spec. The requirement document helps you empathize with the customer and understand his requirements. Study the requirement document - you should know what you are going to test before you test it. You should know how the system should work for the user, NOT how the developer or manager thinks it should work. If you find any difficulty understanding the documents or have any doubts, discuss it with the people who have done it before. Talk to the customer directly too, if possible, or to someone who has direct access to the customer. Having a set of written requirements also helps you identify non-functional requirements and arrive at suitable test cases and/or scripts.

2. You don't have a centralized test case database and test script repository?
Usually multiple testers work on a product and hundreds of test cases are written. Hence it is necessary to have a centralized test case database where all the test cases can reside. This way, everyone has access to the test cases. As the number of test cases grows, it also becomes easier to search and/or edit the existing ones.

Have a centralized repository for the test scripts. Instead of a tester saving her test scripts on her local drive, it is better to check them in to a repository. This provides the team with a pool of scripts which may be reused in whole or in part in another module, saving time and effort. As more versions/builds are released, new test scripts will need to be designed; not all the test cases and scripts that were valid for an earlier version may hold valid for the new one. With a number of test scripts being edited and added regularly, it would be a pain to maintain them individually. Check the scripts in to a repository and let it handle the versioning for you. Since the repository records who wrote a test case or checked in a script, it is easier for team members to approach that person with doubts or clarifications. Make it a habit to cover every newly found bug or scenario with a test case or script.

3. Your entire team (including developers) does not have access to the test cases?
You should give the entire team access to the test cases. Accept that a tester is not always able to come up with ALL the scenarios for a particular AUT (application under test) - after all, a tester is human too. Share the test cases with the team and ask them to review the test cases and let you know if they think some scenarios have been missed, or if they want a particular flow tested. A member of the team working directly with the customer will have a better idea of how the system is used in production and what scenarios he/she came across when working with the customer.


The testers need to remember that other team members coming up with test cases does not mean they think the tester is incompetent, and the other team members need to realize that testing is not the tester's job alone. All of us need to work as a team, bring out a quality product and make the customer happy. But always remember: keep developers away from the test environment. Never allow them to come to your machine and make changes to the code. You should only bring them to your machine to demonstrate a bug, and for nothing else.

4. You don't have a schedule?
I'm not talking about the one your manager set for you. "We have to release this next Wednesday" is not a schedule. Have your own schedule - plan things yourself. Only you know how many test cases you can run in one hour, and only you know how much time it takes you to log a defect. So why not do it yourself? For starters, you can work with a rough estimate. You don't need any complex calendar program for this - you can do it in your favorite text file or spreadsheet application (a sample layout is shown below).
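An illustrative layout along those lines (the tasks and figures below are invented, purely to show the idea):

    Task                                   Estimated (hrs)   Actual (hrs)
    Review new requirements                2
    Write/update test cases - Module A     6
    Execute test cases - Module A          8
    Log and verify defects                 4
    Regression run before release          6
    Buffer for surprises                   4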

Update the actual time taken accordingly. In due course, you will know how your time is utilized. Keep the following things in mind when you work out a schedule:

- Holidays, leave and vacations: plan for them accordingly.
- Take regression testing and defect verification time into account.
- If there is a review process in your organization, allot time for that and for the subsequent edits to your files too.
- It is good to keep a bit of buffer wherever you deem necessary, because unforeseen events keep happening all the time: management comes up with a feature that HAS to go into this release no matter what and you end up with an extra feature to test at the end of the day, or maybe you or your teammate suddenly falls ill.
- If your manager or lead tells you that you are taking too long and your estimates are too big, well, add another column, capture your manager's estimates as well, and get back to your work.


5. You don't have a centralized bug tracker?
This is one mandatory thing you should have in your organization. If there isn't one, install one of the free and open source bug trackers available (Bugzilla is my favorite) and start using it. There is no way you are going to track bugs in emails and Excel sheets. You could manage with spreadsheets in a team of two or three, but as time goes on the number of people in the team will increase, and so will the number of bugs. With a bug tracker, a bug's entire lifecycle (and the comments of those working on it) is clearly visible and available to all team members.

6. You don't write clear, concise test reports with enough proof (screenshots, input data files, etc)?
It is not enough to have a great bug tracker if it is filled with bugs that don't make sense. More often than not, a lot of time is wasted trying to understand what a bug is about because not enough detail is entered in the bug report. That another person understands or interprets it wrongly is a different matter altogether and cannot be helped. Leaving that aside, a bug report should be written so that a person using the system for the first time is able to reproduce the bug. A good bug report has to have the following three things:

- Steps to reproduce: the smallest number of steps required to reproduce the bug, understandable by a newbie.
- Expected behavior: what the system should have done on performing the above steps.
- Actual behavior: what the system actually did on performing the above steps.

That said, there are a lot of other things that need to be captured in a bug report: OS type and version, software (AUT) version/build number, date of test, underlying software/hardware versions (RAM, browser, printer, etc, as applicable), and so on. Sometimes it is beneficial to take screenshots or videos of the bug as proof and attach them to the report. These things are necessary for the developer or a newcomer to try and reproduce the bug.

7. You don't communicate regularly with the team members?
Although emails, test databases and bug reports exist, they are not a replacement for regular face-to-face interaction between team members. As much as possible, communicate the small bugs (UI bugs, spelling mistakes, formatting) to the developers directly and close them. Always remember, developers hate bugs as much as we hate regression issues; inflating the bug count is never the solution. Participate actively in team discussions, even if the subject is too technical or developer-oriented. More often than not you will walk out of these discussions wiser and brimming with new test ideas and scenarios.
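To make point 6 concrete, here is an invented example of a bug report that carries everything a newcomer needs to reproduce the problem:

    Summary:       Login fails with valid credentials after password reset
    Environment:   Build 2.4.1, Windows XP SP3, Firefox 3.6
    Steps to reproduce:
      1. Reset the password for an existing user from the admin page.
      2. Log out and open the login page.
      3. Enter the user name and the new password, then click Login.
    Expected behavior: The user is logged in and taken to the dashboard.
    Actual behavior:   An "Invalid credentials" error is shown.
    Attachment:        login-error.png (screenshot of the error)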


A few miscellaneous things to keep in mind:

- If the product you are testing is input-driven, always test the product with live data from the customer, as early in the process as possible. Take a dump of the database or take input values from a live system and test accordingly. This will not guarantee that nothing will go wrong in production, but at least you will have tried your best.
- Always automate as much as possible those mundane tasks you need to do at every regression. It is better to have a single-click test script which will run all other test scripts without human intervention. A log file generated during the run should give you the results of the script (a minimal sketch of such a runner appears at the end of this article).
- Have a continuous integration server (like Hudson or CruiseControl) which runs on every check-in or every hour. Schedule unit tests and automated tests to run on it automatically so that the team does not waste time on them. Have the results emailed every morning to the team members; it is up to them to then check the results and fix the broken code. This way no one can break the code and blame someone else for making wrong check-ins, because everything is there in the system for everyone to see. It also gives a general idea of the daily progress of development and testing.
- If you need to simulate multiple machines, use virtual machines (VM images).
- Use spoon.net/browsers to test your application on multiple browsers without having to install them separately.
- There are loads of testing tools available - scripting, record and play, user simulation, etc. Use them as suited to your needs.
- Always log serious bugs first. Smaller bugs (UI issues, spelling mistakes, formatting and so on), though important, can wait until later. The latter are mostly closed by talking directly to the developers. If that is not possible at a given time, make a note of the low-priority bugs and log them under a single bug. Always remember that the quality of bugs is more important than the quantity: one bug with all the small issues is better than 50 different bugs for spelling mistakes and formatting issues. There are organizations where testers and developers are rated purely on the number of bugs found/created, indirectly encouraging test teams to inflate the bug count. In the longer run, this approach to appraisal will prove counterproductive.

Hope this helps you :)

Namratha Prabhu has over 5 years' experience in product testing. She is currently working at IBM India Software Labs, Bangalore. Namratha is passionate about books and loves music and dance. Namratha can be reached at http://twitter.com/namrathabaliga
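For the "single click" regression runner mentioned in the automation tip above, a minimal Python sketch might look like the following. It is illustrative only and assumes your automated checks are standalone Python scripts sitting in a scripts/ folder; adapt the command to whatever your scripts actually are:

    import logging
    import subprocess
    import sys
    from pathlib import Path

    logging.basicConfig(filename="regression_run.log", level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")

    def run_all(script_dir="scripts"):
        """Run every automated test script in script_dir and log pass/fail."""
        for script in sorted(Path(script_dir).glob("*.py")):
            result = subprocess.run([sys.executable, str(script)],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                logging.info("PASS %s", script.name)
            else:
                logging.error("FAIL %s\n%s", script.name, result.stderr)

    if __name__ == "__main__":
        run_all()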


Don't have a Defect database yet?


Practical ways to promote testing in your organization.

By Rob van Steenbergen

I wrote about ideas and tips for promoting software testing in an organization in the last issue of Testing Circus. One of these was: "Don't have a bugs database yet? Create it yourself; start with an Excel sheet if necessary. A central list of product issues convinces many people of the usefulness of such a tool."

Here, I will explain that point a little further. If you don't have a bug database at your company yet, make it yourself. Start a bug database yourself and, if tooling is not available yet, use an Excel sheet in a central location on your network. A centralized list of issues will help other people see the importance of one central list compared to everyone keeping his own. Everyone has a list of problems and issues in his mind, but the complete picture of all the software problems isn't there yet.

Benefits of an issue/bug database

To raise awareness, give a presentation on the benefits of a bug database (or a central list of issues):

- It is an easy way to keep track of known bugs within the project.
- Everyone has the same view of the issues in a product.
- By assigning defects to people (action holders), it creates a to-do list for everyone.
- From the bugs database you can generate input for readme files or release notes for new software updates.
- It is an analysis tool for testers and developers to find out where the most problems are found - which is the most vulnerable part of the software.


Creating an Excel sheet

Use references in your documentation to defect numbers #0002, #0054, etc from your bug list; other people in your company will quickly become aware of the central bug list. Use MS Excel to start with - or are you good with MS Access? Whether you use MS Excel or an MS Access database, think about this: keep it simple to use!

Keep it simple to use, but at least create the following fields:

- Short description: one clear sentence describing the problem (not the cause)
- Finding number: you need unique numbers (so you can refer to the defects)
- Status: open, in progress, fixed, test, solved, closed, etc
- Priority: high, middle or low? Or 1 (high) to 5 (low)?
- Action holder: who needs to do something now
- Date the defect was found
- Resolved date

Ensure that there is a weekly defect analysis meeting with the project manager, developer, designer and software tester. In this meeting you discuss the open issues, their status, and the priority of every issue.

Be the defect manager in your organization

Don't be afraid to play the role of defect manager. Protect the list and keep it updated and realistic.

- If someone has found an issue, ask if he can send you an email with an explanation.
- Add this defect to your issue list and mail the person back that it can be found there, and that it will be discussed in the next issue meeting.
- Do not combine different issues into one. If several issues appear to arise from one technical cause (as often seems to be the case), it does not mean that solving that technical cause automatically resolves every problem we thought was related to it.
- Engage your colleagues in the issue meetings and discuss issues with them. When an issue database is used for the first time, and when a defect manager speaks about issues and solving them for the first time, some people will not like it - try to stay positive about it.
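The field list above maps directly onto column headings. One illustrative first row (the data is invented):

    Finding #  Short description                       Status       Priority  Action holder  Found       Resolved
    #0001      Save button stays disabled after edit   In progress  High      Developer X    2011-02-01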


Check and discuss issues at the beginning of the project or iteration with the project leader or a change manager. Whoever has the authority and the confidence of the team should make sure that people act upon the issues, analyzing and resolving them. Be consistent, and keep watching and guarding the issues every day.

Beyond Excel You can go really far with an issue reporting tool. Be careful not to make your Excel or MS Access a fully grown application. There are much better defect and issue reporting tools available for free today. If you get to the point that everyone in your team is using the issue list and follows the issue process, it is time to search for a good defect reporting tool for your organization. For implementing the tools that are available on the Internet you'll need some help from some technical colleagues. Maybe you can ask a programmer if he can search for and analyze some tools and compare them and let you know what they recommend. Free tools still cost a lot of money if they do not fit in the organization. When looking for these tools, check for example: http://www.software-pointers.com/en-defecttrackingtools.html . Here you will find many free tools. Bugzilla for instance is free and is widely used in the world.

Rob van Steenbergen is an independent software test consultant from The Netherlands. In the last 4 years he has been involved in infrastructure projects and is working now on a desktop virtualization project where he is testing and coordinating the tests. For more information visit www.chickenwings.nl Rob can be reached at
http://twitter.com/rvansteenbergen

Remember, it is your team that should be ready for the tool, not you alone. First find out what people think rather than what you would like to do.

In the next issue Rob will write more on how to promote testing in an organization. Keep reading Testing Circus. - Editor


A Fake Tester's Diary


By Fake Software Tester

[A Fake Tester's Diary was first published in the January issue of Testing Circus. New readers are encouraged to read the January issue to understand the journey of Tanash as a software tester. - Editor]

Tanash's 1st month at work.

The Induction
Following his offer from last month, Tanash walked in to work on a sunny Monday morning. The receptionists redirected him to the induction rooms, wherein he had to fill in 1048 forms to enroll himself into Ele Info Systems. Being smart, he quickly realized that he'd complete it far quicker if he filled in only the mandated fields. He was unable to understand why a technology company has to rely on hard-copy forms. Wouldn't it be easier for them to have a soft copy printed with the information and his signature at the bottom? Thinking he was being innovative, Tanash gave that suggestion, only to be silenced by the HR representative. The reply he got was "Government Norms!!!"

Before he knew it, his induction formalities were completed and he was assigned an email address and a team. Though the company had advertised that selected candidates would work on cutting-edge and out-of-this-world technologies, he was a tad disappointed since nobody had actually asked him about his aspirations before he was assigned to a team. Tanash felt like a fish in a fish market. Can a fish choose its buyer? Tanash's mind echoed the same sentiment!!! He was taken to meet his team by the recruiters!!!

His Manager
Tanash's manager was named Delspe (meaning Delegation Specialist)!!! Delspe loved delegation. Some company-sponsored management training program had taught him that effective managers do effective delegation, and ever since, Delspe had gotten into the mode of delegating everything. Well, not everything, of course. Most of his Sent Items had the words FYI or FYA. He typed those words himself!!!


Now, Delspe had delegated Shyam, the test lead, to meet Tanash and put him on a 2-week training program along with others before he started to test. Tanash asked Shyam whether 2 weeks is sufficient for a person to pick up effective testing skills. Shyam replied, "Nothing's impossible. If you have the will, and the skill, you can do it in a day!!!" Tanash was also told that he would be testing a client's customer-facing website after the training program.

Training - Week 1
The training lasted for 2 weeks. During his first week in training, Tanash came across a training room that said that they train leaders. Tanash started wondering if leaders require training. A peep into his thoughts --- Can anyone train a person to become a leader? Aren't leaders those people who took the path not travelled and became leaders?!!! Tanash was also surprised when he found out that the company even had a test to certify that a person is a leader.

The 1st week's training was on testing concepts. The company used to outsource training to an external company earlier, but a brilliant mind in the company had come up with the concept of doing the training in-house, to save costs. The same brilliant mind also came up with the idea of asking existing employees to train people, and included this in their objectives. Another peep into Tanash's thoughts --- A trainer needs to be coached on how to train others. Teaching cannot be done by all and sundry. It needs practice, experience and patience. All my schools and colleges had a lot of experienced teachers, who had a plethora of knowledge on the subjects that they taught. What do these kids know?

Though the training syllabus looked awesome, the training was conducted by existing employees. These employees, it seemed to Tanash, mostly did Google searches and pulled up information off the internet, indulging in unabashed plagiarism without sparing a thought for the true creators of the documents, and passed it off as their own. Most of it did not make practical sense. Another example was the session on security testing: most of the content seemed borrowed from the OWASP Top 10 list on the internet, and the trainer was unable to answer any question that required actual thought to answer.

Training - Week 2
The 2nd week was on QTP. Tanash was silenced by his teachers when he questioned the usage of QTP. He never disliked QTP, but wanted to think about whether there are any

methodologies outside QTP for creating test automation. Though they told him to practice lateral thinking and think out of the box, they largely trained him to work with QTP when working on test automation. He realized that the company had invested in QTP and wanted him to learn the tool.

The seniors
Tanash had dreamed about having the senior test engineers sit with him and do testing alongside him, write test cases with him and guide him. He wanted them to sit with him and explain the customer's business, the customer's business context, share with him the success stories of testing, talk to him about the infrastructure on which the files are hosted... But, sadly, all that remained only a dream!!!

Career Aspirations
At the end of the training, the company had a vice-president come in for a session on career aspirations. When he quizzed the bunch on their career aspirations, one guy said, "I want to keep working in robust cutting-edge technology." Another said, "I want to reach the top in quick time and lead the rest of the pack." A third said, "I want to become an entrepreneur in the next few years." Tanash said, "I want to remain a tester all my life." He was greeted by silence. The vice-president could not believe that their recruitment had picked up such a loser. The vice-president made a note of his name and passed it on to HR with comments that this person was not ambitious. Tanash was advised, in a session with the HR team, to follow the leaders of the company and grow up the ladder.

End of the training
And at the end of the training, he was disgusted. Not because he had a difficult time in training. Some of the concepts that were taught were wrong. Some of the concepts that were taught were only theoretical and not practical. Some of the trainers had taken classes without knowing the concepts themselves. Some of them had indulged in plagiarism, and instead of being punished, were rewarded for being thought leaders. The vice-president did not respect someone who wants to remain a tester all his life. Tanash wondered where he was headed. Well, we really don't know... Keep reading future issues of Testing Circus to know more!!!

Do you have anything to say about fake testing practices? Write to us: faketester@testingcircus.com
By the way, have you seen the blog on fake software testing? Here it is - http://fakesoftwaretester.blogspot.com


Information from the Internet


Software Performance Testing
In software engineering, performance testing is testing performed to determine how fast some aspect of a system performs under a particular workload. It can also serve to validate and verify other quality attributes of the system, such as scalability, reliability and resource usage. Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the design and architecture of a system, prior to the onset of actual coding effort.

Performance Testing Sub-Genres

Load testing is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within a set duration. This test will give out the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards any bottlenecks in the application software.

Stress testing is normally used to understand the upper limits of capacity within the application landscape. This kind of test is done to determine the application's robustness under extreme load and helps application administrators determine whether the application will perform sufficiently if the current load goes well above the expected maximum.

Endurance testing (soak testing) is usually done to determine if the application can sustain the continuous expected load. During endurance tests, memory utilization is

monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: ensuring that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test.

Spike testing, as the name suggests, is done by spiking the number of users and observing the behavior of the application: whether performance will suffer, the application will fail, or it will be able to handle dramatic changes in load.

Configuration testing is another variation on traditional performance testing. Rather than testing for performance from the perspective of load, you are testing the effects of configuration changes in the application landscape on application performance and behavior. A common example would be experimenting with different methods of load-balancing.

Isolation testing is not unique to performance testing; it is a term used to describe repeating a test execution that resulted in an application problem, often to isolate and confirm the fault domain.

Setting performance goals

Performance testing can serve different purposes. It can demonstrate that the system meets performance criteria. It can compare two systems to find which performs better. Or it can measure which parts of the system or workload cause the system to perform badly. Many performance tests are undertaken without due consideration to the setting of realistic performance goals. The first question from a business perspective should always be: "Why are we performance testing?" These considerations are part of the business case for the testing. Performance goals will differ depending on the application's technology and purpose; however, they should always include some of the following.

Concurrency / Throughput

If an application identifies end-users by some form of login procedure, then a concurrency goal is highly desirable. By definition this is the largest number of concurrent application users that the application is expected to support at any given moment. The workflow of your scripted transaction may impact true application concurrency, especially if the iterative part contains the login and logout activity. If your application has no concept of end-users, then your performance goal is likely to be based on a maximum throughput or transaction rate. A common example would be casual browsing of a web site such as Wikipedia.

Server response time

This refers to the time taken for one application node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the application landscape.

Render response time

This is a difficult thing for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity 'on the wire'. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, which is a feature not offered by many load testing tools.

Performance specifications

It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details. However, performance testing is frequently not performed against a specification; i.e. no one will have expressed what the maximum acceptable response time for a given population of users should be.

Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the weakest link: there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).

There is an apocryphal story of a company that spent a large amount optimizing their software without having performed a proper analysis of the problem. They ended up rewriting the system's idle loop, where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn't improve overall performance one iota!

Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be

configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will access the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same mix of connections (ideal) or simulate the network latency of such connections, following the same user profile. It is always helpful to have a statement of the likely peak number of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration can be used to test whether the proposed system meets that specification.

Questions to ask

Performance specifications should address the following questions, at a minimum:

- In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?
- For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
- What does the target system (hardware) look like (specify all server and network appliance configurations)?
- What is the application workload mix of each application component (for example: 20% login, 40% search, 30% item select, 10% checkout)?
- What is the system workload mix? Multiple workloads may be simulated in a single performance test (for example: 30% Workload A, 20% Workload B, 50% Workload C).
- What are the time requirements for any/all back-end batch processes (specify peak vs. nominal)?

Pre-requisites for Performance Testing

A stable build of the application which resembles the production environment as closely as possible. The performance testing environment should not be shared with the User Acceptance Testing (UAT) or development environment. This is dangerous: if UAT, integration or other tests are going on in the same environment, the results obtained from performance testing may not be reliable. As a best practice, it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.

Test conditions

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that the workloads of production systems have a random nature, and while the test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability - except in the simplest systems.

Loosely-coupled architectural implementations (e.g. SOA) have created additional complexities for performance testing. Enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on shared infrastructures or platforms, to truly replicate production-like states. Due to the complexity and the financial and time requirements around this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance testing environments (PTE) to understand capacity and resource requirements and verify/validate quality attributes.

Timing

It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true for functional testing, but even more so for performance testing, due to the end-to-end nature of its scope.

Tools

In the diagnostic case, software engineers use tools such as profilers to measure what parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response time.

Myths of Performance Testing

Some of the most common myths are given below.

1. Performance testing is done to break the system.
Stress testing is done to understand the breaking point of the system. Otherwise, normal load testing is generally done to understand the behavior of the application under the expected user load. Depending on other requirements, such as an expectation of spike load or continued load for an extended period of time, spike, endurance (soak) or stress testing would be called for.

2. Performance testing should only be done after system integration testing.
Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as early performance testing. It ensures a holistic development of the application with the performance parameters in mind; thus the risk of finding a performance bug just before the release of the application, and the cost involved in rectifying that bug, is reduced to a great extent.


3. Performance testing only involves the creation of scripts, and any application change would cause only simple refactoring of the scripts.
Performance testing is itself an evolving science in the software industry. Scripting, although important, is only one of the components of performance testing. The major challenge for any performance tester is to determine the types of tests that need to be executed and to analyze the various performance counters to pinpoint the performance bottleneck. The other part of the myth, that a change in the application would result in only a little refactoring of the scripts, is also untrue: any change to the UI, especially in the web protocol, can entail complete re-development of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Citrix or SAP.

Technology

Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how the performance varies with the load, given as number of users vs. response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g. that while the average response time might be acceptable, there are outliers among a few key transactions that take considerably longer to complete - something that might be caused by inefficient database queries, pictures, etc.

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded. Does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

Analytical Performance Modeling is a method to model the behaviour of an application in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by the transaction mix (business transactions per hour). The weighted transaction resource demands are added up to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource loads. Using the response-time formula R = S/(1 - U), where R = response time, S = service time and U = load (utilization), response times can be calculated and calibrated against the results of the performance tests (for example, a transaction with a service time of 0.25 s on a resource at 80% utilization gives R = 0.25/(1 - 0.8) = 1.25 s). Analytical performance modelling allows evaluation of design options and system sizing based on actual or anticipated business usage. It is therefore


much faster and cheaper than performance testing, though it requires a thorough understanding of the hardware platforms.

Tasks to perform such a test would include:

- Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof).
- Gather or elicit performance requirements (specifications) from users and/or business analysts.
- Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones.
- Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.).
- Choose test tool(s).
- Specify the test data needed and charter the effort (often overlooked, but often the death of a valid performance test).
- Develop proof-of-concept scripts for each application/component under test, using the chosen test tools and strategies.
- Develop a detailed performance test project plan, including all dependencies and associated timelines.
- Install and configure the injectors/controller.
- Configure the test environment (ideally identical hardware to the production platform), router configuration, a quiet network (we don't want results upset by other users), deployment of server instrumentation, database test sets, etc.
- Execute the tests, probably repeatedly (iteratively), in order to see whether any unaccounted-for factor might affect the results.
- Analyze the results: either pass/fail, or investigation of the critical path and recommendation of corrective action.
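Returning to the analytical modelling arithmetic described above, a small illustrative calculation might look like this - every demand, mix and capacity figure below is invented, purely to show the mechanics of R = S/(1 - U):

    # Illustrative analytical model: weighted transaction demands -> load -> response time.
    transactions = {                     # business transactions per hour and CPU-seconds per transaction (made up)
        "login":    {"per_hour": 3000, "cpu_s": 0.05},
        "search":   {"per_hour": 6000, "cpu_s": 0.20},
        "checkout": {"per_hour": 1500, "cpu_s": 0.40},
    }
    CPU_CAPACITY_S_PER_HOUR = 4 * 3600   # e.g. 4 CPU cores available for one hour

    hourly_demand = sum(t["per_hour"] * t["cpu_s"] for t in transactions.values())
    utilization = hourly_demand / CPU_CAPACITY_S_PER_HOUR          # U = demand / capacity

    for name, t in transactions.items():
        response_time = t["cpu_s"] / (1 - utilization)             # R = S / (1 - U)
        print(f"{name:9s} service={t['cpu_s']:.2f}s  response={response_time:.2f}s")
    print(f"CPU utilization: {utilization:.0%}")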

Methodology

Performance Testing Web Applications

According to the Microsoft Developer Network, the performance testing methodology consists of the following activities:

Activity 1. Identify the Test Environment. Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having

a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

Activity 2. Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.

Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish the metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Activity 5. Implement the Test Design. Develop the performance tests in accordance with the test design.

Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Activity 7. Analyze Results, Tune, and Retest. Analyze, consolidate and share the results data. Make a tuning change and retest. Improvement or degradation? Each improvement made will return a smaller improvement than the previous one. When do you stop? When you reach a CPU bottleneck, the choices then are either to improve the code or to add more CPU.

Content Source http://en.wikipedia.org/wiki/Software_performance_testing


The dream of three animals

[Cartoon - We do use animal-shaped utilities. What if animals started using human-shaped utilities?]

Has Your Product Passed Security Tests?

Part II

By Santhosh Tuppad
[This is the second and last part of the article on security testing by Santhosh Tuppad. The first part was published in the January issue of Testing Circus.]

Directory Traversal

This might or might not be a security vulnerability, depending on what sensitive information a hacker can get from this technique. Let's say example.com has directory listing enabled and the webmaster is not aware of it. Let's say there is a text file which the webmaster has stored at the path /example/hardtocrack/thisismyfolder/youcantgethere/thisisnotpassword.txt. The webmaster thinks it's hard to guess such a long directory path. But the hacker smells that directory listing is enabled and just enters http://example.com/example/, the entire directory gets listed, and it is now a cakewalk to get to the text file which contains the admin password.

What to do in order to avoid it? The webmaster needs to log in to the hosting provider's control panel and disable directory listing. Note that the control panel's directory listing option may differ from one hosting provider to another.
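As a tester you can probe for this yourself. A rough Python sketch (the URLs are placeholders, and the "Index of /" check assumes a default Apache-style listing page, so treat it as a heuristic):

    import requests

    def listing_enabled(directory_url):
        """Heuristic check: does the server return a default directory index page?"""
        response = requests.get(directory_url, timeout=10)
        return response.status_code == 200 and "Index of /" in response.text

    for url in ["http://example.com/example/", "http://example.com/images/"]:  # placeholder paths
        if listing_enabled(url):
            print(f"Directory listing appears to be ENABLED: {url}")
        else:
            print(f"No obvious listing at: {url}")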

Spider is a friend of a hacker. Ah! I mean crawlers. There are many crawling programs, both free and commercial, but I would suggest going with a free one depending on your need. Google for "crawler" and get one for yourself. The crawler helps the hacker find all the web pages or files that are publicly accessible. You can still use Google to find them, but using a crawler is the hacker's way of automating: leave the crawler to do the smart work while the hacker thinks about what to do next, which information can be used for which exploit, and many other things.

Robots.txt

Most of you might know about robots.txt, but have you ever wondered how it could be used by hackers or attackers? Robots.txt usually lists the paths the webmaster does not want indexed by search engine spiders, which is a hint that they might hold some kind of sensitive information. Possible entries you might be interested in as a security tester are things like:

/admin/
/cgi-bin/
/secret/
/password.txt

and such creamy stuff. While you are testing a web application, make sure the robots.txt file is in no way of help to a hacker or attacker. If some sensitive information is exposed, make sure it is protected using .htaccess or .htpasswd.
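A hedged sketch of the same idea in Python (the site URL is hypothetical): fetch robots.txt and print each Disallow entry as a candidate path worth probing during a security test.

# List the Disallow entries from robots.txt as candidate targets.
from urllib.request import urlopen

base = "http://example.com"    # hypothetical site under test

with urlopen(base + "/robots.txt", timeout=15) as resp:
    lines = resp.read().decode("utf-8", errors="replace").splitlines()

# Every Disallow entry is a path the webmaster would rather keep out of sight.
disallowed = [line.split(":", 1)[1].strip()
              for line in lines
              if line.lower().startswith("disallow:")]

for path in disallowed:
    print("Worth probing:", base + path)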


Data Manipulation attack - I love Firebug

Most of you are in the practice of saying, "I did a boundary value test and the boundary value is 10." I would say the test is not yet finished; what you have seen is the bug fooling you in the front end. The phrase "boundary value analysis" has the word "analysis" in it. Merely saying the boundary value is 10 is not any analysis.

My analysis goes like this: I would use Firebug, press F12 (on Windows XP) and then inspect the text field. I see the maxlength attribute is set to 10. Then I manipulate the data by changing it from 10 to 100 and enter 100 characters. If there is no validation, my test data grows to 1,000 and 10,000 characters to do a risk analysis. Possible outcomes are crashing the server, getting enough space to enter a SQL query to test for SQL injection, injecting some code, and many other things. Think from a hacker's viewpoint. (A minimal scripted illustration of this check appears at the end of this article.)

Looking for sensitive / confidential information through the source code

Sometimes the source code of commercial applications contains inappropriate information such as the author's name and details, or admin logins which were not removed when deployed, and many other things which would be harmful for the product.

Example: Consider a scenario where O1 and O2 are two competitors. E1 works for O1 and develops an algorithm which is highly confidential. Then E1 switches to some company X. O2 finds the developer's information through a source code comment that was not removed when the product was deployed. So O2 uses this information and contacts E1 to steal the algorithm by bribing or whatever. Such data shouldn't be visible to the end user.

Tips for testers

- Refer to the OWASP Top 10 guidelines for web application security testing
- Attend hacking conferences, for example DerbyCon
- Subscribe to security testing keywords at Google Alerts
- Interact with hackers - blackhat / whitehat / greyhat
- Read articles on security testing and hacking techniques
- Try to learn hacking by practicing
- Don't be scripted


- Certifications don't teach you anything; do you know whether the trainer is a hacker or one among those scripted trainers?
- Get a guide / mentor; but personally the author recommends learning on your own
- Explore the tools / utilities that help testers perform better security testing
- Learn more about viruses / trojans / malware / adware
- Learn to write a crack for software (NOTE: this is for your learning and not to cause harm)
- Learn about registry hacks
- Read books on hacking
- Get in touch with local hackers / security testers
- Learn about mistakes made by developers; think about what doors or windows developers have left open for unauthorized entry by the bad guys
- Learn about encryption / decryption / the HTTPS protocol / SSL
- Learn about database permissions / configuration / settings
- Learn to use and explore network sniffers like Fiddler and Wireshark
- Explore Mozilla Firefox add-ons: Web Developer, XSS Me, Firebug, Hackbar, etc.
- Learn about browser vulnerabilities used to exploit web applications with different hacking techniques

As you are reading this article, how about learning and practicing some of the things mentioned in these tips? :)
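As promised in the data manipulation section above, here is a minimal Python sketch of the same check done without a browser: post an oversized value straight to the server and see how it responds. The URL and field name are assumptions, and a real assessment would look at the response body and the server logs, not just the status code.

# Bypass the client-side maxlength entirely by posting an oversized value.
from urllib.error import HTTPError
from urllib.parse import urlencode
from urllib.request import urlopen

url = "http://example.com/profile/update"   # hypothetical form handler
oversized = "A" * 10000                      # the client-side maxlength is only 10

data = urlencode({"nickname": oversized}).encode("ascii")
try:
    with urlopen(url, data=data, timeout=30) as resp:
        status = resp.status
except HTTPError as err:
    status = err.code

# A well-behaved server rejects the oversized value cleanly (for example 400);
# a 5xx response suggests the input reached code that could not cope with it.
print("Server responded with HTTP", status)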

Santhosh Tuppad is the Co-founder & Senior Tester of Moolya Software Testing Private Limited (www.moolya.com). He also recently won the uTest Top Tester of the Year 2010 award, apart from winning several testing competitions from uTest and Zappers. Santhosh specializes in the exploratory testing approach and his core interests are security, usability and accessibility amidst other quality criteria. Santhosh loves writing and he has a blog at http://tuppad.com/blog. He has also authored several articles and crash courses in the past. He attends conferences and confers with testers he meets. Santhosh is known for his skills in testing and you should get in touch with him if you are passionate about testing.

Santhosh can be reached at http://twitter.com/santhoshst


Problem: How do you search the contents of one Excel file in another Excel file?

Solution: Here is the solution.

' Excel1 : D:\temp.xls - two columns, "Mobile" and "result"
' Excel2 : D:\temp1.xls
' Search for each "Mobile" number from Excel1 in Excel2.
' If the number is found in Excel2, put "P" in the result column of Excel1.
' If the number is not found in Excel2, put "NP" in the result column of Excel1.
' Save temp.xls as temp2.xls.

DataTable.Import "D:\temp.xls"
globalrow = DataTable.GetRowCount

Set obj = CreateObject("Excel.Application")
Set objwb = obj.WorkBooks.Open("D:\temp1.xls")
Set objsheet = objwb.Sheets("Sheet1")
obj.Visible = True

For i = 1 To globalrow
    DataTable.SetCurrentRow(i)
    dev = DataTable("Mobile", dtGlobalSheet)

    ' Find returns Nothing when the value is not present in the sheet.
    ' Note: by default Find also matches partial cell contents; pass the
    ' LookAt argument (xlWhole) if you need whole-cell matches only.
    Set foundrow = objsheet.Cells.Find(dev)

    If Not foundrow Is Nothing Then
        DataTable.Value("result", dtGlobalSheet) = "P"
    Else
        DataTable.Value("result", dtGlobalSheet) = "NP"
    End If
Next

objwb.Close False            ' close temp1.xls without saving changes
obj.Quit
Set objsheet = Nothing
Set objwb = Nothing
Set obj = Nothing

DataTable.Export "D:\temp2.xls"

Jaijeet Pandey has over 5 years of experience in application development, maintenance and testing. For more than the last 3 years he has been involved in automation testing with the QTP and LoadRunner tools. He also teaches QTP on weekends. He is currently employed with Birlasoft, Noida. He can be reached at http://twitter.com/jaijeetpandey


Software Testers @ Twitter


MATTHEW HEUSSER
BIO: Tester/Writer and other things. Collaborative Software Geek since before it was cool. http://xndev.blogspot.com 396 following 1,295 followers 144 listed http://twitter.com/mheusser

Andy Glover
BIO: I test, lead a team of testers and draw cartoons about testing. Husband of 1 wife, dad to 3 kids. http://cartoontester.blogspot.com/ 301 following 524 followers 60 listed http://twitter.com/cartoontester

Ajay Balamurugadas
BIO: A software tester passionate to learn to test any software http://www.enjoytesting.blogspot.com 242 following 529 followers 62 listed http://twitter.com/ajay184f


Software Testers @ Twitter


Mubbashir
BIO: A software Tester who likes being Agile. http://mubbashir11.blogspot.com/ 816 following 639 followers 29 listed http://twitter.com/mubbashir

Bj Rollison
BIO: Principal Test Architect at Microsoft, special interest in test data generation http://www.TestingMentor.com 132 following 312 followers 45 listed http://twitter.com/TestingMentor

..... more testers in next issue.

TestingCircus
Bio: Testing Circus is a free e-magazine on Software Testing. http://TestingCircus.com 232 following 295 followers 26 listed http://twitter.com/testingcircus


Test Case Writing Practice


By Naresh Bisht
Requirement: A Login Page
Objective: To generate and write test cases to test a login page.

Sr. No. | Steps to Execute | Expected Result | Actual Result* | Remarks
1 | Check for text boxes for login and password field | Text box for Login and Password should be visible and enabled to enter characters | |
2 | Check for Login button | Login button should be disabled if Login and Password text boxes are blank | |
3 | Check for Reset button | Reset button should appear on the page | |
4 | Check for "Forgot password" link | Forgot password link should appear on the page | |
5 | Check for "Register" link | Register link should appear on the page | |
6 | Check for "Remember me" checkbox | Remember me checkbox should appear on the page | |
7 | Check the cursor location when the page is loaded the first time | The text cursor should be at the login text field | |
8 | Put data into user name and password text boxes and click Reset button | User name and password text boxes should become blank | |
9 | Check if the password typed is visible or not | Characters typed into the password field should be hidden and replaced with dots | |
10 | Click on Forgot password link | Page should redirect to the forgot password page | |
11 | Click on Register link | Page should redirect to the registration page | |
12 | Check for Login button after putting values in User Id and Password text boxes | Login button should be enabled | |
13 | Put valid user name, leave password field blank and click Login button | It should prompt the message "Please enter Password" | |
14 | Put invalid user name and any password and click Login button | It should prompt the message "User does not Exist" | |
15 | Put valid user name and invalid password and click Login button | It should prompt the message "Incorrect password" | |
16 | Put valid user name and valid password and click Login button | Page should redirect to the home page of the application | |
17 | Check if entering a wrong password more than three times enables the CAPTCHA feature or not | The CAPTCHA feature should be enabled after three attempts with a wrong id/password combination | |

*Actual Results are written when you get to test the actual build.
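As a small add-on to the table above (not part of the original article), here is a hedged Python sketch of how test case 13 could be checked at the HTTP level, assuming the "Please enter Password" prompt is returned by the server rather than by client-side script. The URL and field names are assumptions for illustration.

# Test case 13: valid user name, blank password, expect the blank-password prompt.
from urllib.parse import urlencode
from urllib.request import urlopen

LOGIN_URL = "http://example.com/login"    # hypothetical login form handler

data = urlencode({"username": "valid_user", "password": ""}).encode("ascii")
with urlopen(LOGIN_URL, data=data, timeout=30) as resp:
    body = resp.read().decode("utf-8", errors="replace")

if "Please enter Password" in body:
    print("PASS: blank-password prompt was shown")
else:
    print("FAIL: expected prompt 'Please enter Password' not found")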

___________________________________

How am I supposed to enter this?


Naresh Bisht has 3 years experience in Software Testing. He has hands on experience in both manual testing and automation testing using QTP. He is currently employed with HCL Technologies, Gurgaon.

CAPTCHA Fail

Posted on Twitter by @AndreasEK


___________________________________

Follow Naresh at http://twitter.com/Naresh_Bisht


NEWS on Software Testing

NCR Testers Monthly Meet (NCR, India)


The NCR Testers Monthly Meet (NCRTMM) 2 was held on 15th January at Nextag Software & Services, Gurgaon, and NCRTMM 3 was held at Sopra Group, Noida. More than 80 testers met and discussed testing in general at the two meets. NCRTMM is a monthly meet conceptualized by Vipul Kocher of Pure Testing, Vipul Gupta of Impetus Technologies and Ajoy Kumar Singha of HCL Technologies. NCRTMM started in December and is now being replicated in other cities of India: Mumbai held its Mumbai Testers Monthly Meet in January, and Pune will start its Pune TMM in March.

The idea of the meet is to bring testers together to learn, discuss and debate testing ideas, concepts, issues and solutions related to software testing, covering topics such as automation, performance, exploratory and agile testing and many more. The general agenda of the meet is to have some presentations on core testing topics, followed by 5-minute talks by a few speakers on topics including personal views, technical problems and trending topics. Next is "Problem on the Table", where 3-4 participants present problems they are facing in their projects. The attendees can then discuss each problem with its presenter in the allocated area. At the end of an hour of discussion, the problem presenters summarize the feedback they received and what they think they can implement to solve their problems.

In the January meet there was a presentation from Microsoft on testing tools. Exploratory Testing Technique was presented by the Sopra team in the February meet. Ajai Jain from Adobe presented a topic on how to use metrics in a project's favor. There is a full-day testing conference planned for April in NCR. For more information visit http://ncrtesters.blogspot.com

Follow the #NCRTMM hashtag on Twitter for more updates on the event.


Bug deBug - The Software Testing Conference


Bug deBug, The Software Testing Conference, happened on Jan 29, 2011, at Tidel Park, Chennai. The event was conducted by the Chennai Software Testing Group and supported by the Society for Rich Internet Application and Rich User Interface (RIA-RUI - www.ria-rui.org). Enthusiastic testers from all over India, who had been waiting a long time for an exclusive tester event by the testing community, started showing up for registration from 9 AM. The event was kick-started sharp at 10 AM with the words from the host, "Not all conferences will start right on time and this is not among them."

Mr. Vipul Kocher, President, Indian Testing Board, was the keynote speaker, and he spoke about "Present problems and future solutions" in software testing. With his impressive and interactive speech, Vipul gave a very active start to the event. Following him was a powerful set of speakers - Narayana Raman, Pradeep Soundararajan, Ruturaj Doshi, Anuj Magazine, Ajay Balamurugadas and Praveen Singh - who spoke on topics such as Economical, Robust Web Automation using Sahi; Notes from a Problem Solving Tester Consultant; Smarter Ways of Doing Selenium Automation; The Emerging Trend of Cloud Computing and Software Testing; I Am The Bug Hunter; and Testing at Startups. The event helped freshers as well as professionals gain rich knowledge and experience from the conference. The presentations given by the speakers can be accessed at www.slideshare.net/riarui

The interesting parts of the conference were the testing contests and raffles. Online testing contests conducted by 99tests.com got an overwhelming response from the community; around 50 testers participated, and cash prizes were announced for the top three winners. The contest which got every participant's attention on the day of the event was the "Testing Tips Contest", where participants were asked to share a testing tip. The best three were judged the winners and given prizes. Not to forget the raffles, where winners were drawn by lot and given prizes.


At the end of the event, all the speakers came on stage to answer the questions raised by the participants. This was a highlight for the participants because they had more time to get their questions answered and to interact. The best questions were awarded fabulous testing books such as Lessons Learned in Software Testing and Six Thinking Hats. As a social responsibility, participants wholeheartedly contributed donations and prayed for the well-being of Mr. Chandru (a tester ailing from blood cancer). You can know more about Chandru at http://www.helpchandru.com - Help Chandru Live His Testing Dreams. Visit www.bug-de-bug.com to check out testimonials and photos taken during the conference.

About RIA-RUI
The Society for Rich Internet Application - Rich User Interface is a non-profit organization promoting the congregation of ideas and knowledge across technologies. The society is working hard to bring high-quality technical events to India by partnering with local and international organizations for the benefit of the local technical community and students. The main aim of having a low-cost conference is to make technology available to all and to make everyone understand the importance of attending conferences, interaction and networking among professionals. Ananth, Bala, Sathish, Karan, Umesh, Jackson and Bharath make up the core team of RIA-RUI.

The Bug deBug Software Testing Conference was sponsored by Test Pro, Fabilus.com, Moolya Software Testing, 99tests.com, Test Republic, Indian Testing Board, Quality Testing, Testing Circus Magazine and Bhash.


INTERVIEW WITH TESTERS

TRISH KHOO
Organization - Campaign Monitor
Role/Designation - Test Lead
Location - Sydney, Australia

Q: How long have you been associated with software testing?
A: I have been professionally testing software for about 6 years.

Q: How did you become a software tester?
A: After graduating from university, I started work as a programmer in a small team. As our team only consisted of two programmers, we would take turns in testing each other's work. My boss noticed that I had an aptitude for finding bugs and suggested that I consider a career as a tester. I was bored with programming and found testing interesting, so I took a job as a tester and found the job to be very enjoyable.

Q: By any means, do you regret being associated with software testing?
A: Absolutely not, it's a great profession.

Q: Do you think software testing is less respected than other departments in the IT industry?
A: I get the impression that in some organizations the testing role is viewed as a less skilled profession than many other IT roles. I even had an IT recruiter tell me that "any idiot can do testing". I think this is a shame, and possibly a reflection on the hiring process of many places. I hope that as more skilled people join the profession and prove themselves, this idea will fade over time.

Q: What will you suggest to people who want to join the IT industry as software testers?
A: I would suggest that they study software development, and I would suggest that they learn a programming language. It will help them to gain a greater understanding of the software that they are testing, and the people that they are working with. It will also enable them to write helpful tools for use in their work.

Q: Where do you see software testing in the next five years?
A: I don't think I'm well-placed to predict the direction of the whole industry, but I'm hoping to see more growth in the testing community, and a more positive attitude towards testing as a profession.

Q: What qualities will you look for in a candidate when you want to recruit someone for a software testing job?
A: When I recruit a new tester, I look for somebody who enjoys testing and is interested in what they do. I look for someone who will get along well with the other testers and developers, and I will give them a short test to see how they would go about testing an application.

Q: Your weekend routine?
A: I don't really have a routine for weekends, but they seem to get filled up with social activities and whatever hobby I am into at the time. My hobbies tend to go through phases. My latest fads are salsa dancing, drawing, playing Mario Kart Wii, learning guitar, writing, learning Ruby, taking photographs and learning about a different country every week.

Q: Movie you would like to watch again?
A: I'm trying to build up enough courage to watch Primer again, perhaps with some kind of timeline guide this time.

Q: I am a social networking site geek, or I hate Facebook / Orkut / Twitter?
A: I have to admit I am a social networking addict. I start to get withdrawal symptoms if I'm away from my iPhone for too long.

Blog/Site http://trishkhoo.com Twitter URL http://twitter.com/hogfish


What do you think of Testing Circus?


Write to us with suggestions, feedbacks. Our team will try to implement all your suggestions and feedbacks in the future issues of Testing Circus.

Editor@TestingCircus.com


Breathing life into the tired domain of Software Testing and Quality Assurance
An overview of BB TestAssistant: Blueberry Software's innovative new product

The problem
Professional software development companies understand the need for strict regimes of software testing and quality assurance to ensure a product is fit for purpose before it is sold. There are a number of ways to go about implementing these processes - however, they can all be affected by problems that increase costs and reduce efficiency. Common problems stem from a lack of communication between developers (charged with fixing faults) and testers (charged with identifying faults), and they are caused by two scenarios: The first is that, quite simply, information and descriptions of faults may get lost in translation when working in a multi-national team, or when outsourcing testing-related tasks to foreign countries. The second scenario, which is more common, is that the tester is not sufficiently clued up with the deployed technology to convey the details of the problem accurately. This is particularly likely in black-box testing, where testers of software are not required to provide insight into the programmatic cause of a fault and are therefore unlikely to be as technically-minded as their developer colleagues. Another problem with traditional methods of software testing is the replication of faults.


In order to implement a fix for a software failure, the developer needs to confirm that it exists in the first place. This is done by mimicking the set of operations performed by the tester, which led to the fault. It is a time-intensive process, and suffers from its own problems: what if the fault does not occur under the same circumstances for the developer? Does he assume the tester described the fault incorrectly? What if it is a platform-dependent issue that affects the tester's system but not the developer's? The replication of faults is not an absolute science, and the ambiguity and uncertainty it could bring will cost time and money to overcome.

Recording the fault: BB TestAssistant


BB TestAssistant was developed to make life easier for those involved in software testing and quality assurance. At its core, BB TestAssistant is built upon the idea that showing someone a problem (or a test failure), rather than just describing it, greatly increases clarity and understanding. BB TestAssistant enables a software tester to record the entire test process, capturing everything that occurs on their system in video format. These videos can be supplemented by real-time input via webcams (which appear as a picture-in-a-picture) and audio commentary from microphones.

BB TestAssistant solves the two fundamental problems mentioned earlier:

- Quality of communication is increased dramatically because testers can show the problem (and the events leading up to it) to the developer, as opposed to just describing it.
- The need to replicate test failures will cease to exist in many cases. The developer will have all the evidence he requires of a test failure and can instead focus on the cause of the fault and how it should be fixed.

By developing a purpose-built platform on which to conduct software testing, Blueberry has been able to extend the functionality of BB TestAssistant to include such useful features as QA system integration (complete with an open API) and parallel reviewing of system event logs and external log files. BB TestAssistant integrates fully with QA systems such as JIRA and Bugzilla. In other words, after recording a test, the software tester is able to use BB TestAssistant to automatically create defect reports which comply with industry-standard QA system formats.


Of course, not everyone uses the same QA systems. For this reason, the integration API is open to make it easy for programmers to enable integration with other defect tracking systems. BB TestAssistant allows developers to review system event logs and application log files in parallel to the video of the test. This means developers can draw from multiple sources of information relating to the software failure (the video itself; any edit features such as webcam, annotations and audio commentary; and properly synchronised event logs and log file information) in order to solve the problem. The inclusion of synchronised access to log files and system event logs means that the developer is able to analyse what's going on inside the program at the same time as viewing the tester's descriptions. This combination of low-level detail with high-level overview of the problem assists the developer in pinpointing the cause of the error quickly.

BB TestAssistant feature summary


QA system integration: BB TestAssistant integrates fully with current industry-standard QA systems such as JIRA and Bugzilla. The integration API is open, so it's easy to add support for your favourite QA system as well.

Record everything: Everything the tester sees on their screen will be recorded by BB TestAssistant, even complex Windows Aero animations.

Editing: Once the video has been recorded, you can edit it from within BB TestAssistant to include annotations and audio commentary (which can also be supplied in real time). You also have access to other video editing options such as clipping, cropping and quality adjustment.

Easy Navigation: Long videos are easy to navigate to find the relevant points. Jump straight to significant events in the recording, such as applications opening and closing, by clicking on a thumbnail image. The tester can add thumbnail shortcuts as needed.

Record-Time Notes: The tester can quickly add notes to a recording while it is in progress, and the developer can easily see these when reviewing.

Remove Inactive Periods: This feature allows the user to identify and remove periods of inactivity within a movie.

Hide Other Processes: This security / data protection feature allows the user to restrict BB TestAssistant to record only a specific set of processes.

Export to Word: This feature allows a user to mark important points in a movie and add notes, then automatically produce a Word document containing screenshots of all these points together with the notes - perfect for making test scripts.


Agile methods and testing


BB TestAssistant is particularly well-suited for teams using the Agile software development methodology. Agile software development encourages an iterative process of performing requirements analysis, designing, coding, unit testing and acceptance testing the product. Testing is performed earlier than in traditional methodologies, with developers and end-users working in close collaboration with QA staff. The emphasis is on communication between people with widely varying technical skills, and BB TestAssistant assists that process by making it simple to create a visual document of issues.

BB TestAssistant is also ideal in acceptance testing, where it can be used by everyone involved in the development process. For example:

- By developers to demonstrate current functionality of the product to the customer representative.
- By dedicated software testers to produce thorough test reports on the product, highlighting potential failures for developers to fix.
- By the customer representative to test the product according to their expectations and also to convey additional, desired functionality: "it would be helpful if the program did x when I clicked y".

Summary of benefits
With BB TestAssistant, there's no need for QA staff to have special training or skills in order to create professional-standard movies that show defects. BB TestAssistant improves tester-developer communication by enabling the problem to be shown quickly and clearly, so simplifying the testing process.

BB TestAssistant is applicable to different kinds of testing:

- Usability testing, where you can see exactly how your user intuitively interacts with the program via BB TestAssistant's support for webcams, microphones and system event logs.
- Automated testing: leave BB TestAssistant recording an automated test session to obtain a permanent record of exactly what happened while you were away.

BB TestAssistant's use in projects complements Agile methods of software development by allowing you to track iteration progress and involve customer representatives in a more natural fashion.

Quiz on BB TestAssistant
There will be a quiz on BB TestAssistant published in the March issue of Testing Circus. Readers can answer the quiz, and two winners will get the full version of BB TestAssistant. A 30-day trial version is available for download at www.bbtestassistant.com.


We need people from all over the world. Become a Testing Circus Representative (TCR)

Visit our web site for more information. http://www.TestingCircus.com/TCR.aspx


Testing Circus Team


Editor - Ajoy Kumar Singha
Core Team - Jaijeet Pandey, Naresh Bisht, C. Nellai Sankar, Kumar Gaurav, Sunil Godiyal
Publicity Team - Maheepati Tyagi, Ish Chand Tripathi, Anuj Batta, Pramod Kumar
Online Collaboration Team - Amit Das, Bharati Singha
Technical Team - Debasish Nath, Nasim Ahmed
Image Partner - http://bigfoto.com

Subscribe Testing Circus at - www.TestingCircus.com Follow us at Twitter - http://www.Twitter.com/TestingCircus Read our Blog at - http://TestingCircus.blogspot.com

Volume 2 - Issue 2 - February 2011. The contents published in this magazine are copyright material of the respective authors. Testing Circus does not hold any right on the material. To republish any part of the magazine, permission needs to be obtained from the respective authors.

Testing Circus. Published from Gurgaon/NCR, India. Copyright 2010-2011.
