Who finally fixes the bug: the developer or the tester?
With this kind of question you should be a little tactful and stay cool. Simply ask the interviewer in what sense the question is meant: either about the bug's status, or about the actual rectification of the bug in the code.
For the status of the bug (Fixed): set by the tester.
If talking about resolving it (rectification in the code): done by the developer.
-----------------------------
The developer fixes the bug, and the tester then verifies the fix and closes the bug.
Explain boundary value analysis (BVA) & equivalence testing, with an example.
Boundary value analysis is a technique to find whether the application accepts the expected range of values and rejects the values that fall out of range.
Ex. A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
BVA is done like this:
max = 10 pass
max-1 = 9 pass
max+1 = 11 fail
min = 4 pass
min+1 = 5 pass
min-1 = 3 fail
Likewise we check the corner values and conclude whether the application accepts the correct range of values.
Equivalence testing is normally used to check the type of the input.
Ex. A user ID text box has to accept alphabetic characters (a-z) with a length of 4 to 10 characters.
In the positive condition we test the field by giving alphabetic characters (a-z only) and check that the value is accepted; it should pass.
In the negative condition we test by giving anything other than lowercase alphabets (a-z), such as A-Z, 0-9, blank, etc.; it should fail.
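The two techniques above can be sketched in code. This is a minimal illustration, not a real product check; the `is_valid_user_id` validator is a hypothetical stand-in for the user ID field described in the example:

```python
import re

def is_valid_user_id(user_id: str) -> bool:
    """Hypothetical validator: lowercase a-z only, length 4 to 10."""
    return re.fullmatch(r"[a-z]{4,10}", user_id) is not None

# Boundary value analysis: exercise the corners (min=4, max=10)
assert is_valid_user_id("a" * 4)        # min   -> pass
assert is_valid_user_id("a" * 5)        # min+1 -> pass
assert not is_valid_user_id("a" * 3)    # min-1 -> fail
assert is_valid_user_id("a" * 10)       # max   -> pass
assert is_valid_user_id("a" * 9)        # max-1 -> pass
assert not is_valid_user_id("a" * 11)   # max+1 -> fail

# Equivalence classes: one representative value per class
assert is_valid_user_id("abcdef")       # valid class: lowercase a-z
assert not is_valid_user_id("ABCDEF")   # invalid class: uppercase
assert not is_valid_user_id("123456")   # invalid class: digits
assert not is_valid_user_id("")         # invalid class: blank
```

Note how each equivalence class needs only one representative value, while BVA concentrates several checks at each boundary.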
How do you test whether a database is updated when information is entered in the front end?
It depends on your application's interface.
1. If your application provides view functionality for the entered data, then you can verify it from the front end only. Most of the time black-box test engineers verify the functionality this way.
2. If your application has only data entry from the front end and there is no view from the front end, then you have to go to the database and run a relevant SQL query.
You can also use database checkpoints to find out whether the information was updated or not.
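The back-end verification in point 2 can be sketched with an in-memory SQLite database; `save_user` here is a hypothetical stand-in for the front-end save action, and the table layout is assumed:

```python
import sqlite3

# In-memory database standing in for the application's backend (assumption).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")

def save_user(name, email):
    """Stand-in for the front-end 'save' action."""
    conn.execute("INSERT INTO users VALUES (?, ?)", (name, email))
    conn.commit()

# Step 1: enter data through the front end (simulated here).
save_user("alice", "alice@example.com")

# Step 2: verify directly in the database with a relevant SQL query.
row = conn.execute(
    "SELECT name, email FROM users WHERE name = ?", ("alice",)
).fetchone()
assert row == ("alice", "alice@example.com")
```

In a real project the query would run against the application's actual database, and a database checkpoint in a tool such as QTP/UFT plays the same role.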
What is Bugzilla ?
Bugzilla is a fine bug tracker from the free-software community. It has some minor faults, such as its GUI and organization; the GUI is not an attractive one, but people who concentrate on its functionality ignore such small things. Bugzilla is very useful for small to medium companies.
Difference between smoke and sanity testing?
During smoke or sanity testing the test engineer concentrates on testing the stability of the application (that is, only the positive flow of the application is tested).
For example, a system has several modules, and the test engineer tests the modules one by one in sequence. Consider login with an id & password: you just enter the user id and password, and if they are accepted you log in. Here we consider only the positive flow; we are not bothered about whether it also accepts a wrong id & password. We only need to check whether the flow goes through, and then do the same for all the other modules.
If the testing fails, the test engineer reports the defect to the developer and asks for modifications.
Unless this smoke test passes, further functional and system testing cannot be conducted.
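The idea of running only the positive flow, module by module, and blocking further testing on the first failure can be sketched like this; the `login` and `open_dashboard` functions and their credentials are hypothetical stand-ins for real application modules:

```python
def login(user, password):
    # Stand-in for the real login module (hypothetical credentials).
    return user == "admin" and password == "secret"

def open_dashboard(logged_in):
    # Stand-in for a second module in the sequence.
    return "dashboard" if logged_in else None

def smoke_test():
    """Run only the positive flow, module by module, in sequence."""
    checks = [
        ("login", lambda: login("admin", "secret")),
        ("dashboard", lambda: open_dashboard(True) == "dashboard"),
    ]
    for name, check in checks:
        if not check():
            # A failure here blocks all further functional/system testing.
            print(f"Smoke test FAILED at module: {name} - report defect")
            return False
    print("Smoke test passed - build is stable enough for further testing")
    return True

smoke_test()
```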
What is quality?
Quality is defined as meeting the customer requirements the first time and every time.
There are five perspectives of quality.
1 - Transcendent - I know when I see it.
2 - Product based - Possesses desired features.
3 - User based - Fitness for use.
4 - Development and manufacturing based - Conforms to requirements.
5 - Value based - At an acceptable cost.
When do we know that testing is complete?
There are some points which indicate that testing is complete:
1. The exit criteria mentioned in the test plan document are achieved.
2. When we have achieved a specified level of successful test case execution.
3. When the rate of bug finding drops below a specified level.
4. When the manager says to stop.
5. When we don't have enough time to perform more tests and have achieved a specified level of quality.
6. When the cost of fixing a bug is more than the impact of the bug on the system.
What is the difference between System Testing & End-to-End Testing?
System testing is testing the system as a whole, i.e. whether the entire system is working fine, while end-to-end testing is testing the system from the requirement phase through to the delivery phase.
What is the difference between Regression testing and Retesting ?
Re-testing is the process of testing the fix of a bug in the same version, i.e. checking whether the bug is fixed in the same version.
Regression testing is the process of checking whether the fix of the earlier bug breaks or affects some other area of the application, or the process of checking the fixed bugs in the next upgraded version.
Regression testing: when a test engineer finds a defect in an application build, the testing team reports that defect to the development team. When the development team has resolved the defect, it sends the modified build to the testing team. The testing team then conducts a regression test to check whether the modified build works correctly.
What is the Difference between Retest and Regression Testing?
Retesting is testing the application with multiple sets of data.
Regression testing is testing the modified build.
What is the role of tester in SDLC cycle ( in each phase )
In the SDLC the tester should be involved from the beginning: discussion, inception, and analysis. Most will say not until the Software Requirements Specification (SRS) is written, but that is not true. Testing should be involved from inception through to the production phase of a project.
If there are a lot of bugs to be fixed, which one would you resolve first?
From a developer's point of view: fix the bugs which are risk-free first (fewer regression issues).
From QA's point of view: test the highest priority/severity ones first.
What is the difference between an exception and an error?
Exception: an unexpected event or happening, not related to the SRS.
Bug: related to the SRS; found by testers.
Error: not related to the SRS; found by users.
How to write test cases for use cases? Please give an example.
In software engineering, a use case is
1. A technique for capturing the potential requirements of a new system or software change.
2. Each use case provides one or more scenarios that convey how the system should interact with the end user or another system to achieve a specific business goal.
3. Use cases typically avoid technical jargon, preferring instead the language of the end user or domain expert.
4. Use cases are often co-authored by software developers and end users.
By the definition of use cases, we just follow the requirement document, so we concentrate on testing types like functionality testing, acceptance testing, alpha testing, etc.
How can we write a good test case?
Essentially a test case is a document that carries a test case ID, title, type of test being conducted, input, action or event to be performed, expected output, and whether the test case has achieved the desired output (Yes/No).
Basically, test cases are based on the test plan, which covers each module and what is to be tested in each module. Each action in the module is further divided into testable components, from which the test cases are derived.
Since a test case normally handles a single event at a time, as long as it reflects its relation to the test plan it can be called a good test case. It does not matter whether the event passes or fails; as long as the component to be tested is addressed and can be related to the test plan, the test case can be called a good test case.
How is a test case written?
Effective test case format:
test case name
test case ID
test suite ID
feature to be tested
priority
test environment
test setup
test duration
test procedure:
1. step no
2. action
3. input required
4. expected result
5. actual result
6. result
7. comment
test effort
test case pass or fail criteria
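The format above can be captured as a simple record. This is a sketch only; the field names and the pass/fail logic are assumptions, covering a subset of the fields listed:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One record in the test case format above (field names assumed)."""
    case_id: str
    suite_id: str
    name: str
    feature: str
    priority: str
    environment: str
    setup: str
    steps: list                 # each step: (step_no, action, input, expected)
    actuals: list = field(default_factory=list)
    result: str = "NOT RUN"

# Hypothetical example test case for a login feature.
tc = TestCase(
    case_id="TC-001",
    suite_id="TS-LOGIN",
    name="Valid login",
    feature="Login",
    priority="High",
    environment="Windows / Chrome",
    setup="User 'admin' exists",
    steps=[(1, "Enter credentials", "admin/secret", "User is logged in")],
)

# After execution, record the actual result and compare with the expected one.
tc.actuals.append("User is logged in")
tc.result = "PASS" if tc.actuals[0] == tc.steps[0][3] else "FAIL"
assert tc.result == "PASS"
```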
What are the test cases for a mouse?
A few test cases for a mouse:
First check that the mouse is attached to the system, its color, the buttons on it, etc.
1. Objective: verify that clicking the right mouse button opens the context menu for that particular file.
Description (test case): point to a file and click the right mouse button.
Expected: it opens the context menu for that file (e.g. the list includes Properties, etc.)
2. Objective: verify that double-clicking the left mouse button opens that particular file.
Description: point to a file and double-click on it.
Expected: it opens that particular file.
3. Objective: verify that the scroll wheel scrolls the page.
Description: point to a page or file and use the scroll wheel.
Expected: it scrolls the page down or up.
Test cases for a mobile phone?
1. Check whether the battery is inserted into the mobile properly
2. Check switch-on/switch-off of the mobile
3. Insert the SIM into the phone & check
4. Add one user with name and phone number in the address book
5. Check an incoming call
6. Check an outgoing call
7. Send/receive messages on that mobile
8. Check that all the numbers/characters on the phone work fine by clicking on them
9. Remove the user from the phone book & check that the name and phone number are removed properly
10. Check whether the network is working fine
11. If it is GPRS-enabled, check for connectivity
Write test cases for a telephone?
Test cases for a telephone:
1. Check the connectivity of the telephone line or cable
2. Check the modem to determine whether it is functioning or not
3. Check the dial tone of the phone
4. Check the keypad while you dial any valid number on the phone
5. Check the ring tone with its volume levels
6. Check the voice on both sides (from and to) of the phone
7. Check the display monitor, if the phone has one
8. Check whether the redial option is functioning or not
9. Check the company standard of the phone
10. Check the weight and color of the phone
11. Check whether the loudspeaker is functioning or not
If anything is missing above, you can add more test cases.
Write the test cases for an ATM from a security point of view.
1. Successful card insertion.
2. Unsuccessful operation due to inserting the card at a wrong angle.
3. Unsuccessful operation due to an invalid account card.
4. Successful entry of the PIN number.
5. Unsuccessful operation due to entering a wrong PIN number 3 times.
6. Successful selection of language.
7. Successful selection of account type.
8. Unsuccessful operation due to selecting a wrong account type with respect to the inserted card.
9. Successful selection of the withdrawal option.
10. Successful selection of amount.
11. Unsuccessful operation due to wrong denominations.
12. Successful withdrawal operation.
13. Unsuccessful withdrawal operation due to an amount greater than the available balance.
14. Unsuccessful operation due to lack of cash in the ATM.
15. Unsuccessful operation due to an amount greater than the daily limit.
16. Unsuccessful operation due to the server being down.
17. Unsuccessful operation due to clicking cancel after inserting the card.
18. Unsuccessful operation due to clicking cancel after inserting the card and entering the PIN number.
19. Unsuccessful operation due to clicking cancel after language selection, account type selection, withdrawal selection, or entering the amount.
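Test case 5 (lockout after three wrong PIN entries) is a good candidate for automation. This is a sketch against an assumed `AtmSession` model, not any real ATM interface:

```python
class AtmSession:
    """Sketch of the wrong-PIN rule: 3 wrong entries lock the card (assumed model)."""
    MAX_ATTEMPTS = 3

    def __init__(self, correct_pin):
        self.correct_pin = correct_pin
        self.attempts = 0
        self.locked = False

    def enter_pin(self, pin):
        if self.locked:
            return "CARD RETAINED"
        if pin == self.correct_pin:
            self.attempts = 0          # a correct entry resets the counter
            return "OK"
        self.attempts += 1
        if self.attempts >= self.MAX_ATTEMPTS:
            self.locked = True         # third wrong entry retains the card
            return "CARD RETAINED"
        return "WRONG PIN"

session = AtmSession(correct_pin="1234")
assert session.enter_pin("0000") == "WRONG PIN"
assert session.enter_pin("1111") == "WRONG PIN"
assert session.enter_pin("2222") == "CARD RETAINED"   # third wrong entry
assert session.enter_pin("1234") == "CARD RETAINED"   # locked even for the right PIN
```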
What are the test cases for a one-rupee coin box (telephone box)?
Positive test cases:
1. Pick up the receiver.
Expected: should display the message "Insert one rupee coin".
2. Insert a new (valid) coin.
Expected: should get the dial tone.
3. When you get a busy tone, hang up the receiver.
Expected: the inserted one rupee coin comes out of the exit door.
4. Finish the conversation and hang up the receiver.
Expected: the inserted coin should not come out.
5. During the conversation, in the case of a local call (assume the duration is 60 secs), when 45 secs are completed:
Expected: it should prompt you, by giving beeps, to insert another coin to continue.
6. In the above scenario, if another coin is inserted:
Expected: 60 secs will be added to the counter.
7. In the TC5 scenario, if you don't insert one more coin:
Expected: the call ends.
8. Pick up the receiver, insert a one rupee coin, and dial the number after hearing the dial tone. Assume it gets connected and you hear the ring tone; immediately end the call.
Expected: the inserted one rupee coin comes out of the exit door.
Write test cases for a login window?
Login window:
1. Check the focus on the window and the height & width of the text boxes.
2. Enter a username: only lowercase letters, certain special characters, and numerals should be allowed.
3. The password should be accepted and displayed only in encrypted (masked) form.
4. Only then should the "OK" button be enabled.
5. If there is any mismatch of username & password, a warning message window should be displayed.
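Checks 2 and 5 above can be sketched as an automated test; `validate_login` and the stored credentials are hypothetical assumptions, not a real login implementation:

```python
def validate_login(username, password, stored_credentials):
    """Hypothetical login check: lowercase usernames only; warn on mismatch."""
    if not username.islower():
        return "INVALID USERNAME"        # uppercase letters rejected (check 2)
    if stored_credentials.get(username) != password:
        return "WARNING: username or password mismatch"   # check 5
    return "LOGIN OK"

stored = {"alice": "s3cret"}             # assumed stored credentials
assert validate_login("alice", "s3cret", stored) == "LOGIN OK"
assert validate_login("Alice", "s3cret", stored) == "INVALID USERNAME"
assert validate_login("alice", "wrong", stored).startswith("WARNING")
```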
How to write test cases for a lift?
The following things could be tested:
1. Capacity of the lift
2. Usability (whether it is easy to operate or not)
3. Functionality (whether it functions properly or not)
4. Comfort (whether a person is comfortable in it or not)
5. Height
6. Weight
7. Volume
8. Time it takes to reach each floor
9. Test for the maximum capacity of the lift
10. Test for more than the maximum capacity
11. Check its working in case of a power failure
Write test cases for a computer keyboard.
1. Check the keyboard's manufacturer
2. Check the keyboard category, i.e. normal keyboard or multimedia keyboard
3. Check the total number of keys on the keyboard
4. Check the keyboard connector type, i.e. normal or PS/2
5. Check the keyboard color, i.e. white or black
6. Check that by default Num Lock is in the on condition
7. Check that by default Caps Lock and Scroll Lock are in the off condition
8. Check the keyboard wire length
9. Check whether all keys are working properly or not
How do you go about testing a web application?
It is clear that for testing any application, one should be clear about the requirements and specification documents. For testing a web application, the tester should know what the web application deals with.
For testing a web application, the test cases written should be of two different types:
1) Test cases related to the look and feel of the web pages and navigation
2) Test cases related to the functionality of the web application
Make sure you know whether the web application is connected to a database for its inputs. If there is a database, write test cases based on the database and for backend testing as well. The web application should also be tested for the server's response time in displaying the web pages; make sure to test the web pages under load as well. For load testing, tools are very useful for simulating many users.
Write test cases for a search engine?
Test cases for a search engine:
Check that the cursor is in the search text box in the starting position
Check whether blank space at the start of the first word in the text box is trimmed
Check that when no entry is made in the text box, the engine does not display any result
Check that no result is displayed when only a single special character is entered in the text box
Check the search response time
Check the total number of results to be displayed on one page
Check the page resolution
Check that unvisited URLs are colored blue & visited ones are maroon
Check the total number of results found for a search
Check whether the search text box is present at the top & bottom of the page
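The first few checks (trimming leading blanks, no result for an empty query, no result for a single special character) can be automated against a toy search function; `search` and its in-memory index are assumptions for illustration, not a real search engine API:

```python
def search(query, index):
    """Sketch of the checks above: trim surrounding blanks, return no
    results for an empty or single-special-character query."""
    q = query.strip()
    if not q or (len(q) == 1 and not q.isalnum()):
        return []
    return [doc for doc in index if q.lower() in doc.lower()]

index = ["Software Testing Basics", "Boundary Value Analysis"]
assert search("  testing", index) == ["Software Testing Basics"]  # blank trimmed
assert search("", index) == []          # no entry -> no result
assert search("*", index) == []         # single special character -> no result
```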
How to write test cases for n factorial?
n! means n(n-1)(n-2)...3·2·1.
First check that the given integer is positive. If the given integer is negative, n! is not defined. If the given number is a fraction, n! is not defined. If the given number is 0, n! is 1.
Alternatively, we can write the following test cases:
Verify the result when n is non-numeric
Verify the result when n is negative
Verify n = 0 (a valid case)
Verify n = 3 (a valid case)
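These test cases map directly to automated checks. A sketch, assuming a simple `factorial` implementation that rejects invalid inputs by raising exceptions:

```python
def factorial(n):
    """n! = n(n-1)(n-2)...1; defined only for non-negative integers."""
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("n must be an integer")
    if n < 0:
        raise ValueError("n! is not defined for negative n")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# The test cases listed above:
assert factorial(0) == 1      # n = 0 (valid boundary case)
assert factorial(3) == 6      # n = 3 (valid case)
for bad in (-1, 2.5, "x"):    # negative, fraction, non-numeric
    try:
        factorial(bad)
        raise AssertionError(f"{bad!r} should have been rejected")
    except (TypeError, ValueError):
        pass                  # invalid input correctly rejected
```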
What is the test plan for Notepad and MS Word?
Test plan for Notepad:
Verify the notepad option in the "Run" dialog box.
Verify the "OK" button.
Verify that when you click the "OK" button, an Untitled - Notepad window opens with a cursor.
Verify the New, Open, Save, Save As, Close, etc. options in the File menu.
Test plan for MS Word:
Verify the MS Word option in the MS Office tools.
Verify all options in Word.
Verify the scrolling button when the page is greater than the desktop size.
Verify that it indicates with red color when we make a mistake typing letters.
Verify the change in page numbers when one page is completed.
How many test cases does a testing engineer prepare and execute per day?
The answer to this question cannot be given as a specific number of test cases that can be prepared and executed by a tester in a day.
It totally depends upon the type of application you have to test; it might be simple, medium, or complex. It also depends on how many test scenarios or test cases you can think of for testing that application, both positive and negative test cases. It also depends on how many modules there are in your application and how simple or complex the functionality in each module is.
Hence this question depends upon a number of factors. Suppose you prepare 50 test cases for a particular module; then you have to execute all the test cases in the defined period mentioned in the project plan. Then you define your testing process, including how many cycles you need to test the application completely, etc.
How to write test cases for an electric bulb? Please explain in brief.
Test cases for an electric bulb:
The bulb should be of the required shape and size
It should be possible to fit it into and remove it from the holder
It should sustain the voltage for which it is designed
It should glow on switching on
It should not glow on switching off
It should glow with the rated illumination
The life of the bulb should meet the requirement
Some more cases:
Check the color appearance of the bulb
Check the color of the light it makes (illumination)
Check the time it takes to glow
Check the maximum life of the bulb
Check what happens if it is switched on and off suddenly
Check the power consumption
Check the initial voltage it takes to glow
Check what happens if the voltage varies suddenly (increases or decreases)
Check what happens when the bulb is switched on at a lower voltage
Check the maximum heat dissipated when the bulb is on
Along with all this we check that:
A holder and socket are present to test the bulb
An electricity connection is there for that holder
Electricity is passing through the wire
Its clear that for testing any application, one should be clear about the requirements and specification documents.
For testing Web application, the tester should know, what the web application deals with.
For Testing Web application, the test cases written should be in two different types,
1) The Test cases related to the Look and Feel of the Web pages and navigation
2) The test cases related to the functionality of the web application.
Make sure whether the web application is connected to a database for its inputs.
Write test cases based on the database, and write test cases for back-end testing as well if there is a database. The web application should be tested for server response time when displaying web pages, and the pages should be tested under load as well. For load testing, tools are very useful for simulating many concurrent users.
Write Test case for Search Engine ?
Test Case for Search Engine
Check that the cursor is in the search text box in the starting position
Check that leading blank space before the first word in the text box is trimmed
Check that the engine displays no results when nothing is entered in the text box
Check that no result is displayed when only a single special character is entered
Check the search response time
Check the total number of results displayed per page
Check the page resolution
Check that unvisited result URLs are colored blue and visited ones maroon
Check the total number of results found for a search
Check whether the search text box is present at the top and bottom of the page
How to write test cases for n Factorial ?
n! means n × (n-1) × (n-2) × ... × 3 × 2 × 1
First check whether the given integer is positive. If the given integer is negative, n! is not defined. If the given number is a fraction, n! is not defined. If the given number is 0, n! is 1.
or
We can write following test cases
Verify for result when n is non-numeric
Verify for result when n is negative
Verify for n = 0 (A valid case)
Verify for n = 3(A valid case)
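The four cases above can be sketched as runnable checks; the `factorial` implementation below is only an illustrative stand-in for the unit under test:

```python
def factorial(n):
    # Illustrative unit under test; rejects invalid input per the cases above.
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError("n must be an integer")
    if n < 0:
        raise ValueError("n! is not defined for negative numbers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Verify for result when n is non-numeric
try:
    factorial("abc")
    assert False, "expected TypeError"
except TypeError:
    pass

# Verify for result when n is negative
try:
    factorial(-1)
    assert False, "expected ValueError"
except ValueError:
    pass

# Verify for n = 0 (a valid case: 0! is 1)
assert factorial(0) == 1

# Verify for n = 3 (a valid case: 3! = 6)
assert factorial(3) == 6
```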
What is the test plan for Notepad and MS Word ?
Test Plan for Notepad:
Verify notepad option in "Run" dialog box.
Verify "OK" button.
Verify that clicking "OK" opens an Untitled - Notepad window with the cursor ready
Verify New, Open, save, Save As, Close etc options in file menu.
Test Plan for Ms Word:
Verify the Ms Word option in Ms Office tools.
Verify all options in the Ms Word menus.
Verify the scrolling buttons when the page is larger than the desktop size.
Verify that misspelled words are underlined in red as you type.
Verify that the page number changes when one page is completed.
How many test cases a testing engineer prepares and executes per day ?
The answer to this question cannot be given as a specific number of test cases a tester can prepare and execute in a day.
It totally depends upon the type of application you have to test, which might be simple, medium or complex. It also depends on how many test scenarios or test cases you can think of for that application, meaning both positive and negative test cases, on how many modules there are in the application, and on how simple or complex the functionality in each module is.
Hence this question depends upon a number of factors. Suppose you prepare 50 test cases for a particular module; then you have to execute all the test cases in a defined period as mentioned in the project plan. Then you define your testing process as to how many cycles you need for testing the application completely.
Test cases for coffee machine ?
1. Plug the power cable and press the on button. The indicator bulb should glow indicating the machine is on.
2. Whether there are three different buttons: Red, Blue and Green.
3. Whether Red indicates Coffee.
4. Whether Blue indicates Tea.
5. Whether Green indicates Milk.
6. Whether each button produces the correct output (Coffee, Tea or Milk).
7. Whether the desired output is hot or not (Coffee, Tea or Milk).
8. Whether the quantity exceeds the specified limit of a cup.
9. Whether the power is off (including the power indicator) when the off button is pressed.
Write test cases for white paper (e.g. A4 size)?
1. Check the size of a page.
2. Check the quality of the paper by writing with different pens and pencils.
3. Check the use of whitener on the paper.
4. Check how well the paper withstands erasing.
Can you please tell me the negative test case for a glass of water?
Check whether there is water inside the glass or not
Check whether the water is cold or hot
Check the water quality, i.e. pure water or normal
For a triangle (the sum of any two sides must be greater than or equal to the third side), what is the minimal number of test cases required?
The answer is 3:
1. Measure all sides of the triangle.
2. Add the smallest and second-largest lengths and store the result as Res.
3. Compare Res with the largest side of the triangle.
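The three steps above translate directly into a small validity check; the function name is illustrative, and the comparison follows the rule as stated in the question:

```python
def is_valid_triangle(a, b, c):
    # Step 1-2: sort the sides so the two smaller ones are summed as Res.
    sides = sorted([a, b, c])
    res = sides[0] + sides[1]
    # Step 3: compare Res with the largest side. The question's rule is
    # "greater than or equal to"; a strict '>' would additionally exclude
    # degenerate (zero-area) triangles.
    return res >= sides[2]

assert is_valid_triangle(3, 4, 5) is True    # 3 + 4 >= 5
assert is_valid_triangle(1, 2, 3) is True    # 1 + 2 == 3, boundary case
assert is_valid_triangle(1, 1, 5) is False   # 1 + 1 < 5
```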
Differentiate between product quality and process quality?
Product quality means we concentrate on the quality of the final product, whereas in process quality we set and control the parameters of the process that produces it.
What are GUI test cases?
GUI (Graphical User Interface) test cases check the given application in all user-interface aspects (e.g. fulfilling the Microsoft UI standards): whether the whole application looks and feels right, there are no spelling mistakes, the alignment is correct, and all expected objects are present.
How can we write a good test case?
Essentially a test case is a document that carries a Test Case ID, title, type of test being conducted, input, action or event to be performed, expected output, and whether the test case has achieved the desired output (Yes/No).
Basically Test cases are based on the Test Plan, which includes each module and what to be tested in each module. Further each action in the module is further divided into testable components from where the Test Cases are derived.
Since a test case normally handles a single event at a time, as long as it reflects its relation to the test plan, it can be called a good test case. It does not matter whether the event passes or fails; as long as the component to be tested is addressed and can be related to the test plan, the test case can be called a good test case.
What is the difference between Use Case and Test Case? Pls let me know with a detailed Example?
A Use Case is written in the Business Design Document (BDD) by the Business Analyst. It helps the person writing the test cases understand the functionality.
USE CASE EXAMPLE:
    Action: the OK button is clicked -> Response: Screen 1 appears
A test case captures the different perceptions from which a functionality may be tested. It is usually written by a Test Engineer; the person who wrote the test cases may execute them, or another person may.
The above use case is converted into test cases keeping the different perceptions (+ve and -ve) in mind:
    Action       | Expected Value                | Actual Value      | Result
    Click on OK  | Screen 1 should appear (+ve)  | Screen 1 appeared | Pass
    Click on OK  | Screen 2 should appear (-ve)  | Screen 1 appeared | Fail
    Click on OK  | Screen 2 should appear (-ve)  | Screen 2 appeared | Pass
Define Bug Life Cycle ? What is Metrics ?
When we find a bug, we put it into Open status. After fixing the bug, the developer changes the status to Fixed. We then retest the fixed part; if there is no bug, we change the bug status to Closed, otherwise we change it to Reopen.
A software metric defines a standard method of measuring certain attributes of the process, the product, or the service.
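The status transitions in the answer above can be sketched as a small state machine; the class and the "New" starting status are illustrative additions:

```python
class Bug:
    # Allowed transitions, following the cycle described above.
    TRANSITIONS = {
        "New": ["Open"],
        "Open": ["Fixed"],
        "Fixed": ["Closed", "Reopen"],
        "Reopen": ["Fixed"],
        "Closed": [],
    }

    def __init__(self):
        self.status = "New"

    def move_to(self, new_status):
        # Reject any transition the life cycle does not allow.
        if new_status not in self.TRANSITIONS[self.status]:
            raise ValueError(f"cannot go from {self.status} to {new_status}")
        self.status = new_status

bug = Bug()
bug.move_to("Open")    # tester logs the bug
bug.move_to("Fixed")   # developer fixes it
bug.move_to("Reopen")  # retest finds the problem again
bug.move_to("Fixed")   # developer fixes again
bug.move_to("Closed")  # retest passes, tester closes
assert bug.status == "Closed"
```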
What is L10 Testing?
L10 (L10n) Testing is Localization Testing; it verifies whether your product is ready for local markets.
What is open box testing?
Open box testing is the same as white box testing: a testing approach that examines the application's program structure and derives test cases from the application's program logic.
What is closed box testing?
Closed box testing is the same as black box testing: a type of testing that considers only externally visible behavior, looking at neither the code itself nor the inner workings of the software.
What are three test cases you should go through in unit testing?
>> Positive test cases (correct data, correct output)
>> Negative test cases (broken or missing data, proper handling)
>> Exception test cases (exceptions are thrown and caught properly)
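A minimal sketch of the three kinds of cases using the standard `unittest` module; the `divide` helper is a hypothetical unit under test:

```python
import unittest

def divide(a, b):
    # Hypothetical unit under test: returns a / b, rejecting non-numbers.
    if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
        raise TypeError("operands must be numbers")
    return a / b

class TestDivide(unittest.TestCase):
    def test_positive(self):
        # Positive case: correct data, correct output.
        self.assertEqual(divide(10, 2), 5)

    def test_negative(self):
        # Negative case: broken data is handled, not silently accepted.
        with self.assertRaises(TypeError):
            divide("10", 2)

    def test_exception(self):
        # Exception case: the expected exception is thrown and caught.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDivide)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```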
What is Cause Effect Graph?
A graphical representation of inputs and their associated output effects, which can be used to design test cases.
What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions, analyzing the program logic, checking the code against a checklist of historically common programming errors, and analyzing its compliance with coding standards.
What is Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.
What is Coding?
The generation of source code.
What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.
What is Component?
A minimal software item for which a separate specification is available.
What is Component Testing?
Testing of individual software components (Unit Testing).
What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
What makes a good test engineer?
A good test engineer has a "test to break" attitude, an ability to take the point of view of the customer, a strong desire for quality, and attention to detail.
* Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful.
* Previous software development experience can be helpful, as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developer's point of view, and reduces the learning curve in automated test tool programming.
* Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.
What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.
What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.
What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.
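For a single function, cyclomatic complexity can be computed as the number of binary decision points plus one (equivalently V(G) = E - N + 2P on the control-flow graph). A worked sketch with an illustrative function:

```python
def classify(score):
    # Three binary decision points: the 'if', the 'or', and the 'elif'.
    if score < 0 or score > 100:
        return "invalid"
    elif score >= 60:
        return "pass"
    else:
        return "fail"

# Counting compound conditions separately (extended McCabe counting):
decision_points = 3          # if, or, elif
complexity = decision_points + 1
assert complexity == 4

# White-box testing would aim for at least 4 independent paths:
assert classify(-5) == "invalid"   # score < 0
assert classify(150) == "invalid"  # score > 100
assert classify(75) == "pass"
assert classify(30) == "fail"
```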
What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.
How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with a certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends
What is Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.
What is Data Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
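A minimal data-driven sketch: the test logic is written once and parameterized by rows of externally defined data. The login rule and the CSV rows here are hypothetical, and the data is kept in-memory only to make the sketch self-contained; in practice it would live in a file or spreadsheet:

```python
import csv
import io

# Externally maintained test data: input columns plus the expected result.
TEST_DATA = """username,password,expected
admin,secret123,pass
admin,wrong,fail
,secret123,fail
"""

def login(username, password):
    # Hypothetical unit under test.
    return bool(username) and password == "secret123"

results = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    actual = "pass" if login(row["username"], row["password"]) else "fail"
    results.append(actual == row["expected"])

assert all(results)       # every data row behaved as expected
assert len(results) == 3  # three test cases driven by three rows
```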
What is Debugging?
The process of finding and removing the causes of software failures.
What is Defect?
Nonconformance to requirements or to the functional/program specification.
What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.
What is Top-Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.
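A sketch of the stub idea: the top-of-hierarchy component is tested first, with the lower-level component replaced by a stub that returns canned data. All names here are illustrative:

```python
def fetch_price(item_id):
    # Real lower-level component, not yet implemented/integrated.
    raise NotImplementedError

def fetch_price_stub(item_id):
    # Stub simulating the lower-level component with canned data.
    return {"apple": 2.0, "bread": 3.5}.get(item_id, 0.0)

def order_total(item_ids, price_lookup):
    # Top-of-hierarchy component under test; the lower level is injected
    # so it can be swapped for a stub during top-down testing.
    return sum(price_lookup(i) for i in item_ids)

# Top-down: test order_total first, using the stub in place of fetch_price.
assert order_total(["apple", "bread"], fetch_price_stub) == 5.5
```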
What is software failure?
Software failure occurs when the software does not do what the user expects it to do.
What is Depth Testing?
A test that exercises a feature of a product in full detail.
What is Dynamic Testing?
Testing software through executing it. See also Static Testing.
What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution.
What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.
What other roles are in testing?
Depending on the organization, the following roles are more or less standard on most testing projects: Testers, Test Engineers, Test/QA Team Leads, Test/QA Managers, System Administrators, Database Administrators, Technical Analysts, Test Build Managers, and Test Configuration Managers.
Depending on the project, one person can and often will wear more than one hat. For instance, we, Test Engineers, often wear the hat of Technical Analyst, Test Build Manager and Test Configuration Manager as well.
What is Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same, based on the component's specification.
What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
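A sketch of equivalence partitioning for the classic user-ID rule (4-10 lowercase letters; the rule and function are illustrative): instead of trying every possible input, one representative is executed from each equivalence class.

```python
import re

def is_valid_user_id(user_id):
    # Hypothetical component: accepts 4-10 lowercase letters a-z.
    return re.fullmatch(r"[a-z]{4,10}", user_id) is not None

# One representative per equivalence class:
assert is_valid_user_id("abcd")             # valid: lowercase, in-range length
assert not is_valid_user_id("ABCD")         # invalid: uppercase letters
assert not is_valid_user_id("ab1d")         # invalid: contains digits
assert not is_valid_user_id("abc")          # invalid: too short
assert not is_valid_user_id("abcdefghijk")  # invalid: too long
assert not is_valid_user_id("")             # invalid: blank
```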
What is Exhaustive Testing ?
Testing which covers all combinations of input values and preconditions for an element of the software under test.
What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.
What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.
What is the ratio of developers and testers?
The ratio of developers and testers is not a fixed one, but depends on what phase of the software development life cycle the project is in. When a product is first conceived, organized, and developed, this ratio tends to be 10:1, 5:1, or 3:1, i.e. heavily in favor of developers. In sharp contrast, when the product is near the end of the software development life cycle, just before alpha testing begins, this ratio tends to be 1:1, or even 1:2, in favor of testers.
What is clear box testing?
Clear box testing is the same as white box testing. It is a testing approach that examines the applications program structure, and derives test cases from the applications program logic.
In the clear box (white box) testing approach, the tester has an inside view of the system; they are concerned with how it is done, not just what is done.
Clear box testing is logic-oriented. The testers are concerned with the execution of all possible paths of control flow through the program.
Clear box testing is essentially a unit-test method which is sometimes used in integration testing or in operability testing.
What is a bug life cycle?
Bug life cycles are similar to software development life cycles. At any time during the software development life cycle errors can be made during the gathering of requirements, requirements analysis, functional design, internal design, documentation planning, document preparation, coding, unit testing, test planning, integration, testing, maintenance, updates, re-testing and phase-out.
A bug's life cycle begins when a programmer, software developer, or architect makes a mistake and creates an unintentional software defect, i.e. a bug, and ends when the bug is fixed and no longer in existence.
What should be done after a bug is found? When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested.
Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn't create other problems elsewhere.
If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking, management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
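The life cycle described above can be sketched as a small state machine. The state names and allowed transitions below are illustrative assumptions; real problem-tracking tools (Bugzilla, Jira, etc.) define their own workflows.

```python
# A minimal sketch of a bug life cycle as a state machine.
# State names and transitions are assumptions for illustration only.

ALLOWED_TRANSITIONS = {
    "NEW":      {"ASSIGNED"},
    "ASSIGNED": {"FIXED"},                 # developer fixes the bug
    "FIXED":    {"VERIFIED", "REOPENED"},  # tester verifies the fix or reopens
    "REOPENED": {"ASSIGNED"},
    "VERIFIED": {"CLOSED"},
    "CLOSED":   set(),                     # bug no longer in existence
}

class Bug:
    def __init__(self, summary):
        self.summary = summary
        self.state = "NEW"
        self.history = ["NEW"]

    def move_to(self, new_state):
        if new_state not in ALLOWED_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

bug = Bug("Login button unresponsive")
for state in ["ASSIGNED", "FIXED", "VERIFIED", "CLOSED"]:
    bug.move_to(state)
```

Rejecting illegal transitions (e.g. closing a bug that was never verified) is what lets a tracking tool enforce that the tester, not the developer, closes the bug.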
What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as Black Box Testing.
What is Gorilla Testing?
Testing one particular module, functionality heavily.
What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.
What are High Order Tests?
Black-box tests conducted once the software has been integrated.
What is Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing
What are the phases of the software development life cycle?
The software development life cycle consists of the concept phase, requirements phase, design phase, implementation phase, test phase, installation phase, and checkout phase.
What is Inspection?
A group review quality improvement process for written material. It consists of two aspects; product (document itself) improvement and process improvement (of both document production and inspection).
What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.
What is Installation Testing?
Confirms that the application under test installs correctly under different hardware and software configurations, and that it upgrades and uninstalls cleanly. It can also cover conditions such as insufficient disk space or an interrupted installation.
What is Localization Testing?
This term refers to adapting software to a specific locale, for example translating the user interface and handling local date, number, and currency formats.
What is Loop Testing?
A white box testing technique that exercises program loops.
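In practice, loop testing means exercising a loop at its interesting iteration counts: zero passes, one pass, a typical count, and the maximum. A minimal sketch, where `sum_first_n` is a hypothetical function invented for illustration:

```python
# Loop testing: drive the loop through zero, one, typical, maximum,
# and beyond-maximum iteration counts.

def sum_first_n(values, n):
    """Sum the first n elements of values (the loop under test)."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

data = [10, 20, 30, 40]
checks = {
    0: sum_first_n(data, 0),   # zero iterations
    1: sum_first_n(data, 1),   # exactly one iteration
    2: sum_first_n(data, 2),   # a typical count
    4: sum_first_n(data, 4),   # the maximum (full list)
    5: sum_first_n(data, 5),   # one beyond the maximum
}
```

Off-by-one errors in loop bounds tend to surface at exactly these counts, which is why they are chosen over arbitrary values.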
What is user documentation?
"User documentation" is a document that describes the way a software product or system should be used to obtain the desired results
"User documentation" is a document that describes the way a software product or system should be used to obtain the desired results
What is security / penetration testing?
Security / penetration testing is testing how well the system is protected against unauthorized internal access, external access, or willful damage. Security/penetration testing usually requires sophisticated testing techniques.
What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something, such as the number of bugs per thousand lines of code.
What is Monkey Testing?
Testing a system or an application on the fly, i.e. a few random tests here and there, to ensure that the system or application does not crash.
What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.
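A negative test passes when the software correctly rejects bad input. A minimal sketch, where `parse_age` is a hypothetical function under test invented for this example:

```python
# Negative testing ("test to fail"): feed input the software must
# reject and assert that it is in fact rejected.

def parse_age(text):
    age = int(text)              # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

def expect_rejection(bad_input):
    """Return True if parse_age rejects the input, as a negative test requires."""
    try:
        parse_age(bad_input)
    except ValueError:
        return True
    return False

# Each of these must be rejected for the negative tests to pass.
results = [expect_rejection(x) for x in ["abc", "-1", "999", ""]]
```

The complementary positive test simply checks that valid input such as `"30"` is accepted.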
What is Path Testing?
Testing in which all paths in the program source code are tested at least once.
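Two independent branches give four execution paths, so path testing needs at least one input per path. A minimal sketch; `classify` is a hypothetical function invented for illustration:

```python
# Path testing: one test input per execution path through the code.

def classify(x, y):
    label = ""
    if x > 0:                  # branch A
        label += "pos-x"
    else:
        label += "nonpos-x"
    if y > 0:                  # branch B
        label += "/pos-y"
    else:
        label += "/nonpos-y"
    return label

# Four paths: (A true, B true), (A true, B false),
#             (A false, B true), (A false, B false).
paths = [classify(1, 1), classify(1, -1), classify(-1, 1), classify(-1, -1)]
```

Note that two tests would already cover every branch; covering every *path* requires all four combinations, which is why path coverage is stronger than branch coverage.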
What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".
What is the difference between software fault and software failure?
Software failure occurs when the software does not do what the user expects to see. Software fault, on the other hand, is a hidden programming error.
A software fault becomes a software failure only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU. This can occur during normal usage, when the software is ported to a different hardware platform, when the software is ported to a different compiler, or when the software gets extended.
What is a test scenario?
The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios are independent tests, or a series of tests that follow each other, where each of them dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.
The terms "test scenario" and "test case" are often used synonymously. Test scenarios are test cases or test scripts, and the sequence in which they are to be executed. Test scenarios are test cases that ensure that all business process flows are tested from end to end. Test scenarios are independent tests, or a series of tests that follow each other, where each of them dependent upon the output of the previous one. Test scenarios are prepared by reviewing functional requirements, and preparing logical groups of functions that can be further broken into test procedures. Test scenarios are designed to represent both typical and unusual situations that may occur in the application. Test engineers define unit test requirements and unit test scenarios. Test engineers also execute unit test scenarios. It is the test team that, with assistance of developers and clients, develops test scenarios for integration and system testing. Test scenarios are executed through the use of test procedures or scripts. Test procedures or scripts define a series of steps necessary to perform one or more test scenarios. Test procedures or scripts may cover multiple test scenarios.
What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.
What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.
What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.
What models are used in the software development life cycle?
In the software development life cycle the following models are used: the waterfall model, the incremental development model, the rapid prototyping model, and the spiral model.
What is disaster recovery testing?
Disaster recovery testing is testing how well a system recovers from disasters, crashes, hardware failures, or other catastrophic problems.
What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.
What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.
What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
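The definition above can be illustrated with a shared counter: an unprotected read-modify-write sequence is exactly the kind of simultaneous access that causes lost updates, and a lock is the mechanism that moderates it. A minimal sketch:

```python
# Sketch of a race condition and its fix. The unsafe increment splits
# the read-modify-write into steps another thread can interleave with;
# the safe version serializes access with a lock.
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self.lock = threading.Lock()

    def unsafe_increment(self):
        v = self.value        # read
        v += 1                # modify
        self.value = v        # write (another thread may have written meanwhile)

    def safe_increment(self):
        with self.lock:       # only one thread in the critical section at a time
            self.value += 1

def run(counter, fn, n_threads=4, n_iters=10_000):
    threads = [threading.Thread(target=lambda: [fn() for _ in range(n_iters)])
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter.value

safe = Counter()
result = run(safe, safe.safe_increment)   # deterministic with the lock
```

With the lock, the result is always `n_threads * n_iters`; with `unsafe_increment`, lost updates can make it smaller, and the loss is timing-dependent, which is what makes race conditions hard to reproduce.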
What is Ramp Testing?
Continuously raising an input signal until the system breaks down.
What is a requirements test matrix?
The requirements test matrix is a project management tool for tracking and managing testing efforts, based on requirements, throughout the project's life cycle.
The requirements test matrix is a table, where requirement descriptions are put in the rows of the table, and the descriptions of testing efforts are put in the column headers of the same table.
The requirements test matrix is similar to the requirements traceability matrix, which is a representation of user requirements aligned against system functionality. The requirements traceability matrix ensures that all user requirements are addressed by the system integration team and implemented in the system integration effort.
The requirements test matrix is a representation of user requirements aligned against system testing. Similarly to the requirements traceability matrix, the requirements test matrix ensures that all user requirements are addressed by the system test team and implemented in the system testing effort.
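The row/column layout described above can be sketched as a small table keyed by requirement; the requirement and test-case IDs below are invented for illustration:

```python
# A minimal requirements test matrix: requirements as rows, test cases
# as columns, plus a helper that flags requirements with no coverage.

matrix = {
    "REQ-001 login":    {"TC-01": True,  "TC-02": False},
    "REQ-002 logout":   {"TC-01": False, "TC-02": True},
    "REQ-003 password": {"TC-01": False, "TC-02": False},  # no coverage yet
}

def uncovered(matrix):
    """Requirements not exercised by any test case."""
    return [req for req, tests in matrix.items() if not any(tests.values())]

gaps = uncovered(matrix)
```

Finding an empty row like `REQ-003` is precisely the tracking benefit the matrix provides: every requirement either maps to at least one test or is flagged as a gap.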
What is Error guessing and Error seeding?
Error Guessing is a test case design technique where the tester has to guess what faults might occur and to design the tests to represent them.
Error Seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.
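The estimate from error seeding is usually computed with the classic seeded-fault ratio (Mills' method): if testers find a fraction of the seeded faults, the real faults found are assumed to be the same fraction of the real total. A sketch with made-up numbers:

```python
# Fault estimation via error seeding (Mills' method):
# estimated total real faults = real_found * seeded_total / seeded_found.

def estimate_remaining(seeded_total, seeded_found, real_found):
    estimated_total = real_found * seeded_total / seeded_found
    return estimated_total - real_found   # faults presumed still in the program

# Example: 10 faults seeded, 8 of them found, and 20 unseeded faults found.
remaining = estimate_remaining(seeded_total=10, seeded_found=8, real_found=20)
```

Finding 8 of 10 seeded faults suggests testing catches 80% of faults, so the 20 real faults found imply about 25 in total, i.e. roughly 5 remaining. The estimate assumes seeded faults are as hard to find as real ones, which is the method's main weakness.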
What is Re- test? What is Regression Testing?
Re-test - Retesting means testing only a certain part of an application again, without considering how the change affects other parts or the whole application.
Regression Testing - Testing the application after a change in a module or part of the application, to verify that the code change has not affected the rest of the application.
What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.
What is Regression Testing?
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.
What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
What is Sanity Testing?
Brief test of major functional elements of a piece of software to determine if it is basically operational.
What is Scalability Testing?
Performance testing focused on ensuring that the application under test gracefully handles increases in workload.
What is Test bed and Test data?
Test Bed is an execution environment configured for software testing. It consists of specific hardware, network topology, Operating System, configuration of the product to be under test, system software and other applications. The Test Plan for a project should be developed from the test beds to be used.
Test Data is data that is run through a computer program to test the software. Test data can be used to test the compliance with effective controls in the software.
What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.
What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.
What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.
What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.
What is Software Testing?
A set of activities conducted with the intent of finding errors in software.
What is Static Analysis?
Analysis of a program carried out without executing the program.
Why does software have bugs?
* Miscommunication or no communication - about the details of what an application should or shouldn't do.
* Programming errors - in some cases the programmers can make mistakes.
* Changing requirements - the end-user may not understand the effects of changes, or may understand them and request them anyway; this leads to redesign, rescheduling of engineers, effects on other projects, and work already completed having to be redone or thrown out.
* Time pressure - planning of software projects is difficult at best, often requiring a lot of guesswork. When deadlines loom and the crisis comes, mistakes will be made.
What is UAT testing? When it is to be done?
UAT testing - UAT stands for 'User Acceptance Testing'. This type of testing comes at the final stage and is mostly done against the specifications of the end-user or client. It is usually done before the release.
What are the common problems in the software development process?
* Inadequate requirements from the client - if the requirements given by the client are unclear, unfinished, and not testable, problems may arise.
* Unrealistic schedules - sometimes too much work is given to the developer with only a short duration to complete it; then problems are unavoidable.
* Insufficient testing - problems can arise when the developed software is not tested properly.
* Additional work under the existing process - a request from higher management to work on another project or task will bring problems when the project is being tested as a team.
* Miscommunication - in some cases the developer is not informed about the client's requirements and expectations, so there can be deviations.
What are the basic solutions for the software development problem?
* Solid requirements - clear, detailed, complete, achievable, testable requirements have to be developed. Use prototypes to help pin down requirements. In agile environments, continuous and close coordination with customers/end-users is needed.
* Realistic schedules - allow enough time to plan, design, test, fix bugs, re-test, change, and document within the given schedule.
* Adequate testing - start testing early, re-test after bugs are fixed or changes are made, and spend enough time on testing and bug-fixing.
* Proper study of initial requirements - be ready to accommodate more changes after development has begun, and be ready to explain the changes to others. Work closely with customers and end-users to manage expectations. This avoids excessive changes in the later stages.
* Communication - conduct frequent inspections and walkthroughs at appropriate times; ensure that information and documentation is available and up-to-date, preferably in electronic form. Promote teamwork and cooperation inside the team; use prototypes and proper communication with the end-users to clarify their doubts and expectations.
What is Defect Bash?
Thoroughly locating defects after regression testing, accompanied by smoke testing; the defects are profiled in a detailed way and mailed back to the developers who are supposed to fix them.
What are the contents in an effective Bug report?
Project, Subject, Description, Summary, Detected By (Name of the Tester), Assigned To (Name of the Developer who is supposed to fix the Bug), Test Lead (Name), Detected in Version, Closed in Version, Date Detected, Expected Date of Closure, Actual Date of Closure, Priority (Medium, Low, High, Urgent), Severity (Ranges from 1 to 5), Status, Bug ID, Attachment, Test Case Failed (Test case that failed for the Bug)
If the client identified some bugs to whom did he reported?
He will report to the Project Manager. The Project Manager will arrange a meeting with all the leads (Development Manager, Test Lead and Requirement Manager), raise a Change Request, and then identify which screens are going to be impacted by the bug. They will take the code, correct it, and send it to the Testing Team.
What is Six sigma? Explain.
Six Sigma:
A quality discipline that focuses on product and service excellence to create a culture that demands perfection on target, every time.
Six Sigma quality levels
Produces 99.99966% accuracy, with only 3.4 defects per million opportunities.
Six Sigma is designed to dramatically upgrade a company's performance, improving quality and productivity. Using existing products, processes, and service standards, companies apply the Six Sigma MAIC methodology to upgrade performance.
MAIC is defined as follows:
Measure: Gather the right data to accurately assess a problem.
Analyze: Use statistical tools to correctly identify the root causes of a problem.
Improve: Correct the problem (not the symptom).
Control: Put a plan in place to make sure problems stay fixed and sustain the gains.
Key Roles and Responsibilities:
The key roles in all Six Sigma efforts are as follows:
Sponsor: Business executive leading the organization.
Champion: Responsible for Six Sigma strategy, deployment, and vision.
Process Owner: Owner of the process, product, or service being improved responsible for long-term sustainable gains.
Master Black Belts: Coach Black Belts; experts in all statistical tools.
Black Belts: Work on 3 to 5 $250,000-per-year projects; create $1 million per year in value.
Green Belts: Work with black belt on projects.
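The defect-rate figure quoted above follows directly from the definition; a minimal sketch of the arithmetic:

```python
# Sketch: accuracy implied by a defects-per-million-opportunities (DPMO) rate.
# At the Six Sigma level, 3.4 DPMO corresponds to roughly 99.9997% accuracy.

def accuracy_from_dpmo(dpmo):
    return 100 * (1 - dpmo / 1_000_000)

six_sigma_accuracy = accuracy_from_dpmo(3.4)
```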
What is V-Model?
Many of the process models currently used can be more generally connected by the V-model, where the "V" describes the graphical arrangement of the individual phases. The V also stands for verification and validation.
The model is very simple and easy to understand. By ordering the activities in time sequence and by abstraction level, the connection between development and test activities becomes clear. Activities lying opposite one another complement each other, i.e. they serve as a base for test activities. So, for example, the system test is carried out on the basis of the results of the specification phase. The coarse view of the model gives the impression that the test activities first start after the implementation. However, in the description of the individual activities the preparatory work is usually listed. So, for example, the test plan and test strategy should be worked out immediately after the definition of the requirements. Nevertheless, the model can contribute very well to the structuring of the software development process.
What is data integrity?
Data integrity is one of the six fundamental components of information security. Data integrity is the completeness, soundness, and wholeness of the data that also complies with the intention of the creators of the data.
In databases, important data - including customer information, order database, and pricing tables - may be stored. In databases, data integrity is achieved by preventing accidental, or deliberate, or unauthorized insertion, or modification, or destruction of data.
How do you test data integrity?
Data integrity is tested by the following tests:
Verify that you can create, modify, and delete any data in tables.
Verify that sets of radio buttons represent fixed sets of values.
Verify that a blank value can be retrieved from the database.
Verify that, when a particular set of data is saved to the database, each value gets saved fully, and the truncation of strings and rounding of numeric values do not occur.
Verify that the default values are saved in the database, if the user input is not specified.
Verify compatibility with old data, old hardware, versions of operating systems, and interfaces with other software.
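The "each value gets saved fully, no truncation or rounding" check above can be sketched with an in-memory SQLite database. This is a minimal illustration, not a full integrity suite; the table and column names are made up for the example.

```python
import sqlite3

# Save a row, read it back, and verify nothing was truncated or
# rounded on the way (hypothetical "customers" table).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, balance REAL)")

row_in = ("a" * 255, 1234.56)          # long string + exact decimal
conn.execute("INSERT INTO customers VALUES (?, ?)", row_in)
row_out = conn.execute("SELECT name, balance FROM customers").fetchone()

assert row_out[0] == row_in[0], "string was truncated"
assert row_out[1] == row_in[1], "numeric value was rounded"
```

The same pattern extends to default values and blank values: insert without specifying a column and assert that the retrieved value matches the documented default.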
Why do we perform data integrity testing?
Because we want to verify the completeness, soundness, and wholeness of the stored data. Testing should be performed on a regular basis, because important data could, can, and will change over time.
What is the difference between monkey
testing and smoke testing?
1: Monkey testing is random testing; smoke testing is nonrandom testing that deliberately exercises the entire system from end to end, with the goal of exposing any major problems.
2: Monkey testing is performed by automated testing tools, while smoke testing is usually performed manually.
3: Monkey testing is performed by "monkeys", while smoke testing is performed by skilled testers.
4: "Smart monkeys" are valuable for load and stress testing, but not very valuable for smoke testing, because they are too expensive for smoke testing.
5: "Dumb monkeys" are inexpensive to develop, are able to do some basic testing, but, if we used them for smoke testing, they would find few bugs during smoke testing.
6: Monkey testing is not thorough testing, but smoke testing is thorough enough that, if the build passes, one can assume the program is stable enough to be tested more thoroughly.
7: Monkey testing either does not evolve, or evolves very slowly. Smoke testing, on the other hand, evolves as the system evolves from something simple to something more thorough.
8: Monkey testing takes "six monkeys" and a "million years" to run. Smoke testing, on the other hand, takes much less time to run, i.e. from a few seconds to a couple of hours.
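A "dumb monkey" from point 5 can be sketched in a few lines: feed random strings to a function and record which inputs make it crash. The function name `dumb_monkey` and the trial count are illustrative choices, not a standard API.

```python
import random
import string

def dumb_monkey(func, n_trials=1000, seed=0):
    """Feed random printable strings to func; collect inputs that raise."""
    rng = random.Random(seed)          # seeded so runs are repeatable
    failures = []
    for _ in range(n_trials):
        s = "".join(rng.choice(string.printable)
                    for _ in range(rng.randint(0, 20)))
        try:
            func(s)
        except Exception as exc:
            failures.append((s, exc))
    return failures
```

For example, `dumb_monkey(len)` finds no failures, while `dumb_monkey(int)` quickly collects strings that are not valid integers, which is exactly the kind of basic, unguided bug-finding monkey testing provides.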
What are the technical reviews and reviews?
Every document should be reviewed. A technical review means that, for each screen, the developer writes a Technical Specification, which is then reviewed by both the developer and the tester. Other reviews include the functional specification review, unit test case review, code review, etc.
What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after the application is delivered. If, after the release of the application to the client, the end user finds any defect while using the application, it is called defect leakage. Defect leakage is also called a bug leak.
When testing the password field, what is your focus?
When testing the password field, one needs to focus on encryption; one needs to verify that the passwords are encrypted.
1 : With thorough testing it is possible to remove all defects
from a program prior to delivery to the customer.
a. True
b. False
ANSWER : b
2 : Which of the following are characteristics of testable software ?
a. observability
b. simplicity
c. stability
d. all of the above
ANSWER : d
3 : The testing technique that requires devising test cases to demonstrate that each program function is operational is called
a. black-box testing
b. glass-box testing
c. grey-box testing
d. white-box testing
ANSWER : a
4 : The testing technique that requires devising test cases to exercise the internal logic of a software module is called
a. behavioral testing
b. black-box testing
c. grey-box testing
d. white-box testing
ANSWER : d
5 : What types of errors are missed by black-box testing and can be uncovered by white-box testing ?
a. behavioral errors
b. logic errors
c. performance errors
d. typographical errors
e. both b and d
ANSWER : e
6 : Program flow graphs are identical to program flowcharts.
a. True
b. False
ANSWER : b
7 : The cyclomatic complexity metric provides the designer with information regarding the number of
a. cycles in the program
b. errors in the program
c. independent logic paths in the program
d. statements in the program
ANSWER : c
8 : The cyclomatic complexity of a program can be computed directly from a PDL representation of an algorithm without drawing a program flow graph.
a. True
b. False
ANSWER : a
9 : Condition testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs
ANSWER : b
10 : Data flow testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs
ANSWER : c
11 : Loop testing is a control structure testing technique where the criteria used to design test cases is that they
a. rely on basis path testing
b. exercise the logical conditions in a program module
c. select test paths based on the locations and uses of variables
d. focus on testing the validity of loop constructs
ANSWER : d
12 : Black-box testing attempts to find errors in which of the following categories
a. incorrect or missing functions
b. interface errors
c. performance errors
d. all of the above
e. none of the above
ANSWER : d
13 : Graph-based testing methods can only be used for object-oriented systems
a. True
b. False
ANSWER : b
14 : Equivalence testing divides the input domain into classes of data from which test cases can be derived to reduce the total number of test cases that must be developed.
a. True
b. False
ANSWER: a
15 : Boundary value analysis can only be used to do white-box testing.
a. True
b. False
ANSWER: b
16 : Comparison testing is typically done to test two competing products as part of customer market analysis prior to product release.
a. True
b. False
ANSWER : b
17 : Orthogonal array testing enables the test designer to maximize the coverage of the test cases devised for relatively small input domains.
a. True
b. False
ANSWER : a
18 : Test case design "in the small" for OO software is driven by the algorithmic detail of the individual operations.
a. True
b. False
ANSWER : a
19 : Encapsulation of attributes and operations inside objects makes it easy to obtain object state information during testing.
a. True
b. False
ANSWER : b
20 : Use-cases can provide useful input into the design of black-box and state-based tests of OO software.
a. True
b. False
ANSWER : a
21 : Fault-based testing is best reserved for
a. conventional software testing
b. operations and classes that are critical or suspect
c. use-case validation
d. white-box testing of operator algorithms
ANSWER : b
22 : Testing OO class operations is made more difficult by
a. encapsulation
b. inheritance
c. polymorphism
d. both b and c
ANSWER : d
23 : Scenario-based testing
a. concentrates on actor and software interaction
b. misses errors in specifications
c. misses errors in subsystem interactions
d. both a and b
ANSWER : a
24 : Deep structure testing is not designed to
a. examine object behaviors
b. exercise communication mechanisms
c. exercise object dependencies
d. exercise structure observable by the user
ANSWER : d
25 : Random order tests are conducted to exercise different class instance life histories.
a. True
b. False
ANSWER : a
26 : Which of these techniques is not useful for partition testing at the class level
a. attribute-based partitioning
b. category-based partitioning
c. equivalence class partitioning
d. state-based partitioning
ANSWER : c
27 : Multiple class testing is too complex to be tested using random test cases.
a. True
b. False
ANSWER : b
28 : Tests derived from behavioral class models should be based on the
a. data flow diagram
b. object-relation diagram
c. state diagram
d. use-case diagram
ANSWER : c
29 : Client/server architectures cannot be properly tested because network load is highly variable.
a. True
b. False
ANSWER : b
30 : Real-time applications add a new and potentially difficult element to the testing mix
a. performance
b. reliability
c. security
d. time
ANSWER : d
What's the Software Testing?
A set of activities conducted with the intent of finding errors in software.
What is the Purpose of Testing?
To check whether the system meets its requirements.
What is the need for testing?
To make an error-free product and reduce development cost.
What is the Outcome of Testing?
A system which is bug-free and meets the requirements.
When to start and Stop Testing?
Stop when the system meets the requirements and there is no further change in functionality.
After completing testing, what would you deliver to the client?
A bug-free product.
How many types of testing?
There are two types of testing-
* Functional - Black Box Testing
* Structural - White Box Testing
What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. or Black Box Testing.
What is Structural Testing?
Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. or White box testing.
What's the Black Box testing?
Black Box testing is not based on any knowledge of internal logic.
What's the White Box testing?
White Box testing is based on knowledge of internal logic.
What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies testing a piece of software against its specification but using some knowledge of its internal workings.
What are the three techniques of Black Box testing?
Three techniques are-
* Equivalence Partitioning
* Boundary Value Analysis
* Error Guessing
What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.
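The equivalence classes for the user-ID example earlier in these notes (lowercase letters, length 4 to 10) can be sketched as a small classifier; one representative from each class is then enough for a test case. The function name and class labels are illustrative.

```python
import string

def partition(user_id):
    """Assign an input to an equivalence class for a user-ID field
    that accepts lowercase letters a-z, length 4 to 10."""
    if not user_id:
        return "empty"
    if any(ch not in string.ascii_lowercase for ch in user_id):
        return "invalid-characters"
    if len(user_id) < 4:
        return "too-short"
    if len(user_id) > 10:
        return "too-long"
    return "valid"

# One representative per class instead of testing every possible string.
assert partition("abcd") == "valid"
assert partition("ABCD") == "invalid-characters"
assert partition("abc") == "too-short"
```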
What's the Boundary Value Analysis?
Boundary Value Analysis is a test data selection technique in which values are chosen at the maximum, minimum, just inside and just outside the boundaries, plus typical values and error values. The hope is that if the software works correctly for these values, it will work for all values in between.
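For the length rule from the user-ID example (4 to 10 characters), the classic boundary picks can be generated mechanically. The `accepts` rule below is a hypothetical stand-in for the application under test.

```python
def boundary_values(lo, hi):
    """Classic BVA picks: just below, on, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts(user_id, lo=4, hi=10):
    """Hypothetical validation rule: lowercase letters, length 4-10."""
    return user_id.isalpha() and user_id.islower() and lo <= len(user_id) <= hi

# min-1 and max+1 should fail; everything on or inside the bounds passes.
for n, should_pass in zip(boundary_values(4, 10),
                          [False, True, True, True, True, False]):
    assert accepts("a" * n) == should_pass
```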
What’s the Error guessing?
A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specifically to expose them.
What are the various levels of testing?
Unit, Integration, System and Acceptance testing.
What's the Unit Testing?
Testing of individual components of software.
What's the Integration Testing?
In Integration Testing, we test combined parts of the application to determine whether they work together correctly.
How many types of approaches are used in Integration Testing?
There are two types of approaches used-
* Bottom-Up
* Top-Down
What's the Bottom-up Testing?
An approach to integration testing where the lowest-level components are tested first.
What's the Top-Down testing?
An approach to integration testing where the top-level components are tested first.
What's the System Testing?
System Testing is the third level of testing. At this level we check the functionality of the application as a whole.
What's the Acceptance Testing?
Testing conducted to enable a user or customer to determine whether to accept a software product.
Why do you go for White box testing, when Black box testing is available?
To check code, branches and loops in code.
What types of testing do testers perform?
Black Box & White Box testing.
Can Automation testing replace manual testing? If it so, how?
No. When there are many modifications in functionality, it becomes nearly impossible to keep the automated scripts updated, so manual testing is still required.
What are the entry criteria for Automation testing?
Should have stable code.
Who's the good software engineer?
A good software engineer has a "test to break" attitude, an ability to take the point of view of the customer, and a strong desire for quality.
What are the Qualities of a Tester?
Should have the ability to find hidden bugs as early as possible in the SDLC.
What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer. (or) Bug-prevention activity is called QA.
What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.
What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.
What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.
What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.
What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.
What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
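The definition above (multiple accesses to a shared resource with no mechanism to moderate simultaneous access) can be sketched in Python with a shared counter. This sketch shows the fix: a `Lock` makes the read-modify-write step atomic, so the final count is deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, holding the lock for
    each read-modify-write so threads cannot interleave mid-update."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 40_000   # deterministic only because of the lock
```

Without the `with lock:` line, two threads can both read the same value of `counter` before either writes it back, silently losing increments, which is exactly the race condition being defined.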
What is Static Analysis?
Analysis of a program carried out without executing the program.
What is Component?
A minimal software item for which a separate specification is available.
What's Coding?
A generation of source code.
What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
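The idea of "which parts have been executed by the test case suite" can be demonstrated with Python's trace hook. This is a toy stand-in for real coverage tools such as coverage.py; `coverage_of` and `absval` are names invented for the example.

```python
import sys

def coverage_of(func, *args):
    """Record which source lines of func execute for the given arguments."""
    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)   # always remove the hook
    return executed

def absval(x):
    if x < 0:
        return -x
    return x

# Each input exercises a different branch, so the covered line sets
# differ; their union shows what a two-case test suite covers.
assert coverage_of(absval, 5) != coverage_of(absval, -5)
```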
What's the Code Walkthrough?
A Code Walkthrough helps in analyzing the coding techniques and whether the code meets the coding standards.
What's the Code Inspections?
Code Inspection is a formal analysis of the program source code to find defects as defined by failure to meet the system design specifications.
What's the Defect?
The difference between the functional specification and actual program text.
What's the Defect Tracking?
Defect tracking is the process of recording defects found in software and following each one through to resolution.
What's the Debugging?
Debugging is the method of finding and rectifying the cause of software failures.
What is an Inconsistent bug?
A bug that is not consistently reproducible.
What is verification?
To review the document.
What is validation?
To validate the system against the requirements.
What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.
What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.
What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
What's the Test Tool?
A computer program used in testing a system.
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.
What is Race Condition?
A cause of concurrency problems. Multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
What is Static Analysis?
Analysis of a program carried out without executing the program.
What is Component?
A minimal software item for which a separate specification is available.
What's Coding?
A generation of source code.
What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.
What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
What's the Code Walkthrough?
Code Walkthrough help in analyzing the coding techniques and if the code is meeting the coding standards.
What's the Code Inspections?
Code Inspections is formal analysis of the program source code to find defects as defind by meeting system design specifications.
What's the Defect?
The difference between the functional specification and actual program text.
What's the Defect Tracking?
Defect tracking is the process of finding bugs in software.
What's the Debugging?
Debugging is the method of finding and rectifying the cause of software failures.
What is an Inconsistent bug?
Bug which are not reproducible
What is verification?
To review the document.
What is validation?
To validate the system against the requirements.
What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.
What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.
What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).
What's the Test Tool?
A computer program used in testing a system.
What's the Test Driver?
A program or test tool used to execute tests.
What is the test data?
The actual values used in the test or that are necessary to execute the test. (or) Input data against which we validate the system.
or
Test Data is the data that you use to execute your Test Cases or while doing Ad-hoc Testing.
What is a Data Guidelines?
Guidelines which are to be followed for the preparation of test data.
What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.
What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.
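One common shortcut estimates cyclomatic complexity as the number of decision points plus one. The helper below is a rough heuristic for illustration only (not a real analyzer): it just counts decision keywords in Python source text:

```python
import re

def cyclomatic_complexity(source: str) -> int:
    # Rough estimate: count decision keywords, then add one.
    decisions = len(re.findall(r"\b(if|elif|for|while|and|or)\b", source))
    return decisions + 1

snippet = '''
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    return "C"
'''
# Two decision points (if, elif) -> complexity of 3
```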
What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.
What's the Test Bed?
An environment that contains the hardware, software, simulators, testing tools, and other support elements necessary to conduct a test.
Why do you go for Test Bed?
To validate the system against the required input.
What is a Test condition?
Logical input data against which we validate the system. (or) A set of circumstances that a test invokes.
What's the Test Scenario?
It defines a set of test cases or test scripts and the sequences in which they are to be executed.
What's the Test Plan?
A high-level document that defines the software testing project.
What's the Test Case?
A set of test inputs, execution conditions, and expected results developed for a particular objective.
What's the Test Log?
A chronological record of all relevant details about the execution of a test.
What's the Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.
What is Cause Effect Graph?
A graphical representation of inputs and the associated outputs effects which can be used to design test cases.
What is Equivalence Class?
A portion of a component's input or output domains for which the component's behavior is assumed to be the same from the component's specification.
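For example, assuming a field whose specification says it accepts lowercase user IDs of 4 to 10 letters, one representative value per equivalence class is enough to cover each partition:

```python
def is_valid_user_id(s: str) -> bool:
    # Assumed spec: lowercase letters a-z only, length 4 to 10.
    return s.isalpha() and s.islower() and 4 <= len(s) <= 10

# One representative value per equivalence class:
valid_reps = ["abcd", "abcdefghij"]                    # in-range lengths
invalid_reps = ["abc", "abcdefghijk", "ab1d", "ABCD"]  # too short, too long, digit, uppercase
```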
What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.
What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software
1. Test coverage analysis is the process of
a. Creating additional test case to increase coverage
b. Finding areas of program exercised by the test cases.
c. Determining a quantitative measure of code coverage, which is a direct measure of quality.
d. All of the above.
ANSWER : a
2. Critical in web testing
a. Performance & Functionality
b. Functionality & usability
c. Usability & Performance
d. None of the above.
ANSWER : c
3. Class testing
a. require a driver to test
b. no need of instances of other classes
c. no need to test the transitions.
d. All of the above.
ANSWER : a
4. Testing across different languages is called
a. Linguistic testing
b. localization testing.
c. Both a & b
d. None of the above.
ANSWER : b
5. Hierarchical system
a. several levels of component that includes objects & classes
b. several levels of component that includes objects classes & systems
c. several levels of component that includes foundation, components, systems
d. None of the above.
ANSWER : d
6. Hybrid testing
a. combination of one or more testing techniques
b. combination of top-down & bottom-up testing
c. Both a & b
d. None of the above.
ANSWER : b
7. White Box testing
a. same as glass box testing
b. same as clear box testing
c. Both a & b
d. None of the above.
ANSWER : c
8. Build Verification test
a. same as smoke test
b. done after each build to make sure that the build doesn't contain major errors
c. Both a & b
d. None of the above.
ANSWER : c
9. Content writing
a. Similar to proof reading
b. Widely used in web testing
c. part of usability testing
d. All of the above.
ANSWER : d
10. Decision Coverage
a. testing the boolean expressions which are not in control structures.
b. entire expression is considered as boolean expression irrespective of logical-and & logical-or operators
c. coverage except switch-statement cases, exception handlers
d. All of the above.
ANSWER : b
11. Branch coverage
a. another name for decision coverage
b. another name for all edges coverage
c. another name for basic path coverage
d. All of the above.
ANSWER : d
12. The following example is a
if (condition1 && (condition2 || function1()))
    statement1;
else
    statement2;
a. Decision Coverage
b. Condition Coverage
c. Statement Coverage
d. path Coverage
ANSWER : a
13. Desk checking
a. same as code walkthrough
b. same as Code inspection
c. verification of code by the developers
d. none of the above
ANSWER : c
14. The benefits of glass box testing are
a. focused testing, testing coverage, control flow
b. data integrity, internal boundaries, algorithm specific testing
c. both a & b
d. either a or b
ANSWER : a
15. Structural testing
a. same as black box testing
b. same as white box testing
c. same as functional testing
d. none of the above
ANSWER : b
16. Testing user documentation involves
a. improved usability, reliability, maintainability
b. install ability, scalability, liability
c. Both a & b
d. None of the above.
ANSWER : c
17. Smoke testing
a. To find whether the hardware burns out
b. Same as build verification test
c. To find that software is stable
d. None of the above
ANSWER : b
18. Code Walkthrough
a. type of dynamic testing
b. type of static testing
c. neither dynamic nor static
d. performed by the testing team
ANSWER : b
19. Static Analysis
a. same as static testing
b. done by developers
c. Both a & b
d. None of the above.
ANSWER : c
20. User Acceptance testing
a. same as Alpha testing
b. same as Beta testing
c. combination of both Alpha & Beta testing
d. None of the above.
ANSWER : c
21. The Goal of software testing is to
a. Debug the system.
b. Validate that the system behaves as expected
c. Let the developer know the defects injected by him
d. Execute the program with the intent of finding errors
ANSWER : d
22. Which type of testing is performed to test application across different browsers & OS ?
a. Static testing
b. Performance testing
c. Compatibility testing
d. Functional testing
ANSWER : c
23. Which document helps you to track test coverage ?
a. Traceability Matrix
b. Test plan
c. Test log
d. Test summary report
ANSWER : a
24. Functional testing is mostly
a. Validation techniques
b. Verification techniques
c. Both a & b
d. None of the above.
ANSWER : a
25. Which of the following is not a type of test under phases in the testing life cycle ?
a. Integration test
b. Load test
c. User acceptance test
d. Beta test
ANSWER : d
27. Verification performed without any executable code is referred to as
a. Review
b. Static testing
c. Validation
d. Sanity testing
ANSWER : b
28. Use of an executable model to represent the behavior of an object is called
a. Simulation
b. Software item
c. Software feature
d. None of the above.
ANSWER : a
29. UAT is different from other testing normally because of
a. Data
b. Cycles
c. Defects
d. None of the above.
ANSWER : a
30. Alpha testing is differentiated from Beta testing by
a. The location where the tests are conducted
b. The types of test conducted
c. The people doing the testing
d. The degree to which white box techniques are used
ANSWER : a
31. Defects are least costly to correct at what stage of the development cycle ?
a. Requirements
b. Analysis & Design
c. Construction
d. Implementation
ANSWER : a
32. The purpose of software testing is to
a. Demonstrate that the application works properly
b. Detect the defects
c. Validate the logical design
ANSWER : a
33. _______ must be developed to describe when and how testing will occur
a. Test Strategy
b. Test plan
c. Test Design.
d. High level document
ANSWER : b
34. It is difficult to create test scenarios for high-level risks
a. True
b. False
ANSWER : a
35. _________ testing assumes that path of logic in a unit or program is known
a. Black box testing
b. Performance testing
c. White box testing
d. Functional testing
ANSWER : c
36. __________ test is conducted at the developers site by a customer
a. Beta
b. System
c. Alpha
d. None of the above.
ANSWER : c
37. Software testing activities should start
a. As soon as the code is written
b. during the design stage
c. when the requirements have been formally documented
d. as soon as possible in the development life cycle
ANSWER : c
38. Testing where the system is subjected to a large volume of data is
a. System testing
b. Volume testing
c. Statistical testing
d. Statement testing
ANSWER : b
39. Decision to stop testing should be based upon
a. Successful use of specific test case design methodologies
b. A percentage of coverage for each coverage category
c. Rate of error detection falls below a specified threshold
d. All of the above
ANSWER : d
40. Which testing methods are used by end users who actually test software before they use it
a. Alpha & Beta testing
b. White box testing
c. Black box testing
d. Trial & Error testing
ANSWER : a
41. Recovery testing aims at verifying the systems ability to recover from varying degrees of failure
a. True
b. False
ANSWER : a
42. Integration testing where no incremental testing takes place prior to all the systems components being combined to form the system
a. System testing
b. Component testing
c. Incremental testing
d. Big bang testing
ANSWER : d
43. Testing error messages falls under the ________ category of testing
a. Incremental testing
b. Thread testing
c. Documentation testing
d. Stress testing
ANSWER : c
44. What is COTS ?
a. Commercial On-the-shelf software
b. Commercial Off-the-shelf software
c. Common Offshore testing software
ANSWER : b
45. What type of efficiency can be evaluated by testing ?
a. Software system
b. Testing
c. Development
d. a & c
e. a & b
ANSWER : e
46. Test Result data is
a. Test Transactions
b. Test events
c. Business Objectives
d. Reviews
ANSWER : c
47. Which is section of summary status report
a. Vital project information
b. General project information
c. Project Activities information
d. Time line information
ANSWER : d
48. Which is not test result data ?
a. Test factors
b. Interface objective
c. Platform
d. Test estimation
ANSWER : d
49. Project status report provides
a. General view of a project
b. General view of all the projects
c. Detailed view of all the projects
d. Detailed information about a projects
ANSWER : d
50. Summary (Project) status report provides
a. General view of a project
b. General view of all the projects
c. Detailed view of all the projects
d. Detailed information about a projects
ANSWER : b
51. Do the current project results meet the performance requirements ? Which section of the Project Status Report should I look at ?
a. Vital project information
b. General project information
c. Project Activities information
d. Essential Elements information
ANSWER : d
52. What is not primary data given by the tester in test execution
a. Total number of tests
b. Number of test cases written for change request
c. Number of tests executed to date
d. Number of tests executed successfully to date
ANSWER : b
53. Validation is
a. Execute test
b. Review code
c. Desk check
d. Audit
ANSWER : a
54. Which is not a phase of SDLC ?
a. Initiation Phase
b. Definition Phase
c. Planning Phase
d. Programming & Training phase
ANSWER : c
55. Structural testing is not
a. Installation testing
b. Stress testing
c. Recovery testing
d. Compliance testing
ANSWER : b
56. Who is policy / oversight participant in SDLC ?
a. Project Manager
b. Contracting Officer
c. Information Technology Manager
d. Information Resources Management official
ANSWER : d
57. Comparison of Expected benefit versus the cost of the solution is done in which phase of SDLC
a. Definition Phase
b. Design phase
c. Initiation Phase
d. Implementation phase.
ANSWER : c
58. Structural testing is
a. Requirements are properly satisfied by the application
b. Uncover errors during coding of the program
c. Functions works properly
d. To test how the business requirements are implemented
ANSWER : b
59. "V" testing process is
a. System development process & system test process begins
b. Testing starts after coding is done
c. Do procedures are followed by Check procedures
d. Testing starts after prototype is done
ANSWER : a
60. Functional testing is
a. Path testing
b. Technology has been used properly.
c. Uncover errors that occurs in implementing requirements
d. Uncover errors in program unit
ANSWER : c
61. Which of the following is a structural testing process ?
a. Parallel
b. Regression
c. Stress
d. Intersystem
ANSWER : c
62. Stress testing transactions can be obtained from
a. Test data generators
b. Test transactions created by the test group
c. Transactions previously processed in the production environment
d. All the above
ANSWER : d
63. Who will assess vulnerability in the system
a. Internal control officer
b. System Security Officer
c. QA Engineer
d. Test Manager
ANSWER : a
64. Objective of review meeting is
a. to identify problems with design
b. to solve problems with design
c. Both a & b
d. None of the above.
ANSWER : a
65. Metrics collected during testing include
a. System test cases planned / executed / passed
b. Discrepancies reported / resolved
c. Staff hours
d. All of the above
ANSWER : d
66. Review is one of the methods of V & V. The other methods are
a. Inspection
b. Walkthrough
c. Testing
d. All of the above
ANSWER : d
67. According to Crosby, it is less costly
a. Let the customers find the defects.
b. detect defects than to prevent them
c. prevent defects than to detect them
d. Ignore minor defects.
ANSWER : c
68. Cost of quality is
a. Prevention costs
b. Appraisal costs
c. Failure costs
d. All of the above
ANSWER : d
69. Which of the following metrics involves defects reported by client
a. Test efficiency
b. Test efficiencies
c. Test Coverage
d. None of the above
ANSWER : b
70. Detecting a defects at which of the following stage is most economical ?
a. Design
b. Build
c. Testing
d. Development
ANSWER : b
71. Most imprecise definition for quality is
a. Fitness for use
b. Meeting customers expectations
c. Completeness of requirements
d. Efficient & effective product
ANSWER : c
72. Test efficiency is directly proportional to
a. Product delivery
b. Functional coverage
c. Product cost
d. Product Reliability
ANSWER : d
73. If Quality Control & Quality Assurance are compared
a. Both are literally the same
b. QA is a higher activity in the management
c. QC is a higher activity in the management
d. QA is done by client & QC is done by the software vendor
ANSWER : c
74. Some of the metrics which are collected in a testing project are
1. Productivity 2. Test effectiveness 3. Requirement stability 4. Bug fix rate
a. 1 & 2
b. 2 & 3
c. 1, 2 & 4
d. 1 & 4
ANSWER : c
75. Baseline means
a. A single software product may or may not fully support a business function
b. A quantitative measure of the current level of performance
c. A test or analysis conducted after an application is moved into production
d. None of the above
ANSWER : b
75. What is used to measure the characteristics of the documentation & code ?
a. Process metrics
b. Product metrics
c. Software Quality metrics
d. None of the above
ANSWER : b
76. Benchmarking is
a. Comparing your company’s products, services, or processes against best practices or competitive practices to help define superior performance of a product, service, or support process
b. A quantitative measure of the current level of performance
c. A test or analysis conducted after an application is moved into production
d. None of the above
ANSWER : a
77. An example for a failure cost is
a. Training
b. Inspections
c. Rework
ANSWER : c
78. How many Deming principles are there ?
a. 10
b. 14
c. 5
d. 7
ANSWER : b
79. How many levels are in the CMM ?
a. 18
b. 3
c. 4
d. 5
ANSWER : d
80. Who is essential responsible for the quality of a product ?
a. Customer
b. QA Manager
c. Development Manager
ANSWER : c
81. The Pareto analysis is most effective for
a. Ranking items by importance
b. Showing relationships between items
c. Measuring the impact of identified items
ANSWER : a
82. What are the 3 costs that make up the Cost of Quality ?
a. Prevention, Appraisal, Failure
b. Appraisal, Development, Testing
c. Testing, Prevention, Rework
ANSWER : a
83. Appraisal costs are
a. Costs associated with preventing errors
b. Costs associated with detection of errors
c. Costs associated with defective products delivered to customers
ANSWER : b
84. What are expected production costs ?
a. Labor, Materials and Equipment
b. Personnel, Training and rollout
c. training, testing, user acceptance
ANSWER : a
85. Review is what category of cost of Quality
a. Preventive
b. Appraisal
c. failure
ANSWER : b
86. The largest cost of quality is from production failure
a. True
b. False
ANSWER : a
87. Juran is famous for
a. Quality Control
b. Working on Trend Analysis
c. Pareto.
d. Fish Bone diagram
ANSWER : b
88. The _______ is an application of process management & quality improvement concepts to software development & maintenance
a. Malcolm Baldrige
b. Thread testing
c. Documentation testing
d. Stress Testing
ANSWER : c
89. QC is
a. Phase building activity
b. Intermediate activity
c. End of phase activity
d. Design activity
ANSWER : c
90. Test cases need to be written for
a. invalid & unexpected conditions
b. valid & expected conditions
c. Both a & b
d. None of the above
ANSWER : c
91. Path coverage includes
a. statement coverage
b. condition coverage
c. decision coverage
d. none of these
ANSWER : c
92. Tools usage is
a. very useful in regression testing
b. saves times
c. helpful in simulating users
d. all of the above
ANSWER : d
93. Characteristics of good test
a. reasonable probability of catching an error & can be redundant
b. It is neither simple nor complex
c. reasonable probability of catching an error & cannot be redundant
d. It is either simple or too complex
ANSWER : c
94. Sources of Regression test cases are
a. boundary test & other preplanned tests
b. tests that reveal bugs in the program
c. Customer reported bugs
d. All the above
ANSWER : d
95. System testing is responsible for
a. Performing the data validations
b. Performing the Usability testing
c. Performing the Beta testing
d. none of the above
ANSWER : d
96. Localization testing
a. Testing performed for local functions
b. Testing across different languages
c. Testing across different locations
d. None of the above
ANSWER : b
97. Object oriented testing is
a. Same as Top-Down testing
b. Same as Bottom-up testing
c. Same as Hybrid testing
d. All the above
ANSWER : d
98. Test plan is
a. Road map for testing
b. Tells about the actual results & expected results
c. Both a & b
d. None of the above
ANSWER : a
99. Test script is
a. written version of test cases
b. code used in manual testing
c. Always used when we use tools
d. A code segment to replace the test case
ANSWER : d
100. Test Procedure is
a. collection of test plans
b. combination of test cases & test plans
c. collection of test cases
d. none of the above
ANSWER : c
1. State which one is false
a. In performance testing, usage of a tool is a must
b. In database testing, database knowledge is a must
c. In functional testing, knowledge of business logic is a must
d. None of the above.
ANSWER : d
2. Manual testing
a. performed at least one time
b. needs to be executed before going for automation
c. Both a & b
d. Neither a or b
ANSWER : c
3. State which one is true. Collection of testing metrics contributes
a. in the improvement of testing
b. Affects tester’s growth
c. Used against a developer
d. none
ANSWER : a
4. What is the use of Affinity Diagram ?
a. A group process that takes large amounts of language data such as a list developed by brain storming and divides it into categories
b. A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case
c. A test method that requires that each possible branch on each decision point be executed at least once
d. None of the above
ANSWER : a
5. Which of the following technique is the most suitable for negative testing
a. Boundary value analysis
b. Internal value analysis
c. State transition testing
d. All the above
ANSWER : d
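To make the boundary-value option concrete, here is a minimal sketch (assuming a field that accepts values in the range 4 to 10) checking the classic min-1, min, min+1, max-1, max, max+1 cases against their expected pass/fail outcomes:

```python
# Boundary value analysis sketch for an assumed valid range of 4..10.
MIN, MAX = 4, 10

def in_range(n: int) -> bool:
    return MIN <= n <= MAX

# The classic boundary cases and the outcome each should produce:
boundary_cases = {MIN - 1: False, MIN: True, MIN + 1: True,
                  MAX - 1: True, MAX: True, MAX + 1: False}
results = {n: in_range(n) == expected for n, expected in boundary_cases.items()}
# All entries in results should be True if the range check is correct.
```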
6. The following best describes the defect density
a. ratio of failure received per unit of time
b. ratio of discovered errors per size of code
c. modifications made per size of code
d. number of failure reported against the code
ANSWER : b
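The ratio in option b is usually normalized per KLOC (thousand lines of code); a minimal sketch:

```python
def defect_density(defects_found: int, loc: int) -> float:
    # Discovered errors per KLOC (thousand lines of code).
    return defects_found / (loc / 1000)

# e.g. 12 defects discovered in 4,000 lines of code:
density = defect_density(12, 4000)  # 3.0 defects per KLOC
```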
7. How do you test a module for integration ?
a. Big bang approach
b. Pareto analysis
c. Cause & effect diagram
d. Scatter diagram
ANSWER : a
8. 80:20 rule can also be called as
a. Fish bone diagram
b. Pareto analysis
c. Scatter diagram
d. Histogram
ANSWER : b
9. Which of the following describes the difference between clear box & opaque box ?
1. Clear box is structural testing, opaque box is functional testing
2. Clear box is done by tester, & opaque box is done by developer
3. Ad-hoc testing is a type of opaque box testing
a. 1 only
b. 1 & 3
c. 2
d. 3
ANSWER : b
10. Coverage based analysis is best described as
a. A metric used to show the logic covered during a test session providing insight to the extent of testing
b. A tool for documenting the unique combinations of conditions & associated results in order to derive unique test cases for validation testing
c. Tools for documenting defects as they are found during testing & for tracking their status through to resolution
d. The most traditional means for analyzing a system or a program
ANSWER : a
11. Which of the following best describes validation
a. Determination of the correctness of the final program or software produced from a development project with respect to the user needs & requirements
b. A document that describes testing activities & results and evaluates the corresponding test items
c. Test data that lie within the function represented by the program
d. All the above
ANSWER : a
12. What can be done to minimize the recurrence of defects
a. Defect Prevention plan
b. Defect tracking
c. Defect Management
d. All of the above
ANSWER : c
13. What needs to be done when there is insufficient time for testing
1. Do Ad-hoc testing
2. Do usability testing
3. Do sanity testing
4. Do a risk based analysis to prioritize
a. 1 & 2
b. 3 & 4
c. All of the above
d. None of the above
ANSWER : b
14. What is the scenario in which automation testing is done
1. Application is stable
2. Usability testing is to be done
3. The project is short term
4. Long term project having numerous releases
a. 1
b. 1 & 4
c. 1 & 2
d. 2 & 3
ANSWER : b
15. Choose the best match for cyclomatic complexity
a. The number of decision statements plus one
b. A set of Boolean conditions such that complete test sets for the conditions uncover the same errors
c. The process of analyzing & correcting syntactic, logic & other errors identified during testing
d. None of the above
ANSWER : a
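Per option a above, cyclomatic complexity can be approximated as the number of decision statements plus one. A small sketch that counts branching nodes in a Python function's AST (treating `and`/`or` operators as decisions is a convention assumption, not part of the question):

```python
import ast

# Approximate cyclomatic complexity: decision points + 1.
DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic(src: str) -> int:
    tree = ast.parse(src)
    return 1 + sum(isinstance(node, DECISIONS) for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "neg"
    for i in range(x):
        if i % 2:
            pass
    return "ok"
"""
print(cyclomatic(code))  # 3 decisions (if, for, if) + 1 = 4
```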
16. How can it be known when to stop testing ?
a. when no more bugs can be found
b. when the time allocated is over
c. when the quality goals set up for testing have been achieved
d. All of the above
ANSWER : d
17. Which of the following is LEAST likely to be used during software maintenance
a. Project management plan
b. Customer support hot line
c. Software problem reports
d. Change control board
ANSWER : a
18. What can be done if requirements are changing continuously
a. Work with the project's stakeholders early on to understand how requirements might change, so that alternate test plans & strategies can be worked out in advance, if possible
b. Negotiate to allow only easily implemented new requirements into the project, while moving more difficult new requirements into future versions of the application
c. Both a & b
d. None of the above
ANSWER : c
19. Following are some of the testing risks
a. Budget, Test environment
b. Budget, Number of qualified test resources
c. Budget, Number of qualified test resources, Test environment
d. None of the above
ANSWER : c
20. What are the two major components taken into consideration with risks analysis ?
a. The probability
b. The potential loss or impact associated with the event
c. Both a & b
d. None of the above
ANSWER : c
21. Test plan defines
a. What is selected for testing
b. Objectives & results
c. Expected results
d. Targets & misses
ANSWER : b
22. Testing planning should begin
a. At the same time that requirements definition begins
b. When building starts
c. When code build is complete
d. After shipping the first version
ANSWER : a
23. What is common limitation of automated testing
a. They are not useful for performance testing
b. They cannot be used for requirement Validation
c. It is very difficult for automated scripts to verify a wide range of application responses
d. They are not useful when requirements are changing frequently
ANSWER : d
24. Test effort estimation uses which of the following techniques
a. Functional point method
b. Test case point method
c. Use case point method
d. All of the above
ANSWER : d
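As a rough illustration of the test case point method listed in option b, test cases can be bucketed by complexity and weighted. The weights below (person-hours per test case) are illustrative assumptions, not a published standard:

```python
# Test-case-point effort estimation sketch. The per-case weights in
# person-hours are illustrative assumptions only.
WEIGHTS = {"simple": 0.5, "average": 1.0, "complex": 2.0}

def estimate_effort(case_counts: dict) -> float:
    """Return estimated test effort in person-hours."""
    return sum(WEIGHTS[kind] * n for kind, n in case_counts.items())

# 40 simple + 30 average + 10 complex test cases:
print(estimate_effort({"simple": 40, "average": 30, "complex": 10}))  # 70.0
```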
25. From a testing perspective, what results from the clicking of a button ?
a. An interface event
b. A sound.
c. A text item
d. A bio metric event
e. An internal processing event
ANSWER : a
26. Test design emphasizes all the following except
a. Data planning
b. Test procedures planning
c. Mapping the data & test cases
d. Data synchronization
ANSWER : d
27. Which type of test would you perform to accept a build
a. Beta test
b. Smoke test
c. functional test
d. User acceptance test
ANSWER : b
28. System testing includes all the following except
a. Performances services
b. Security services
c. Usability services
d. Monitoring services
ANSWER : d
29. Function points are used for estimating
a. Size
b. Effort
c. Cost
d. None of the above
ANSWER : a
30. Size of a project is defined in terms of all the following except
a. Persons days
b. Persons hours
c. Calendar months
d. None of the above
ANSWER : c
31. Deliverables of test design include all the following except
a. Test data
b. Test data plan
c. Test summary report
d. Test procedure plan
ANSWER : c
32. Which of the following is not decided in the test-planning phase
a. Schedules & Deliverables
b. Hardware & software
c. Entry & exit criteria
d. Types of test cases
ANSWER : d
33. Compatibility testing for products involves all the following except
a. Certified & supported client environments
b. High & low level sanity testing
c. Client & server side Compatibility
d. Functional & non Functional Compatibility
ANSWER : b
34. Regression testing mainly helps in
a. Retesting fixed defects
b. Checking for side effects of fixes
c. Checking the core gaps
d. Ensuring high level sanity
ANSWER : b
35. Evaluating business importance & testing the core business cases in an application is called
a. Risk based testing
b. High level sanity testing
c. Low level sanity testing
d. Regression testing
ANSWER : b
36. Load testing emphasizes performance under load, while stress testing emphasizes
a. Breaking load
b. Performance under stress
c. Performance under load
d. There is no such difference, both are same
ANSWER : a
37. Which of the following is not a form of Performance testing
a. Spike testing
b. Volume testing
c. Transaction testing
d. Endurance testing
ANSWER : c
38. Per economics of testing - optimum test is suggested because
a. Number of defects decrease along with extent of testing
b. Number of defects increase along with extent of testing
c. Cost of testing increases with extent of testing
d. Cost of testing increases with number of defects
ANSWER : c
39. In a V- model of software testing, U A T plans are prepared during the
a. Analysis phase
b. HLD phase
c. LLD phase
d. System testing phase
ANSWER : a
40. Test data planning essentially includes
a. Network
b. Operational Model
c. Boundary value analysis
d. Test planning procedures
ANSWER : c
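Boundary value analysis, the answer above, derives test data at and around the limits. A minimal sketch using the classic user-ID example of an accepted length of 4 to 10 characters:

```python
# Boundary value analysis: test values at and around the min/max limits.
# The 4-to-10 range matches the user-ID length example.
def bva_values(lo: int, hi: int) -> dict:
    return {
        "expect_pass": [lo, lo + 1, hi - 1, hi],
        "expect_fail": [lo - 1, hi + 1],
    }

print(bva_values(4, 10))
# {'expect_pass': [4, 5, 9, 10], 'expect_fail': [3, 11]}
```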
41. The extent of automation for a given project is generally guided by
a. Scope for automation
b. Tool support
c. Business Functionality
d. Vendor’s skills
ANSWER : a
42. Which of the following is not a client-side statistic in load testing
a. Hits per second
b. Throughput
c. Cache hit ratio
d. Transaction per second
ANSWER : c
43. Which one of the following need not be a part of a bug tracker
a. Bug identifier
b. One line bug description
c. Severity of the bug
d. None of the above
ANSWER : d
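Question 43 implies a bug tracker record carries at least a bug identifier, a one-line description and a severity. A minimal sketch; the extra `status` field is an assumption for illustration:

```python
from dataclasses import dataclass

# Minimal bug-tracker record with the fields named in question 43;
# "status" is an illustrative extra, not from the question.
@dataclass
class Bug:
    bug_id: str
    summary: str      # one-line bug description
    severity: str     # e.g. "critical", "major", "minor"
    status: str = "New"

bug = Bug("BUG-101", "Login fails for expired passwords", "major")
print(bug.bug_id, bug.severity, bug.status)  # BUG-101 major New
```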
44. What are the key features to be concentrated upon when testing world wide web sites
a. Interaction between html pages
b. performance on the client side
c. Security aspects
d. All of the above
ANSWER : d
45. What if the project isn’t big enough to justify extensive testing
a. use risk based analysis to find out which areas need to be tested
b. use automation tools for testing
c. a & b
d. None of the above
ANSWER : a
46. The selection of test cases for regression testing
a. Requires knowledge on the bug fixes & how it affect the system
b. Includes the area of frequent defects
c. Includes the area which has undergone many/recent code changes
d. None of the above
ANSWER : d
47. What are the main attributes of test automation
1. Time Saving 2. Correctness 3. Less Manpower 4. More reliable
a. 1 & 2
b. 2 & 3
c. 1, 2, 3 & 4
d. None of the above
ANSWER : c
48. Some of the common problem of test automation are
a. Changing requirements
b. Lack of time
c. Both a & b
d. None of the above
ANSWER : c
49. A document describing any event during the testing process that requires investigation
a. Test log
b. Test Incident report
c. Test Cycle
d. Test Item
ANSWER : b
50. Test Suit Manager
a. A tool that specifies an order of actions that should be performed during a test session
b. A software package that creates test transactions for testing application systems & programs
c. A tool that allows testers to organize test scripts by function or other grouping
d. None of the above
ANSWER : c
1. The purpose of this event is to review the application user
interface & other human factors of the application with the people who will
be using the application
a. User Acceptance test
b. Usability test
c. Validation
d. None of the above
ANSWER : b
2. Recovery testing is a system test that forces the software to fail & verifies that data recovery is properly performed. The following should be checked for correctness
1. Re-initialization 2. Restart 3. Data Recovery 4. Check Point Mechanism
a. 1 & 2
b. 1, 2 & 3
c. 1, 2, 3 & 4
d. 2 & 4
ANSWER : c
3. What is the need for test planning
a. to utilize a balance of testing
b. to understand testing process
c. to collect metrics
d. to perform ad hoc testing
ANSWER : a
4. Which one of the following is NOT a part of Test plan document
a. assumptions
b. communication approach
c. risk analysis
d. status report
ANSWER : d
5. Which part of the test plan will define "what will & will not be covered in the test"
a. test scope
b. test objective
c. both a & b
d. None of the above
ANSWER : a
6. Test objective is simply a testing
a. direction
b. vision.
c. mission.
d. goal
ANSWER : d
7. Which out of the below is NOT a concern for testers to complete a test plan
a. not enough training
b. lack of test tools
c. enough time for testing
d. rapid change
ANSWER : c
8. The effort taken to create a test plan should be
a. half of the total test effort
b. one third of the total test efforts
c. two times of the total test effort
d. one fifth of the total test effort
ANSWER : b
9. Tools like ChangeMan, ClearCase are used as
a. functional automation tools
b. performance testing tools
c. configuration management tools
d. none of the above
ANSWER : c
10. In life cycle approach to testing, test execution occurs
a. during testing phase
b. during requirement phase
c. during coding phase
d. none of the above
ANSWER : d
11. Who is responsible for conducting test readiness review
a. Test Manager
b. Test engineer
c. both a & b
d. Project Manager
ANSWER : a
12. Which of the following is NOT a function of a test log
a. Maps the test results to requirements
b. Records test activities
c. Maintain control over the test
d. Contains pass or fails results
ANSWER : a
13. When should Integration testing begin
a. during black box testing
b. once unit testing is complete for the integrating components
c. Before unit testing is complete
d. All the above
ANSWER : b
14. Which is NOT a part of Integration testing
a. validations of the links between the client & server
b. output interface file accuracy
c. back out situations
d. None of the above
ANSWER : d
15. When to stop testing ?
a. When all quality goals defined at the start of the project have been met
b. When running short of time
c. When all test cases are executed
d. All the above
ANSWER : a
16. Authorization falls under ________
a. compliance testing
b. disaster testing
c. verifying compliance to rules
d. functional testing
e. ease of operations
ANSWER : c
17. File integrity falls under
a. compliance testing
b. disaster testing
c. verifying compliance to rules
d. functional testing
e. ease of operations
ANSWER : d
18. Operations testing is
a. compliance testing
b. disaster testing
c. verifying compliance to rules
d. functional testing
e. ease of operations
ANSWER : e
19. Security falls under
a. compliance testing
b. disaster testing
c. verifying compliance to rules
d. functional testing
e. ease of operations
ANSWER : a
20. Portability falls under
a. compliance testing
b. disaster testing
c. verifying compliance to rules
d. functional testing
e. ease of operations
ANSWER : b
21. What are the four attributes to be present in any test problem
a. statement, criteria, effect & cause
b. priority, fix, schedule & report
c. statement, fix, effect & report
d. None of the above
ANSWER : a
22. What is Risk analysis ?
a. Evaluating risks
b. Evaluating controls
c. Evaluating vulnerabilities
d. All of the above
ANSWER : d
23. Major component of Risk analysis are
a. The probability that the negative event will occur
b. The potential loss is very high
c. The potential loss or impact associated with the event
d. a & c
ANSWER : d
24. Method of conducting Risk analysis is
a. Use your judgment
b. Use your instinct
c. Cost of failure
d. All of the above
ANSWER : d
25. Which is not a Testing risk
a. Budget
b. Number of qualified test resources
c. Sequence & increments of code delivery
d. Inadequately tested application
ANSWER : d
26. What is a Cascading error ?
a. Unrelated errors
b. Triggers a second unrelated error in another part
c. A functionality could not be tested
d. Two similar errors
ANSWER : b
27. Configuration defects will be introduced if
a. Environment is not safe
b. Environment does not mirror test environment
c. Environment does not mirror production environment
d. All of the above
ANSWER : d
28. Quality risk is
a. Requirement comply with methodology
b. Incorrect results will be produced
c. Result of the system are unreliable
d. Complex technology used
ANSWER : a
29. Risk control objectives are established in
a. Design phase
b. Requirement phase
c. testing phase
d. Implementation phase
ANSWER : b
30. Which of the following is not a Risk Characteristic
a. inherent in every project
b. Neither intrinsically good nor bad
c. Something to fear but not something to manage
d. Probability of loss
ANSWER : c
31. The application developed should fit the user's business process. The components of fit are
a. Data
b. People
c. Structure
d. All the Above
ANSWER : d
32. Which is not the responsibility of the customer/user of the software
a. plan how & by whom each acceptance activity will be performed
b. Prepare the acceptance plan
c. Prepare resource plan
d. plan resource for providing information on which to base acceptance decisions
ANSWER : c
33. An acceptance requirement that a system should meet is
a. Usability
b. Understandability
c. Functionality
d. Enhancements
ANSWER : c
34. Testing techniques that can be used in acceptance testing are
a. Structural
b. Functional
c. Usability
d. a & b
e. b & c
ANSWER : d
35. For final software acceptance testing, the system should include
a. Delivered software
b. All user documents
c. Final version of the other software deliverables
d. all the above
ANSWER : d
36. Acceptance tests are normally conducted by the
a. Developers
b. End users
c. Test team
d. System engineers
ANSWER : b
37. Acceptance testing means
a. Testing performed on a single stand-alone module or unit of code
b. Testing after changes have been made to ensure that no unwanted changes were introduced
c. Testing to ensure that the system meets the needs of the organization & end user
d. Users test the application in the developer's environment
ANSWER : c
38. What is the purpose of code coverage tools
a. They are used to show the extent to which the logic in the program was executed during testing
b. They are used as an alternative to testing
c. They are used to compile the program
ANSWER : a
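The idea behind code coverage tools (option a) can be sketched with Python's trace hook: record which lines of the program actually executed during a test run. A toy illustration, not a real coverage tool:

```python
import sys

executed = set()  # line numbers that actually ran

def tracer(frame, event, arg):
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def absolute(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)   # start recording executed lines
absolute(5)            # exercises only the x >= 0 path
sys.settrace(None)     # stop recording

# Two lines ran (the `if` test and `return x`); `return -x` was never covered,
# which is exactly the gap a coverage report would expose.
print(len(executed))
```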
39. Four examples of test-specific metrics
a. Testing Effort variation, Defect Density , testing Efficiency, Requirements tested
b. Inspection, review efficiency, Testing Effort variation, Defect density
c. Test scalability, Defect deviation, Testing Efficiency, Schedule variations
ANSWER : a
40. Give one commonly recognized size measurement tool
a. Effort analysis
b. LCO analysis
c. LOC analysis
d. Code analysis
ANSWER : c
41. Give 3 components included in a system test report
a. Description of testing, resource requirements & recommendation
b. Testing , defects & usability
c. Description of test results & findings(defects), summary (environment & reference), & Recommendation
ANSWER : c
42. Status Reports in TestDirector can be generated using ______
a. Document Viewer
b. Document Generator
c. Document Tracker
d. none of the above
ANSWER : b
43. The most common cause of defects is
a. Failure to estimate
b. Failure to assess risks
c. Ambiguous or incomplete requirements
d. Weak communication
ANSWER : c
44. The term "defect" is related to the term "fault" because a "fault" is a defect, which has not yet been identified
a. True
b. False
ANSWER : a
Non Statistical tools are used in the
a. Work practice process
b. Benchmarking process
c. Both a & b
d. None of the above
ANSWER : b
Quality function deployment (QFD) is a
a. Statistical tool
b. Non Statistical tool
c. Development tool
d. None of the above
ANSWER : b
The sequence of the four phases involved in Bench marking process is
a. Action, Planning, Integration, Analysis
b. Planning, Analysis, Integration, Action
c. Analysis, Planning, Integration, Action
d. Analysis, Action, Planning, Integration
ANSWER : b
Defect density is calculated by
a. Total no. of Defects/ Effort
b. Valid defects/ Total no. of defects
c. Invalid Defects/ Valid Defects
d. Valid Defects / Efforts
ANSWER : a
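A quick sketch of the defect density formula given above (total defects divided by effort); the sample figures are illustrative:

```python
# Defect density = Total no. of defects / Effort
# (here expressed as defects per person-day; sample numbers are made up).
def defect_density(total_defects: int, effort_person_days: float) -> float:
    return total_defects / effort_person_days

print(defect_density(30, 120))  # 0.25 defects per person-day
```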
Effort Variation is calculated by
a. (Planned - Actual) / Actual
b. (Actual - Planned) / Actual
c. (Actual - Planned) / Planned
d. (Planned - Actual) / Planned
ANSWER : c
Percentage Rework is calculated by
a. (Review effort + rework effort) / Actual Effort expended
b. (Review effort - rework effort) / Actual Effort expended
c. Review effort / Planned effort
d. Review effort / Actual Effort expended
ANSWER : d
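The effort metrics above can be computed directly. The sketch below is illustrative only: the function names and hour values are made up, and percentage rework follows option (d) literally (review effort over actual effort expended).

```python
def effort_variation(actual_hours, planned_hours):
    # Option (c) above: (Actual - Planned) / Planned
    return (actual_hours - planned_hours) / planned_hours

def percentage_rework(review_hours, actual_hours):
    # Option (d) above: review effort / actual effort expended
    return review_hours / actual_hours * 100

# 125 actual hours against a 100-hour plan: a 25% effort overrun.
assert effort_variation(125, 100) == 0.25
# 55 review hours out of 110 actual hours: a 50% rework figure.
assert percentage_rework(55, 110) == 50.0
```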
What is schedule variance? How is it calculated?
Schedule variance = (Actual time taken - Planned time) / Planned
time * 100
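The formula above can be sketched as a one-line helper; the function name and the day values are illustrative only.

```python
def schedule_variance(actual_days, planned_days):
    # (Actual time taken - Planned time) / Planned time * 100
    return (actual_days - planned_days) / planned_days * 100

# A task planned for 10 days that actually took 15 days slipped by 50%.
assert schedule_variance(15, 10) == 50.0
```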
Who designs the test case format?
The test case format follows a standard template. The testing team develops the test cases with the help of developers and clients.
Who prepares test data?
Test data is prepared by the testing team. Preparation should begin at the same time requirements definition starts.
What is the difference between scalability testing and stress testing? Tell me more about scalability testing.
Stress testing is a form of testing used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
Scalability testing is a subtype of performance testing in which performance requirements for response time, throughput, and/or utilization are tested as the load on the AUT is increased over time.
In other words, it is performance testing focused on ensuring that the application under test gracefully handles increases in workload.
What is the approach implemented in usability testing?
Usability testing checks user friendliness: how easily users can learn and use the product.
Generally we test the look and feel of the application: background color, images, fonts, alignment, and navigation.
It is like a Nokia phone: easy to understand, with easy navigation of the various options.
When can we start writing test cases?
It is the test team who, with the assistance of developers and clients, develops the test cases. We can start writing test cases once the test plan is prepared.
1)
Approach used in usability testing.
Which of the following is correct ?
a) test point analysis
b) qualitative and quantitative approach
Ans: b
2)
when can we start writing test cases.
Which among these is correct ?
a) once test data is ready
b) once business requirements are frozen
c) once software requirements are frozen
d) once coding is finished
Ans:C
Test cases are written in every phase of the SDLC; the type of test cases that testers develop depends on the stage of the SDLC.
For example, suppose we have just received a requirement from the client. Once the requirement is given to the development team, the testing team, with the development team's help, fixes the acceptance criteria for the requirement and starts to prepare the acceptance tests.
Once this is done, the development team does the impact analysis and goes ahead with development. At this stage we define the test completeness criteria for each phase and start to write the test cases.
When the system specifications are done, we develop system test cases. When the system is in implementation, we complete the integration test cases, and so on.
Coming to the next point, who sets the format of test cases: generally QA works with the client during the kick-off phase, and the client provides the quality standards to be kept for the project; this may vary from project to project.
How do I test a multi-user chat program manually?
Check all the features and dependencies of the software. Suppose the software lets you log in to many accounts of chat services such as Yahoo, MSN, and Gmail, with a common friends list for all logged-in accounts.
-> On login, your online and offline friends lists should refresh immediately.
-> When you send a message to a friend, it should go to that friend from the corresponding account.
-> Send a message to an offline user and ensure it is delivered to the correct user.
-> If an offline message is sent, the recipient should see the correct timestamp on the message.
-> Check archives/chat history, etc. (if it is a requirement).
-> Can you send files? If yes, check the allowed types, size limits, etc.
-> Does it allow you to be "invisible" to other users? Also invisible to all while visible to a selected few?
-> There should be an option to view logged-in accounts and to log accounts off.
-> There should be an option to edit account settings, etc.
Manually testing multiple users is a hectic job and takes a lot of time; Remote Desktop Connection helps, or get the users on a LAN.
What do we do first when we find a bug in an application?
When we find a bug while testing, we first need to confirm whether it really is a bug. If it is, log it in the bug tracking tool, if one is in use; otherwise, prepare the bug report in an Excel sheet.
What is a bug leak or defect leak?
After an application is tested and released, the client does beta testing; any bug the client finds during that testing is called a bug leak or defect leak.
What is a memory leak?
When memory used by a variable or object cannot be freed even after the code that created the object or variable has finished execution, that state is called a memory leak.
What are an error, a bug, and a defect?
A mistake in the code is called an error.
When a tester finds that error while testing, it is called a bug.
When the bug is given to the developer to fix, it is called a defect.
Test Case Template:
Test case id | Test case name | Test case desc | Test steps (step | expected | actual) | Test case status | Test status (P/F) | Test priority | Defect severity
Test Case:
HOME PAGE:
Test URL:
Preconditions: Open
Web browser and enter the given URL in the address bar. Home page must be
displayed. All test cases must be executed from this page.
Test case id | Test case name | Test case desc | Test step | Expected result | Test case status | Test priority
(Actual result, test status (P/F), and defect severity are blank at the design stage.)
Login01 | Validate Login | Verify that the login name on the login page must be greater than 3 characters | Enter a login name of fewer than 3 chars (say "a") and a password, then click Submit | Error message "Login not less than 3 characters" must be displayed | design | high
Login01 | | | Enter a login name of fewer than 3 chars (say "ab") and a password, then click Submit | Error message "Login not less than 3 characters" must be displayed | design | high
Login01 | | | Enter a login name of exactly 3 chars (say "abc") and a password, then click Submit | Login succeeds, or the error message "Invalid Login or Password" is displayed | design | high
Login02 | Validate Login | Verify that the login name on the login page should not be greater than 10 characters | Enter a login name greater than 10 chars (say "abcdefghijk") and a password, then click Submit | Error message "Login not greater than 10 characters" must be displayed | design | high
Login02 | | | Enter a login name of fewer than 10 chars (say "abcdef") and a password, then click Submit | Login succeeds, or the error message "Invalid Login or Password" is displayed | design | high
Login03 | Validate Login | Verify that the login name on the login page does not accept special characters | Enter a login name starting with special chars ("!hello") and a password, then click Submit | Error message "Special chars not allowed in login" must be displayed | design | high
Login03 | | | Enter a login name ending with special chars ("hello$") and a password, then click Submit | Error message "Special chars not allowed in login" must be displayed | design | high
Login03 | | | Enter a login name with special chars in the middle ("he&^llo") and a password, then click Submit | Error message "Special chars not allowed in login" must be displayed | design | high
Pwd01 | Validate Password | Verify that the password on the login page must be greater than 6 characters | Enter a password of fewer than 6 chars (say "a") and a login name, then click Submit | Error message "Password not less than 6 characters" must be displayed | design | high
Pwd01 | | | Enter a password of 6 chars (say "abcdef") and a login name, then click Submit | Login succeeds, or the error message "Invalid Login or Password" is displayed | design | high
Pwd02 | Validate Password | Verify that the password on the login page must be less than 10 characters | Enter a password greater than 10 chars and a login name, then click Submit | Error message "Password not greater than 10 characters" must be displayed | design | high
Pwd02 | | | Enter a password of fewer than 10 chars (say "abcdefghi") and a login name, then click Submit | Login succeeds, or the error message "Invalid Login or Password" is displayed | design | high
Pwd03 | Validate Password | Verify that the password on the login page allows special characters | Enter a password with special characters (say "!@hi&*P") and a login name, then click Submit | Login succeeds, or the error message "Invalid Login or Password" is displayed | design | high
Llnk01 | Verify Hyperlinks | Verify that the hyperlinks at the left side of the login page work | Click the Home link | Home page must be displayed | design | low
Llnk01 | | | Click the Sign Up link | Sign Up page must be displayed | design | low
Llnk01 | | | Click the New Users link | New Users registration form must be displayed | design | low
Llnk01 | | | Click the Advertise link | Page with information and tariff plan for advertisers must be displayed | design | low
Llnk01 | | | Click the Contact Us link | Contact information page must be displayed | design | low
Llnk01 | | | Click the Terms link | Terms of service page must be displayed | design | low
Flnk01 | Verify Hyperlinks | Verify that the hyperlinks in the footer of the login page work | Click the Home link | Home page must be displayed | design | low
Flnk01 | | | Click the Sign Up link | Sign Up page must be displayed | design | low
Flnk01 | | | Click the Contact Us link | Contact information page must be displayed | design | low
Flnk01 | | | Click the Advertise link | Page with information and tariff plan for advertisers must be displayed | design | low
Flnk01 | | | Click the Terms Of Membership link | Terms of membership page must be displayed | design | low
Flnk01 | | | Click the Privacy Policy link | Privacy Policy page must be displayed | design | low
Lblnk01 | Verify Hyperlinks | Verify that the hyperlinks in the login box on the login page work | Click the NEW USERS link in the login box | New Users registration form must be displayed | design | low
Lblnk01 | | | Click the New Users (blue) link in the login box | New Users registration form must be displayed | design | low
Lblnk01 | | | Click the Forget Password link in the login box | Password retrieval page must be displayed | design | medium
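The design-stage login cases above translate naturally into automated checks. The sketch below assumes a hypothetical `validate_login` function that returns the error strings from the test cases; the real application's rules and messages may differ.

```python
import re

def validate_login(login, password):
    """Hypothetical validator mirroring the login/password test cases."""
    if len(login) < 3:
        return "Login not less than 3 characters"
    if len(login) > 10:
        return "Login not greater than 10 characters"
    if not re.fullmatch(r"[A-Za-z0-9]+", login):
        return "Special chars not allowed in login"
    if len(password) < 6:
        return "Password not less than 6 characters"
    if len(password) > 10:
        return "Password not greater than 10 characters"
    return "OK"

# Each assertion is one row of the test case table above.
assert validate_login("a", "secret1") == "Login not less than 3 characters"
assert validate_login("abcdefghijk", "secret1") == "Login not greater than 10 characters"
assert validate_login("!hello", "secret1") == "Special chars not allowed in login"
assert validate_login("abc", "secret1") == "OK"
```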
What is ACID Testing?
ACID testing is related to testing a transaction:
A - Atomicity
C - Consistency
I - Isolation
D - Durability
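Atomicity, for example, can be checked by forcing a failure mid-transaction and verifying a full rollback. A minimal sketch using Python's built-in sqlite3 (the table and account names are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# Atomicity check: a transfer that fails partway must leave both
# balances unchanged (the first UPDATE must be rolled back).
try:
    with conn:  # opens a transaction; rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - 100 WHERE name = 'alice'")
        raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
assert balances == {"alice": 100, "bob": 0}  # rollback restored the state
```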
Acceptance test cases are based on what ?
a. Requirements
b. Design
c. Code
d. Decision table
ANSWER : a
Code Coverage is used as a measure of what ?
a. Defects
b. Trends analysis
c. Test Effectiveness
d. Time Spent Testing
ANSWER : c
Boundary value testing
a. Is the same as equivalence partitioning tests
b. Test boundary conditions on, below and above the edges of input and output equivalence classes
c. Tests combinations of input circumstances
d. Is used in white box testing strategy
ANSWER : b
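Option (b) can be made concrete: for a field accepting lengths 4 to 10, BVA yields the values on and either side of each edge. A sketch; the helper name is illustrative.

```python
def boundary_values(min_len, max_len):
    """Boundary values on, just below, and just above each edge."""
    return sorted({min_len - 1, min_len, min_len + 1,
                   max_len - 1, max_len, max_len + 1})

# For a user-ID field accepting 4-10 characters:
values = boundary_values(4, 10)
valid = [n for n in values if 4 <= n <= 10]
invalid = [n for n in values if not 4 <= n <= 10]
assert valid == [4, 5, 9, 10]    # should pass
assert invalid == [3, 11]        # should be rejected
```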
Which one of the following are non-functional testing methods ?
a. System testing
b. Usability testing
c. Performance testing
d. Both b & c
ANSWER : d
Which of the following tools would be involved in the automation of regression test ?
a. Data tester
b. Boundary tester
c. Capture/Playback
d. Output comparator.
ANSWER : c
Which of the following is not a form of logic coverage ?
a. Statement Coverage
b. Pole Coverage
c. Condition Coverage
d. Path Coverage
ANSWER : b
Which of the following is not a quality characteristic listed in ISO 9126 Standard ?
a. Functionality
b. Usability
c. Supportability
d. Maintainability
ANSWER : c
To test a function, the programmer has to write a _________ which calls the function to be tested and passes it test data.
a. Stub
b. Driver
c. Proxy
d. None of the above
ANSWER : b
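A driver is simply a harness that invokes the unit under test with prepared test data. A minimal sketch with a made-up `discount` function as the unit under test:

```python
# Hypothetical function under test
def discount(price, pct):
    return round(price * (1 - pct / 100), 2)

# The driver: calls the function under test and feeds it test data
def driver():
    cases = [((100, 10), 90.0), ((59.99, 0), 59.99), ((80, 25), 60.0)]
    for args, expected in cases:
        actual = discount(*args)
        assert actual == expected, f"discount{args} -> {actual}, expected {expected}"
    return "all passed"

assert driver() == "all passed"
```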
Pick the best definition of quality
a. Quality is job one
b. Zero defects
c. Conformance to requirements
d. Work as designed
ANSWER : c
Fault Masking is
a. Error condition hiding another error condition
b. Creating a test case which does not reveal a fault
c. Masking a fault by developer
d. Masking a fault by a tester
ANSWER : a
One Key reason why developers have difficulty testing their own work is :
a. Lack of technical documentation
b. Lack of test tools on the market for developers
c. Lack of training
d. Lack of Objectivity
ANSWER : d
During the software development process, at what point can the test process start ?
a. When the code is complete.
b. When the design is complete.
c. When the software requirements have been approved.
d. When the first code module is ready for unit testing
ANSWER : c
In a review meeting a moderator is a person who
a. Takes minutes of the meeting
b. Mediates between people
c. Takes telephone calls
d. Writes the documents to be reviewed
ANSWER : b
How much testing is enough ?
a. This question is impossible to answer
b. This question is easy to answer
c. The answer depends on the risk for your industry, contract and special requirements
d. This answer depends on the maturity of your developers
ANSWER : c
A common test technique during component test is
a. Statement and branch testing
b. Usability testing
c. Security testing
d. Performance testing
ANSWER : a
Statement Coverage will not check for the following.
a. Missing Statements
b. Unused Branches
c. Dead Code
d. Unused Statement
ANSWER : a
Independent Verification & Validation is
a. Done by the Developer
b. Done by the Test Engineers
c. Done By Management
d. Done by an Entity Outside the Project’s sphere of influence
ANSWER : d
Which one of the following describes the major benefit of
verification early in the life cycle ?
a. It allows the identification of changes in user requirements.
b. It facilitates timely set up of the test environment.
c. It reduces defect multiplication.
d. It allows testers to become involved early in the project.
ANSWER : C
Integration testing in the small:
a. tests the individual components that have been developed.
b. tests interactions between modules or subsystems.
c. only uses components that form part of the live system.
d. tests interfaces to other systems.
ANSWER : B
Static analysis is best described as:
a. the analysis of batch programs.
b. the reviewing of test plans.
c. the analysis of program code.
d. the use of black box testing.
ANSWER : C
Alpha testing is:
a. post-release testing by end user representatives at the developer's site.
b. the first testing that is performed.
c. pre-release testing by end user representatives at the developer's site.
d. pre-release testing by end user representatives at their sites.
ANSWER : C
The most important thing about early test design is that it:
a. makes test preparation easier.
b. means inspections are not required.
c. can prevent fault multiplication.
d. will find all faults.
ANSWER : C
Which of the following statements about reviews is true?
a. Reviews cannot be performed on user requirements specifications.
b. Reviews are the least effective way of testing code.
c. Reviews are unlikely to find faults in test plans.
d. Reviews should be performed on specifications, code, and test plans.
ANSWER : D
Test cases are designed during:
a. test recording.
b. test planning.
c. test configuration.
d. test specification.
ANSWER : D
A configuration management system would NOT normally provide:
a. linkage of customer requirements to version numbers.
b. facilities to compare test results with expected results.
c. the precise differences in versions of software component source code.
d. restricted access to the source code library.
ANSWER : B
What is the main reason for testing software before releasing it ?
a. to show that the system will work after release
b. to decide when the software is of sufficient quality to release
c. to find as many bugs as possible before release
d. to give information for a risk based decision about release
ANSWER : d
Which of the following statements is not true
a. performance testing can be done during unit testing as well as during the testing of the whole system
b. The acceptance test does not necessarily include
a regression test
c. Verification activities should not involve testers (reviews, inspections, etc.)
d. Test environments should be as similar to production environments as possible
ANSWER : c
When reporting faults found to developers, testers should be:
a. as polite, constructive and helpful as possible
b. firm about insisting that a bug is not a "feature" if it should be fixed
c. diplomatic, sensitive to the way they may react to criticism
d. All of the above
ANSWER : d
In which order should tests be run ?
a. the most important tests first
b. the most difficult tests first (to allow maximum time for fixing)
c. the easiest tests first (to give initial confidence)
d. the order they are thought of
ANSWER : a
The later in the development life cycle a fault is discovered, the more expensive it is to fix. why ?
a. the documentation is poor, so it takes longer to find out what the software is doing.
b. wages are rising
c. the fault has been built into more documentation, code, tests, etc
d. none of the above
ANSWER : c
Which is not true? The black box tester
a. should be able to understand a functional specification or requirements document
b. should be able to understand the source code.
c. is highly motivated to find faults
d. is creative to find the system's weaknesses
ANSWER : b
A test design technique is
a. a process for selecting test cases
b. a process for determining expected outputs
c. a way to measure the quality of software
d. a way to measure in a test plan what has to be done
ANSWER : a
Testware (test cases, test data set)
a. needs configuration management just like requirements, design and code
b. should be newly constructed for each new version of the software
c. is needed only until the software is released into production or use
d. does not need to be documented and commented, as it does not form part of the released software system
ANSWER : a
An incident logging system
a. only records defects
b. is of limited value
c. is a valuable source of project information during testing if it contains all incidents
d. should be used only by the test team.
ANSWER : c
Increasing the quality of the software, by better development methods, will affect the time needed for testing (the test phases) by:
a. reducing test time
b. no change
c. increasing test time
d. can't say
ANSWER : a
Coverage measurement
a. has nothing to do with testing
b. is a partial measure of test thoroughness
c. branch coverage should be mandatory for all software
d. can only be applied at unit or module testing, not at system testing
ANSWER : b
Which of the following is true ?
a. Component testing should be black box, system testing should be white box.
b. if you find a lot of bugs in testing, you should not be very confident about the quality of the software
c. the fewer bugs you find, the better your testing was
d. the more tests you run, the more bugs you will find.
ANSWER : b
What is the important criterion in deciding what testing technique to use ?
a. how well you know a particular technique
b. the objective of the test
c. how appropriate the technique is for testing the application
d. whether there is a tool to support the technique
ANSWER : b
Table of Contents
1. Introduction
1.1. Test Plan Objectives
2. Scope
2.1. Data Entry
2.2. Reports
2.3. File Transfer
2.4. Security
3. Test Strategy
3.1. System Test
3.2. Performance Test
3.3. Security Test
3.4. Automated Test
3.5. Stress and Volume Test
3.6. Recovery Test
3.7. Documentation Test
3.8. Beta Test
3.9. User Acceptance Test
4. Environment Requirements
4.1. Data Entry Workstations
4.2. Mainframe
5. Test Schedule
6. Control Procedures
6.1. Reviews
6.2. Bug Review Meetings
6.3. Change Request
6.4. Defect Reporting
7. Functions To Be Tested
8. Resources and Responsibilities
8.1. Resources
8.2. Responsibilities
9. Deliverables
10. Suspension / Exit Criteria
11. Resumption Criteria
12. Dependencies
12.1. Personnel Dependencies
12.2. Software Dependencies
12.3. Hardware Dependencies
12.4. Test Data & Database
13. Risks
13.1. Schedule
13.2. Technical
13.3. Management
13.4. Personnel
13.5. Requirements
14. Tools
15. Documentation............................................................................................................. 106
16. Approvals..................................................................................................................... 107
1. Introduction
The company has outgrown its current payroll system & is
developing a new system that will allow for further growth and provide
additional features. The software test department has been tasked with testing
the new system.
The new system will do the following:
§ Provide the users with menus, directions & error messages to direct him/her on the various options.
§ Handle the update/addition of employee information.
§ Print various reports.
§ Create a payroll file and transfer the file to the mainframe.
§ Run on the Banyan Vines Network using IBM compatible PCs as data entry terminals.
1.1. Test Plan Objectives
This Test Plan for the new Payroll System supports the following objectives:
§ Define the activities required to prepare for and conduct System, Beta and User Acceptance testing.
§ Communicate to all responsible parties the System Test strategy.
§ Define deliverables and responsible parties.
§ Communicate to all responsible parties the various Dependencies and Risks.
2. Scope
2.1. Data Entry
The new payroll
system should allow the payroll clerks to enter employee information from IBM
compatible PC workstations running DOS 3.3 or higher. The system will be menu
driven and will provide error messages to help direct the clerks through
various options.
2.2. Reports
The system will
allow the payroll clerks to print 3 types of reports. These reports are:
§ A pay period transaction report
§ A pay period exception report
§ A three month history report
2.3. File Transfer
Once the employee
information is entered into the LAN database, the payroll system will allow the
clerk to create a payroll file. This file can then be transferred, over the
network, to the mainframe.
2.4. Security
Each payroll clerk
will need a user id and password to login to the system. The system will
require the clerks to change the password every 30 days.
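The 30-day password-change rule translates directly into a testable check. A minimal sketch; the function and constant names are illustrative, not part of the actual system:

```python
from datetime import date, timedelta

PASSWORD_MAX_AGE_DAYS = 30  # per the security requirement

def password_expired(last_changed: date, today: date) -> bool:
    """Return True when the password is older than the allowed 30 days."""
    return (today - last_changed) > timedelta(days=PASSWORD_MAX_AGE_DAYS)

# Boundary checks around the 30-day limit
print(password_expired(date(1998, 1, 1), date(1998, 1, 31)))  # day 30 -> False
print(password_expired(date(1998, 1, 1), date(1998, 2, 1)))   # day 31 -> True
```

Exercising the values on either side of the limit is the same boundary-value idea used throughout this kind of test plan.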
3. Test Strategy
The test strategy consists
of a series of different tests that will fully exercise the payroll system. The
primary purpose of these tests is to uncover the system's limitations and
measure its full capabilities. A list of the various planned tests and a brief
explanation follows below.
3.1. System Test
The System tests will focus on the behavior of the
payroll system. User scenarios will be executed against the system as well as
screen mapping and error message testing. Overall, the system tests will test
the integrated system and verify that it meets the requirements defined in the
requirements document.
3.2. Performance Test
Performance tests will
be conducted to ensure that the payroll system’s response times meet user
expectations and do not exceed the specified performance criteria. During
these tests, response times will be measured under heavy stress and/or volume.
3.3. Security Test
Security tests will
determine how secure the new payroll system is. The tests will verify that
unauthorized user access to confidential data is prevented.
3.4. Automated Test
A suite of automated tests will be developed to
test the basic functionality of the payroll system and perform regression
testing on areas of the system that previously had critical/major defects. The
tool will also assist us by executing user scenarios thereby emulating several
users.
3.5. Stress and Volume Test
We will subject the payroll system to high input
conditions and a high volume of data during peak times. The system will be
stress tested using twice the expected number of users (20 users).
3.6. Recovery Test
Recovery tests will
force the system to fail in various ways and verify that recovery is properly
performed. It is vitally important that all payroll data is recovered after a
system failure and that no corruption of the data occurs.
3.7. Documentation Test
Tests will be conducted to check the accuracy of
the user documentation. These tests will ensure that no features are missing,
and the contents can be easily understood.
3.8. Beta Test
The Payroll department will beta test the new
payroll system and will report any defects they find. This will subject the system to tests that could not be
performed in our test environment.
3.9. User Acceptance Test
Once the payroll
system is ready for implementation, the Payroll department will perform User
Acceptance Testing. The purpose of these tests is to confirm that the system is
developed according to the specified user requirements and is ready for
operational use.
4. Environment Requirements
4.1. Data Entry workstations
§ 20 IBM compatible PCs (10 will be used by the automation tool to emulate payroll clerks)
§ 286 processor (minimum)
§ 4 MB RAM
§ 100 MB hard drive
§ DOS 3.3 or higher
§ Attached to Banyan Vines network
§ A network attached printer
§ 20 user ids and passwords (10 will be used by the automation tool to emulate payroll clerks)
4.2 Mainframe
§ Attached to the Banyan Vines network
§ Access to a test database (to store payroll information transferred from the LAN payroll system)
5. Test Schedule
§ System Test: 6/16/98 - 8/26/98
§ Beta Test: 7/28/98 - 8/18/98
§ User Acceptance Test: 8/29/98 - 9/03/98
6. Control Procedures
6.1 Reviews
The project team
will perform reviews for each Phase. (i.e. Requirements Review, Design Review,
Code Review, Test Plan Review, Test Case Review and Final Test Summary Review).
A meeting notice, with related documents, will be emailed to each participant.
6.2 Bug Review meetings
Regular weekly
meetings will be held to discuss reported defects. The development department
will provide status updates on all defects reported, and the test department
will provide additional defect information if needed. All members of the
project team will participate.
6.3 Change Request
Once testing begins,
changes to the payroll system are discouraged. If functional changes are
required, these proposed changes will be discussed with the Change Control
Board (CCB). The CCB will determine the impact of the change and if/when it
should be implemented.
6.4 Defect Reporting
When defects are
found, the testers will complete a defect report on the defect tracking system.
The defect tracking system is
accessible by testers, developers & all members of the project team. When a
defect has been fixed or more information is needed, the developer will change
the status of the defect to indicate the current state. Once a defect is
verified as FIXED by the testers, the testers will close the defect report.
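The defect lifecycle described above (developer fixes and updates status, tester verifies and closes) can be sketched as a small state machine. The status names and transitions below are illustrative assumptions, not the vocabulary of any particular tracking tool:

```python
# Allowed defect-status transitions, as an adjacency map.
# Developers move defects to FIXED/NEED_INFO; testers close or reopen.
TRANSITIONS = {
    "NEW": {"OPEN"},
    "OPEN": {"FIXED", "NEED_INFO"},
    "NEED_INFO": {"OPEN"},
    "FIXED": {"CLOSED", "REOPENED"},  # tester verifies the fix, or reopens
    "REOPENED": {"FIXED"},
    "CLOSED": set(),
}

def move(status: str, new_status: str) -> str:
    """Validate and perform a status change; raise on illegal moves."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "NEW"
for nxt in ("OPEN", "FIXED", "CLOSED"):
    s = move(s, nxt)
print(s)  # CLOSED
```

Encoding the legal transitions in one table makes it easy to reject, say, closing a defect that was never verified as fixed.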
7. Functions To Be Tested
The following is a list of
functions that will be tested:
§ Add/update employee information
§ Search / lookup employee information
§ Escape to return to Main Menu
§ Security features
§ Scaling to 700 employee records
§ Error messages
§ Report printing
§ Creation of payroll file
§ Transfer of payroll file to the mainframe
§ Screen mappings (GUI flow), including default settings
§ FICA calculation
§ State tax calculation
§ Federal tax calculation
§ Gross pay calculation
§ Net pay calculation
§ Sick leave balance calculation
§ Annual leave balance calculation
A Requirements Validation Matrix will “map” the test cases
back to the requirements. See Deliverables.
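A requirements validation matrix is essentially a mapping from requirement IDs to the test cases that cover them, which also makes coverage gaps mechanical to find. A toy sketch; the IDs below are made up for illustration:

```python
# Hypothetical requirement -> covering test cases mapping
matrix = {
    "REQ-01 Add/update employee": ["TC-101", "TC-102"],
    "REQ-02 Print pay period report": ["TC-201"],
    "REQ-03 Transfer payroll file": [],
}

# Coverage check: any requirement with no test case is a gap.
uncovered = [req for req, cases in matrix.items() if not cases]
print(uncovered)  # ['REQ-03 Transfer payroll file']
```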
8. Resources and Responsibilities
The Test Lead and Project Manager
will determine when system test will start and end. The Test lead will also be
responsible for coordinating schedules, equipment, & tools for the testers
as well as writing/updating the Test Plan, Weekly Test Status reports and Final
Test Summary report. The testers will be responsible for writing the test cases
and executing the tests. With the help of the Test Lead, the Payroll Department
Manager and Payroll clerks will be responsible for the Beta and User Acceptance
tests.
8.1. Resources
The test
team will consist of:
§ A Project Manager
§ A Test Lead
§ 5 Testers
§ The Payroll Department Manager
§ 5 Payroll Clerks
8.2. Responsibilities
Project Manager | Responsible for project schedules and the overall success of the project. Participates on the CCB.
Lead Developer | Serves as primary contact/liaison between the development department and the project team. Participates on the CCB.
Test Lead | Ensures the overall success of the test cycles. He/she will coordinate weekly meetings and will communicate the testing status to the project team. Participates on the CCB.
Testers | Responsible for performing the actual system testing.
Payroll Department Manager | Serves as liaison between the Payroll department and the project team. He/she will help coordinate the Beta and User Acceptance testing efforts. Participates on the CCB.
Payroll Clerks | Will assist in performing the Beta and User Acceptance testing.
9. Deliverables
Deliverable | Responsibility | Completion Date
Develop Test Cases | Testers | 6/11/98
Test Case Review | Test Lead, Dev. Lead, Testers | 6/12/98
Develop Automated Test Suites | Testers | 7/01/98
Requirements Validation Matrix | Test Lead | 6/16/98
Obtain User IDs and Passwords for payroll system/database | Test Lead | 5/27/98
Execute manual and automated tests | Testers & Test Lead | 8/26/98
Complete Defect Reports | Everyone testing the product | On-going
Document and communicate test status/coverage | Test Lead | Weekly
Execute Beta tests | Payroll Department Clerks | 8/18/98
Document and communicate Beta test status/coverage | Payroll Department Manager | 8/18/98
Execute User Acceptance tests | Payroll Department Clerks | 9/03/98
Document and communicate Acceptance test status/coverage | Payroll Department Manager | 9/03/98
Final Test Summary Report | Test Lead | 9/05/98
10. Suspension / Exit Criteria
If any defects are found which seriously impact the test progress, the QA
manager may choose to suspend testing. Criteria that will justify test
suspension are:
§ Hardware/software is not available at the times indicated in the project schedule.
§ Source code contains one or more critical defects which seriously prevent or limit testing progress.
§ Assigned test resources are not available when needed by the test team.
11. Resumption Criteria
If testing is suspended, resumption
will only occur when the problem(s) that caused the suspension has been
resolved. When a critical defect is the cause of the suspension, the “FIX” must
be verified by the test department before testing is resumed.
12. Dependencies
12.1 Personnel Dependencies
The test team requires experienced
testers to develop, perform and validate tests. The test team will also need
the following resources available: application developers and Payroll Clerks.
12.2 Software Dependencies
The source code must
be unit tested and provided within the scheduled time outlined in the Project
Schedule.
12.3 Hardware Dependencies
The Mainframe, 10 PCs (with specified
hardware/software) as well as the LAN environment need to be available during
normal working hours. Any downtime will affect the test schedule.
12.4 Test Data & Database
Test data (mock employee
information) & database should also be made available to the testers for
use during testing.
13. Risks
13.1. Schedule
The schedule for
each phase is very aggressive and could affect testing. A slip in the schedule
in one of the other phases could result in a subsequent slip in the test phase.
Close project management is crucial to meeting the forecasted completion date.
13.2. Technical
Since this is a new payroll
system, in the event of a failure the old system can be used. We will run our
test in parallel with the production system so that there is no downtime of the
current system.
13.3. Management
Management
support is required so when the project falls behind, the test schedule does
not
get squeezed to make up for the
delay. Management can reduce the risk of delays by supporting the test team
throughout the testing phase and assigning people to this project with the
required skills set.
13.4. Personnel
Due to the aggressive schedule, it
is very important to have experienced testers on this project. Unexpected
turnover can impact the schedule. If attrition does happen, all efforts must
be made to replace the experienced individual.
13.5 Requirements
The test plan and test schedule
are based on the current Requirements Document. Any changes to the requirements
could affect the test schedule and will need to be approved by the CCB.
14. Tools
The Acme Automated test tool will be used to help test the
new payroll system. We have the licensed product onsite and installed. All of
the testers have been trained on the use of this test tool.
15. Documentation
The following documentation will be available at the end of
the test phase:
§
Test Plan
§ Test
Cases
§ Test
Case review
§ Requirements
Validation Matrix
§ Defect
reports
§ Final
Test Summary Report
While testing a web application you need to consider the following cases:
• Functionality Testing
• Performance Testing
• Usability Testing
• Server Side Interface
• Client Side Compatibility
• Security
Functionality:
In testing the functionality of the web site, the following should be tested:
• Links
i. Internal Links
ii. External Links
iii. Mail Links
iv. Broken Links
• Forms
i. Field validation
ii. Error message for wrong input
iii. Optional and Mandatory fields
• Database
* Testing will be done on the database integrity.
• Cookies
* Testing will be done on the client system side, on the temporary Internet files.
Performance :
Performance testing can be applied to understand the web site’s scalability, or to benchmark the performance in the environment of third party products such as servers and middleware for potential purchase.
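The link checks above (internal, external, mail, broken) all start from extracting and classifying the links on a page. A stdlib-only sketch; the site host and sample page are assumptions for illustration:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify(href: str, site_host: str = "example.com") -> str:
    """Bucket a link as mail, internal, or external."""
    if href.startswith("mailto:"):
        return "mail"
    host = urlparse(href).netloc
    return "internal" if host in ("", site_host) else "external"

page = ('<a href="/home">Home</a>'
        '<a href="mailto:hr@example.com">HR</a>'
        '<a href="http://other.org/x">Other</a>')
parser = LinkCollector()
parser.feed(page)
print([classify(h) for h in parser.links])  # ['internal', 'mail', 'external']
```

Detecting broken links would additionally require issuing an HTTP request per link and checking the status code, which is omitted here.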
When you don’t have enough time to test the application, what will you do?
It is often not possible to test the whole application within the specified
time. In such situations it is better to find out the risk factors in the
project and concentrate on them.
Here are some points to be considered when you are in such a situation:
Which functionality is most important to the project?
Which modules of the project are high-risk?
Which functionality is most visible to the user?
Which functionality has the largest safety impact?
Which functionality has the largest financial impact on users?
Which aspects of the application are most important to the customer?
Which parts of the code are most complex, and thus most subject to errors?
Which parts of the application were developed in rush or panic mode?
What do the developers think are the highest-risk aspects of the application?
What kinds of problems would cause the worst publicity?
What kinds of problems would cause the most customer service complaints?
What kinds of tests could easily cover multiple functionalities?
Considering these points, you can greatly reduce the risk of releasing the project under tight time constraints.
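The questions above can be folded into a crude risk score per module (likelihood of failure times impact), with tests run in descending score order. The module names and weights below are illustrative sample values, not real project data:

```python
# (module, likelihood 1-5, impact 1-5) - sample values for illustration
modules = [
    ("payroll calculation", 4, 5),
    ("report printing", 2, 3),
    ("login screen", 3, 4),
]

# Rank by risk = likelihood * impact, highest first: test these first.
ranked = sorted(modules, key=lambda m: m[1] * m[2], reverse=True)
print([name for name, *_ in ranked])
# ['payroll calculation', 'login screen', 'report printing']
```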
Types of Black Box Testing
Functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)
System testing - testing is based on overall requirements specifications; covers all combined parts of a system.
Integration testing - testing combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
End-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes
Recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment
Acceptance testing - determining if software is satisfactory to a customer.
Comparison testing - comparing software weaknesses and strengths to competing products
Alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
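Most of the types above share the same black-box shape: feed inputs, assert on outputs, never look inside the code. A minimal functional/regression check; the `gross_pay` function is a hypothetical stand-in for a payroll routine, with an assumed time-and-a-half overtime rule:

```python
def gross_pay(hours: float, rate: float) -> float:
    """Stand-in payroll rule: time-and-a-half for hours over 40."""
    overtime = max(0.0, hours - 40.0)
    return (hours - overtime) * rate + overtime * rate * 1.5

# Black-box checks: only inputs and expected outputs, no peeking at code.
assert gross_pay(40, 10.0) == 400.0   # regular time
assert gross_pay(45, 10.0) == 475.0   # 40*10 + 5*15 overtime
assert gross_pay(0, 10.0) == 0.0      # boundary: no hours worked
print("all checks passed")
```

Rerunning the same assertions after every fix is exactly what regression testing automates.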
Defect Severity determines the defect's effect on the application, whereas
Defect Priority determines the urgency of repair.
Severity is given by Testers and Priority by Developers
1. High Severity & Low Priority: For example, consider an application that generates banking-related reports weekly, monthly, quarterly & yearly by doing some calculations. A fault in calculating the yearly report is high severity but low priority, because it can be fixed in the next release as a change request.
2. High Severity & High Priority: In the above example, a fault in calculating the weekly report is high severity and high priority, because it will block the functionality of the application within a week. It should be fixed urgently.
3. Low Severity & High Priority: A spelling mistake or content issue on the homepage of a website that gets lakhs (hundreds of thousands) of hits daily does not affect functionality, but considering the status and popularity of the website in the competitive market it is a high priority fault.
4. Low Severity & Low Priority: A spelling mistake on pages that get very few hits throughout the month can be considered low severity and low priority.
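The four combinations above fall out naturally when defects carry independent severity and priority fields; the fix queue is then ordered by priority first. A sketch with illustrative sample defects (1 = highest for both fields):

```python
# (id, severity, priority): 1 = highest for both fields
defects = [
    ("yearly report miscalc", 1, 3),  # high severity, low priority
    ("weekly report miscalc", 1, 1),  # high severity, high priority
    ("homepage typo", 3, 1),          # low severity, high priority
    ("rare page typo", 3, 3),         # low severity, low priority
]

# Fix order: urgency (priority) outranks impact (severity).
queue = sorted(defects, key=lambda d: (d[2], d[1]))
print([d[0] for d in queue])
# ['weekly report miscalc', 'homepage typo', 'yearly report miscalc', 'rare page typo']
```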
Testing types
* Functional testing: checking the behavior of the software.
* Non-functional testing: checking performance, usability, volume, security.
Testing methodologies
* Static testing: the code is not executed.
ex: Walkthroughs, Inspections, Reviews
* Dynamic testing: the code is executed.
ex: Black box, White box
Testing techniques
* White box
* Black box
Testing levels
* Unit testing
* Integration testing
* System testing
* Acceptance testing