Objective and Scope
The primary aim of this document is to highlight key considerations in performance testing and to provide an insight into the rigor and depth of performance testing.
Performance Testing
Performance testing can be viewed as the systematic process of collecting and monitoring the results of system usage and analyzing them to aid system improvement towards desired results. As part of the performance testing process, one needs to gather statistical information, examine logs of system state histories, determine system performance under natural and artificial conditions, and alter system modes of operation.
Performance testing complements functional testing. Functional testing can validate proper functionality under correct usage and proper error handling under incorrect usage. It cannot, however, tell how much load an application can handle before it breaks or performs improperly. Finding the breaking points and performance bottlenecks, as well as identifying functional errors that only occur under stress requires performance testing.
The purpose of Performance testing is to demonstrate that
• The application processes required transaction volumes within specified response times in a real-time production database (Speed).
• The application can handle various user load scenarios (stresses), ranging from a sudden load “spike” to a persistent load “soak” (Scalability).
• The application is consistent in availability and functional integrity (Stability).
• The minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders can be determined.
Basis for inclusion in Load Test
High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
Mission Critical transactions: The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.
Read Transactions: At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions.
Update Transactions: At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.
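As an illustration of how these inclusion criteria can be turned into a workload model for the load test, the sketch below defines a transaction mix with per-transaction weights. The transaction names and percentages are assumptions for the example only, not prescribed values.
    # Hypothetical workload model: each entry names a business transaction,
    # the reason it is included, and the share of the total load it receives.
    TRANSACTION_MIX = [
        {"name": "search_catalogue", "basis": "high frequency",   "weight": 0.50},
        {"name": "place_order",      "basis": "mission critical", "weight": 0.30},
        {"name": "view_order",       "basis": "read only",        "weight": 0.10},
        {"name": "update_profile",   "basis": "update",           "weight": 0.10},
    ]
    # The weights must cover the whole modelled load (100%).
    assert abs(sum(t["weight"] for t in TRANSACTION_MIX) - 1.0) < 1e-9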
Types of Performance Testing
Benchmark Testing: The objective of benchmark tests is to determine the end-to-end timing of various critical business processes and transactions while the system is under low load with a production-sized database.
The best time to execute benchmark tests is at the earliest opportunity. Developing performance test scripts at such an early stage provides an opportunity to identify and remedy serious performance problems, and to set expectations, before load testing commences.
A key indicator of the quality of a benchmark test is its repeatability. That is, the re-execution of a performance test should give the same set of results. If the results are not the same each time, differences in results cannot be attributed to changes in the application, configuration or environment being tested.
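A minimal sketch of how repeatability might be checked, assuming a placeholder business_transaction() function stands in for the real scripted transaction; timings from repeated runs are compared so that later differences can be attributed to changes in the application rather than to measurement noise.
    import time
    import statistics

    def business_transaction():
        # Placeholder for the real end-to-end business transaction being benchmarked.
        time.sleep(0.05)

    def benchmark(runs=10):
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            business_transaction()
            timings.append(time.perf_counter() - start)
        return min(timings), statistics.median(timings), max(timings)

    if __name__ == "__main__":
        low, mid, high = benchmark()
        print(f"min={low:.3f}s  median={mid:.3f}s  max={high:.3f}s")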
Stress Tests:
Stress tests have one primary objective, and that is to determine the maximum load under which a system fails, and how it fails.
It is important to know in advance if a ‘stress’ situation will result in catastrophic system failure or if all components of the system simply ‘just go really slow’. Catastrophic failures often require the restarting of various infrastructures and contribute to downtime, stressful work environments for support staff and management, as well as possible financial loss and breaching of SLAs.
Targeted Infrastructure Tests:
The objective of Targeted Infrastructure tests is to individually test isolated areas of an end-to-end system configuration. This type of testing would include communications infrastructure such as:
-Load balancers;
-Web servers;
-Application servers;
-Databases.
Targeted Infrastructure testing allows for the identification of any performance issues that would fundamentally limit the overall ability of a system to deliver at a given performance level. Targeted Infrastructure testing separately generates load on each component of an end-to-end system, measuring the response of each component under load.
Each test can be simple, focusing specifically upon the individual component being tested. It is often wise to execute Targeted Infrastructure tests upon isolated components prior to Load or Stress testing as it is much easier to identify (and quicker to rectify) performance issues in this situation rather than in a full end-to-end test.
Soak Tests (Endurance Testing):
The objective of soak testing is to identify any performance problems that may appear after a system has been running at a high level for an extended period of time. It is possible that a system may ‘stop’ working after a certain number of transactions have been processed, maybe due to:
-Serious memory leaks that would eventually result in a memory crisis;
-Failure to close connections between tiers of a multi-tiered system which could halt some or all modules of a system;
-Failure to close database cursors under some conditions, which could eventually result in the entire system stalling;
-Gradual degradation in response time of some function as internal data structures become less efficient during a long high intensity test.
Volume Tests:
Volume tests are tests directly relating to throughput, and are usually associated with the testing of ‘messaging’, ‘batch’ or ‘conversion’ type processing situations.
The objectives of Volume tests are:
- To determine throughput associated with a specific process or transaction;
- To determine the ‘capacity drivers’ associated with a specific process or transaction.
Volume testing a system focuses on the throughput of a system function (say, in bytes) rather than the response time of a system function (say, in seconds).
It is important when designing Volume tests that the capacity drivers are identified prior to the execution of the Volume testing to ensure meaningful results are recorded. Capacity drivers in a batch processing function could be:
Record Types: The record types contained within one specific batch job run may require significant CPU processing, while other record types may invoke substantial database and disk activity. Some batch processing functions also contain aggregation processing, and the mix of data contained within a batch job can significantly impact the processing requirements of the aggregation phase.
Database Size: The total amount of processing effort for a batch processing function may also depend upon the size and make-up of the database the batch job is interacting with.
Failover Tests:
The objective of a failover test is to get the system under test into a steady state, then start failing components (servers, routers, etc.) and observe how response times are affected during and after the failover, and how long the system takes to transition back to steady state.
Failover testing determines what will occur if multiple web-servers are being used under peak anticipated load, and one of them dies. Does the load balancer used in this architecture react quickly enough? Can the other web-servers handle the sudden dumping of extra load?
Network Sensitivity Tests:
Network sensitivity tests specifically focus on Wide Area Network (WAN) limitations and network activity (traffic, latency, error rates, etc.) and then measure the impact of that traffic on an application that is bandwidth dependent. The primary objectives of Network Sensitivity Tests are:
• Determine impact on system response time over a WAN;
• Determine the capacity of a system based on a given WAN;
• Determine the impact on a system under test that is under ‘dirty’ communications load.
Response time is the primary metric for network sensitivity testing, and is recorded as part of scenario test execution. Response time can be estimated as:
Response Time = Transmission Time + Delays + Client Processing Time + Server Processing Time
Where:
Transmission Time = Data to be transferred divided by bandwidth
Delays = Number of turns multiplied by ‘Round Trip’ response time
Client Processing Time = Time taken on the user's software to fulfill the request
Server Processing Time = Time taken on the server computer to fulfill the request
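A worked example of this estimate, using assumed illustrative numbers (500 KB of data over a 2 Mbit/s WAN link, 10 turns at 80 ms round trip, 0.2 s client processing, 0.4 s server processing):
    # Response Time = Transmission Time + Delays + Client Processing + Server Processing
    data_bits         = 500 * 1024 * 8      # 500 KB transferred, in bits (assumed)
    bandwidth_bps     = 2_000_000           # 2 Mbit/s WAN link (assumed)
    transmission      = data_bits / bandwidth_bps   # about 2.05 s
    delays            = 10 * 0.080          # 10 turns x 80 ms round trip = 0.8 s
    client_processing = 0.2
    server_processing = 0.4
    response_time = transmission + delays + client_processing + server_processing
    print(f"Estimated response time: {response_time:.2f} s")   # about 3.45 s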
When to Start Performance Testing
A common practice is to start performance testing only after functional, integration, and system testing are complete; that way, it is understood that the target application is “sufficiently sound and stable” to ensure valid performance test results.
However, the problem with the above approach is that it delays performance testing until the latter part of the development lifecycle. Then, if the tests uncover performance-related problems, one has to resolve problems with potentially serious design implications at a time when the corrections made might invalidate earlier test results. In addition, the changes might destabilize the code just when one wants to freeze it, prior to beta testing or the final release.
A better approach is to begin performance testing as early as possible, just as soon as any of the application components can support the tests. This will enable users to establish some early benchmarks against which performance measurement can be conducted as the components are developed.
When to Stop Performance Testing
The conventional approach is to stop testing once all planned tests are executed and there is a consistent and reliable pattern of performance improvement. This approach gives users accurate performance information at that instance. However, one can quickly fall behind by just standing still. The environment in which clients will run the application will always be changing, so it is a good idea to run ongoing performance tests.
Another alternative is to set up a continual performance test and periodically examine the results. One can “overload” these tests by making use of real world conditions. Regardless of how well it is designed, one will never be able to reproduce all the conditions that application will have to contend with in the real-world environment.
The following are the prerequisites that should be in place before performance testing commences:
• Quantitative, relevant, measurable, realistic, achievable requirements
As a foundation to all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. The following attributes will help to have a meaningful performance comparison.
• Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time.
• Realistic test environment
The test environment should ideally be the production environment or a close simulation and be dedicated to the performance test team for the duration of the test. A test environment that bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but is, however, useless for a performance test.
• Controlled test environment
Performance testers require stability not only in the hardware and software in terms of its reliability and resilience, but also need changes in the environment or software under test to be minimized. Automated scripts are extremely sensitive to changes in the behavior of the software under test. Test scripts designed to drive client software GUIs are prone to fail immediately, if the interface is changed even slightly. Changes in the operating system environment or database are equally likely to disrupt test preparation as well as execution and should be strictly controlled.
• Performance testing toolkit
The execution of a performance test must be, by its nature, completely automated. However, there are requirements for tools throughout the test process. The main tool requirements for a performance testing toolkit are as follows:
Test database creation/maintenance
Load generation tools
Resource monitoring
Reporting Tools
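These roles are normally filled by dedicated tools (for example, LoadRunner or JMeter for load generation). Purely as an illustration of what a load generation tool does, the sketch below uses Python's standard library to run a configurable number of concurrent virtual users against an assumed endpoint and record response times; the URL, user count, and iteration count are placeholders.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TARGET_URL    = "http://localhost:8080/health"   # placeholder application-under-test endpoint
    VIRTUAL_USERS = 20
    ITERATIONS    = 10

    def virtual_user(user_id):
        timings = []
        for _ in range(ITERATIONS):
            start = time.perf_counter()
            try:
                with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                    resp.read()
                ok = True
            except Exception:
                ok = False
            timings.append((time.perf_counter() - start, ok))
        return timings

    if __name__ == "__main__":
        with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
            results = [t for user in pool.map(virtual_user, range(VIRTUAL_USERS)) for t in user]
        passed = [duration for duration, ok in results if ok]
        if passed:
            print(f"hits={len(results)} successful={len(passed)} avg={sum(passed)/len(passed):.3f}s")
        else:
            print("no successful hits")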
The typical performance testing methodology includes four phases: preparation, development, execution, and results summary, as described below.
Performance Test Preparation: The first phase starts prior to commencing the performance testing. Project Manager / QA Manager perform preparation tasks such as planning, designing, configuring the environment setup etc.
Script Development: The second phase involves creating the performance test scenarios and relevant test scripts that will be used to test the system.
Test Execution/Analysis: The third phase includes running the scenario. The data gathered during the run is then used to analyze system performance, develop suggestions for system improvement and implement those improvements. The scenarios may be iteratively rerun to achieve load test goals.
Test Results Reporting: The purpose of the last phase is to report the outcome of the work performed for the load test.
The first step in a successful implementation is to perform preparation tasks which include planning, analysis/design, defining “white box” measurement, configuring the environment setup, completing product training and making any customization, if needed.
Planning: The purpose of planning is to define the implementation goals, objectives, and project timeline. Project managers and/or technical leads typically perform the planning phase in conjunction with the implementation teams:
• Project goals broadly define the problems that will be addressed and the desired outcome for testing.
• The project objectives are measurable tasks that, once completed, will help meet the goals.
• The project timeline will outline the sequence, duration and staff responsibility for each task.
Analysis/Design: In this context, the analysis/design should first identify a set of scenarios that model periods of critical system activity. This analysis is especially important in global operations where one continent's batch processing runs concurrently with another continent's online processing.
High volume business processes/transactions should be built into the test. Choosing too few transactions might leave gaps in the test while choosing too many will expand the script creation time. It is effective to model the most common 80% of the transaction throughput; trying to achieve greater accuracy is difficult and expensive. This is typically represented by 20% of the business processes—roughly five to 10 key business processes for each system module.
Margin for Error: Since load testing is not an exact science, accommodations should be made to provide a margin for error in the test results. This can compensate for poor design and help avoid false positives or negatives. A load test should include at least one stress test or a peak utilization scenario. A stress test will overdrive the system for a period of time by multiplying the load by some factor—120% or greater. Peak utilization will address the testing of peak system conditions.
White-Box Measurement: The white-box measurement section defines the tools and metrics used to measure internal system-under-test (SUT) performance. This information helps to pinpoint the cause of external performance issues. It also leads to recommendations for resolving those issues.
Environment Setup: The purpose of the environment setup phase is to install and configure the system under test. Preparation includes setting up hardware, software, data, the performance test tool, and white-box tools. Since the sole purpose of this test environment is to conduct performance tests, it must accurately represent the production environment. It is crucial to know the specifications of the web server, databases, or any other external dependencies the application might have.
• Software: In addition to the hardware required for a load test, the test bed must also have fully installed and functioning software. Since the performance test tool functions “just like a user,” the system needs to successfully support all user actions.
• Network: Since it is probably impossible to accurately model each and every network access (FTP, print, web browse, e-mail download, etc.), it is judicious to examine the current network utilization and understand the impact of incremental network traffic.
• Geography: Often the application under test will support a global enterprise. In this environment, tests may often need to be run at remote sites across the WAN. WAN connectivity needs to be emulated in the lab, or assumptions must be made.
• Interfaces: Large systems seldom service a company's entire information needs without interfacing to existing legacy systems. The interfaces to these external data sources need to be emulated during the test, or excluded with supporting analysis and justification.
During the script development phase, the test team builds the tests specified in the design phase. The effort involved depends on the number of tests, test complexity, and the quality of the test design.
Initial Script Development
It is desirable to have a high degree of transparency between virtual users and real human users; in other words, the virtual users should perform exactly the same tasks as the human users. At the most basic level, any performance test tool offers script capture by recording test scripts as the users navigate the application. This recording simplifies test development by translating user activities into test code. Scripts can be replayed to perform exactly the same actions on the system. These scripts are specified in the design and should be self-explanatory. Any of the following issues can easily increase script development time:
Lack of Functional Support
One of the most important factors in script creation productivity is the amount of functional support provided—access to individuals who understand application functionality. This manifests itself when a test team member encounters a functional error while scripting—the business process won’t function properly. The team member typically has to stop since he or she is not equipped with the skills to solve the issue. At that point, script creation is temporarily halted until a functional team member helps resolve the issue.
Poor Quality of Test Design
The second script development factor is the quality of the test design. Ideally the test design should specify enough information for an individual with little or no application experience to build tests. System test documentation is often an excellent source of this information. Often designs are incorrect or incomplete. As a result, any omission will require functional support to complete script development.
Low Process Stability
To load/stress test a large system, the system’s business processes first need to function properly. It is typically not effective to attempt to load test a system that won’t even work for one user. This typically means that the system needs to be nearly completed.
System Changes
A key factor in script development is the frequency of system changes. For each system revision, test scripts need to be evaluated. Tests may require simple rework or complete reconstruction. While testing tools are engineered to minimize the effect of system change, limiting the system changes will reduce scripting time.
Availability of Test Data
The system will need to be loaded with development test data. This data often comes from a legacy-system conversion and will be a predecessor to the volume data for the test.
Script Parameterization
Replaying the same user actions is not a load test. This is especially true for large multi-user systems where all the users perform different actions. Virtual user development should create a more sophisticated emulation—users should iteratively perform each business process with varying data. Script development next extends the tests to run reliably with parameterized data. This process reflects the randomness of the user population activity.
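Most load test tools support this through data files or parameter tables. As a tool-agnostic sketch (the CSV file name, its columns, and the login/search steps are assumptions for the example), each virtual user below draws a different row of test data on every iteration instead of replaying identical values.
    import csv
    import itertools

    def load_test_data(path="users.csv"):
        # Assumed CSV layout: username,password,search_term
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def run_virtual_user(user_id, rows, iterations=5):
        # Rotate the data per user so every iteration uses different parameter values.
        offset = user_id % len(rows)
        data = itertools.cycle(rows[offset:] + rows[:offset])
        for _ in range(iterations):
            row = next(data)
            # Placeholders for the real scripted steps, parameterized with this row's data:
            # login(row["username"], row["password"]); search(row["search_term"]); logout()
            print(f"user {user_id}: login as {row['username']}, search for {row['search_term']}")

    if __name__ == "__main__":
        rows = load_test_data()
        for uid in range(3):
            run_virtual_user(uid, rows)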
Build Volume Data: In parallel with script development, volume data should be constructed to support the execution of the load test. Typically business processes consume data—each data value may be used only once. As a result, there needs to be sufficient data to support large numbers of users running for a number of iterations—often 10,000 items or more.
The execution/analysis phase is an iterative process that runs scenarios, analyzes results, and debugs system issues. Test runs are performed on a system that is representative of the production environment. The performance test tool is installed on driver hardware that will create traffic against the application under test.
Data Seeding
The system should be “pre-seeded” with data consumed by the testing process. To keep testing productivity high, there should be enough data to support several iterations before requiring a system refresh.
System Support
The purpose of system support is to help interpret performance results and white-box data. While the performance tool describes what occurred, system support staff can help describe why, and suggest how to remedy the problems. These suggestions can be implemented and the tests rerun. This iterative process is a natural part of the development process, just like debugging.
Light Load
The first step is to run the scenario's test scripts with a small number of users. Since the scripts functioned properly in the development environment, the emphasis should be on recreating this functional environment for execution. Any new script execution errors will typically indicate system configuration differences. It is advisable to avoid script modifications at this stage and concentrate on system-under-test installation.
Heavy Load
Finally the last step is to run a full-scale load test. This typically consumes 50% of the total execution/analysis time. Once the entire scenario is running, the effort shifts to analyzing the transaction response times and white-box measurements. The goal here is to determine if the system performed properly.
Finally, the results summary describes the testing, analysis, discoveries and system improvements, as well as the status of the objectives and goals. This typically occurs after the completion of testing and during any final “go live” preparation that is outside the scope of testing.
The Performance Test deliverables could be:
• Performance Test Strategy
• Load Scenarios
• Virtual User Scripts
• Status/Analysis Reports
• Performance Test Summary Document
The following measurements can be included in the performance test report:
Attempted Connections: The total number of times the virtual clients attempted to connect to the AUT (Application under Test).
Connect Time: The time it takes for a virtual client to connect to the application being tested (the ABT), in seconds. In other words, the time it takes from the beginning of the HTTP request to the TCP/IP connection.
Hit Time: The time it takes to complete a successful HTTP request, in seconds. (Each request for each database transaction, business logic execution, etc. is a single hit). The time of a hit is the sum of the Connect Time, Send Time, Response Time, and Process Time.
Load Size: The number of virtual clients running concurrently.
Receive Time: The elapsed time between receiving the first byte and the last byte. (Network traffic)
Response Time: The time it takes the ABT to send the object of an HTTP request back to a Virtual Client, in seconds. In other words, the time from the end of the HTTP request until the Virtual Client has received the complete item it requested (Wait Time + Receive Time).
Successful Hits: The total number of times the virtual clients made an HTTP request and received the correct HTTP response from the ABT. (Each request for each database transaction, business logic execution, etc. is a single hit).
Transactions/sec (passed): The number of completed, successful transactions performed per second.
Transactions/sec (failed): The number of incomplete failed transactions per second.
Bandwidth Utilization: Assesses the network health during performance testing.
Memory Utilization: Comparison of memory usage on the server before and during the load test.
%CPU Utilization: Comparison of % CPU utilization on the server before and during the load test.
DB connections: Variation in the number of open db connections
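Server-side figures such as the memory and CPU comparisons above are normally collected with monitoring tools (perfmon, vmstat, an APM agent, etc.). As a simple illustration, and assuming the third-party psutil package is installed, the sketch below samples CPU and memory utilization on the machine it runs on; it can be run once before the load test for a baseline and again during the test for comparison.
    import time
    import psutil  # third-party package, assumed installed (pip install psutil)

    def sample_resources(duration_s=60, interval_s=5):
        samples = []
        end = time.time() + duration_s
        while time.time() < end:
            cpu = psutil.cpu_percent(interval=interval_s)   # % CPU averaged over the interval
            mem = psutil.virtual_memory().percent           # % physical memory in use
            samples.append((cpu, mem))
            print(f"cpu={cpu:.1f}%  mem={mem:.1f}%")
        return samples

    if __name__ == "__main__":
        sample_resources(duration_s=30, interval_s=5)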
***********************************<><><><><>**********************************
1) What is testing?
A: Testing is the process of evaluating a system or its components with the intent to find whether it satisfies the specified requirements or not, in order to produce a quality product and, hence, customer satisfaction.
2) What are the basic types of software testing?
A: There are two basic types of software testing:-
1) Black Box Testing:
Testing is performed only on the functional part of an application, without knowledge of its structural (internal) part. It is done by testers and is a type of functional testing.
2) White Box Testing:
Testing is performed on the structural (internal) part of an application. It is done by developers.
3) What are the types of testing?
A:
1. Unit Testing :
The smallest testable parts of an application are called units; testing an individual unit or a group of related units is called unit testing. It falls under white box testing and is often done by the programmer. (A short example appears after this list.)
2. Integration Testing:
Integration testing is the phase in software testing in which individual software modules are combined and tested as a group. It may fall under both white box testing and black box testing.
3. Functional Testing:
Functional testing tests the application against the business requirements. It falls under black box testing.
- It mainly concentrates on customer requirements.
- It is performed from the user's perspective.
4. Regression Testing:
Regression testing is performed to make sure that previously working functionality still works after changes are made to the system.
Usually this type of testing is done in two situations:
* Whenever the test engineer identifies defects and sends them to the development department, the developer rectifies them and releases the next build. Once the next build is released, the test engineers check the defect's functionality as well as the related functionality once again.
* Whenever new features are added to the application and the next build is released to the testing department, the test engineer checks all the features related to those new features once again.
Note:
* Testing the new features for the first time is not regression testing.
* Random testing also falls under regression testing.
5. Re-testing:
Testing the same functionality again and again with multiple sets of values in order to conclude whether the build is stable.
* Re-testing starts from the first build and continues to the last build.
* Regression testing starts from the second build and continues to the last build.
* Re-testing is also conducted during regression testing; some people call it re- and regression testing.
6. Compatibility Testing: The application is installed into multiple environments, prepared with different combinations, in order to confirm whether the application works in those environments or not.
* This type of testing is more important for products than for projects.
Example: the Skype product should support multiple environments like Windows (XP, 7, 8), Mac, Android, etc.
Browsers (Chrome, Firefox, Opera, IE 7/8/9/10, etc.) also come into the picture when we test web applications.
7. End-to-End Testing: Testing all the end-to-end scenarios of the application, e.g., giving input and verifying the output result.
8. Ad hoc Testing: Testing performed without planning or documentation. Its strength is that important defects can be found quickly.
9. Smoke Testing: Used to determine whether there are serious problems with a piece of software. It is surface-level testing used to validate that the build provided by development to the QA team is ready to be accepted for further testing. Smoke testing is therefore done by the development team before submitting the build to the software testing team.
10. Sanity Testing: Determines whether it is reasonable to proceed with further testing. Sanity testing is a subset of regression testing and is performed when there is not enough time for full testing. It is performed after the build has cleared the smoke test and has been accepted by the QA team for further testing; it checks the major functionality in finer detail.
11. Acceptance Testing: Done to verify whether the system meets the customer-specified requirements. The user or customer performs this testing to determine whether to accept the application.
12. Load Testing: A performance test that checks system behavior under load, such as testing a web site under a range of loads to determine at what point the system's response time degrades or fails.
13. Stress Testing: The system is stressed beyond its specifications to check how and when it fails, for example under heavy load such as data beyond storage capacity, complex database queries, or continuous input to the system or database.
14. Performance Testing: A term often used interchangeably with stress and load testing; it checks whether the system meets performance requirements, using various performance and load tools.
15. Recovery Testing: Tests how well a system recovers from crashes, hardware failures, or other catastrophic problems.
16. Comparison Testing: Comparison of a product's strengths and weaknesses with previous versions or other similar products.
17. Alpha Testing: An in-house virtual user environment can be created for this type of testing. It is done at the end of development; minor design changes may still be made as a result of such testing.
18. Beta Testing: Testing typically done by end users or others; it is the final testing before releasing the application commercially.
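As a minimal illustration of the unit testing described in item 1 above, the sketch below tests a small, hypothetical discount-calculation function in isolation using Python's built-in unittest module.
    import unittest

    def apply_discount(price, percent):
        # Unit under test: a small, self-contained piece of business logic.
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    class ApplyDiscountTest(unittest.TestCase):
        def test_normal_discount(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

        def test_zero_discount(self):
            self.assertEqual(apply_discount(99.99, 0), 99.99)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 150)

    if __name__ == "__main__":
        unittest.main()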
4) What are the testing methods (techniques)?
A: Black Box Testing: A method of testing in which one tests only the functional part of an application, without knowledge of the structural part. It is performed by black box test engineers and is a type of functional testing.
White Box Testing: A method of testing in which one tests the structural part of an application.
Gray Box Testing: A method of testing in which one tests both the functional and structural parts of an application.
Static vs. Dynamic Testing: Reviews, walkthroughs, and inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing can be omitted, and unfortunately in practice often is. Dynamic testing takes place when the program itself is executed; it may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. Typical techniques for this are using stubs/drivers or execution from a debugger environment.
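To illustrate the stubs/drivers mentioned above for dynamic testing of incomplete code, the sketch below tests an order-total function whose tax service has not been built yet; a hand-written stub (the class name and tax rate are assumptions for the example) stands in for the missing module.
    import unittest

    class TaxServiceStub:
        # Stub standing in for a tax module that is not implemented yet:
        # it returns a fixed, predictable rate instead of doing a real lookup.
        def rate_for(self, region):
            return 0.10

    def order_total(net_amount, region, tax_service):
        return round(net_amount * (1 + tax_service.rate_for(region)), 2)

    class OrderTotalTest(unittest.TestCase):
        def test_total_includes_tax_from_stub(self):
            self.assertEqual(order_total(100.0, "EU", TaxServiceStub()), 110.0)

    if __name__ == "__main__":
        unittest.main()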
5) What are the levels of testing?
A: The following are the levels of testing:
Unit Testing:
This type of testing is performed by the developers before the setup is handed over to the testing team to formally execute the test cases. The goal of unit testing is to isolate each part of the program and show that individual parts are correct in terms of requirements and functionality.
Integration Testing:
Integration testing is testing in which a group of components is combined to produce output. The interaction between software and hardware is also tested in integration testing. There are two methods of doing integration testing: bottom-up integration testing and top-down integration testing.
Bottom-up integration: This testing begins with unit testing, followed by tests of progressively higher-level combinations of units called modules or builds.
Top-down integration: In this testing, the highest-level modules are tested first and progressively lower-level modules are tested after that.
System Testing:
This is the next level in the testing and tests the system as a whole. Once all the components are integrated, the application as a whole is tested rigorously to see that it meets Quality Standards. This type of testing is performed by a specialized testing team.
- The application is tested in an environment which is very close to the production environment where the application will be deployed.
Acceptance Testing:
This is arguably the most important type of testing, as it is conducted by the Quality Assurance Team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. The QA team will have a set of pre-written scenarios and test cases that will be used to test the application.
6) What are functional and non-functional testing?
A: Functional:
1) Testing is performed to check whether the application meets the customer requirements.
2) It is also called black box testing.
Non-Functional:
1) Testing is performed against non-functional requirements such as performance, load, stress, usability, and compatibility.
What is the difference between Regression and Re-testing?
A: Regression:
1) It is performed when a build comes with issues fixed, to verify whether the issues are fixed and whether the fixes affect any related features.
2) It starts from the second build and continues to the last.
Re-testing:
1) It is performed when functionality needs to be tested again and again with multiple sets of values.
2) It starts from the first build and continues to the last.
What is the difference between Integration, System Integration, and System Testing?
A: Integration testing combines individual modules and tests them as a group; system integration testing verifies the integrated system together with its external or third-party interfaces; system testing tests the complete, integrated application as a whole against its requirements.
What is the difference between Sanity and Smoke Testing?
A:
Sanity:
1) It determines whether it is reasonable to proceed with further testing.
2) This testing is performed by QA.
3) It is usually performed after smoke testing.
4) It is a subset of regression testing, performed when there is no time for full testing.
Smoke:
1) It is used to determine whether there are serious problems with a piece of software.
2) This testing is performed by the development team.
3) It is performed before sanity testing.
What is the difference between Validation and Verification?
A:
Methods of Verification
1. Walkthrough
2. Inspection
3. Review
Methods of Validation
1. Testing
2. End Users
Verification is the process of checking whether the outputs conform to the specified requirements (inputs).
Validation is the process of checking whether the software is accepted by the user.
Verification generally comes first and is done before validation.
Validation generally follows verification.
What is SDLC (Software Development Life Cycle)?
A: SDLC is a process used by the software industry to design, develop, and test high-quality software. It is also called the software development process.
A typical SDLC contains the following stages:
Stage 1: Requirements Analysis
- It is the fundamental and most important stage in SDLC.
- It is performed by the senior members of the team with inputs from the customer, the sales department, market surveys, and domain experts in the industry.
Stage 2: Defining Requirements
- Once the requirement analysis is done, the next step is to clearly define and document the product requirements and get them approved by the customer or the market analysts.
- The SRS (Software Requirement Specification) is a document which consists of all the product requirements to be designed and developed during the life cycle.
- Examples of documents prepared: SRS, BRS, FRS, etc.
Stage 3: Designing the product architecture:
- Based on the requirements specified in SRS, the product architecture is proposed and documented in a DDS - Design Document Specification
- A design approach clearly defines all the architectural modules of the product along with its communication and data flow representation with the external and third party modules (if any).
Stage 4: Coding:
- In this stage of SDLC the actual development starts and the product is built. The programming code is generated as per DDS during this stage.
- Different high level programming languages such as C, C++, Pascal, Java, and PHP are used for coding.
- The programming language is chosen with respect to the type of software being developed.
Stage 5: Testing:
- This is the testing stage of the product, where product defects are reported, tracked, fixed, and retested until the product reaches the quality standards defined in the SRS.
Stage 6: Deployment
- Once the product is tested and ready to be deployed it is released formally in the appropriate market.
- The product may first be released in a limited segment and tested in the real business environment (UAT- User acceptance testing).
What is STLC (Software Testing Life Cycle)? Write the testing process.
A: Step 1: Requirements Review
We review the software requirements and design; this may take 10-20 days. Documents include SRS, BRS, FRS, DDS, etc.
Step 2 : Test Planning
Once you have gathered a general idea of what needs to be tested, you ‘plan’ for the tests.
- HR (human resources)
- Training requirements
- Recruitment
- What should be tested
- What should not be tested
- Test estimation
- Test schedule
Step 3 : Test Design
You design your tests on the basis of detailed requirements/design of the software .
- Test Cases/ Test Scripts/Test Data
- Requirements Traceability Matrix
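As a small illustration of a requirements traceability matrix, the sketch below maps hypothetical requirement IDs to the test cases that cover them and flags any requirement left uncovered; the IDs are placeholders.
    # Hypothetical requirements traceability matrix: requirement ID -> covering test case IDs.
    RTM = {
        "REQ-001": ["TC-001", "TC-002"],
        "REQ-002": ["TC-003"],
        "REQ-003": [],          # not yet covered by any test case
    }

    uncovered = [req for req, cases in RTM.items() if not cases]
    print(f"requirements={len(RTM)} uncovered={uncovered}")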
Step 4: Environment Setup
You set up the test environment (server/client/network, etc.) with the goal of replicating the end users' environment.
- Hardware requirements
- Software requirements
Step 5 : Execution of Test Cases :
You execute your Test Cases/Scripts in the Test Environment to see whether they pass.
Test Case Management Tool : HP Quality Center
- Test Results (Incremental)
- Defect Reports
Step 6: Test Report
Contents of a Test summary report:
Introduction:
- Name of project
- Release No.
- Reference documents: <Reference can be given of test planning document or test strategy document or project plan>
- Target Audience: <Who all will be the recipients of this report>
- Test Cycle:
Testing Type / Level: <Mention testing type / level like unit testing or system testing or integration testing or it may be a functional testing or performance testing etc>
Test Suite Data:
- Number of test suites planned.
- Number of test suites executed.
- Number of test suites that could not be executed <categorize in two types: due to a show stopper or due to other reasons>
Test Case Data: <a short calculation sketch for these figures follows this report outline>
- Number of test cases planned.
- Number and percentage of test cases implemented.
- Number and percentage of test cases executed.
- Number and percentage of test cases passed.
- Number and percentage of test cases failed (total and by severity).
Defect Data:
- Total open bugs
- New bugs found
- Open bugs of previous release
- Reopened bugs
- Bugs marked as "Not a Bug"
- Bugs marked as "Deferred"
Risks / open issues / support required: <Mention here all potential risks and current open issues with assignment of action items to specific stakeholder. Mention the details of support required, if any.>
Deviations from signed off test plan: <if any>
Some other information can be part of the test summary report:
- Testing team details.
- Testing dates
- Test team locations (if working from different remote locations)
- Traceability matrix can also be attached in test summary report
- Mention the test tracking tool used, test results location etc.
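The test case figures above (counts and percentages of executed, passed, and failed cases) are easy to derive from raw results. A small sketch, assuming the results are available as a list of status strings exported from the test management tool:
    def summarize(results, planned):
        # results: status strings for executed test cases, e.g. ["Passed", "Failed", ...]
        executed = len(results)
        passed = sum(1 for r in results if r == "Passed")
        failed = sum(1 for r in results if r == "Failed")

        def pct(n):
            return round(100.0 * n / planned, 1) if planned else 0.0

        return {
            "planned": planned,
            "executed": executed, "executed_pct": pct(executed),
            "passed": passed, "passed_pct": pct(passed),
            "failed": failed, "failed_pct": pct(failed),
        }

    if __name__ == "__main__":
        print(summarize(["Passed", "Passed", "Failed", "Passed"], planned=5))
        # {'planned': 5, 'executed': 4, 'executed_pct': 80.0, 'passed': 3,
        #  'passed_pct': 60.0, 'failed': 1, 'failed_pct': 20.0}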
What is Bug Life Cycle?
A: Bug: "A computer bug is an error, failure, or fault in a product or computer program that prevents it from working correctly or produces an incorrect result. Bugs arise from mistakes and errors, made by people, in either a program's source code or its design."
Life cycle of Bug :
Log new defect: When a tester logs a new bug, the mandatory fields must be filled in.
Bug status description :
These are various stages of bug life cycle. The status caption may vary depending on the bug tracking system you are using.
1. New: When QA files a new bug.
2. Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status to deferred.
3. Assigned: The 'Assigned to' field is set by the project lead or manager, who assigns the bug to a developer.
4. Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can set the bug status to 'Fixed' and the bug is passed to the testing team.
5. Could not reproduce: If the developer is not able to reproduce the bug with the steps given in the bug report, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug is reproducible and can reassign it to the developer with detailed reproduction steps.
6. Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark it as 'Need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
7. Reopen: If QA is not satisfied with the fix, and the bug is still reproducible even after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate action.
8. Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, then QA can mark the bug as 'Closed'.
9. Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to specifications and the bug is just due to some misinterpretation.
Bug reporting Tools : JIRA, BUGZILLA
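Because these statuses form a simple state machine, the workflow can be modeled as a set of allowed transitions. The sketch below is a generic illustration; the exact transitions vary by tool and team, so this mapping is an assumption, not JIRA's or Bugzilla's actual workflow.
    # Allowed bug-status transitions (illustrative; real tools let administrators customize these).
    TRANSITIONS = {
        "New": {"Assigned", "Deferred", "Rejected"},
        "Assigned": {"Fixed", "Could not reproduce", "Need more information", "Deferred"},
        "Fixed": {"Closed", "Reopen"},
        "Could not reproduce": {"Assigned", "Closed"},
        "Need more information": {"Assigned"},
        "Reopen": {"Assigned"},
        "Deferred": {"Assigned"},
        "Rejected": {"Closed"},
        "Closed": set(),
    }

    def move(current, new):
        if new not in TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    if __name__ == "__main__":
        status = "New"
        for step in ("Assigned", "Fixed", "Reopen", "Assigned", "Fixed", "Closed"):
            status = move(status, step)
            print(status)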
Q: What is a Test Plan?
Ans. A document describing the scope, approach, resources, and schedule of testing activities. It identifies test items, features to be tested, testing tasks, who will do each task, and any risks requiring contingency planning.
Q: What is a Test Scenario?
Ans. Identify all the possible areas to be tested (or) what to be tested.
Real time Interview Questions:-
1) What is the most challenging situation you had during testing?
2) In an application currently in production, one module of code is being modified. Is it necessary to re-test the whole application or is it enough to just test functionality associated with that module?
3) If you tested the software and found no issues, and afterwards another tester found defects in it, how would you feel in that position?
4) You tested the software, it worked fine, and it was released to production; issues were then found in production. What is your next step?
Objective and Scope
The primary aim of this document is to highlight key considerations in Performance Testing and to provide an insight into the rigor and depth of performance testing
2 Performance Testing
Performance Testing can be viewed as the systematic process of collecting and monitoring the results of system usage and analyzing them to aid system improvement towards desired results.As part of the performance testing process, one needs to gather statistical information, examine logs of system state histories, determine system performance under natural and artificial conditions and alter system modes of operation.
Performance testing complements functional testing. Functional testing can validate proper functionality under correct usage and proper error handling under incorrect usage. It cannot, however, tell how much load an application can handle before it breaks or performs improperly. Finding the breaking points and performance bottlenecks, as well as identifying functional errors that only occur under stress requires performance testing.
The purpose of Performance testing is to demonstrate that
• The application processes required transaction volumes within specified response times in a real-time production database (Speed).
• The application can handle various user load scenarios (stresses), ranging from a sudden load “spike” to a persistent load “soak” (Scalability).
• The application is consistent in availability and functional integrity (Stability).
• Determination of minimum configuration that will allow the system to meet the formal stated performance expectations of stakeholders
Basis for inclusion in Load Test
High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of the other transactions if they are not efficient.
Mission Critical transactions The more important transactions that facilitate the core objectives of the system should be included, as failure under load of these transactions has, by definition, the greatest impact.
Read Transactions: At least one READ ONLY transaction should be included, so that performance of such transactions can be differentiated from other more complex transactions.
Update Transactions At least one update transaction should be included so that performance of such transactions can be differentiated from other transactions.
2.1 Types of Performance Testing
Benchmark Testing:The objective of Benchmark tests is to determine end-to-end timing of various critical business processes and transactions while the system is under low load with a production sized database.
The best time to execute benchmark tests is at the earliest opportunity. Developing performance test scripts at such an early stage provides opportunity to identify and remedy serious performance problems and expectations before load testing commences.
A key indicator of the quality of a benchmark test is its repeatability. That is, the re-execution of a performance test should give the same set of results. If the results are not the same each time, differences in results cannot be attributed to changes in the application, configuration or environment being tested.
Stress Tests:
Stress tests have one primary objective, and that is to determine the maximum load under which a system fails, and how it fails.
It is important to know in advance if a ‘stress’ situation will result in catastrophic system failure or if all components of the system simply ‘just go really slow’. Catastrophic failures often require the restarting of various infrastructures and contribute to downtime, stressful work environments for support staff and management, as well as possible financial loss and breaching of SLAs.
Targeted Infrastructure Tests:
The objective of Targeted Infrastructure tests is to individually test isolated areas of an end-to-end system configuration. This type of testing would include communications infrastructure such as:
-Load balancers;
-Web servers;
-Applications servers;
-Databases.
Targeted Infrastructure testing allows for the identification of any performance issues that would fundamentally limit the overall ability of a system to deliver at a given performance level. Targeted Infrastructure testing separately generates load on each component of an end-to-end system, measuring the response of each component under load.
Each test can be simple, focusing specifically upon the individual component being tested. It is often wise to execute Targeted Infrastructure tests upon isolated components prior to Load or Stress testing as it is much easier to identify (and quicker to rectify) performance issues in this situation rather than in a full end-to-end test.
Soak Tests (Endurance Testing):
The objective of soak testing is to identify any performance problems that may appear after a system has been running at a high load level for an extended period of time (a minimal driver sketch follows the list below). It is possible that a system may ‘stop’ working after a certain number of transactions have been processed, maybe due to:
-Serious memory leaks that would eventually result in a memory crisis;
-Failure to close connections between tiers of a multi-tiered system which could halt some or all modules of a system;
-Failure to close database cursors under some conditions, which could eventually result in the entire system stalling;
-Gradual degradation in response time of some function as internal data structures become less efficient during a long high intensity test.
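A minimal soak driver might look like the sketch below: it applies a steady load for a long period and logs response times per interval so that gradual degradation becomes visible. The endpoint, duration and sampling interval are assumptions, and server-side resources (memory, connections, cursors) would be monitored separately with whatever tools are available on the server.

# Illustrative soak driver: steady load over many hours, with periodic reporting
# of average response time and error count to reveal gradual degradation.
import time
import urllib.request

URL = "http://test-env.example.com/search?q=soak"   # hypothetical endpoint
DURATION_HOURS = 8        # assumed soak duration
INTERVAL_SECONDS = 300    # report every 5 minutes

end_time = time.time() + DURATION_HOURS * 3600
while time.time() < end_time:
    interval_end = time.time() + INTERVAL_SECONDS
    timings, errors = [], 0
    while time.time() < interval_end:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(URL, timeout=30) as resp:
                resp.read()
            timings.append(time.perf_counter() - start)
        except Exception:
            errors += 1
    if timings:
        print(f"{time.strftime('%H:%M:%S')}  avg {sum(timings)/len(timings):.3f}s  errors {errors}")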
Volume Tests:
Volume tests are tests directly relating to throughput, and are usually associated with the testing of ‘messaging’, ‘batch’ or ‘conversion’ type processing situations.
The objectives of Volume tests are:
- To determine throughput associated with a specific process or transaction;
- To determine the ‘capacity drivers’ associated with a specific process or transaction.
Volume testing a system involves focusing on the throughput of a system function (say, in bytes) rather than the response time of a system function (say, in seconds); a minimal measurement sketch follows the capacity driver notes below.
It is important when designing Volume tests that the capacity drivers are identified prior to the execution of the Volume testing to ensure meaningful results are recorded. Capacity drivers in a batch processing function could be:
Record Types: The record types contained within one specific batch job run may require significant CPU processing, while other record types may invoke substantial database and disk activity. Some batch processing functions can also contain aggregation processing, and the mix of data contained within a batch job can significantly impact the processing requirements of the aggregation phase.
Database Size: The total amount of processing effort for a batch processing function may also depend upon the size and make-up of the database the batch job is interacting with.
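The measurement itself can be very direct; the sketch below reports records per second and kilobytes per second for a batch-style function rather than response time. The processing function and batch size are hypothetical placeholders for the real batch job under test.

# Illustrative throughput measurement for a batch-style function:
# report records/second and KB/second instead of response time.
import time

def process_record(record: bytes) -> None:
    # Placeholder for the real batch processing logic under test.
    record.upper()

records = [b"sample-record-%06d" % i for i in range(100_000)]   # assumed batch size
total_bytes = sum(len(r) for r in records)

start = time.perf_counter()
for record in records:
    process_record(record)
elapsed = time.perf_counter() - start

print(f"{len(records) / elapsed:,.0f} records/sec")
print(f"{total_bytes / elapsed / 1024:,.0f} KB/sec")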
Failover Tests:
The objective of a failover test is to get the system under test into a steady state, start failing components (servers, routers, etc.), and observe how response times are affected during and after the failover and how long the system takes to transition back to a steady state.
Failover testing determines what will occur if multiple web-servers are being used under peak anticipated load, and one of them dies. Does the load balancer used in this architecture react quickly enough? Can the other web-servers handle the sudden dumping of extra load?
Network Sensitivity Tests:
Network sensitivity tests specifically focus on Wide Area Network (WAN) limitations and network activity (traffic, latency, error rates, etc.) and then measure the impact of that traffic on an application that is bandwidth dependent. The primary objectives of network sensitivity tests are:
• Determine impact on system response time over a WAN;
• Determine the capacity of a system based on a given WAN;
• Determine the impact on a system under test that is under ‘dirty’ communications load.
Response time is the primary metric of measure for Network Sensitivity testing, and is recorded as part of scenario test execution. Response time can be estimated as –
Response Time = Transmission Time + Delays + Client Processing Time + Server Processing Time
Where:
Transmission Time = Data to be transferred divided by bandwidth
Delays = Number of turns multiplied by ‘Round Trip’ response time
Client Processing Time = Time taken on the user’s software to fulfill the request
Server Processing Time = Time taken on the server to fulfill the request
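As a worked illustration of this estimate, the short sketch below plugs assumed values into the formula above; the payload size, bandwidth, number of turns, round-trip time and processing times are all hypothetical.

# Worked example of the response time estimate above; all input values are assumed.
data_bytes = 500 * 1024          # 500 KB to be transferred
bandwidth_bps = 1_500_000 / 8    # 1.5 Mbps link, expressed in bytes per second
turns = 6                        # application 'turns' (request/response exchanges)
round_trip_s = 0.080             # 80 ms WAN round-trip time
client_processing_s = 0.150
server_processing_s = 0.400

transmission_time = data_bytes / bandwidth_bps
delays = turns * round_trip_s
response_time = transmission_time + delays + client_processing_s + server_processing_s
print(f"Estimated response time over the WAN: {response_time:.2f}s")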
2.2 When to Start and Stop Performance Testing?
When to Start Performance Testing
A common practice is to start performance testing only after functional, integration, and system testing are complete; that way, it is understood that the target application is “sufficiently sound and stable” to ensure valid performance test results.
However, the problem with the above approach is that it delays performance testing until the latter part of the development lifecycle. Then, if the tests uncover performance-related problems, one has to resolve problems with potentially serious design implications at a time when the corrections made might invalidate earlier test results. In addition, the changes might destabilize the code just when one wants to freeze it, prior to beta testing or the final release.
A better approach is to begin performance testing as early as possible, just as soon as any of the application components can support the tests. This will enable users to establish some early benchmarks against which performance measurement can be conducted as the components are developed.
When to Stop Performance Testing
The conventional approach is to stop testing once all planned tests have been executed and there is a consistent and reliable pattern of performance improvement. This approach gives users accurate performance information at that instant. However, one can quickly fall behind by just standing still. The environment in which clients will run the application will always be changing, so it’s a good idea to run ongoing performance tests.
Another alternative is to set up a continual performance test and periodically examine the results. One can “overload” these tests by making use of real world conditions. Regardless of how well it is designed, one will never be able to reproduce all the conditions that application will have to contend with in the real-world environment.
2.3 Pre-Requisites for Performance Testing
Following are the prerequisites which should be in place before performance testing is commenced:
• Quantitative, relevant, measurable, realistic, achievable requirements
As a foundation to all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. Requirements with the attributes listed above make a meaningful performance comparison possible.
• Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time.
• Realistic test environment
The test environment should ideally be the production environment or a close simulation and be dedicated to the performance test team for the duration of the test. A test environment that bears no similarity to the actual production environment may be useful for finding obscure errors in the code, but is, however, useless for a performance test.
• Controlled test environment
Performance testers require stability not only in the hardware and software in terms of its reliability and resilience, but also need changes in the environment or software under test to be minimized. Automated scripts are extremely sensitive to changes in the behavior of the software under test. Test scripts designed to drive client software GUIs are prone to fail immediately, if the interface is changed even slightly. Changes in the operating system environment or database are equally likely to disrupt test preparation as well as execution and should be strictly controlled.
• Performance testing toolkit
The execution of a performance test must be, by its nature, completely automated. However, there are requirements for tools throughout the test process. The main tool requirements for a performance testing toolkit are as follows:
- Test database creation/maintenance
- Load generation tools
- Resource monitoring
- Reporting tools
3 Performance Testing Methodology
The typical performance testing methodology includes four phases: preparation, script development, test execution/analysis and test results reporting.
Performance Test Preparation: The first phase starts prior to commencing the performance testing. Project Manager / QA Manager perform preparation tasks such as planning, designing, configuring the environment setup etc.
Script Development: The second phase involves creating the performance test scenarios and relevant test scripts that will be used to test the system.
Test Execution/Analysis: The third phase includes running the scenario. The data gathered during the run is then used to analyze system performance, develop suggestions for system improvement and implement those improvements. The scenarios may be iteratively rerun to achieve load test goals.
Test Results Reporting: The purpose of the last phase is to report the outcome of the work performed for the load test.
3.1 Performance Test Preparation
The first step in a successful implementation is to perform preparation tasks which include planning, analysis/design, defining “white box” measurement, configuring the environment setup, completing product training and making any customization, if needed.
Planning: The purpose of planning is to define the implementation goals, objectives, and project timeline. Project managers and/or technical leads typically perform the planning phase in conjunction with the implementation teams –
• Project goals broadly define the problems that will be addressed and the desired outcome for testing.
• The project objectives are measurable tasks that, once completed, will help meet the goals.
• The project timeline will outline the sequence, duration and staff responsibility for each task.
Analysis/Design: In this context, the analysis/design should first identify a set of scenarios that model periods of critical system activity. This analysis is especially important in global operations where one continent’s batch processing is running concurrently with another continent’s online processing.
High volume business processes/transactions should be built into the test. Choosing too few transactions might leave gaps in the test while choosing too many will expand the script creation time. It is effective to model the most common 80% of the transaction throughput; trying to achieve greater accuracy is difficult and expensive. This is typically represented by 20% of the business processes—roughly five to 10 key business processes for each system module.
Margin for Error: Since load testing is not an exact science, there should be accommodations made to provide a margin for error in the test results. This can compensate for poor design and help avoid false positives or negatives. A load test should include at least one stress test or a peak utilization scenario. A stress test will overdrive the system for a period of time by multiplying the load by some factor—120% or greater. Peak utilization will address the testing of peak system conditions.
White-Box Measurement: The white-box measurement section defines the tools and metrics used to measure internal system-under-test (SUT) performance. This information helps to pinpoint the cause of external performance issues. It also leads to recommendations for resolving those issues.
Environment Setup: The purpose of the environment setup phase is to install and configure the system under test. Preparation includes setting up hardware, software, data, the performance test tool and white-box tools. Since the sole purpose of this test environment is to conduct performance tests, it must accurately represent the production environment. It is crucial to know the specifications of the Web server, databases, or any other external dependencies the application might have.
• Software: In addition to the hardware required for a load test, the test bed must also have fully installed and functioning software. Since the performance test tool functions “just like a user,” the system needs to successfully support all user actions.
• Network: Since it is probably impossible to accurately model each and every network access (FTP, print, Web browse, e-mail download, etc.), it is judicious to examine the current network utilization and understand the impact of incremental network traffic.
• Geography: Often the application under test will support a global enterprise. In this environment tests may often need to be run at remote sites across the WAN. WAN connectivity needs to be emulated in the lab, or assumptions must be made.
• Interfaces: Large systems seldom service a company’s entire information needs without interfacing to existing legacy systems. The interfaces to these external data sources need to be emulated during the test, or excluded with supporting analysis and justification.
3.2 Script Development
During the script development phase, the test team builds the tests specified in the design phase. The effort required depends on the number of tests, test complexity and the quality of the test design.
Initial Script Development
It is desirable to have a high degree of transparency between virtual users and real human users. In other words, the virtual users should perform exactly the same tasks as the human users. At the most basic level, any performance test tool offers script capture by recording test scripts as the users navigate the application. This recording simplifies test development by translating user activities into test code. Scripts can be replayed to perform exactly the same actions on the system. These scripts are specified in the design and should be self-explanatory. Any of the following issues can easily increase script development time:
Lack of Functional Support
One of the most important factors in script creation productivity is the amount of functional support provided—access to individuals who understand application functionality. This manifests itself when a test team member encounters a functional error while scripting—the business process won’t function properly. The team member typically has to stop since he or she is not equipped with the skills to solve the issue. At that point, script creation is temporarily halted until a functional team member helps resolve the issue.
Poor Quality of Test Design
The second script development factor is the quality of the test design. Ideally the test design should specify enough information for an individual with little or no application experience to build tests. System test documentation is often an excellent source of this information. Often designs are incorrect or incomplete. As a result, any omission will require functional support to complete script development.
Low Process Stability
To load/stress test a large system, the system’s business processes first need to function properly. It is typically not effective to attempt to load test a system that won’t even work for one user. This typically means that the system needs to be nearly completed.
System Changes
A key factor in script development is the frequency of system changes. For each system revision, test scripts need to be evaluated. Tests may require simple rework or complete reconstruction. While testing tools are engineered to minimize the effect of system change, limiting the system changes will reduce scripting time.
Availability of Test Data
The system will need to be loaded with development test data. This data often comes from a legacy-system conversion and will be a predecessor to the volume data for the test.
Script Parameterization
Replaying the same user actions is not a load test. This is especially true for large multi-user systems where all the users perform different actions. Virtual user development should create a more sophisticated emulation—users should iteratively perform each business process with varying data. Script development next extends the tests to run reliably with parameterized data. This process reflects the randomness of the user population activity.
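Most load testing tools support this through data files; a minimal hand-rolled sketch of the idea is shown below, where each virtual user iteration picks a different row from an assumed CSV of test accounts. The file name, its columns and the endpoint are all hypothetical.

# Illustrative parameterized virtual user: each iteration uses a different data row,
# so no two iterations replay exactly the same values.
import csv
import urllib.parse
import urllib.request

SEARCH_URL = "http://test-env.example.com/customer/lookup?"   # hypothetical endpoint

with open("test_accounts.csv", newline="") as f:   # assumed columns: account_id,region
    rows = list(csv.DictReader(f))

for iteration, row in enumerate(rows, start=1):
    query = urllib.parse.urlencode({"account": row["account_id"], "region": row["region"]})
    with urllib.request.urlopen(SEARCH_URL + query, timeout=30) as resp:
        resp.read()
    print(f"Iteration {iteration}: looked up account {row['account_id']}")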
Build Volume Data
Parallel to script development, volume data should be constructed to support the execution of the load test. Typically business processes consume data—each data value may be used only once. As a result, there needs to be sufficient data to support large numbers of users running for a number of iterations—often 10,000 items or more.
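A simple way to prepare such consumable data is to generate it up front. The sketch below writes 10,000 unique rows to a CSV that the virtual user scripts can then consume; the field names and value ranges are hypothetical.

# Illustrative volume data generation: 10,000 unique, consumable data rows
# written to a CSV for the virtual users. Field names and ranges are assumptions.
import csv
import random

with open("test_accounts.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["account_id", "region", "order_amount"])
    for i in range(10_000):
        writer.writerow([f"ACC{i:06d}",
                         random.choice(["EMEA", "APAC", "AMER"]),
                         round(random.uniform(10, 5000), 2)])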
3.3 Test Execution/Analysis
The execution/analysis phase is an iterative process that runs scenarios, analyzes results and debugs system issues. Test runs are performed on a system that is representative of the production environment. Performance test tool is installed on driver hardware that will create traffic against the application under test.
Data Seeding
The system should be “pre-seeded” with data consumed by the testing process. To keep testing productivity high, there should be enough data to support several iterations before requiring a system refresh.
System Support
The purpose of system support is to help interpret performance results and white-box data. While the Performance tool will describe what occurred, the system support could help to describe why and suggest how to remedy the problems. These suggestions can be implemented and the tests rerun. This iterative process is a natural part of the development process, just like debugging.
Light Load
The first step is to run the scenario’s test scripts with a small number of users. Since the scripts functioned properly in the development environment, the emphasis should be to recreate this functional environment for execution. Any new script execution errors will typically indicate system configuration differences. It is advisable to avoid script modifications at this stage and concentrate on system-under-test installation.
Heavy Load
Finally the last step is to run a full-scale load test. This typically consumes 50% of the total execution/analysis time. Once the entire scenario is running, the effort shifts to analyzing the transaction response times and white-box measurements. The goal here is to determine if the system performed properly.
3.4 Test Results Reporting
Finally, the results summary describes the testing, analysis, discoveries and system improvements, as well as the status of the objectives and goals. This typically occurs after the completion of testing and during any final “go live” preparation that is outside the scope of testing.
The Performance Test deliverables could be:
• Performance Test Strategy
• Load Scenarios
• Virtual User Scripts
• Status/Analysis Reports
• Performance Test Summary Document
The following measurements can be illustrated in the performance test report:
Attempted Connections: The total number of times the virtual clients attempted to connect to the AUT (Application under Test).
Connect Time: The time it takes for a virtual client to connect to the application being tested (the ABT), in seconds. In other words, the time it takes from the beginning of the HTTP request to the TCP/IP connection.
Hit Time: The time it takes to complete a successful HTTP request, in seconds. (Each request for each database transaction, business logic execution, etc. is a single hit). The time of a hit is the sum of the Connect Time, Send Time, Response Time, and Process Time.
Load Size: The number of virtual clients running concurrently.
Receive Time: The elapsed time between receiving the first byte and the last byte. (Network traffic)
Response Time: The time it takes the ABT to send the object of an HTTP request back to a Virtual Client, in seconds. In other words, the time from the end of the HTTP request until the Virtual Client has received the complete item it requested (Wait Time + Receive Time).
Successful Hits: The total number of times the virtual clients made an HTTP request and received the correct HTTP response from the ABT. (Each request for each database transaction, business logic execution, etc. is a single hit).
Transactions/sec (passed): The number of completed, successful transactions performed per second.
Transactions/sec (failed): The number of incomplete failed transactions per second.
Bandwidth Utilization: Assesses the network health during performance testing.
Memory Utilization: Comparison of memory usage on the server before and during the load test.
%CPU Utilization: Comparison of % CPU utilization on the server before and during the load test.
DB Connections: Variation in the number of open database connections.
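Several of these measurements are simple aggregations over the raw timing data collected per request. The sketch below shows how hit time and transactions per second might be derived; the per-request records and the measurement window are invented sample values, not real results.

# Illustrative derivation of report metrics from per-request timing records.
# The records below are invented sample data, not real measurements.
sample_hits = [
    # connect, send, response, process (seconds), passed?
    {"connect": 0.02, "send": 0.01, "response": 0.35, "process": 0.10, "passed": True},
    {"connect": 0.03, "send": 0.01, "response": 0.41, "process": 0.12, "passed": True},
    {"connect": 0.02, "send": 0.02, "response": 0.90, "process": 0.15, "passed": False},
]
test_duration_s = 60.0   # assumed length of the measurement window

hit_times = [h["connect"] + h["send"] + h["response"] + h["process"] for h in sample_hits]
passed = sum(1 for h in sample_hits if h["passed"])
failed = len(sample_hits) - passed

print(f"Average hit time: {sum(hit_times) / len(hit_times):.3f}s")
print(f"Transactions/sec (passed): {passed / test_duration_s:.3f}")
print(f"Transactions/sec (failed): {failed / test_duration_s:.3f}")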
***********************************<><><><><>**********************************