Enterprises are increasingly turning to Web Services to enable them to integrate legacy applications and to link new, distributed applications in service-oriented architectures. Because they sit at the core of next-generation IT infrastructures, these critical links must be thoroughly tested for functionality and performance before they are deployed, and managed for service levels in production.

With e-TEST suite and OneSight you can thoroughly test and manage your Web services. e-TEST suite is the leading solution for functional and performance testing of Web services.

In the first phase of Web services adoption, three capabilities are the most important for Web services testing tools:

Testing SOAP messages: moving beyond using SOAP as the interface to the Web service to testing the format of the messages themselves.

Testing WSDL files and using them for test plan generation : WSDL files contain metadata about Web services' interfaces. Testing tools can use these WSDL files to generate test plans automatically.

Web service consumer and producer emulation : when a testing tool generates test messages that it sends to a Web service, it is emulating the consumer for that Service. In addition to the Web service producer, the consumer of the service also sends and receives SOAP messages. A Web services testing tool should therefore emulate the Web service producer as well as the consumer.
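A minimal sketch of this consumer-emulation idea, in Python, assuming a hypothetical GetQuote operation at http://example.com/quote (not tied to e-TEST suite): the script plays the role of a Web service consumer, sends a SOAP request, and then checks that the reply is a well-formed SOAP envelope rather than only checking the HTTP transport.

# Sketch only: hypothetical endpoint and operation, used to illustrate
# consumer emulation and SOAP message-format checking.
import urllib.request
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

request_body = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example:quotes">
      <symbol>ACME</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

req = urllib.request.Request(
    "http://example.com/quote",                      # hypothetical endpoint
    data=request_body.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "urn:example:quotes#GetQuote"},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    envelope = ET.fromstring(resp.read())

# Verify the message format, not just the transport: the reply must be a
# well-formed SOAP envelope containing a Body and no Fault element.
body = envelope.find(f"{{{SOAP_ENV}}}Body")
assert body is not None, "response is not a SOAP envelope"
assert body.find(f"{{{SOAP_ENV}}}Fault") is None, "service returned a SOAP fault"

Emulating the producer side is the mirror image: a stub server that receives the consumer's SOAP messages and returns canned envelopes, so the consumer can be exercised in isolation.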


Phase Two (2003 - 2005): Testing Service-Oriented Architectures

As new products and services on the market resolve the issues with Web services security, management, and transactions, companies will be able to exchange Web service messages with other companies (customers, suppliers, and partners) in a more uninhibited, loosely-coupled manner. Enterprises and established groups of business partners will find that UDDI-based service registries will become a critical enabler of the dynamic discovery of Web services within controlled environments.

During this second phase of Web services adoption, the following capabilities for Web services testing tools become important:

Testing the publish, find, and bind capabilities of a SOA : three fundamental characteristics of Service-oriented architectures are the publish, find, and bind capabilities of the constituent Web services. Web service testing tools should test each side of this triangle.

Testing the asynchronous capabilities of Web services : today's early uses of Web services are often as simplified remote procedure calls or requests for documents. In addition to these straightforward synchronous uses, SOAP also supports asynchronous messages, including notification and alert messages. Web services testing tools should test each kind of SOAP message.

Testing the SOAP intermediary capability : the SOAP specification provides for message intermediaries. A particular SOAP message will typically have a designated recipient, but may also have one or more intermediaries along the message route that take actions based upon the instructions provided to them in the header of the SOAP message. Web services testing tools must verify the proper functionality of these intermediaries.

Quality of service monitoring : Traditionally, IT management is the province of operations, while software testing belongs to development. This distinction will gradually blur as Service-oriented environments become more prevalent. Therefore, Web services testing tools should have runtime testing capabilities in addition to their design time capabilities.
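As a rough illustration of the runtime side, the sketch below (Python, assuming a hypothetical health-check URL and a 2-second response-time target; products such as OneSight do far more) simply re-runs a design-time style check on a schedule and raises an alert when the service is slow or unavailable.

# Sketch only: hypothetical endpoint and SLA numbers.
import time
import urllib.request

ENDPOINT = "http://example.com/quote?wsdl"   # hypothetical health-check URL
SLA_SECONDS = 2.0                            # assumed response-time target
CHECK_INTERVAL = 60                          # probe once a minute

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=SLA_SECONDS * 5) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    elapsed = time.monotonic() - start
    if not ok or elapsed > SLA_SECONDS:
        # In a real monitor this would raise an alert or page operations.
        print(f"ALERT: service unhealthy (ok={ok}, response time={elapsed:.2f}s)")
    time.sleep(CHECK_INTERVAL)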


Phase Three (2004 and beyond): Testing Dynamic Runtime Capabilities

As service-oriented architectures mature and Web service orchestration and business process automation tools become increasingly prevalent, Web services will be dynamically described at runtime. In this phase, Web service consumers dynamically discover Web services as in phase two, but now those Services may change every time a consumer invokes them. Furthermore, the concepts of Web service "consumer" and "producer" become less meaningful in phase three, because complex orchestrations of Web services will be the norm. Businesses will expose coarse-grained Web services to other companies (for example, a "product catalog" Service), and those Web services will actually consist of a large, dynamic collection of finer-grained services (say, "product list," "current inventory," "preferred pricing") that may themselves consist of even more finely-grained Web services. In addition, each of these component Web services may contain Services provided by other companies (for example, "credit approval").

Web services testing tools in phase three should have the following capabilities:

Web services orchestration testing : One of the most powerful uses of Web Services is orchestration: combining many fine-grained Web services into larger, coarse-grained services. Such orchestrations of Web services typically involve more than one company.

Web services versioning testing : rolling out new versions of Web services, especially when they are combined into complex orchestrations, will be particularly difficult and risky. For such rollouts to be successful, therefore, the enterprise must have testing tools that can test the new versions of individual Web services either in production, or in an environment that parallels the production environment as closely as possible.


Data-Driven Test Scripts:

Data-driven test scripts enable you to test the web service application in different scenarios simply by changing the test data in an external data source. This lets you extensively exercise every aspect of the web service application across a wide range of test data.

You can create data-driven test scripts to send data source values as part of a request to a server. A single test script can be created to test with multiple sets of data where the values are substituted at runtime from an external database or CSV file. This facilitates maximum script re-usability and eliminates the need to re-generate test scripts for each set of data.
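A minimal sketch of such a data-driven script in Python, assuming a hypothetical testdata.csv with columns symbol and expected_fragment and the same hypothetical GetQuote endpoint used earlier; the single script below is driven entirely by the rows of the external file.

# Sketch only: hypothetical CSV columns, endpoint, and operation.
import csv
import urllib.request

ENDPOINT = "http://example.com/quote"        # hypothetical endpoint
REQUEST_TEMPLATE = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="urn:example:quotes"><symbol>{symbol}</symbol></GetQuote>
  </soap:Body>
</soap:Envelope>"""

def run_case(symbol: str, expected_fragment: str) -> bool:
    """Substitute one row of test data into the request and check the reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=REQUEST_TEMPLATE.format(symbol=symbol).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return expected_fragment in resp.read().decode("utf-8")

# Each CSV row becomes one execution of the same script -- no regeneration needed.
with open("testdata.csv", newline="") as f:
    for row in csv.DictReader(f):
        ok = run_case(row["symbol"], row["expected_fragment"])
        print(f"{row['symbol']}: {'PASS' if ok else 'FAIL'}")

Adding a new test case is then a matter of adding a row to the CSV file; the script itself never changes.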


Real-World Scenario Testing:

Accurately simulate a large number of virtual users performing a defined set of operations (or business cases) in your web service application.
Group the individual user scenarios as user profiles and associate each user profile with different load levels (normal, ramp-up, or burn-in) as load test cases to capture real-life user testing.
Configure various workload types to test your web service application under different load and stress conditions. This includes:

• Load Test (Normal Workload) - This test measures the capability of your web service application under anticipated production workload. It runs the load test for a constant number of virtual users (steady-state workload) until the given test duration time has passed.

• Peak Tests (Ramp-up Workload) - This test determines the peak load at which your web service application fails to respond. It simulates heavy load by gradually increasing the number of users at defined intervals until the count reaches the maximum number of users (see the sketch after this list).

• Burn-In Tests (Burn-in Workload) - This test helps you identify issues with web services when a heavy load is sustained for an extended period of time. The test exits only when the specified exit criteria are met.
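To make the workload shapes concrete, here is a minimal ramp-up sketch in Python. It is not a QEngine feature, just an illustration under assumed numbers (50 virtual users, one new user every 10 seconds, a 10-minute run), with run_business_case() standing in for whatever operations a virtual user performs.

# Sketch only: hypothetical scenario function and assumed workload numbers.
import threading
import time

MAX_USERS = 50          # maximum number of virtual users
RAMP_INTERVAL = 10      # seconds between adding users
TEST_DURATION = 600     # total test duration in seconds

stop_event = threading.Event()

def run_business_case(user_id: int) -> None:
    """Placeholder for one virtual user's scenario; loops until the test ends."""
    while not stop_event.is_set():
        # ... issue the Web service requests for this scenario ...
        time.sleep(1)   # think time between operations

threads = []
start = time.time()
for user_id in range(MAX_USERS):
    t = threading.Thread(target=run_business_case, args=(user_id,), daemon=True)
    t.start()
    threads.append(t)
    time.sleep(RAMP_INTERVAL)            # ramp up: one new virtual user per interval
    if time.time() - start >= TEST_DURATION:
        break

time.sleep(max(0, TEST_DURATION - (time.time() - start)))
stop_event.set()
for t in threads:
    t.join()

A steady-state (load) run is the degenerate case with all users started up front, and a burn-in run simply extends the duration or replaces it with explicit exit criteria.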

 

Server and Database Monitors

When users accessing your web service application report a problem, you need to identify the source of the problem, which could lie in the network, the database, or the web server. To monitor all the key elements that drive your entire web service application infrastructure, you need specific monitors to collect data from your web servers and databases. QEngine monitors critical web server parameters and database parameters for MySQL and Oracle databases. This provides better insight into the performance of the web servers and databases that form the core components of your web service application.

Configure server monitors to track resource utilization, such as CPU and memory usage, of your web servers.
Define monitors on Windows or Linux machines to collect data from web servers or databases running on local or remote machines. QEngine uses WMI to monitor server resources on remote Windows machines and Telnet/CLI to monitor server resources on Linux machines.
Configure MySQL or Oracle monitors to collect the database parameters specific to a database. Parameters collected for MySQL include:

• Thread Details - Threads connected, created, running, cached, etc.
• Connection Details - Max_used_connections, Aborted clients, Aborted connections, etc.
• Temporary Table Details - Created_tmp_disk_tables and Created_tmp_tables.
• Throughput Details - Bytes_received and Bytes_sent.
• Query Details - Total number of reads, total number of writes, Slow_queries, etc.
• Table Related Statistics - Table_locks_waited, Open_tables, etc.
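For a sense of where such database figures come from, the sketch below (Python with the mysql-connector-python package and placeholder credentials; QEngine collects these values itself) reads the server status variables that correspond to the parameters listed above.

# Sketch only: placeholder host/credentials; variable names follow MySQL's
# SHOW GLOBAL STATUS output.
import mysql.connector

WATCHED = ("Threads_connected", "Threads_created", "Threads_running",
           "Threads_cached", "Max_used_connections", "Aborted_clients",
           "Aborted_connects", "Created_tmp_disk_tables", "Created_tmp_tables",
           "Bytes_received", "Bytes_sent", "Slow_queries",
           "Table_locks_waited", "Open_tables")

conn = mysql.connector.connect(host="dbhost", user="monitor", password="secret")  # placeholders
cur = conn.cursor()
cur.execute("SHOW GLOBAL STATUS")        # one row per server status variable
status = {name: value for name, value in cur.fetchall()}
for name in WATCHED:
    print(f"{name}: {status.get(name)}")
conn.close()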

 

Storage & Interoperability Testing

When you're introducing a new hardware or software product or system into an existing environment, it's important that each piece of the system communicates and works together properly. Interoperability testing early in the design process is essential to keep end users satisfied, minimize technical support costs, and remain competitive.

Our interoperability experts will thoroughly test, optimize, and document your storage device for use with independent software vendor (ISV) packages on a wide variety of hardware and Operating System (OS) platforms. We run all necessary tests, provide detailed test reports and problem-tracking data, and obtain any necessary certifications.

VAssure's in-depth knowledge and experience enable us to optimize your software product or device for maximum performance with various ISV packages, thus ensuring that your product will operate properly with leading ISV software.

The Percept team tests and certifies your products quickly and effectively so you can launch them on schedule and with confidence, knowing that they will perform as expected for your customers.

Interface Testing

Percept also offers in-depth knowledge of SCSI, iSCSI, and Fibre Channel interface testing and test script development.

Our developers have years of experience working with various storage technologies, such as 8mm, AIT, DLT, LTO, Hard Disk, Removable Disk, and automation devices that support these formats.

Black box testing : not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box testing : based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.
Unit testing : the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses. A minimal unit-test sketch appears after this list.
Incremental integration testing : continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.
Integration testing : testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
Functional testing : black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing).
System testing : black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.
End-to-end testing : similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.
Sanity testing or smoke testing : typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.
Regression testing : re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.
Acceptance testing : final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.
Load testing : testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
Stress testing : term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.
Performance testing : term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.
Usability testing : testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.
Install/uninstall testing : testing of full, partial, or upgrade install/uninstall processes.
Recovery testing : testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Fail-over testing : typically used interchangeably with 'recovery testing'.
Security testing : testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.
Compatibility testing : testing how well software performs in a particular hardware/software/operating system/network/etc. environment.
Exploratory testing : often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing : similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.
Context-driven testing : testing driven by an understanding of the environment, culture, and intended use of software. For example, the testing approach for life-critical medical equipment software would be completely different than that for a low-cost computer game.
User acceptance testing : determining if software is satisfactory to an end-user or customer.
Comparison testing : comparing software weaknesses and strengths to competing products.
Alpha testing : testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.
Beta testing : testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.
Mutation testing : a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
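As a companion to the glossary, here is a minimal unit-test sketch in Python's unittest, with a hypothetical discount() function standing in for the module under test; re-running the same cases after every change is also the simplest form of regression testing.

# Sketch only: discount() is a hypothetical function under test.
import unittest

def discount(price: float, customer_type: str) -> float:
    """Hypothetical function under test: preferred customers get 10% off."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if customer_type == "preferred" else price

class DiscountTest(unittest.TestCase):
    def test_preferred_customer_gets_ten_percent_off(self):
        self.assertEqual(discount(100.0, "preferred"), 90.0)

    def test_regular_customer_pays_full_price(self):
        self.assertEqual(discount(100.0, "regular"), 100.0)

    def test_negative_price_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0, "regular")

if __name__ == "__main__":
    unittest.main()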


QA Services Overview : IV & V Services:

VAssure has proven Independent Validation & Verification experience that ensures faster delivery of quality software, with less risk, at lower costs. VAssure works with its clients' development teams from the initial stages of the software development process to build quality into the product. Our Testing/QA consultants have extensive experience in testing methodologies, full lifecycle testing, test automation, and training.

VAssure possesses expertise in varied vertical domains including Insurance, Financial Services, Manufacturing, Energy & Utilities, Retail, Telecom, Hospitality, Logistics & Transportation, Healthcare, and Media. This ensures reduced time for application knowledge transition.

We have expertise in both manual and automated testing. Manual testing services comprise test planning, preparation and execution of test cases, and defect reporting, ensuring test coverage of product requirements. In automated testing, we offer a broad spectrum of services, from defining the automation strategy to tool selection and preparing and executing test scripts. Our well-defined process utilizes the industry's leading testing tools from Mercury Interactive, Segue, Compuware, Empirix, and Rational, Open Source tools (e.g. JUnit, CppUnit), and client-provided and homegrown tools. We have executed QA projects across the OSS/BSS domain including Handsets/Devices, Customer Care (CRM/Contact Center), Billing, Web Portals, Order Management, EAI, Distribution, Retail, and Content.

We have been delivering testing services for global clients, consistently meeting their stringent quality, schedule, and cost expectations. We have successfully tested operating system utilities, J2EE compliant application servers, and middleware products. We have enhanced the security of web applications by scrutinizing over 7 million lines of code. We have built client-specific domain and QA process knowledge through long-term relationships, allowing us to provide high-end services such as QA strategy, user acceptance testing, and test management support to a number of our clients.

Our clients, ISVs in particular, have benefited from VAssure's unique e-Testing service, resulting in a substantial reduction of product risks. In the automation space, not only have we helped clients with application-specific tool selection, but we have also converted thousands of test cases into test scripts in the legacy, client-server, and web space. We have testing exposure to a wide variety of technologies and enterprise packages in areas including CRM, Billing, EAI, Network, Infrastructure, and more.

We have defined and implemented client- and domain-specific QA methodologies and process tool kits by adopting a flexible, component-based approach with emphasis on repeatability and reusability to accelerate project cycles and improve quality. We also have dedicated test labs for some of our clients.

QA Service Offerings

VAssure provides Test Process Outsourcing, Test Automation, Test Process Consultancy, and Training services. These business- and technology-based testing services, delivered within the V&V framework, are performed under strict guidelines to address your data security concerns.

Test Process Outsourcing : VAssure offers Test Process Outsourcing services by supporting the client's current testing needs, specializing in their future technology stack, and providing uninterrupted SLA-based service. The service covers Verification & Validation activities across the software development life cycle. Offshore leverage ensures a substantial reduction in the cost of quality without compromising on service levels.

Test Automation : Automating the testing processes using testing tools helps to reduce the resources involved in manual testing. This includes tool selection, building application-specific test framework, test environment set up, test case review for automation readiness, creating test specifications for test scripts, and deployment at client site. We also develop utilities for generating test data, maintaining test base and coordinating with development team for build and release management.

Test Process Consultancy : This service is designed to meet your QA process improvement needs, such as process optimization for reducing testing cycle times, and defect reduction. Our quality consultants also carry out process diagnostics on your quality process. This involves a study of your quality process and structured interviews of stakeholders and practitioners to identify strengths and improvement areas. We analyze definition, awareness, and implementation gaps to recommend a short-term, mid-term, and long-term road map to meet your quality goals. We utilize our knowledge and experience of industry-standard process improvement models such as CMMI, TPI, and Six Sigma to support your process improvement programs.

Test Base Maintenance : This involves maintaining the application test base and making incremental changes when the application undergoes change. The changes are made systematically to avoid an exponential increase in test cases and the resultant delay in executing the test suite. A well-established process ensures that knowledge transition happens smoothly and rapidly.

Test Strategy : A product's test strategy ties the product's release and sign-off criteria to its business objectives.

The overall testing strategy is defined in collaboration with the customer. In order to take key dependencies into account, test planning, test case design, test automation, and test execution are aligned with the development schedule. Meaningful test scheduling requires a clear understanding of ETAs and sequencing for:

Completion of low- and high-level specifications;
Code-complete (coding for everything but bug-fixes stops);
Completion of component unit-testing (when QA can begin interaction testing);
UI-freeze (after which QA can be confident that (a) UI-level automation will not break repeatedly due to fluctuating UI and screen layouts, and (b) API-level automation will not be undermined by changes in interfaces/APIs).

Identifying key trade-offs is essential, for it is impossible to test all scenarios, cover the full configuration matrix, and automate all test cases while remaining within the practical limits of time and budget. Trade-offs on the testing side need to be made in concert with trade-offs on the development side; otherwise, development and test will have conflicting priorities. This is the stage in which project focus is established.

We identify the features, components, sub-components, and items to be tested and the range of tests to be carried out. In addition to the automation already available, we estimate what further automation is required and feasible. We catalog the tools used by the customer, potentially useful off-the-shelf tools, and internal Aztecsoft tools that may be used for the project.

We identify which features/components will be tested manually, which will be tested via automation, and what kind of automation tool is required (script-based, GUI-based, proprietary, off-the-shelf, etc.).

To summarize, the key activities are as follows:

Define project scope & commitments
Define terms of reference
Set customer expectations
Tie together the business objectives of the STQE project with the release/sign-off criteria and associated testing activity
Integrate the STQE processes with development lifecycle
Partition the problem into manageable test plans
Identify key dependencies & trade-offs
Scope resource requirements

Test Planning : The next step is Test Planning, which defines the approach for testing. The first task is to establish a clear understanding of the project and its deliverables, and to seek the customer's confirmation of that understanding.

Exhaustive analysis ensures that there is no mismatch between our understanding and the customer's requirements. For example, we determine whether we should construct a clean test environment for each test run or whether we can use system imaging to shorten test-bed setup.

All relevant product, interface, component, and other external dependencies are identified and the timeframe for delivering the results is computed. The resulting plan is presented in industry-standard format to the customer; further steps are not taken without customer acceptance.

Here are the key steps for Test Planning:

Define release criteria (with the release manager)
Outline and prioritize the testing effort
Chart test automation requirements
Identify resource requirements at various stages of testing
Set up calendar-based activity plan
State reporting mechanism & establish communication model
Configure the team, including the number, type, and seniority of resources and the length of time required, mapping each resource onto the activity plan.

 
