Knowledge Share

Knowledge is NOT Power; IMPLEMENTATION of knowledge is Power!!!
 
PostSubject: post by Madhuri   Sat Dec 29, 2007 7:25 pm

Weaknesses
- Anything not covered in the regression series goes untested.
- Repeating the same tests means not looking for the bugs that can be found by other tests.
- Pesticide paradox: tests that have already found their bugs tend not to find new ones.
- Low yield from automated regression tests.
- Maintenance of this standard list can be costly and can distract from the search for defects.

Benefits
- The tests already exist (no need for new design or new implementation, but there will be a maintenance cost).
- Many regulators and process inspectors like them.
- Because we are investing in reuse, we can afford to take the time to craft each test carefully, making it more likely to be powerful in future use.
- This is the dominant paradigm for automated testing, so it is relatively easy to justify and there are lots of commercial tools.
- Implementation of automation tools is often relatively quick and easy (though maintenance can be a nightmare).
Goal

- Good regression testing gives clients confidence that they can change the product (or product environment).

Regression software changes

• System design changes
- Low-level design document changes
- Changes to algorithm logic
- Changes to component structure

• System implementation changes
- Component changes: internal data types & names; internal structures, such as class relationships; control flow & data flow; internal functions
- Component interface changes: call signatures, message interactions, protocol messages & formats
- Technology or language changes


REGRESSION IMPACTS

Types of system changes and the product impacts they cause:

• Requirement changes
- Affect design, coding & testing documents

• Design changes
- Affect coding & tests
- Affect associated components
- Affect system architecture
- Affect related component interactions

• Implementation changes
- Affect test cases, test data, test scripts
- Affect test specifications
- Code change impacts

• Test changes
- Affect other tests
- Affect test documentation

• Document changes
- Affect other documents


Why do we do regression testing?

When new functionality is added to an application, the application has to be tested to check whether the added functionality has affected the existing functionality. Instead of manually retesting all of the existing functionality, the baseline scripts created for it can simply be rerun, as in the sketch below.
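
A minimal sketch of such a baseline suite, written here with pytest; the `discount` module and its `apply_discount` function are hypothetical stand-ins for the existing functionality under test:

```python
# test_discount_regression.py -- baseline regression tests for a
# hypothetical discount module; rerun unchanged after every change.
import pytest

from discount import apply_discount  # hypothetical module under test


def test_ten_percent_discount():
    # Existing behaviour captured as a baseline: 10% off 100.00.
    assert apply_discount(100.00, rate=0.10) == pytest.approx(90.00)


def test_zero_discount_leaves_price_unchanged():
    assert apply_discount(59.99, rate=0.0) == pytest.approx(59.99)


def test_negative_rate_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.00, rate=-0.10)
```

Running `pytest` after each new feature is merged reruns the whole baseline; any failure flags a regression in existing functionality.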



Regression types of changes

• Requirement analysis
- Requirement specification changes
- Add new functional features
- Change current functional features
- Delete existing functional features

• System design
- System architecture changes
- Change component interactions
- Add new components/subsystems
- Update existing components
- Delete existing components

• High-level design document changes
- Change state-based behaviors
- Change component interfaces
- Change database design
- Change GUI design
- Change function design







How do we do regression testing?

Various automation tools can be used to perform regression testing, such as WinRunner, Rational Robot, and SilkTest.
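
Those are commercial capture-and-playback tools; as a rough modern analogue, a recorded user journey can be replayed as a script. A minimal sketch with Selenium WebDriver in Python, where the URL, element IDs, and expected title are all hypothetical:

```python
# replay_login.py -- a recorded login journey replayed as a script,
# analogous to what capture/playback tools generate.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                    # hypothetical URL
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    # Checkpoint: the same expected result as in the recorded run.
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```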






PERFORMANCE TESTING

Testing to see whether the system functions within an acceptable time frame under simultaneous user load.
Ex: response time, bottlenecks (points where performance declines), benchmarks (top limits).
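
To make "acceptable time frame" concrete, a response time can be measured and compared against a benchmark. A minimal sketch using only the Python standard library; the endpoint URL and the 2-second limit are assumptions:

```python
# Check a single response time against a benchmark (top limit).
import time
import urllib.request

BENCHMARK_SECONDS = 2.0                     # assumed acceptable top limit

start = time.perf_counter()
urllib.request.urlopen("http://localhost:8000/health").read()  # hypothetical endpoint
elapsed = time.perf_counter() - start

assert elapsed <= BENCHMARK_SECONDS, f"response took {elapsed:.2f}s"
print(f"response time: {elapsed:.3f}s (benchmark {BENCHMARK_SECONDS}s)")
```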

Performance testing focuses

• System process speed (max./min./average)
- System processes, tasks, transactions, responses
- Data retrieval, data loading

• System throughput (max./min./average)
- Loads, messages, tasks, processes

• System latency (max./min./average); speed, throughput, and latency measurement is sketched in the code after this list
- Message/event, task/process

• System utilization (max./min./average)
- Network, server/client machines

• System availability (component level/system level)
- Component/system, services/functions
- System network, computer hardware/software

• System reliability (component level/system level)
- Component/system, services/functions
- System network, computer hardware/software

• System scalability (component level/system level)
- Load/speed/throughput boundaries
- Improvements in process speed, throughput

• System success/failure rates for
- Communications, transactions, connections
- Call processing, recovery

• Domain-specific/application-specific measures
- Agent performance
- Real-time report generation speed
- Workflow performance
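
Several of these focuses (process speed, throughput, latency) reduce to timing many requests under concurrent load and summarizing max/min/average. A minimal sketch using only the standard library; the endpoint, user count, and request count are assumptions:

```python
# Drive N simulated users against one endpoint and report
# max/min/average response time plus rough throughput.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/health"   # hypothetical endpoint
USERS = 20                             # assumed concurrent user count
REQUESTS_PER_USER = 10                 # assumed requests per user

def one_user() -> list[float]:
    samples = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        urllib.request.urlopen(URL).read()
        samples.append(time.perf_counter() - start)
    return samples

wall_start = time.perf_counter()
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(lambda _: one_user(), range(USERS)))
wall = time.perf_counter() - wall_start

latencies = [s for user in results for s in user]
print(f"min {min(latencies):.3f}s  max {max(latencies):.3f}s  "
      f"avg {sum(latencies)/len(latencies):.3f}s")
print(f"throughput ~ {len(latencies)/wall:.1f} requests/s")
```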

Performance testing process

• Understand system and identify performance requirements
• Identify performance test objectives and focuses
• Define performance test strategy
• Define /select performance evaluation models
• Define/select performance test criteria
• Define and identify performance test metrics
• Identify the needs of performance test tools
• Define performance test environment
• Write performance test plan
• Develop performance test tools and support environment
• Set up the target system and performance test beds
• Design performance test cases and test suites
• Execute performance tests and collect data
• Analyze performance data and report results (a small evaluation sketch follows this list)
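
For the last two steps, the collected samples are reduced to metrics and checked against the defined test criteria. A minimal sketch, where the average and 95th-percentile limits are assumed criteria and the sample data is illustrative:

```python
# Reduce collected response-time samples to metrics and apply
# pass/fail criteria (the thresholds here are assumptions).
import statistics

def evaluate(samples: list[float], avg_limit: float, p95_limit: float) -> bool:
    avg = statistics.mean(samples)
    p95 = statistics.quantiles(samples, n=20)[18]   # 95th percentile
    print(f"avg {avg:.3f}s (limit {avg_limit}s), "
          f"p95 {p95:.3f}s (limit {p95_limit}s)")
    return avg <= avg_limit and p95 <= p95_limit

samples = [0.12, 0.15, 0.11, 0.30, 0.14, 0.13, 0.45, 0.12, 0.16, 0.14,
           0.13, 0.12, 0.18, 0.11, 0.22, 0.13, 0.12, 0.15, 0.14, 0.12]
assert evaluate(samples, avg_limit=0.25, p95_limit=0.50)
```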


Performance testing tools

They are classified into:

• Simulators
- Message-based or table-based simulators
- State-based simulators

• Model-based data generators (see the sketch after this list)
- Pattern-based data generators
- Random data generators

• Performance data collectors & tracking tools
- Performance tracking tools

• Performance evaluation & analysis tools
- Performance metric computation
- Model-based performance evaluation tools

• Performance monitors
- Sniffers, Microsoft Performance Monitor
- External third-party tools

• Performance report generators
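
As an illustration of the data generators above: a pattern-based generator emits records matching a fixed template, while a random generator draws from a distribution. A minimal sketch; the record field names and call types are assumptions:

```python
# Pattern-based vs. random test data generation for load runs.
import random
import string

def pattern_based(n: int):
    # Fixed template: predictable IDs, cycling call types.
    call_types = ["LOCAL", "LONG_DISTANCE", "INTERNATIONAL"]
    for i in range(n):
        yield {"call_id": f"CALL-{i:06d}", "type": call_types[i % 3]}

def random_based(n: int, seed: int = 42):
    # Random draws; seeded so a failing run can be reproduced.
    rng = random.Random(seed)
    for _ in range(n):
        yield {
            "call_id": "".join(rng.choices(string.hexdigits, k=8)),
            "duration_s": rng.expovariate(1 / 180),  # mean 3-minute calls
        }

for record in pattern_based(3):
    print(record)
for record in random_based(3):
    print(record)
```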


Objectives

- Refers to the test activities that check system performance
- To confirm and validate the specified system performance requirements
- To check the current product capacity, in order to answer questions from customers and marketing people
- To identify performance issues and performance degradation in a given system


PERFORMANCE EVALUATION
Using a well-defined approach to study, analyze, and measure the performance of a given system

Tasks & scope
Collect system performance data
Define system performance metrics
Model system performance
Measure, analyze, estimate system performance
Present and report system performance

Objective
Understand product capacity
Discover system performance issues
Measure and evaluate system performance
Estimate and predict system performance

Needs
Well-defined performance metrics
Well-defined performance evaluation models
Performance evaluation tools and a supporting environment

Performance Evaluation Metrics


• Performance metrics
- Call request process time
- Event interaction latency
• Throughput metrics (component & system level)
- Call processing throughput
- Call load throughput rate
• Availability metrics (component & system level); computed in the sketch after this list
• Reliability metrics (component & system level)
• Scalability metrics
• Utilization metrics
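
For instance, component availability is commonly computed from the mean time between failures (MTBF) and the mean time to repair (MTTR) as availability = MTBF / (MTBF + MTTR), and call throughput as calls processed per unit time. A minimal sketch; the figures are illustrative:

```python
# Standard availability metric from MTBF and MTTR, plus a simple
# call-processing throughput metric (figures are illustrative).
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    return mtbf_hours / (mtbf_hours + mttr_hours)

def throughput(calls_processed: int, window_seconds: float) -> float:
    return calls_processed / window_seconds

print(f"availability: {availability(720.0, 0.5):.5f}")       # ~0.99931
print(f"throughput:   {throughput(12_000, 60.0):.0f} calls/s")
```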





Commonly used Performance Metrics
(for components/systems)

- Functional or process speed metrics
- User response time metric
- Communication metric
- Transaction speed metric
- Latency metric

Performance metrics for portal V.5. products

- Call process time metric
- Call process speed metric
- Event latency metric


PERFORMANCE EVALUATION - MODELS

A well-defined formal model that depicts different aspects of a system's performance.


Why do we need performance evaluation models?

- To present the system performance properties
- To provide a guideline for engineers to choose a strategy for performance evaluation
- To set up a foundation for defining performance metrics (an example model is sketched below)
- To identify the needs of the target performance environment
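
A classic example of such a model (not named in the source) is the M/M/1 queueing model, which predicts the average time in system as W = 1/(mu - lambda) from the arrival rate lambda and service rate mu. A minimal sketch:

```python
# M/M/1 queueing model: predicts steady-state performance from the
# arrival rate (lam) and service rate (mu); requires lam < mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("system is unstable: arrival rate >= service rate")
    rho = lam / mu                   # utilization
    w = 1.0 / (mu - lam)             # average time in system
    n = lam * w                      # average number in system (Little's law)
    return {"utilization": rho, "avg_response_s": w, "avg_in_system": n}

# 80 requests/s arriving, server handles 100/s:
print(mm1_metrics(80.0, 100.0))
# -> utilization 0.8, avg response 0.05 s, 4 requests in system on average
```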
Performance Evaluation Approaches


• Performance testing (during production)
- Measure and analyze the system performance based on performance test data and results

• Performance simulation (pre-production)
- Study and estimate system performance using a simulation approach

• Performance measurement at the customer site (post-production)
- Measure and evaluate system performance during system operations

Difference between a regression automation tool & a performance automation tool

Regression test tools capture tests and play them back at a later time; this capture-and-playback feature is fundamental to regression testing.

Performance test tools determine the load a server can handle. They must be able to simulate many users from one machine, schedule and synchronize the different simulated users, and measure the network load under different numbers of simulated users.
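
The "schedule and synchronize different users" feature can be approximated with a barrier, so that all simulated users fire at the same instant. A minimal sketch; the endpoint URL and user count are assumptions:

```python
# Synchronize simulated users so they all hit the server at once,
# approximating the user-synchronization feature of load tools.
import threading
import time
import urllib.request

URL = "http://localhost:8000/checkout"   # hypothetical endpoint
USERS = 10                               # assumed user count
barrier = threading.Barrier(USERS)       # releases when all users are ready

def user(user_id: int) -> None:
    barrier.wait()                       # every thread blocks here...
    start = time.perf_counter()
    urllib.request.urlopen(URL).read()   # ...then all send simultaneously
    print(f"user {user_id}: {time.perf_counter() - start:.3f}s")

threads = [threading.Thread(target=user, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```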

FUNCTIONAL TESTING

Testing the functional requirements of an application, including connectivity among the components of the application.
Ex: checking that the properties and methods of an object work correctly.
Test engineers should perform this type of testing.
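
For example, checking an object's properties and methods against their specification can be automated with unittest. A minimal sketch; the ShoppingCart class is a hypothetical object under test, defined inline so the example is self-contained:

```python
# Functional test: verify the properties and methods of an object
# against its specification (ShoppingCart here is hypothetical).
import unittest

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    @property
    def total(self) -> float:
        return sum(price for _, price in self.items)

class TestShoppingCart(unittest.TestCase):
    def test_add_updates_items_property(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        self.assertEqual(len(cart.items), 1)

    def test_total_sums_prices(self):
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 1.25)
        self.assertAlmostEqual(cart.total, 13.75)

if __name__ == "__main__":
    unittest.main()
```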