 Testing Definitions

Contents
• Black Box Testing
• Boundary Value Analysis
• Cause/Effect Analysis
• Condition Coverage
• Decision Coverage
• Decision Tables
• Distribution/Deployment Testing
• Equivalence Partitioning
• Formal Acceptance Testing
• Functional Testing
• Installation Testing
• Integration Testing
• LCSAJ (Linear code sequence and jump)
• Link Testing
• Module Testing
• Multiple Condition Coverage
• Multi-user Testing
• Non-Functional Testing
• Operational Acceptance Testing
• Performance Testing
• Portability Testing
• Proprietary Software
• Prototype Testing
• Recovery Testing
• Regression Testing
• Review
• Shrink Wrap Software
• Statement Coverage
• Subsystem Testing
• Support Software
• System Testing
• Test Tools
• Third Party Software
• Unit Testing
• Usability Testing
• User Acceptance Testing (UAT)
• Volume Testing
• Walkthrough
• White Box Techniques
Black Box Testing A strategy concerned not with the internal software structure, but with finding circumstances in which the software does not behave according to its specification or reasonable expectation.
It is applicable to both low and high order testing. It supplements the White Box approach at unit level, and replaces it at higher levels.
It is capable of detecting incorrect or missing code (i.e. potential scenarios overlooked by developers).
There are three standard techniques, whose applicability depends upon the nature of the software under test:
• Equivalence Partitioning
• Boundary Value Analysis
• Cause/Effect Analysis
Boundary Value Analysis This complements equivalence partitioning by looking at the boundaries of the input equivalence classes. Test cases are devised that exercise the module with data chosen at these boundaries, and also with data chosen to exercise the module on the boundaries of its output data. For example, if an input is specified from 0 to 255, equivalence partitioning leads to test inputs of less than 0, between 0 and 255, and greater than 255. Boundary value analysis would suggest adding the following values:
-1, 0, 1 (lower boundary)
254, 255, 256 (upper boundary)
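As a minimal sketch, the example below automates those boundary cases; the in_range function is a hypothetical module under test, not something taken from the original specification.

```python
# A minimal sketch of boundary value analysis for the 0..255 example above.
# in_range() is a hypothetical module under test, not part of the original post.

def in_range(value):
    """Hypothetical module under test: accepts integers from 0 to 255."""
    return 0 <= value <= 255

# Test data chosen at, and immediately either side of, each boundary.
boundary_cases = {
    -1: False, 0: True, 1: True,        # lower boundary
    254: True, 255: True, 256: False,   # upper boundary
}

for value, expected in boundary_cases.items():
    assert in_range(value) == expected, f"boundary case failed: {value}"
print("all boundary cases pass")
```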
Cause/Effect Analysis This requires writing sufficient test cases to explore the entire set of output conditions caused by a combination of input conditions, or equivalence classes of input conditions.
This approach is necessary where required behavior depends upon a number of related factors or a sequence of events.
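The sketch below illustrates the idea under an assumed example: two causes (membership and order size) combine to produce one effect (a discount). The function and its rule are hypothetical, not taken from the original post.

```python
# A minimal sketch of cause/effect analysis. The causes (member, large_order)
# and the effect (discount applied) are illustrative assumptions.

def discount_applied(member, large_order):
    """Hypothetical module under test: a discount applies only to members with large orders."""
    return member and large_order

# One test case per combination of causes, each paired with the expected effect.
cause_effect_cases = [
    # (member, large_order) -> expected effect
    ((True, True), True),
    ((True, False), False),
    ((False, True), False),
    ((False, False), False),
]

for (member, large_order), expected_effect in cause_effect_cases:
    assert discount_applied(member, large_order) == expected_effect
print("all cause/effect combinations exercised")
```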
Condition Coverage Where decisions are made on the basis of complex conditional statements (e.g. if X < Y/4 and (X > 0 or Z < X) then ...), the decision coverage criterion can still miss faults in the individual conditions. The condition coverage criterion requires test cases that exercise each outcome of every conditional component in a complex decision.
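A minimal sketch of the criterion applied to the decision above is given below; the concrete values of X, Y and Z are illustrative choices, not prescribed ones.

```python
# A minimal sketch of condition coverage for the decision quoted above:
#     if X < Y/4 and (X > 0 or Z < X) then ...
# The test values are illustrative; the aim is that each individual condition
# (X < Y/4, X > 0, Z < X) takes both the True and the False outcome across the set.

def component_outcomes(x, y, z):
    """Return the outcome of each individual condition in the decision."""
    return (x < y / 4, x > 0, z < x)

# Chosen so that every component evaluates both ways at least once.
test_points = [
    (1, 8, 0),    # X < Y/4: True,  X > 0: True,  Z < X: True
    (-1, 8, 2),   # X < Y/4: True,  X > 0: False, Z < X: False
    (5, 8, 9),    # X < Y/4: False, X > 0: True,  Z < X: False
]

per_condition = zip(*(component_outcomes(x, y, z) for x, y, z in test_points))
for index, outcomes in enumerate(per_condition):
    assert True in outcomes and False in outcomes, f"condition {index} not fully covered"
print("each individual condition exercised both ways")
```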
Decision Coverage This technique requires enough test cases to be devised so that each decision has a true and a false outcome at least once. The statement coverage criterion will ensure that every statement is exercised and, in particular, every decision. But it will not necessarily exercise every decision outcome. For example, one test case (X = 0) is sufficient to cover the statement
if X = 0 then S := 0
but it does not prove that the right action is taken if X is not zero.
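The sketch below restates that example in Python, assuming a small wrapper function around the statement; it supplies one test for each decision outcome.

```python
# A minimal sketch of decision coverage for the statement quoted above,
# rewritten in Python. set_s() is a hypothetical wrapper around that statement.

def set_s(x, s):
    """Hypothetical module under test: if X = 0 then S := 0."""
    if x == 0:
        s = 0
    return s

# Statement coverage needs only X = 0. Decision coverage also forces the False
# outcome (X != 0), showing the right action is taken when X is not zero.
assert set_s(0, 5) == 0   # decision outcome: True
assert set_s(3, 5) == 5   # decision outcome: False
print("both decision outcomes exercised")
```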
Decision Tables A decision table can specify the functional requirements of some programs. If a program is specified in this way, checks of specification consistency and completeness can be carried out.
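As an illustration, the sketch below represents a small, assumed decision table (a login rule with two conditions) as a dictionary and checks it for completeness.

```python
# A minimal sketch of a decision table for a hypothetical login rule: two
# conditions (valid password, account active) map to one action. The rule is
# an illustrative assumption, not taken from the original post.
from itertools import product

decision_table = {
    # (valid_password, account_active): action
    (True, True): "grant access",
    (True, False): "reject: account locked",
    (False, True): "reject: bad password",
    (False, False): "reject: bad password",
}

# Completeness check: every combination of condition outcomes has exactly one entry.
assert set(decision_table) == set(product([True, False], repeat=2))
print("decision table is complete")
```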
Distribution/Deployment Testing This involves testing of the processes used to install the system in the live environment. The processes may be manual, automated or a combination of the two.
As well as these testing activities, the other activity which should take place at this stage is planning for a review of system effectiveness after its installation. This review should consider items such as:
• To what extent is the system being actively used by the intended user population?
• Are users experiencing any problems with the system?
• Have all users received adequate training in the use of the system?
• Are on-line help and support arrangements satisfactory?
Equivalence Partitioning This technique relies on looking at the set of valid inputs specified for a module and dividing it up into classes of data that should, according to the specification, be treated identically. One set of test data is then devised to represent each equivalence class. The premise is that any representative will be as good as any other in finding faults in the handling of that class.
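A minimal sketch is given below, assuming a hypothetical module whose specification accepts ages from 18 to 65; one representative value is chosen for each class.

```python
# A minimal sketch of equivalence partitioning for a hypothetical module
# accept_age() whose specification accepts integers from 18 to 65 inclusive.
# The classes and their representatives are illustrative assumptions.

def accept_age(age):
    """Hypothetical module under test: valid ages are 18..65 inclusive."""
    return 18 <= age <= 65

# One representative per equivalence class; the premise is that any member of
# a class is as good as any other at exposing faults in that class.
equivalence_classes = [
    ("below valid range", 10, False),   # any age < 18
    ("within valid range", 40, True),   # any age in 18..65
    ("above valid range", 70, False),   # any age > 65
]

for name, representative, expected in equivalence_classes:
    assert accept_age(representative) == expected, f"class failed: {name}"
print("one representative per equivalence class exercised")
```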
Formal Acceptance Testing The purpose of Formal Acceptance Testing is to demonstrate that the developed system meets the clients’ requirements, as defined in the agreed controlling document, i.e. the Requirement Specification or Functional Specification. Acceptance tests relate to the functionality of the system and must be selected to demonstrate the agreed acceptance criteria. They are usually a subset of the system tests.
Functional Testing Functional testing means testing that the item does what it is supposed to do, and does not do what it is not supposed to. In other words, it is tested against its functional requirements, both positive and negative, as stated in the item design specification. It is sometimes known as black box testing, since it requires no knowledge of the inner workings of the item.
Note: An 'item' may be anything from a Unit (module) to a complete system.
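The sketch below shows one positive and one negative functional test against an assumed specification (a square-root routine that rejects negative input); the item and its specification are illustrative.

```python
# A minimal sketch of a positive and a negative functional test against a
# hypothetical specification: safe_sqrt() returns the square root of a
# non-negative number and rejects negative input. Names are illustrative.
import math

def safe_sqrt(value):
    """Hypothetical item under test."""
    if value < 0:
        raise ValueError("negative input not allowed")
    return math.sqrt(value)

# Positive test: the item does what it is supposed to do.
assert safe_sqrt(9) == 3.0

# Negative test: the item does not do what it is not supposed to do.
try:
    safe_sqrt(-1)
except ValueError:
    pass
else:
    raise AssertionError("negative input should have been rejected")
print("positive and negative functional tests pass")
```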
Installation Testing To check the quality of the database and installation scripts and to ensure that the installation will work.
To check the timings needed for installation, for instance for the migration of data.
Integration Testing Checks that the system interfaces correctly with other internal systems, e.g. the interfaces between a stock system and a purchase ledger.
Ensures that the system interfaces correctly, at both control and data levels, with external systems.
Ensures that the system functions correctly, at both control and data levels, within the overall business/technical environment in which it will operate.
LCSAJ (Linear code sequence and jump) An LCSAJ is usually identified by source code line numbers: the start line of a linear sequence of executable statements, the end line of that linear sequence, and the target line to which control is passed at the end of the linear sequence.
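As an illustration, the sketch below records some possible LCSAJs for a small, assumed code fragment using the (start line, end line, target line) notation; both the fragment and its line numbers are hypothetical.

```python
# A minimal sketch of how LCSAJs might be recorded for a small fragment.
# The fragment below and its line numbers are illustrative only:
#
#   1  total = 0
#   2  i = 1
#   3  while i <= n:
#   4      total += i
#   5      i += 1        # control jumps back to the test on line 3
#   6  print(total)
#
# Each LCSAJ is a (start line, end line, target line) triple: a linear sequence
# of executable statements followed by the line to which control jumps.
lcsajs = [
    (1, 3, 6),   # enter at line 1, fall out of the loop when the test is false
    (1, 5, 3),   # enter at line 1, execute the body, jump back to the test
    (3, 5, 3),   # re-enter at the test, execute the body, jump back again
    (3, 3, 6),   # re-enter at the test and exit the loop
]
for start, end, target in lcsajs:
    print(f"LCSAJ: lines {start}-{end}, jump to line {target}")
```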
Link Testing Also known as Low Level Integration Testing.
Ensures that modules which have been individually Unit Tested can be linked together, that data and control can be passed correctly between them, and that they continue to function effectively.
Link Testing should be performed during each iteration within the timebox, allowing the team to complete areas of functionality before moving on to the next iteration; this is the primary objective. Link Testing is preceded by Unit Testing to ensure accordance with the functional objectives, and Link Testing allows an 'independent' check on the deliverable. A member of the team other than the developer should plan and perform Link Testing.
Module Testing See Unit Testing.
Multiple Condition Coverage This requires sufficient test cases such that all possible combinations of conditional outcomes in each decision are tested at least once. This is the most stringent criterion that can be applied relating to decisions. It improves further on the decision or condition coverage criterion by adding test cases that exercise all possible combinations of all the individual conditions in each decision.
Where a module only has decisions with simple conditions, it is sufficient to use the decision coverage criterion.
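A minimal sketch is given below for an assumed decision built from three simple conditions; all 2^3 combinations of their outcomes are exercised at least once.

```python
# A minimal sketch of multiple condition coverage for a decision built from
# three simple conditions (A, B, C); the decision A and (B or C) mirrors the
# shape of the earlier example and is an illustrative assumption.
from itertools import product

def decision(a, b, c):
    """Hypothetical decision under test."""
    return a and (b or c)

# All 2**n combinations of the n individual condition outcomes, tested at least once.
combinations = list(product([True, False], repeat=3))
assert len(combinations) == 8

for a, b, c in combinations:
    assert decision(a, b, c) == (a and (b or c))
print(f"all {len(combinations)} condition combinations exercised")
```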
Multi-user Testing To ensure that the application continues to operate successfully and meets the performance requirements when operating with high levels of multiple access.
To assess the impact of the system on the overall performance of other systems running on the same hardware.
Includes multi-user concurrent access to the system, to prove locking, system processes and data integrity.
Non-Functional Testing The selection of and extent of non-functional testing must be defined by engineering judgment based on the requirements of the overall system. The Test Manager must consider inclusion of tests in the following areas:
• Response times Does the system fulfill requirements for response times and throughput? The approaches commonly in use include load/stress testing and volume testing.
• Procedural This should include tests of the HCI, the user documentation and any human procedures necessary for system operation.
• Capacity What will happen when the database contains a year's worth of data? Will it still function correctly - and quickly enough?
• Stress What happens when multiple users are all entering data at the same time? Does the system slowly grind to a halt, or even crash?
• Reliability Will the system run for several hours without problems or interruption, or is it prone to unexpected crashes?
• Load For networked applications, does the system perform acceptably under "live" network conditions, i.e. with normal network traffic? This is particularly important for internet applications.
• Access security Is it possible for an unauthorised person to simply walk up and access the system?
• Data security/Recoverability What would happen if the system crashed in the middle of a busy morning, or if the database server failed? Would important data be lost forever?
• Communications Can users gain access to the system via the network, or using a dial-up link?
• Ease of use Is the system easy for non-experts to pick up and use? Not all users will be experienced and recently-trained.
• Speed of use Is the user interface geared to work at the speed of the users, or is it too cumbersome?
• Convenience Does the system fit well with the user's way of working?
Operational Acceptance Testing When the system goes live, it will need to be supported by operations staff. This can mean anything from routine database backups to running extensive batch processes. All of these processes must be tested before hand-over can take place. Operational acceptance testing therefore:
• Ensures that the system meets the acceptance criteria of the maintenance functions.
• Checks that the system performs as expected from an operational perspective.
• Enables operations to test their own procedures and to demonstrate that they are able and ready to use the new software.
Performance Testing To ensure that time critical or high volume parts of the system meet the performance requirements of related Service Level Agreements when operating in an environment similar to production.
Portability Testing To ensure that the results obtained after a change in machine environment are the same as those obtained under the old environment, e.g. transfer from development to production environment.
In client server environments there may be many combinations of hardware and software across client workstations, network connections and servers. Portability Testing would ensure that the application works across all combinations.
Proprietary Software (also known as shrink wrap software) Full testing of bought-in proprietary software is not usually possible. Instead, evidence of its suitability should be obtained from the fact that the software is released to a defined specification and, perhaps, from knowledge of others using the software in a similar environment.
Prototype Testing A DSDM prototype serves two different roles:
• It is a partial build of the system that will be delivered.
• It is a technique for gathering information to clarify functional or non-functional requirements.