The earliest automation support for the RBT process goes back to 1970 at IBM. William Elmendorf of the Poughkeepsie Labs developed a system called TELDAP (Test Library Design Automation Program). It was written in APL and brought the largest mainframes IBM had to their knees. Mr. Elmendorf is on the short list of the fathers of black box testing; he was the initial developer of equivalence class testing with boundary analysis. In 1980 one of our clients, Bank of America, created another mainframe version called CEGAR (Cause Effect Graphing Analysis Routine). Hitachi in Japan, another of our clients, created yet another proprietary version in 1981. By 1987 PC technology made it economically viable to create a desktop version. William Elmendorf left IBM and joined in partnership with Bender & Associates to create a whole new incarnation, initially called SoftTest. By 1991 Mr. Elmendorf decided to retire, and Bender & Associates became the sole owner of SoftTest. In 2002 the product was renamed BenderRBT (the RBT standing for Requirements Based Testing) and the company was renamed Bender RBT Inc. In 2005 a new component called Quick Design was added, which supports three additional test design engines based on Pair-Wise Testing. In subsequent years various improvements were made, refining the graphical front end to aid in drawing the cause-effect graphs, expanding the exports, and refining the test design engine.

Release 8.0 is the most significant release since Release 1.0 of the SoftTest version. The Cause-Effect Graphing test design engine was totally redesigned to take advantage of advances in the path-sensitizing algorithms from the hardware world. Major features such as Neoning were added to allow the tester to manually set the states of nodes on the graph and see the states extrapolated forwards and backwards within the graph. Scalability was addressed to aid in the long-term integration with playback tools, test library managers, and requirements managers. Exports were expanded to include the OMG’s TestIF as well as rich text formats of any of the text-based reports. RBT now supports all languages for user-defined inputs – e.g. the user can enter the Title, Node names, Node descriptions, and Notes in Mandarin.
Bender RBT Inc.'s BenderRBT is a requirements-based, functional test case design system that drives clarification of application requirements and designs the minimum number of test cases for maximum functional coverage. By thoroughly evaluating application requirements for errors and logical inconsistencies, BenderRBT enables project teams to refine and validate the requirements earlier in the development cycle. The earlier in the cycle requirement errors are found and corrected, the less costly and time-consuming they are to fix. BenderRBT uses the requirements as a basis to design the minimum number of test cases needed for full functional coverage. BenderRBT then allows project teams to review both the requirements and the test cases in a variety of formats, including a logic diagram and structured English functional specification, to ensure that the requirements are correct, complete, fully understood and testable. Most testing activities, and the tools that support them, can be divided into the following seven activities:
BenderRBT addresses defining the test completion criteria, designing functional tests to meet those criteria, and verifying the test coverage; it also assists in verifying test results and in maintaining the test library.
Feature/Benefit Table
Choice of Two Test Design Methods
BenderRBT comes with two distinct test case design engines. When you invoke RBT directly, you are given a choice of which one you would like to use.
RBT Test Design Engine Options
Cause-Effect Graphing (C-E Graphing) takes you to the graphing-based test engine. Quick Design (QD) takes you to the Pair-Wise based test engines, which include Orthogonal Pairs and Optimized Pairs. C-E Graphing is intended for business critical, mission critical, and/or safety critical functions. It ensures that you not only got the right answer, but that you got the right answer for the right reason. It addresses the fact that multiple defects can sometimes cancel each other out. C-E Graphing ensures that defects are propagated to an observable point where testers can see the problem. QD is aimed at testing user interfaces (e.g., web pages and screens in client-server applications). It is also applicable in designing configuration tests and quick shake-downs of even critical functions. Both C-E Graphing and QD address reducing the nearly infinite number of potential tests down to small, highly optimized test libraries. Both have full constraint rules support (One and Only One, Exclusive, Inclusive, Requires, and Masks) to ensure that the tests created are physically possible while still supporting full negative testing.

BenderRBT Cause-Effect Graph Based Test Design Engine
Better Requirements
Developing high-quality applications begins with the requirements. Requirements must be deterministic and unambiguous in order to ensure that the application is developed and tested accurately. RBT assists project teams in analyzing and reviewing the application requirements to eliminate logical inconsistencies and errors. Using cause-effect graphing, an innovative approach which graphically displays relationships and constraints between application nodes (inputs and outputs), the project team can analyze every aspect of the functional requirements in RBT. RBT then evaluates the recorded information to identify precedence problems in relations and logical errors. RBT provides detailed analysis information in a variety of easy-to-read formats. Analysts and project stakeholders can collaboratively review the natural language test cases generated by RBT, enabling them to identify and correct any requirement errors earlier in the development cycle.

Cause-Effect Graphing
A proven technique for effective requirements validation and test case design, cause-effect graphing is the process of transforming specifications into a graphic representation. This graphic representation depicts the functional relationships and conditions present in the requirements, illustrating how each input relates to every other input, as well as to every output. Constraints and observability of nodes are also established during this process, allowing the project team to identify potential problem areas. In developing the cause-effect graph, the test team evaluates the requirements for completeness, consistency, sufficient level of detail, and lack of ambiguity, often finding defects that otherwise would not be found until integration testing.
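To make the graphing idea concrete, here is a minimal Python sketch of how a small cause-effect relation with one constraint might be represented and evaluated. The node names, the approval rule, and the Exclusive constraint are purely illustrative assumptions; enumerating every combination here only shows how expected effects follow from causes, and is not RBT's path-sensitizing test design algorithm.

from itertools import product

# Illustrative cause (input) nodes for a small, hypothetical approval rule.
CAUSES = ["is_corporate", "has_collateral", "has_guarantor"]

def effect_approve(combo):
    """Effect node: approve when corporate AND (collateral OR guarantor)."""
    return combo["is_corporate"] and (combo["has_collateral"] or combo["has_guarantor"])

def exclusive(combo, a, b):
    """'Exclusive' constraint: causes a and b may not both be true."""
    return not (combo[a] and combo[b])

# Enumerate every cause combination, drop the physically impossible ones,
# and derive the expected state of the effect for each feasible combination.
for values in product([False, True], repeat=len(CAUSES)):
    combo = dict(zip(CAUSES, values))
    if not exclusive(combo, "has_collateral", "has_guarantor"):
        continue  # infeasible under the constraint
    print(combo, "->", "approve" if effect_approve(combo) else "reject")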
BenderRBT's Graphic Front-End
The graphic front end to RBT allows project teams to quickly create cause-effect graphs, complete with node relationships, constraints, and attributes. When a node is created, users are prompted to enter the required attributes, reducing the risk of incompletely defined nodes. When the cause-effect graph is completed, RBT then designs the test cases based on the requirements depicted in the graph. RBT also uses the cause-effect graph to further evaluate the requirements for logical consistency. The project team can use the test cases generated by RBT to review requirements with stakeholders, or they can use the structured English requirements document automatically generated by RBT. The more readable the requirements are, the more likely the project team is to develop the right application.
BenderRBT's Script Test Definitions Report details every step of the test cases designed, including the input conditions and the expected results (or effects) of each step.

Localization Support
All of the user-entered information - Graph Title, Notes, Node Names, Node Descriptions - can be entered in any language. RBT will then generate all of its output using this information. Here is the above graph built using Chinese:
Here is an example of a test generated:
Minimum Tests
In many testing environments, tests are developed using “gut feel” or combinatorics-based methods. Gut feel testing relies on individual testers to develop the tests to be used, while combinatorics-based testing uses all possible combinations of the inputs. While these test development methods are widely used, they do not ensure full functional coverage, let alone guarantee the minimum number of required tests. BenderRBT uses a mathematically rigorous algorithm to determine the minimum number of test cases required for full functional test coverage. For instance, in an application with 37 inputs, an exhaustive combinatorics-based approach will result in over 130 billion possible test cases. A gut feel testing approach might reduce this number to 50 or 100 tests, but there is no way to know whether they are the right tests for the application. Because the skill level and experience of the individual testers may vary, there is no way to guarantee a high level of functional coverage. In this example, RBT reduces the possible number of test cases to only 22 in about one second. Since these tests are based on the actual, documented requirements, the test team will be testing 100% of the application's functionality. This minimum set of test cases also significantly decreases the amount of time required to design and build tests, reducing the overall testing effort. In every comparison study our clients have done over the years, RBT has reduced the number of necessary tests by a minimum of 4X for equivalent coverage. For groups just using “gut feel” testing it has been closer to a 10X reduction.

Maximum Coverage
Using a gut feel test design approach, the test team cannot be sure that their tests cover 100% of the application's functionality. In fact, studies have shown that in gut feel testing environments, the tests cover an average of only 30-40% of the application's functionality. RBT's proven automated test case design approach ensures that the functional test coverage will achieve 100%, with the minimum number of tests. RBT carefully evaluates all of the cause and effect information it is given to reduce the possible number of test cases to a minimum set that is functionally complete. RBT also cross-references the functions with the test cases. When evaluated with the status of executed tests, this information allows the project team to calculate the percentage of functionality running correctly. Management can then make an informed decision about whether the application is ready for production.

Protecting Your Investment In Test Cases
The Cause-Effect Graphing process is an iterative one. You generally graph, review the results, and tune the graph until you are sure the requirements are solid and that the graph reflects those requirements. You then implement the test cases. When you commit to building the executable tests, you want to ensure that RBT knows that this set of tests is the one you are implementing. This allows you to protect your investment in these tests. If RBT is aware of existing tests, it can evaluate those tests as the requirements and graph change. How much coverage do the old tests give you? What new tests will you need? What modifications have to be made to the old tests? RBT can answer those questions for you. Therefore, RBT gives you a number of options in generating test cases.
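As a brief aside before looking at those options: the "over 130 billion" figure cited under Minimum Tests is consistent with 37 two-state (true/false) inputs, since 2^37 is roughly 137 billion. A short Python check, assuming binary inputs and the 22-test result quoted above:

# 37 two-state inputs give 2**37 exhaustive combinations.
exhaustive = 2 ** 37
print(f"{exhaustive:,}")                     # 137,438,953,472 -> "over 130 billion"

# Contrast with the 22 designed tests cited above.
print(f"reduction factor: {exhaustive / 22:,.0f}x")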
The Run New option will design a new set of tests based on the graph you have just entered. The Run Old option will evaluate the coverage of a set of existing tests against the current version of the graph. The Run Both option will evaluate the coverage of a set of existing tests and then supplement these tests to complete the coverage of the graph. Note: This feature can be used to factor in test cases that were not designed by RBT. There is a dialog that allows the user to tell RBT about existing test cases, regardless of their source.

Matrix Views
When planning the testing phase, it is important to understand the functional coverage of each test case, as well as the state of each node in each test case. RBT provides two matrix views that show this information in detail. The Coverage Matrix shows which functional variations are covered by each test. It also illustrates that every test exercises at least one functional variation not covered by any other test. Using this matrix, the test team can be sure that they are testing 100% of the application's functionality. RBT's Definition Matrix summarizes the input and output conditions included in each of the test cases generated by RBT. Both of these matrices may be exported to Excel for further annotation by the tester.
BenderRBT's Functional Coverage Matrix identifies which functional variations are in which test cases. An “X” means that the variation is in two or more tests. A “#” means the variation is only in one test.
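The X/# distinction can be illustrated with a small sketch. The test-to-variation mapping below is hypothetical sample data, not RBT's matrix format; the marking rule follows the description above.

# Hypothetical mapping: test id -> functional variations it covers.
coverage = {
    "T1": {"V1", "V2", "V3"},
    "T2": {"V2", "V4"},
    "T3": {"V3", "V5"},
}

# Count how many tests exercise each variation.
counts = {}
for covered in coverage.values():
    for v in covered:
        counts[v] = counts.get(v, 0) + 1

# Print one row per test: '#' if only this test covers the variation,
# 'X' if the variation appears in two or more tests, '.' if not covered here.
variations = sorted(counts)
print("    " + "".join(f"{v:>4}" for v in variations))
for test, covered in coverage.items():
    marks = ["." if v not in covered else ("#" if counts[v] == 1 else "X")
             for v in variations]
    print(f"{test:<4}" + "".join(f"{m:>4}" for m in marks))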
BenderRBT's Coverage Analysis Matrix allows the project team to quantifiably determine the status of testing. When one or more test cases are selected, the Coverage Analysis function calculates the selected test cases' percentage of weak and strong functional coverage.
Fewer Tests Dialog
This feature allows you to enter a number less than or equal to the total number of tests and have RBT determine the optimal subset of tests - i.e. which tests would give you the greatest possible coverage.
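Picking the best N tests is essentially a maximum-coverage selection problem. The sketch below uses a simple greedy heuristic over hypothetical coverage data to show the idea; it is not necessarily the algorithm RBT uses internally.

def best_subset(coverage, budget):
    """Greedily pick up to `budget` tests, each time taking the test that adds
    the most still-uncovered functional variations (a common max-coverage
    heuristic, not necessarily RBT's own selection method)."""
    chosen, covered = [], set()
    remaining = dict(coverage)
    for _ in range(budget):
        if not remaining:
            break
        test = max(remaining, key=lambda t: len(remaining[t] - covered))
        gain = remaining.pop(test) - covered
        if not gain:
            break
        chosen.append(test)
        covered |= gain
    return chosen, covered

# Hypothetical test-to-variation coverage data.
coverage = {
    "T1": {"V1", "V2", "V3"},
    "T2": {"V2", "V4"},
    "T3": {"V3", "V5"},
    "T4": {"V4", "V5"},
}
subset, covered = best_subset(coverage, budget=2)
total = set().union(*coverage.values())
print(subset, f"covers {len(covered)} of {len(total)} variations")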
BenderRBT's Definition Matrix uses a table format to display the state of each node in each test case, allowing at-a-glance understanding of each test case.

Strong Support For Agile
Agile projects are highly iterative within and across releases. Common problems on agile projects are that tests are often a sprint behind and specifications are never fully documented. In addition to the ability to protect the investment in tests implemented from prior versions of the graphs, RBT can generate a Functional Specification from the models. This user story from a dental insurance application:

Resulted in this Cause-Effect Graph:
Which in turn generated tests such as:
And generated this Functional Specification:

This ensures that the code, the tests, and the specifications are all provably in sync at the time of the release.
Quick Design has multiple test case design engines, all based on Pair-Wise testing. One is used for Orthogonal Pairs - creating a balanced set of tests with pairs appearing in equal numbers of tests to the extent possible. This is used for designing configuration tests and for creating seed tests for performance testing. The other two engines are for Optimized Pairs testing - covering the full set of pairs with the minimal number of tests. Quick Design allows you to design tests in just minutes. You just identify each test input Variable. For each Variable you define the States you want to test.
QD concatenates the Variable description with the State description in the generated test scripts. This saves typing and ensures consistent wording of test scripts. In the above example the final description would read "The customer is a Corporate customer". If needed, you then apply constraints across the Variables/States that identify combinations of data which are physically impossible at this point in the system. However, you still want to do full negative testing.
In this example the constraint rule is that only corporate customers may have building loans. Other functions prior to this one would have rejected any attempt by retail or government customers to get this type of loan. The production database would not contain any building loans for any customer other than corporate customers. Therefore, we do not want to generate any tests at this point contrary to this rule. Note, however, that in testing the predecessor functions you should have tried creating a building loan for the other customer types; the test result should have been that the loan application was rejected. Quick Design then generates all possible pairs across the Variables/States. This is documented in the Pairs Report.
Note that two of the pairs have a yellow "I" next to them. These are the infeasible pairs - i.e. they violated the constraint we set up. Quick Design then merges the pairs into tests, again ensuring that no constraints are violated. You have two choices in generating tests: Orthogonal Pairs or Optimized Pairs. In Orthogonal Pairs testing each pair occurs the same number of times across the set of test cases. In Optimized Pairs each pair is in at least one test. The goal is to do this in the fewest number of tests possible. We generally recommend orthogonal pairs for configuration testing and optimized pairs for function testing.
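The optimized-pairs idea can be sketched in a few lines of Python: enumerate the feasible pairs implied by the variables, states, and constraint, then greedily pack them into complete tests until every pair appears at least once. The variables and the constraint follow the loan example above; the greedy packing is a generic pair-wise heuristic, not Quick Design's actual engine.

from itertools import combinations, product

# Illustrative variables and states, loosely following the loan example above.
VARIABLES = {
    "customer": ["Corporate", "Retail", "Government"],
    "loan":     ["Building", "Auto"],
    "term":     ["Short", "Long"],
}

def feasible(test):
    """Constraint from the example: only Corporate customers may hold Building loans."""
    return not (test["loan"] == "Building" and test["customer"] != "Corporate")

def pairs_of(test):
    """Every variable/state pair exercised by one complete test."""
    return {frozenset({(a, test[a]), (b, test[b])}) for a, b in combinations(test, 2)}

# All physically possible complete tests, and the feasible pairs they imply.
names = list(VARIABLES)
candidates = [dict(zip(names, states)) for states in product(*VARIABLES.values())]
candidates = [t for t in candidates if feasible(t)]
uncovered = set().union(*(pairs_of(t) for t in candidates))

# Greedily add the candidate test that covers the most still-uncovered pairs
# until every feasible pair is in at least one test (Optimized Pairs style).
tests = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    tests.append(best)
    uncovered -= pairs_of(best)

for i, t in enumerate(tests, 1):
    print(f"Test {i}: {t}")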
As in the Cause-Effect Graphing component, you have the options of creating new tests, evaluating old tests, supplementing old tests as needed, and revising descriptions. You can also define pre-existing tests not created by Quick Design. As in Cause-Effect Graphing, you get the coverage report.
Quick Design also has a utility to calculate coverage based on which tests passed, and you can identify subsets of the tests that give the maximum coverage. You also get the Test Definition Matrix.
Minimum System Requirements