This is my endeavor to write something about automation frameworks that might help my testing fraternity. The blog lists the various automation frameworks used in the industry and explains how they relate to each other. Lastly, I have jotted down some quick guidelines that may help while creating a framework.
The intent is to give you a kick-start on various automation frameworks, not to create a detailed article. What suits you best would be another discussion altogether…
Automation Frameworks:
A framework is a layer applied on top of an automation tool to make it more user-friendly for its various users: testers, developers, and business. A test automation framework is a set of assumptions, concepts, and practices that provide support for automated software testing.
It broadens the range of users who can benefit from an automation tool to functionally test an application. With a framework in place, an automation tool user no longer has to be a developer.
Basing an automated testing effort on using only an automation tool to record and play back test cases has its drawbacks. Running long and complex tests is time-consuming and expensive when using only a capture tool. Because these tests are created ad hoc, their functionality can be difficult to track, maintain, and reuse, and they are costly to maintain.
A better choice for an automated testing team that's just getting started is a test automation framework, defined as a set of assumptions, concepts, and practices that support automated testing.
This blog describes the five industry wide frameworks:
Modular Framework:
This is one of the simplest frameworks to learn. To create it, all you require is a set of small, independent scripts that represent the modules, sections, and functions of the AUT (application under test). These small scripts are then combined in a hierarchical fashion to construct larger tests, realizing a particular test case.
For example:
After recording a small test flow, cut and paste the sections (those that define a control or a process) into separate low-level, independent, reusable actions. The topmost level just calls these actions with various parameters. It is advisable to make these reusable actions as generic as possible. You can also use switch-case statements, defining a case with specific statements for each variant. For example, if you are automating Windows Calculator, the main script will just call the "Add" action with various parameters, while the Add action will have switch-case statements whose cases hold the recognition strings of the various numeric buttons. Please see the figure below for details:
From this very simple example you can see how this framework yields a high degree of modularization and adds to the overall maintainability of the test suite. If a control (or its recognition string) changes, all you need to update is the bottom-level action that calls that control, not all the actions that test it.
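The modular idea above can be sketched in a few lines. This is an illustrative Python sketch only (QTP scripts themselves are written in VBScript), and the recognition strings and the `click` helper are made-up placeholders: one low-level "Add" action owns the button recognition strings, and the top-level test only passes parameters.

```python
# Low-level reusable action: only this module knows the recognition strings.
BUTTON_IDS = {  # hypothetical recognition strings for Calculator buttons
    "0": "Button_0", "1": "Button_1", "2": "Button_2",
    "+": "Button_Plus", "=": "Button_Equals",
}

clicked = []  # records what the "tool" clicked, for illustration

def click(recognition_string):
    # Stand-in for the automation tool's click call.
    clicked.append(recognition_string)

def add(a, b):
    """Low-level 'Add' action: maps parameters to button clicks."""
    for ch in str(a):
        click(BUTTON_IDS[ch])
    click(BUTTON_IDS["+"])
    for ch in str(b):
        click(BUTTON_IDS[ch])
    click(BUTTON_IDS["="])

# Top-level test case: calls the action with parameters, no control details.
add(1, 2)
print(clicked)  # ['Button_1', 'Button_Plus', 'Button_2', 'Button_Equals']
```

If a button's recognition string changes, only `BUTTON_IDS` inside the low-level action needs editing; every test that calls `add` stays untouched.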
Library Architecture Framework:
The LAF is very similar to the Modular Framework explained above and offers the same advantages, but it divides the AUT into subroutines and functions instead of scripts. This framework requires the creation of library files (VBS libraries, APIs, DLLs, and so on) that represent the modules, sections, and functions of the AUT. These library files are then called directly from the test case script.
Here, abstraction is achieved by creating different libraries containing functions to perform various operations. For example, one library can be dedicated to the Excel functions used in the test suite, while another can hold generic functions or customized checkpoints.
Just as in the Modular Framework, if a control or its recognition string changes on the Calculator, all you need to change is the library file, and all test cases that call that control are updated.
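The separation can be sketched as follows. This is a conceptual Python sketch, not QTP code (the library would be a .vbs file in a real suite), and all names here are illustrative: the operations live in library functions, and the test case only calls them.

```python
# --- "library file": in a real suite this would live in a separate
# .vbs (or .py) file and be associated with every test that needs it.
log = []  # records keystrokes, standing in for the real application

def press_keys(sequence):
    """Generic library function: send a sequence of keystrokes."""
    for key in sequence:
        log.append(key)

def add(a, b):
    """Higher-level library function built on the generic one."""
    press_keys(f"{a}+{b}=")

# --- test case script: contains no control details, only library calls.
add(3, 4)
print("".join(log))  # 3+4=
```

The test script never touches keystrokes or controls directly, so a change in how the application is driven is absorbed entirely inside the library file.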
Data-driven Framework:
Simple test scripts have test data embedded in them. This leads to a problem: when test data needs to be updated, the actual script code must be changed. This might not be a big deal for the person who originally created the script, but for a tester without much programming experience the task is not so easy; if the script is long and unstructured, it is hard for everyone. Another problem with keeping test data inside test scripts is that creating similar tests with slightly different test data always requires additional programming effort. The task may be easy (the original script can be copied and the test data edited), but at least some programming knowledge is still required. This kind of reuse is also problematic because a single change in the tested system may require updating all the scripts. Because of these problems, embedding test data into scripts is clearly not a viable approach when building larger test automation frameworks.
A better approach is reading the test data from external data sources and executing tests based on it. This approach is called data-driven testing. Because data-driven test data is tabular, it is natural to use spreadsheet programs to edit it.
Data-driven testing is a framework where test input and output values are read from data files (QTP data tables, Excel files, ODBC sources, CSV files, DAO objects, ADO objects, and so on) and loaded into variables in captured or manually coded scripts. Navigation through the program, reading of the data files, and logging of test status and information are all coded separately in the script.
One important aspect is that the automation script is enriched to act differently for different types of data (although for reading different types of data you are required to create customized driver scripts). This is done using switch-case or if-else statements. The same module is called for different sets of test data.
For example, if you need to add trades with various currencies in a deal settlement engine, these currencies will be stored in the data tables. One module will be created to read these data tables, entering deals until all rows present in the data table have been used.
The biggest limitation of the data-driven approach is that all test cases are similar, and creating new kinds of tests requires implementing new driver scripts that understand different test data. In general, test data and driver scripts are closely coupled and need to be synchronized if either changes. Another disadvantage of data-driven testing is the initial set-up effort, which requires programming skills and management.
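A minimal data-driven loop can be sketched like this. The sketch is in Python for illustration (not QTP/VBScript), and the column names and the `add` stand-in are assumptions; the point is that one generic driver runs the same check for every row of external data.

```python
import csv
import io

# In practice this would be an external file (Excel, CSV, ODBC source);
# StringIO keeps the sketch self-contained.
test_data = io.StringIO(
    "a,b,expected\n"
    "1,2,3\n"
    "10,5,15\n"
)

def add(a, b):
    # Stand-in for the operation performed against the AUT.
    return a + b

results = []
for row in csv.DictReader(test_data):
    # One driver, many test cases: each row is a separate check.
    actual = add(int(row["a"]), int(row["b"]))
    results.append(actual == int(row["expected"]))

print(results)  # [True, True]
```

Adding a new test case here means adding a row to the data file, with no change to the driver code at all.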
Keyword-driven/table-driven Framework:
The answer to the limitations of data-driven testing is the Keyword-driven (or Table-driven) Framework. In this approach, not only the test data but also the instructions telling what to do with the data are taken out of the test scripts and put into external input files. These instructions are called keywords, and testers can use them to construct test cases freely. The basic idea, reading test data from external files and running tests based on it, stays the same as in data-driven testing.
This framework requires the development of keywords, independent of the test automation tool used to execute the tests, and of the test script code that "drives" the AUT and the data. Keyword-driven tests look very similar to manual test cases: in a keyword-driven test, the functionality of the AUT is documented in a table, as step-by-step instructions for each test.
It is typically an application-independent automation framework designed to process our tests. These tests are developed as data tables using a keyword vocabulary that is independent of the test automation tool used to execute them. This keyword vocabulary should also be suitable for manual testing.
For example, to verify the value of a user ID textbox on a login page, we might have a data table record as seen below:
Once you've created your data table(s), you simply write a program or a set of scripts that reads in each step, executes it based on the keyword contained in the Action field, performs error checking, and logs any relevant information.
These scripts would look something like this:
Main Script / Program
Connect to data tables.
Read in row and parse out values.
Pass values to appropriate functions.
Close connection to data tables.
Menu Module
Set focus to window.
Select the menu pad option.
Return.
Pushbutton Module
Set focus to window.
Push the button based on argument.
Return.
Verify Result Module
Set focus to window.
Get contents from label.
Compare contents with argument value.
Log results.
Return.
From this example you can see that this framework requires very little code to generate many test cases: the data tables generate the individual test cases while the same code is reused.
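The driver outlined in the steps above can be sketched in Python. This is an illustrative sketch only, not QTP code; the table fields, keyword names, and the fake login behavior are all assumptions made for the example. Each row carries a window, a control, an action keyword, and an argument, and the driver dispatches on the keyword:

```python
# Data table: in practice this is read from an external spreadsheet.
table = [
    ("Login", "UserID", "InputText",    "john"),
    ("Login", "Submit", "PushButton",   ""),
    ("Login", "Status", "VerifyResult", "Welcome john"),
]

state = {"UserID": "", "Status": ""}  # stand-in for the AUT's controls
log = []

def input_text(control, arg):
    # "Type" the argument into the named control.
    state[control] = arg

def push_button(control, arg):
    # Fake application reaction to the button press.
    state["Status"] = f"Welcome {state['UserID']}"

def verify_result(control, arg):
    # Compare the control's contents with the expected argument and log.
    log.append("PASS" if state[control] == arg else "FAIL")

KEYWORDS = {
    "InputText": input_text,
    "PushButton": push_button,
    "VerifyResult": verify_result,
}

for window, control, action, arg in table:
    KEYWORDS[action](control, arg)  # dispatch on the Action keyword

print(log)  # ['PASS']
```

New test cases are new rows in the table; the driver and the keyword modules stay exactly as they are.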
By now you can see how QTP uses the Keyword-driven Framework: you see the same data tables while creating your actions, right?
Here is a small diagram depicting a Keyword-driven Framework:
Hybrid Test Automation Framework
This is a combination of the Keyword-driven and Data-driven frameworks. In my experience, every framework gets tailored to the AUT and hence becomes a Hybrid Framework. While automating scripts, the developer starts creating or reusing modules and scripts while also pulling data from various data sources; hence this tailor-made framework is called a Hybrid Framework. In this framework, the common functions are written in .vbs (library) files, which are called by the main driver script. The test steps are written in spreadsheets from which both the data and the instructions are extracted via code. You can implement modularity by nesting test scripts and using the library files (or objects) to implement functions, procedures, or methods.
For example, in the figure below the framework has been tailored to have a modular concept at every possible level. The Driver is the first script that is executed. The Driver invokes the QTP compiler to call the intended scripts. These scripts are nothing but instructions written in .XLS spreadsheets. The compiler reads an instruction that contains the data as well as the keywords. We have a keyword processor (which is nothing but a set of code) that determines what QTP should do once it receives a keyword. The data is fetched from data tables and global variables wherever needed. Once the entire chain is established, the compiler hits the AUT (performs the intended action).
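The hybrid layering described above can be sketched briefly. This is an illustrative Python sketch under assumed names (the `EnterTrade` keyword and the currency data keys are invented for the example): steps come from one source, data values are resolved from another, and both are routed to shared library functions.

```python
# Steps (keywords) from one source, e.g. an .XLS spreadsheet...
steps = [("EnterTrade", "currency_1"), ("EnterTrade", "currency_2")]

# ...and data values from another, e.g. a data table or global variables.
data = {"currency_1": "USD", "currency_2": "EUR"}

entered = []  # records the trades "entered" into the AUT

def enter_trade(currency):
    # Shared library function, as from a common .vbs library file.
    entered.append(currency)

LIBRARY = {"EnterTrade": enter_trade}

for keyword, data_key in steps:
    # Resolve the data reference, then dispatch on the keyword.
    LIBRARY[keyword](data[data_key])

print(entered)  # ['USD', 'EUR']
```

Keywords, data, and reusable functions each live in their own layer, which is exactly the tailoring that makes a framework "hybrid".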
This covers the various common frameworks used in the automation industry. Last but not least, below are some best practices that can be followed while creating an automation framework.
Some Best Practices while creating Frameworks in QTP
· Make sure the associated add-ins are loaded when QTP is started. Before starting any test, change the Object Repository to Per-Action mode (if not already set)
· Make sure that QTP has recognized all objects. In other words, ALL recording should be Context-Sensitive (No Low-Level or Analog Recording)
· Record as much of the test as possible. Then enhance the test by coding – using loops, conditional statements, checkpoints, VBScript functions, Win32 APIs, and the QTP Object Model Reference
· No Test Data should be included in the Test Scripts. All test data should be defined in the data sheets. This means when Test Data changes, Test Scripts should require no change at all
· Use meaningful variable names, proper comments, and indentation. Proper comments should be seen, especially in the case of checkpoints.
· Call out the purpose of the test script, input parameters, test data, pre-requisites and assumptions, if any, in the test script comments explicitly.
· After a Test Script is completed, replay it 3-4 times to make sure it works fine and there are no problems in Object Recognition. Restart QTP and replay the script again to make sure it works.
· No commented code should be seen in the scripts.
· All test scripts should be pointing to the same Login Reusable at a common location in VSS.
· All Calls to the reusable action should be parameterized and in the same manner.
· Reusable actions: a piece of code that can be reused by other tests should be recorded in a separate action and called from the various tests.
· Make sure all browser instances are closed before executing a test. This helps in avoiding unnecessary blockage of memory.
· Check the option for allowing other Mercury products to run with QTP simultaneously. This helps in the smooth execution of more than one Mercury application at a time, for example QTP and Multi Test Manager
· Limit use of Recovery Scenarios to unexpected errors
– Do not use recovery scenarios when the error messages are predictable
– Use conditional logic for predictable errors
– Makes script execution more reliable
If you have read this far, I would appreciate it if you could share your feedback about this article. Looking forward to hearing from you...
Anupreet Singh Bachhal
asbachhal@yahoo.com