TESTEROPS

A pragmatic approach to QA and OPS

Multiple Scenarios and Data Feeder

In the last tutorial, I explained how to run a simple Gatling load simulation using TypeScript. We ran a basic simulation and saw how to construct a simple test scenario in Gatling.

Let’s move forward and try to run multiple scenarios in a single script.

Multiple Scenarios

The main benefit of scenarios in a Gatling simulation is that you can define several of them and run them in a single test as part of your load / performance testing.

Let’s modify the script we used earlier and define a new scenario.

We’ve simply replicated the scenario from the previous script. The real change is in setUp, which controls how the scenarios are run.

If you look closely at the setUp method, you will see the newly added scenario there, injected with a different load profile.

For the second scenario, we are going to use the rampUsers method to ramp up to 10 users over 5 seconds.
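
Putting it together, a minimal sketch of such a simulation could look like the one below. The scenario names, requests, and target site are assumptions for illustration; adapt them to your own script.

import { simulation, scenario, pause, atOnceUsers, rampUsers } from "@gatling.io/core";
import { http } from "@gatling.io/http";

export default simulation((setUp) => {
  // Shared protocol configuration for both scenarios
  const httpProtocol = http.baseUrl("https://computer-database.gatling.io");

  // First scenario, same as in the previous tutorial
  const firstScenario = scenario("First Scenario").exec(
    http("Home").get("/"),
    pause(1)
  );

  // Second scenario, replicating the same steps under a different name
  const secondScenario = scenario("Second Scenario").exec(
    http("Computers").get("/computers"),
    pause(1)
  );

  // Both scenarios run in the same test, each with its own injection profile
  setUp(
    firstScenario.injectOpen(atOnceUsers(1)),
    secondScenario.injectOpen(rampUsers(10).during(5)) // ramp to 10 users over 5 seconds
  ).protocols(httpProtocol);
});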

Now we will run these tests using

npx gatling run --simulation multiple_scenarios

This will run the simulation and then generate the reports.

So in this way you can run multiple scenarios in a single test, each with its own load profile, from the same test script. This is a very convenient way of combining scenarios without writing separate tests.

Data Feeder

In many situations, you need to test using an external data provider that feeds data into your system. This could be anything: an Excel or CSV file, an external database, or a JSON file. In those scenarios, you need to repeat the tests for the amount of data being fed into them. Let’s see how we can do that.

Let’s create a new .csv file named session.csv and put some sample data related to the computer database in it.

We have two fields, the search criterion and the computer name to search for, with the column headers searchCriterion and searchComputerName.

Let’s put this file in a separate directory called resources.
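
For example, the file could contain rows like the following (the values here are made up; the data in your file will differ):

searchCriterion,searchComputerName
Macbook,MacBook Pro
eee,ASUS Eee PC 1005PE
Amiga,Amiga 1000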

We’ll create a new Gatling simulation file for this test scenario, and then we will create two methods, searchMethod and checkItem, and feed data from the CSV into them.

We create a similar simulation function to the one we created in the last test, which receives the setUp function as an argument.

We then add a feeder, which uses the csv method from Gatling to read the CSV file.
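
As a rough sketch, the skeleton of this file could look like the one below, assuming the CSV is named session.csv and is placed in the resources directory; the random feeder strategy is just one option (circular and queue are also available).

import {
  simulation,
  scenario,
  exec,
  feed,
  pause,
  repeat,
  csv,
  css,
  rampUsers
} from "@gatling.io/core";
import { http, status } from "@gatling.io/http";

export default simulation((setUp) => {
  // Feeder backed by the CSV file in the resources directory;
  // each feed() call picks a random row from it
  const feeder = csv("session.csv").random();

  // searchMethod, checkItem, httpProtocol, scenario and setUp are added below
});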

Next, we would define a searchMethod that takes items from the feeder, and another method, checkItem, that iterates over the result pages (pagination, in simple terms).

Let’s understand the searchMethod (a sketch follows the list below).

  • We first make a hit to the base URL.
  • Then pause for 1 second.
  • Then feed from the data feeder that we defined above.
  • Then hit the search URL, using the value from the data feeder as a query param.
  • Then, in the results, we check for the computer name and save its href.
  • Then make an HTTP call to that saved URL to check that it returns 200.
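
Put together, a sketch of what searchMethod could look like, modeled on the public computer-database sample application (the request names and the CSS selector are assumptions):

const searchMethod = exec(
  http("Home").get("/"),                      // hit the base URL
  pause(1),                                   // pause for 1 second
  feed(feeder),                               // pull the next row from the CSV feeder
  http("Search")
    .get("/computers?f=#{searchCriterion}")   // feeder value used as a query param
    .check(
      // look for the computer name in the results and save the link's href
      css("a:contains('#{searchComputerName}')", "href").saveAs("computerUrl")
    ),
  pause(1),
  http("Select")
    .get("#{computerUrl}")                    // call the saved URL
    .check(status().is(200))                  // and verify it returns 200
);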

Next, we’ll define a method, checkItem, which will check the pagination up to the defined number of pages.

In this method, we’re using the repeat method with a count of 4 to page through the first 4 pages of results.
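
A sketch of checkItem using the repeat method; the page URL pattern is an assumption based on the computer-database sample:

const checkItem = repeat(4, "i").on(          // repeat 4 times, exposing the loop counter as "i"
  http("Page #{i}").get("/computers?p=#{i}"), // browse page i of the paginated results
  pause(1)
);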

Now, let’s define the httpProtocol, where we set the base URL, headers, and so on.
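
Something along these lines; the headers shown are typical browser defaults and purely illustrative:

const httpProtocol = http
  .baseUrl("https://computer-database.gatling.io")  // base URL prepended to relative request paths
  .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
  .acceptLanguageHeader("en-US,en;q=0.5")
  .acceptEncodingHeader("gzip, deflate")
  .userAgentHeader("Mozilla/5.0 (Windows NT 10.0; Win64; x64)");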

And then we’ll add the scenario and the setUp call, where we define the load profile and the methods that we need to execute.

In the scenario, you can see that I’m passing the two methods created in the sections above to the .exec() method.

And, in the setUp, there is a rampUsers method that injects the given number of users, distributed evenly over a period of time specified with the during method.
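
A sketch of that last part; the scenario name, user count, and ramp duration here are assumptions, so tune them to the load you want:

const users = scenario("SearchComputerDatabase").exec(searchMethod, checkItem);

setUp(
  // inject 10 users, distributed evenly over 10 seconds
  users.injectOpen(rampUsers(10).during(10))
).protocols(httpProtocol);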

Now let’s add a command to the scripts section of the package.json file:

computerdatabase": "tsc --noEmit && gatling run --typescript --simulation searchComputerDatabase


The tsc --noEmit command type-checks the TypeScript code without emitting JavaScript files, ensuring our TypeScript code is correct, whereas the second part is the Gatling command to run this particular simulation.

RUN

Once I’ve run this test, it completes in 19 seconds and we get this output in the terminal.

Looking at the report, I like how Gatling has already aggregated the output for us to show the finer details. The last part in particular gives a nice summary of the overall test scenario, which is much better than some other tools. You can also see this information in the generated reports, which is again a plus point.

In the next edition, we’ll look at assertions, running all tests at once, and maybe we’ll try to run these tests in Docker. See you then.

If you like the article/blog, be sure to comment on the post. You can find the tests in this GitHub repo.