Sunday, June 7, 2015

GenyControl: Calabash Android + Genymotion with Jenkins integration

Genymotion

Genymotion is a widely used Android emulator and an alternative to the vanilla emulator bundled with the Android SDK. It runs virtual devices on Oracle VM VirtualBox, making them much faster.

Calabash

Calabash is my choice of automation tool for iOS and Android devices.

GenyControl

GenyControl is a collection of shell scripts I wrote to control Genymotion devices and run Calabash tests. Its purpose is to make sure that only the requested Genymotion device is running, and to wait until the emulator is properly started and booted before the tests are run. In this article I will explain how it works and how to use it. Feel free to clone it from GitHub:
https://github.com/madarasz/GenyControl

It is composed of two files: control_genymotion.sh and run_test.sh.

control_genymotion.sh

control_genymotion.sh defines functions to start and stop emulators and to wait until the requested emulators are functional. The most important functions are:

  • stop_all_genymotion()
    Stops all running Genymotion devices. It sends the poweroff signal to the VMs and kills any remaining processes. This is a safe way to stop these virtual devices.
  • get_genymotions_running()
    Starts the requested Genymotion device(s) and waits until they are operational. You need to pass the name of the Genymotion device you wish to use. First it waits until the device appears in the adb devices list, then it waits until the "Android" boot logo disappears, which signals that booting is done and the device is operational. (A sketch of this waiting logic follows the list.)
  • get_all_genymotion_names()
    Lists the names of all available Genymotion devices.
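
As an illustration of the waiting logic, here is a minimal bash sketch. It is not the exact GenyControl implementation (the real script watches for the "Android" boot logo to disappear); this version polls the sys.boot_completed property instead, and the function name is illustrative:

    wait_for_boot() {
        # wait until the device shows up in the "adb devices" list
        until adb devices | grep -q "device$"; do
            sleep 1
        done
        # wait until Android reports that booting has finished
        until [ "$(adb shell getprop sys.boot_completed | tr -d '\r')" = "1" ]; do
            sleep 1
        done
    }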

run_test.sh

run_test.sh prepares the Genymotion device (using the functions of control_genymotion.sh) and runs the Calabash tests. In order to make it work, you need to set the following environment variables (see the example after the list):
  • $DEVICE: device name of the requested Genymotion emulator
  • $PNAME: package name of the application to be tested (e.g. com.madarasz.exampleapp)
  • $APK_PATH: path and filename of the apk to be tested (e.g. build/apk/example.apk)
  • $MORE_PARAMS: additional parameters for the Calabash run command (e.g. --tags @smoke)
  • $COLORS: set it to "yes" if you want ANSI color codes in the output (making it prettier if your terminal supports them)
  • add the directory of adb (Android Debug Bridge) and player (the Genymotion VM player - default directory on Mac: /Applications/Genymotion.app/Contents/MacOS) to $PATH
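
For example, a typical setup from a shell could look like this (the device name is illustrative, the rest matches the examples above):

    export DEVICE="Google Nexus 5 - 5.1.0 - API 22"
    export PNAME="com.madarasz.exampleapp"
    export APK_PATH="build/apk/example.apk"
    export MORE_PARAMS="--tags @smoke"
    export COLORS="yes"
    export PATH="$PATH:/Applications/Genymotion.app/Contents/MacOS"
    source run_test.sh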

Jenkins integration - Let's put it all together

Plugins

For this example, I use the following Jenkins plugins (as used in the steps below): AnsiColor for the Color ANSI Console Output option, and the Cucumber report plugins added in the Post-build Actions.

Setup

The Jenkins integration is set up as follows:
  1. Put control_genymotion.sh and run_test.sh into your project folder.
  2. Create a new job as a Freestyle project.
  3. Configure Source Code Management, so Jenkins will check out your project from SVN or Git. (Alternatively, you can set Use custom workspace in Advanced Project Options to use a local folder for the project files.)
  4. Set environment variables and enable Color ANSI Console Output in Build Environment section.


  5. Add an Execute shell build step like so:

    set +x
    source run_test.sh


    (I use "set +x" to avoid echoing every bash command in the Console Output)
  6. Configure reporting in Post-build Actions. Add both Cucumber reports.

Results

If you have done everything right, you should have:
  • nice Cucumber reports
  • colorful logs in the Console Output
  • a Test Result Trend graph


Monday, May 4, 2015

Calabash with hybrid apps, webviews

Automating hybrid apps with Calabash

Hybrid apps

Hybrid mobile apps use HTML, CSS and JavaScript to create the functionality and looks of the app instead of using native, platform-dependent elements. It's basically a mobile browser wrapped inside your app.

The usual business reasons to build a hybrid app over a native one are:
  • utilising existing front-end knowledge of web technologies; less native development experience is needed
  • platform independence: you can use the same HTML, CSS and JavaScript code on Android, iOS and Windows Phone, which reduces development time
  • use of app development frameworks like PhoneGap or Titanium, which might use hybrid technology
The main drawback of hybrid apps compared to native ones is performance. Your app will be slower and require more memory because of the additional browser component running and interpreting web elements. You might also lose the native look and feel of the platform. (I would suggest going native if you have the budget.)

Testing hybrid apps with Calabash

In order to run integration tests on the hybrid app, you will need to interact with the web elements of the app. For the following examples I will use the Calabash framework for automation. (Calabash-Android v0.5.8, Calabash-iOS v0.14.0)

Calabash query for WebViews

The general Calabash query for WebView elements goes like this:
query("<WebView class> css:'<css selector><additional filters>")
The WebView class will depend on the platform (iOS or Android) and the WebView implementation used by the app. Ask the developers of the app or run a query("*") in the Calabash console if you are in doubt. More on this topic later.
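
For example, on Android you can start the console against your apk and list all views like this (the apk name is illustrative; on iOS, calabash-ios console works similarly):

    calabash-android console example.apk
    # then, inside the console:
    query("*")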

The CSS selector filters the HTML elements by id, class or tag name. Examples:
query("webView css:'#customerform'")
query("webView css:'.external-link'")
query("webView css:'a'")
touch("webView css:'a#customerform'")
The additional filters may be used to narrow down the web elements further. Be aware that the text value of elements may be in the text or textContent attribute, depending on the platform and the WebView class. (Use Calabash console queries to find out which one to use.)
touch("webView css:'a' {textContent CONTAINS 'Expenses'}") 
You can use these queries in any Calabash commands that take a query string as a parameter, for example touch() or wait_for_element_exists().
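
For example, here is a minimal sketch of a Cucumber step definition built from such queries (the element id, text and timeout are illustrative):

    Then(/^I open the expenses page$/) do
      # wait until the link is rendered inside the WebView, then tap it
      wait_for_element_exists("webView css:'a#expenses'", timeout: 30)
      touch("webView css:'a#expenses'")
    end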

You may have several WebViews in an app. You can query for all WebViews with the all keyword:
query("all webView css:'*'")

Summary of WebView class names

platform | WebView type       | WebView class
iOS      | UIWebView          | webView or UIWebView
iOS      | WKWebView          | WKWebView
Android  | Standard WebView   | webView
Android  | Crosswalk          | org.xwalk.core.internal.XWalkContent
Android  | Titanium           | ti.modules.titanium.ui.widget.webview.TiUIWebView or ti.modules.titanium.ui.widget.webview.TiUIWebView$NonHTCWebView
Android  | PhoneGap / Cordova | cordovaWebView

iOS WebViews classes

Currently on iOS you have two different WebView classes: UIWebView and WKWebView. Additional development frameworks (PhoneGap, Titanium, etc.) usually do not change the WebView classes on iOS.

UIWebView was the original class used by apps. You can refer to it in two ways:
query('webView')
query('UIWebView')
WKWebView is available from iOS 8. You can refer to it like this:
query('WKWebView')

Android WebViews classes

As for Android, the development framework (PhoneGap, Titanium, etc.) changes which WebView class is used.

Standard Android WebViews

If you are using the vanilla Android WebViews, go with the standard class name:
query('webView')

Crosswalk WebViews

The Crosswalk component is usually used to eliminate WebView version differences across Android OS and Chromium versions. You query it like this:
query('org.xwalk.core.internal.XWalkContent') 

Titanium WebViews

The Titanium framework also provides its own WebView class. You can refer to it as such:
query('ti.modules.titanium.ui.widget.webview.TiUIWebView$NonHTCWebView')
The NonHTCWebView part might sound strange, but it's an extension for the TiUIWebView class to avoid crashes on HTC Sense devices.

PhoneGap / Cordova WebViews

Apache Cordova is a set of device APIs that allow a mobile app developer to access native device functions, such as the camera or accelerometer, from JavaScript. It also has its own WebView class. The PhoneGap framework uses the Cordova engine. In both cases, you can refer to the WebView like this:
query('cordovaWebView')

When clicking fails

In the past I have run into some problems with the limited Calabash support for new WebView classes. If your queries execute but your touch commands fail, you can always try clicking via JavaScript like so:
evaluate_javascript("org.xwalk.core.internal.XWalkContent","document.getElementById('submit').click()")
This example clicks the web element with the id 'submit'.
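
If you need this workaround in several places, you can wrap it in a small helper; a minimal sketch (the helper name is illustrative, and the default WebView class should match the one your app actually uses):

    # fall back to a JavaScript click when touch() fails
    def js_click(element_id, webview_class = 'org.xwalk.core.internal.XWalkContent')
      evaluate_javascript(webview_class, "document.getElementById('#{element_id}').click()")
    end

    js_click('submit')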


Sunday, April 19, 2015

Performance testing with jMeter + Jenkins integration

Goal

In this post I will explain how to measure the performance of a web service endpoint with jMeter. We are interested in the behavior of the server under increasing load. To make things a bit more complicated, we will have to log in with each of our test users first and only then request the web service endpoint in question.

Tools

jMeter

Our main tool is jMeter, a Java application designed to load test functional behavior and measure performance. It was originally designed for testing web applications but has since expanded to other test functions.

download link: http://jmeter.apache.org/download_jmeter.cgi

jMeter plugins

We will use additional components provided in the jMeter Plugins Standard Set. We will also rely on the PerfMon Server Agent to monitor server-side metrics during the performance test.

download link: http://jmeter-plugins.org/downloads/all/
plugin installation manual: http://jmeter-plugins.org/wiki/PluginInstall/
PerfMon Server Agent manual: http://jmeter-plugins.org/wiki/PerfMonAgent/

Metrics

  • request load: We define the load by the number of active users making requests to the service endpoint. (We can also define load by requests per unit of time; I will talk about this later.) To measure performance at different load levels, we will use a staircase-like load.
  • response time: This is the main metric we measure during the performance test: the time between the request being sent and the response received. As the server gets busier, response time will increase.
  • error percentage: When the server reaches a critical load, it will reply with an error instead of the regular response. This will help us identify the maximum load the server is capable of serving without any errors. (Usually perceived performance is already bad at this point.)
  • server side
    • CPU usage: This gives us an idea of how busy the server is with our requests.
    • Memory consumption: This might reveal memory leaks, especially if we are running long performance tests (soak/endurance tests).

Implementation

Test Plan

This is the top-level component that contains the Thread Groups (scenarios).

HTTP Request Defaults


to add: right click on Test Plan >> Add >> Config Element >>  HTTP Request Defaults
This defines the default server properties that all requests go to. If you change servers, change the values here.

Thread Groups


to add: right click on Test Plan >> Add >> Threads (Users) >> jp@gc - Stepping Thread Group
This defines the staircase-like load on the server, expressed as the number of active users.

HTTP Cookie Manager (if you want logged in users)

to add: right click on Thread Group >> Add >> Config Element >> HTTP Cookie Manager
This component stores the JSESSIONID cookies during the run. You do not have to set up anything, just have it in the Thread Group.

Once Only Controller

to add: right click on Thread Group >> Add >> Logic Controller >> Once Only Controller
We will need this to make sure we perform the login request with each test user only once.

HTTP Request


to add: right click on element >> Add >> Sampler >> HTTP Request
This sends the requests to the server. In our scenario we will need two: one for the login request (create it under the Once Only Controller) and another one with the actual service request that we are measuring. Define the path the requests should go to, relative to the HTTP Request Defaults already set. You can also add parameters if needed (for the login request).
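
As a sketch, the two samplers could be configured like this (the paths and parameter names are assumptions; adjust them to your backend):

    Login request (under the Once Only Controller):
        Method: POST, Path: /login
        Parameters: username and password of the test user
    Measured request (directly under the Thread Group):
        Method: GET, Path: /api/endpoint-under-test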

Listeners

to add: right click on element >> Add >> Listener >> "name of listener"
These components gather and visualise the results during the performance tests.

Aggregate Report

Gathers information about all the requests sent in the Thread Group in a table. Check out the Min, Max, Average and Error values.

jp@gc Response Times Over Time


This provides our main measurement metrics.

jp@gc Active Threads Over Time

This should show the same staircase-like figure as the Thread Group defines. I have this graph mainly so that I can superimpose it on the Response Times Over Time graph.

Response Time Graph


This is a smoother version of the Response Times Over Time graph.

jp@gc Composite Graph


You can superimpose different graphs into one. Use it to combine the Response Times Over Time and Active Threads Over Time data.

View Results Tree


Use this listener when you want to inspect the actual requests and responses. Use it mainly during the development of the performance tests, so you can validate the requests. In the example above, I use it to determine the success of the login request.

PerfMon Metrics Collector

By installing the jMeter Server Agent on the backend (docs: http://jmeter-plugins.org/wiki/PerfMonAgent/), you can inspect the CPU and memory consumption of the backend during the performance test. Use the PerfMon Metrics Collector listener to gather the data. You can also put the data on a composite graph like so:


Adding parameters

We might want to run the test with different settings, so it would be nice to parameterize some of them. In this example, we set the host name and port as parameters.

to add: right click on Test Plan >> Add >> Config Element >> User Defined Variables

We can use the __P(parameter_name,default_value) function. It's always wise to set a default value, so you can save configuration time later on.


Let's use the parameters in the HTTP Request Defaults element like so:
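
A minimal sketch of the two elements together (the parameter names and default values are illustrative):

    User Defined Variables:
        host = ${__P(host,my.someserver.com)}
        port = ${__P(port,8080)}
    HTTP Request Defaults:
        Server Name or IP: ${host}
        Port Number: ${port}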


Running from command line

You can run the jMeter test from the command line. In order to extract the measurements, we can specify which data should be saved and how.

In the Response Times Over Time listener, specify an output filename, for example: measurement.jtl. We will use the XML format because it can be consumed by a Jenkins plugin. You can either select the data to be saved by clicking the Configure button, or specify the defaults in the jmeter.properties file found in the jmeter/bin folder. Set the following properties:
jmeter.save.saveservice.output_format=xml
jmeter.save.saveservice.data_type=true
jmeter.save.saveservice.label=true
jmeter.save.saveservice.response_code=true
jmeter.save.saveservice.successful=true
jmeter.save.saveservice.thread_name=true
Here is an example of running the jMeter test with XML output and setting a parameter for the host:
jmeter -n -t path_to_jmeter_project/Backend_Performance.jmx -J host=other.someserver.com -l measurement.xml

Integration with Jenkins

We will need the Jenkins performance plugin; install it on your Jenkins: https://wiki.jenkins-ci.org/display/JENKINS/Performance+Plugin

Create a new Jenkins job. Add an Execute Shell build step with the shell command we just discussed. Add a Publish Performance Test result report post-build step to set up the reporting. You can set error thresholds if you like; under heavy load the server will probably start replying with errors.
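
For example, the Execute Shell build step could contain something like this (the jMeter install location is an assumption):

    # make the jmeter binary available on PATH, then run the test
    export PATH=$PATH:/opt/jmeter/bin
    jmeter -n -t path_to_jmeter_project/Backend_Performance.jmx -J host=other.someserver.com -l measurement.xml

Point the Publish Performance Test result report step at the measurement.xml output file afterwards.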


Now you can run the Jenkins job to execute the test.



You can drill down into the results by clicking Performance Trend >> Last report >> Response time trends:

