Monday, October 11, 2010

Spice your tests with a hint of randomness (WIP)

Why should Continuous Integration tests have randomness?


Randomness allows your tests to accumulate more coverage every time they are executed. Test coverage here means coverage over the input space (the allowed input values).

Randomized input data gives your tests a chance to catch bugs that are triggered only by particular input values and that are unlikely to be covered with fixed data.

There is no point in executing the same unit test against the same unchanged code -- yet this is usually what happens when you run your whole test suite again.

And then the math (see the quick check below):
  • let's assume there is a bug that shows up in 10% of the input space (for example when the tested function's integer argument is divisible by 10.. TODO: think of a better example)
  • this means that a single test execution has a 90% probability of not detecting the bug
  • if the test uses randomized inputs and can choose any input, the probability that the test misses the bug in N executions is 0.9^N (so the test will find the bug within 100 executions with a probability greater than 99.99%)
  • if the test doesn't use randomized input, it doesn't matter how many times it is executed - it will either find the bug on the first run or never (in 100 -- or a million -- executions the probability of finding the bug stays at 10%)
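
A quick check of those numbers (a small Python sketch; the 10% bug rate is just the assumption from the list above):

for n in (1, 10, 50, 100):
    # probability that n randomized executions all miss a bug
    # that shows up in 10% of the input space
    miss = 0.9 ** n
    print "N=%3d  P(miss)=%.6f  P(find)=%.4f%%" % (n, miss, (1 - miss) * 100)

With N=100 the probability of finding the bug is about 99.9973%, i.e. greater than 99.99%.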


Problems with randomness



  • Repeatability

  • Readability

  • More work than with simple nonrandom data



How to handle these problems



  • Repeatability - log the seed of the random number generator so that a failing run can be repeated (see the sketch after this list)

  • Readability

  • More work than with simple nonrandom data - one randomized test can, over multiple executions, cover many nonrandom test cases, and data generators can be reused in many tests if they are organized nicely
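
A minimal sketch of the seed logging idea (the names here, like parse_integer, are hypothetical placeholders for the code under test):

import random
import time

def new_seeded_random():
    # Derive a seed and log it. A logged seed lets you re-run
    # a failing test with exactly the same "random" inputs.
    seed = int(time.time() * 1000)
    print "Using random seed: %d" % seed
    return random.Random(seed)

def test_parser_accepts_any_integer():
    rnd = new_seeded_random()
    value = rnd.randint(-10**9, 10**9)
    assert parse_integer(str(value)) == value  # parse_integer is the hypothetical code under test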

Thursday, October 7, 2010

Implementing asynchronous Robot Framework keywords

Before tests can be run I have to start all the necessary systems (processes, servers, stuff and things). Let's imagine that there are multiple systems that need to be started. I would like to start them all at the same time and wait until they are all ready to rock 'n' roll before starting my tests (assume that it would take a lot longer if I started all the systems in sequence).

So how to do this with Robot? The basic idea is to have re-usable keywords for starting each system and a keyword for waiting until the systems are ready so that testing can begin (and we don't have to use ugly and unreliable sleeps). Doing this on the Robot keyword level makes it possible to have different combinations of systems in different tests.

*** Settings ***
Documentation     Example of using parallel things
Suite Setup       StartSystems
Library           SystemStarterLibrary.py

*** Test Cases ***
# Here should be my tests

*** Keywords ***
StartSystems
    ${SYSTEM1_STARTED}=    Asynchronously Start System 1
    ${SYSTEM2_STARTED}=    Asynchronously Start System 2
    Wait until    ${SYSTEM1_STARTED}    ${SYSTEM2_STARTED}




The way I'm going to implement this is with a Python decorator that executes the function it decorates in a separate thread. The decorated function will return the thread object so that it can be used to implement the waiting functionality.

I'm using this little code for the decorator.
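
A run_async decorator along those lines could look like this (my sketch, not necessarily the exact snippet referred to above):

from threading import Thread
from functools import wraps

def run_async(func):
    # Execute the decorated function in a new thread and
    # return the started Thread object to the caller.
    @wraps(func)
    def async_func(*args, **kwargs):
        thread = Thread(target=func, args=args, kwargs=kwargs)
        thread.start()
        return thread
    return async_func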

After I've imported that into my SystemStarterLibrary.py I can implement the system starter functions as normal functions.

@run_async
def asynchronously_start_system_1():
    pass  # .. do stuff to start system 1

@run_async
def asynchronously_start_system_2():
    pass  # .. do stuff to start system 2


Now all I need to do is to implement Wait until.

def wait_until(*stuff):
    for something in stuff:
        something.join()



This is kind of OK, but it will wait forever if starting some system takes forever. So it is better to have a timeout that triggers a setup failure when it expires.

*** Keywords ***
StartSystems
    [Timeout]    5 minutes
    ${SYSTEM1_STARTED}=    Asynchronously Start System 1
    ${SYSTEM2_STARTED}=    Asynchronously Start System 2
    Wait until    ${SYSTEM1_STARTED}    ${SYSTEM2_STARTED}
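
Alternatively the timeout could be enforced at the library level, since Thread.join() accepts a timeout in seconds (a sketch, assuming the threads returned by the starter keywords):

import time

def wait_until_with_timeout(timeout_seconds, *threads):
    # Share the remaining time between the threads and fail
    # loudly if any of them is still running afterwards.
    deadline = time.time() + float(timeout_seconds)
    for thread in threads:
        thread.join(max(0, deadline - time.time()))
        if thread.is_alive():
            raise RuntimeError('System did not start within %s seconds' % timeout_seconds)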


And that should do it.

Wednesday, October 6, 2010

Hello World Robot Framework library

This is the way I did my first Robot Framework keyword library. It should show the basic steps needed to add your own Python keywords.

Let's do it in a test-driven way!

Failing test case

Add a file called HelloWorld.txt. This is our Robot test suite file.

Add the following text to the file:

*** Test Cases ***
HelloWorld
    Hello World



After this, run pybot HelloWorld.txt - this will execute your Hello World test case.
The output should be something like:

==============================================================================
HelloWorld
==============================================================================
HelloWorld | FAIL |
No keyword with name 'Hello World' found.
------------------------------------------------------------------------------
HelloWorld | FAIL |
1 critical test, 0 passed, 1 failed
1 test total, 0 passed, 1 failed
==============================================================================


Now we have a failing test case! So we can begin to implement our super cool Hello World keyword.

Keyword file

Add a file called HelloWorld.py to the same directory as our HelloWorld.txt test suite.

Add the following text to the file:

def hello_world():
    print "HELLO WORLD!"


Now we have implemented our fine keyword that prints "HELLO WORLD!". Our test still fails, though..

Passing test case

We have to import our super cool library into our test suite. Add the following lines to HelloWorld.txt (before the test cases):

*** Settings ***
Library    HelloWorld.py



After this, run pybot HelloWorld.txt - and watch it PASS:
==============================================================================
HelloWorld
==============================================================================
HelloWorld.HelloWorld
==============================================================================
HelloWorld | PASS |
------------------------------------------------------------------------------
HelloWorld.HelloWorld | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
HelloWorld | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================



That's it.
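
From here the library is easy to extend - keywords can also take arguments, which Robot passes in as strings (a hypothetical next step, not part of the recipe above):

def hello(name):
    print "HELLO %s!" % name

This could then be called in a test case as Hello followed by the argument, e.g. Hello    Robot.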

Installing Robot Framework on Ubuntu

Lately I've been learning to use the Robot Framework. These are my notes on how to install it.

Installing pybot - normal robot thing

First install Python if it's not already installed.


sudo apt-get install python


Then install easy_install and robotframework.


sudo apt-get install python-setuptools
sudo easy_install robotframework


After this you should have pybot (the normal robot thing) installed.


pybot --version
== Should output something like ==>
Robot Framework 2.5.4 (Python 2.6.5 on linux2)


Installing jybot - jython version of robot

I'll assume you have done all the pybot installation steps so far (they also installed jybot; all you have to do now is install the correct version of Jython).

First a word of warning: Ubuntu is still (you should check whether this is still the case if you're reading this in the future) shipping an old version of Jython that doesn't work with the current jybot.

So we first have to download Jython 2.5 (or later) from the Jython webpage. Follow Jython's installation instructions.

Add the Jython installation directory (JYTHON_HOME) to your PATH so that the jybot command can find jython.

After this jybot should work.


jybot --version
== Should output something like ==>
Robot Framework 2.5.4 (Jython 2.5.1 on java1.6.0_20)

Sunday, December 27, 2009

Null Pointer Exception

NullPointerException is the most important exception in Java
Most of the exceptions I've encountered during my time as a programmer have been NullPointerExceptions. I never really thought there was anything wrong with that until now.

I recently watched a presentation on InfoQ given by Tony Hoare called Null References: The Billion Dollar Mistake. In the presentation Tony Hoare talks about how he invented the null reference. The presentation got me thinking.

A NullPointerException means that there is a bug in the code. So what's wrong with getting this information as a NullPointerException? Well, in my opinion the information comes too late. It should have been caught at coding time! The programmer shouldn't have been able to make the mistake in the first place.

I know that I'm not the only one trying to get rid of NullPointerExceptions. In Spec#, C# and Java (with the help of the @NotNull annotation) there is a way to declare that a certain value can't be null; in Haskell there is no null.. To me these approaches seem too limiting - they don't really feel like solutions, as they don't give us anything to replace nulls with. But evidently they partly solve the problem by not allowing nulls.

One approach is the Null Object pattern. I think this is an anti-pattern in most cases. Instead of getting that ugly NullPointerException, the program now really thinks there is nothing wrong and handles the empty thing as if it were something - this just seems like doing a lot of unnecessary operations for nothing. When using the Null Object pattern a programmer also introduces a second kind of emptiness, which in the worst case means there have to be two checks: one for the null value and one for the new Null Object.

One interesting construct that I've found (it was mentioned by someone during Hoare's presentation) is Option in Scala. It forces client code to check whether the value is empty. After that the client can safely get the value from the Option container, and all the rest of the code can assume the value is not null.

I think the Scala approach has something to it: you should explicitly state that a value can be empty (instead of explicitly stating that a value shall not be empty).

This can't be done by returning null (null doesn't force client code to check anything). Instead, return something like an Option, or have another method that checks whether the value is empty (and throw an exception when someone tries to get the value while it is empty).
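
To illustrate the idea, here is a rough Python sketch of an Option-like container (my illustration, not Scala's actual API):

class Option(object):
    """Container that makes emptiness explicit at the call site."""

    def __init__(self, value=None):
        self._value = value

    def is_empty(self):
        return self._value is None

    def get(self):
        # Fail fast instead of letting a null-like value
        # leak deeper into the program.
        if self.is_empty():
            raise ValueError('get() called on an empty Option')
        return self._value

def find_user(name, users):
    # Returning an Option forces the caller to handle
    # the "not found" case explicitly.
    return Option(users.get(name))

user = find_user('alice', {'alice': 'Alice A.'})
if not user.is_empty():
    print user.get()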

In my opinion, public method parameters should not be allowed to be null - use method overloading instead (if the language you're working with supports it) and ensure that there are no null parameters. If a method can take a null parameter, it either has to check for it somewhere (so it's really two methods) or it doesn't use the parameter at all (which introduces a way to forward an ugly null to some other part of the source code where it can do more evil).

Saturday, December 26, 2009

When the stack trace just isn't enough

When debugging I can usually find the problem simply by looking at the stack trace or by executing the test case in a debugger. But then there are those cases where the problem isn't so simple.

Here are my tips for these situations:
  1. Don't panic or give up
  2. Make the system fail earlier
  3. Testing testing testing
  4. Find out the exact version where the bug was introduced to the system

Make the system fail earlier

You can most likely spot from the stack trace what went wrong (object foo was null when it shouldn't be, etc.), so you can put assertions into the code that make it fail earlier (for example, check whether foo is set to null whenever someone sets foo). This can also make the bug simpler to reproduce, as there can be cases where things go wrong but from the user's point of view everything looks OK.
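
For example (a hypothetical Python sketch; foo stands in for whatever the stack trace pointed at):

class Thing(object):
    def set_foo(self, foo):
        # Fail at the moment the bad value enters the system,
        # not later when foo is finally used.
        assert foo is not None, 'foo must not be None'
        self._foo = foo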

Testing testing testing

You can reveal facts about a bug that are difficult to detect from the code simply by testing. In my opinion testing is a more objective method than diving directly into the source code (you will always look at the part of the code that you assume is responsible for the bug --- and sometimes, at least, I myself will blame the wrong piece of code). For example, it can be beneficial to figure out test cases that are nearly identical to the one that reveals the bug but that don't reveal it. This can dramatically narrow down the lines of code that could be responsible for the bug in question.

Shortening a test case that reveals a bug is almost always a good idea. A short test case can be executed quickly and it will focus on the components that contain the bug in question.


Find out the exact version where the bug was introduced to the system

With a quickly executable test case, finding the first version where the bug appears should be easy. Once the committed change that introduced the bug is known, there are usually only a few lines of code that could cause it.

This method is great as it can work even in situations where the source code is unfamiliar. Unfortunately there are situations where it fails or doesn't help much: the bug could have been there forever, another bug in the version history can get in your way, developers may have the bad habit of making huge weekly / monthly commits, etc..