Arquillian: Wait until message has been processed by MessageDriven Bean using JMX

I have the following scenario:

There is a queue called AllotmentQueue which is read by a Message Driven Bean (not shown here) that reads the message and calls the AllotmentUpdateService. The AllotmentUpdateService parses the XML message and calls the AllotmentService to update the Allotments in the database. I am using WildFly 8.0.0.Final as application server. I would like to test all of this using Arquillian: I will write a message to the queue, then call the AllotmentService and verify that all data has been inserted correctly:

This is the logic to write the message:
Now I would like to check the result:
These tests do not work reliably. The reason is that the processing happens asynchronously, see the yellow arrows in this picture.

A workaround would be to wait for a while (Thread.sleep), but this is not very elegant: I never know how long the processing will take, because some files are huge and some are not, and additionally the performance of each machine varies. Sleeping for a long time would solve the problem, but my tests would run for a very long time.

A much better solution is to check the content of the queue using JMX. First I need to know the ObjectName of the MBean that controls the AllotmentQueue. I can find it using jvisualvm by connecting to the running WildFly:

In my case the ObjectName is

There are some attributes in the MBean that can tell us details about the queue:

The attribute deliveringCount tells us how many messages are currently being delivered. So after publishing the message, deliveringCount will be 1, and when the processing has finished it will be 0 again. We will extend the test to check the MBean before and after publishing the message:

First we need to read the value of deliveringCount using JMX. Here is a private method for that:
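The original snippet is not reproduced here, but a minimal sketch could look like this. The helper is kept generic: in the real test you would pass a connection to the WildFly MBean server and the queue's ObjectName found via jvisualvm; the names used in the demo main are just standard platform MBeans, not the queue:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class QueueJmx {

    // Reads a numeric attribute (such as deliveringCount) from an MBean.
    // In the Arquillian test, conn would be a connection to the WildFly
    // MBean server and objectName the queue's ObjectName found above.
    static long readLongAttribute(MBeanServerConnection conn,
                                  String objectName,
                                  String attribute) throws Exception {
        Object value = conn.getAttribute(new ObjectName(objectName), attribute);
        return ((Number) value).longValue();
    }

    // Demo against the in-process platform MBean server.
    public static void main(String[] args) throws Exception {
        MBeanServerConnection conn = ManagementFactory.getPlatformMBeanServer();
        long threads = readLongAttribute(conn,
                "java.lang:type=Threading", "ThreadCount");
        System.out.println("Live threads: " + threads);
    }
}
```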
This code is not optimized for performance, but that is fine, because it is test code. Now, as a first step, we will check that deliveringCount is 0:

The @InSequence(1) will ensure that this method is called before all other methods with an @InSequence greater than 1. The publishing method will be @InSequence(2):
Now we need to wait until the message has been processed:
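The waiting code is not shown above; a sketch of such a polling loop might look like the following. It takes the counter as a LongSupplier so it is independent of JMX; in the test, the supplier would call the deliveringCount read shown earlier, and 20 seconds would be passed as the timeout:

```java
import java.util.function.LongSupplier;

public class QueueWait {

    // Polls the counter every 100 ms until it reaches zero,
    // failing with an AssertionError once the timeout has elapsed.
    static void waitUntilZero(LongSupplier counter, long timeoutMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (counter.getAsLong() != 0) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError("message was not processed in time");
            }
            Thread.sleep(100);
        }
    }

    // Demo with a fake counter that counts down from 3.
    public static void main(String[] args) throws InterruptedException {
        long[] remaining = {3};
        waitUntilZero(() -> remaining[0]--, 20_000);
        System.out.println("processed");
    }
}
```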

We will fail after 20 seconds; this value can be adjusted if it is not sufficient, but we will never wait longer than required. Theoretically there might be a race condition where we check that deliveringCount is 0 before it actually jumps to 1, but this has never happened to me. In that case we could wait for, say, 1000 ms at the beginning of the method (yes, asynchronous is asynchronous).

Finally, we will do the actual business logic tests. Note that @InSequence is 10, meaning they will run after the first three, but in no particular order, which is fine for these tests:
As a final result, all tests pass, and the test run takes no longer than the pure execution time:

This solution should work with all application servers and standalone JMS implementations, as long as they provide this information via JMX.


Using Burp for backup of Raspberry Pi

I am currently running two Raspberry Pis, one for Icinga (see here for another blog post about this) and the other for OpenHAB. Experience has shown that the SD cards in a Raspberry Pi can break quickly. It is no problem to reinstall Raspbian, but I spent hours configuring everything, and therefore I want a backup of my configuration. I have chosen "burp" as my backup solution.

Burp consists of two parts:

  • A server that runs once on your local network, ideally on the NAS
  • A client that runs on each machine in the network, sending the backup data to the centrally running server.
I use burp a little differently. I do not want to run any custom software on my NAS, but I still want the data to be stored centrally on the NAS, so the architecture is:
  • The QNAP NAS has a /Backup directory which stores all the backup data
  • This Backup directory is exported via NFS to all machines
  • On each machine there is a burp server and client running that stores the data on the mounted Backup directory.

Create and export the Backup directory via NFS

In this step we will create a Backup directory and export it via NFS to all clients that need it.

Create NFS export on NAS

In my case I want the data to be stored on my QNAP NAS, which can provide a directory via NFS. NFS uses IP-based authentication, so this is how I need to configure it. First I create the backup directory:

and make it available to the two RPi IP addresses:

Mount the backup media

This step needs to be done on all machines that do backups and run burp. In my case I want to back up to an NFS share provided by my NAS, so I add the following line to /etc/fstab:
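The line could look like the following sketch; the NAS address (192.168.1.10) and the export path are placeholders, not my actual setup:

```
# /etc/fstab: mount the NAS backup export at /media/backup
192.168.1.10:/Backup  /media/backup  nfs  defaults  0  0
```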

Now, after running "mount -a" I see the mounted drive with "mount":

If the configuration is wrong, you will see this error message:

In this case, check your configuration.

Install and configure Burp

Burp has more detailed documentation of its own, but for the impatient, here is a short version.

Install Burp

Burp can be installed by simply running "apt-get update" and "apt-get install burp":

We need some dependencies, so additionally run "apt-get install librsync-dev libz-dev libssl-dev uthash-dev" to install the rest of what we need.

Configure the burp server

Edit the file /etc/burp/burp-server.conf to tell the server to do the backups to /media/backup:
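The relevant change is a single line; by default the directory points somewhere under /var, and we point it at the NFS mount instead:

```
# /etc/burp/burp-server.conf (excerpt)
directory = /media/backup
```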

Start the server

Edit the file /etc/default/burp and change the value of RUN=no to RUN=yes:

Now start the service using "service burp start" and verify that it is running:

The server will also automatically start after a reboot.

Configure the client

We need to configure the client in the file "/etc/burp/burp.conf". Change cname to a unique name, in my case "openhab", and add an "include=" line for each directory that needs to be backed up:
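The relevant lines could look like this sketch; the include paths are just examples for an OpenHAB machine, and the server runs locally in this architecture:

```
# /etc/burp/burp.conf (excerpt)
# The server runs on the same machine; the include paths are examples.
server = 127.0.0.1
cname = openhab
password = abcdefgh
include = /etc/openhab
include = /etc/burp
```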


In /etc/burp/burp.conf we tell the client that we are "openhab" and that we have the password "abcdefgh". In order for the server to accept that, we need to copy the file /etc/burp/clientconfdir/testclient to /etc/burp/clientconfdir/openhab, which contains the password for our client "openhab":

Initial start of the client

Simply launch "burp" and there will be some certificate magic which happens only the first time.

On subsequent starts of "burp" there will only be this message:

Note: "burp" implies "burp -a l" which is the command for list existing backups. So calling "burp" anytime will do no harm.

First backup

Run "burp -a b", you will see some lengthy output of the backup process ending with the following screen:

Verify the backup with "burp" (which implies "burp -a l" for listing the backups).

There is now one single backup with the number "0000001". After a few days (on another machine) it looks like this:

Here we can see that 88 backups were done (one daily), each roughly 24 hours apart.

Activate cron for periodic backups

Add the file "/etc/cron.d/burp" and put the following content inside:

This will launch "burp -a b" three times per hour. Note that the server configuration automatically reduces the frequency of backups to one daily. You can change this behavior if you need a higher frequent backup.

Using Burp

Burp now runs and will automatically do daily backups. Of course, burp also allows you to browse historic backups, list their files, and do restores. For example, "burp -a l -b 82" lists all files contained in backup number 82, and "burp -a r ..." allows you to restore files.

Verify the backup location

Last, but not least: take a look at the NFS directory to verify that the backups are really in the right place. I mounted the backup directory on my Mac and can see the following filesystem structure:

You can see the following:
  • The machines "icinga" and "openhab" store their backups in different directories.
  • "icinga" has a directory called "deltas.forward" which contains the historic backups since this machine does backups for some longer time.
  • I am able to access the files directly without any magic, which can be really nice in emergency situations. Some backup solutions work good, but require a running client installation in order to access the backup data.
If you like this solution, please add a comment.


Monitoring the network using Icinga running on Raspberry Pi, Pushover and Nagify

The problem

My family and I live in a house with a lot of technologies and devices, for example
  • Multiple Access Points spread over the house
  • A NAS which is used as storage and backup device (8)
  • A Mac mini acting as "iTunes" content server, mostly for the Apple TV in the living room
  • A SIP based telephony solution which is based on Asterisk and SIPgate (5)
  • A firewall and internet router based on pfSense (4)
  • Multiple Snom M9 telephones
  • An Elgato Netstream Sat, a satellite TV network streaming server that provides TV on the Ethernet (7)
A picture of my 19" rack in the basement with some of the mentioned devices:
The rest of the hardware is:
  • (1) Patch panel that terminates the CAT 7 cables in the house
  • (2) 24-port Gigabit Ethernet switch, the main hub for the internal network
  • (3) An old 24-port Fast-Ethernet switch for the perimeter network, currently not used
  • (4) An Alix 2D13 board in a 1U rack case running pfSense
  • (5) Another Alix board in the same rack case running Askozia, an Asterisk distribution with a nice web front end.
  • (6) A plain old dumb DSL modem, the PPPoE is run on pfSense
  • (9) A Raspberry Pi this blog is about!
Sometimes there are problems like:
  • The balance on SIPgate gets low
  • The connection between Asterisk and SIPgate is stuck and telephony does not work at all.
  • Some of the Access Points in the house stop working correctly, mostly due to heat. This mostly happens in summer.
  • The storage on the NAS is full, so backups of the Laptops stop working.
Most of the problems can be resolved simply:
  • The connection between Asterisk and SIPgate can be fixed by resetting the firewall state tables.
  • The Access Points simply need a reboot or hard reset and continue working for months.
My family members often complained that the telephone did not work. This was simple to fix, but I wanted to monitor and fix such problems before anybody else noticed them. You see, I have to deliver an SLA that goes much beyond standard business demands. So what I needed was a monitoring solution with the following requirements:
  • A monitoring solution that is able to flexibly monitor my heterogenous network
  • The possibility to push notifications to my iPhone and to take a deeper look at the cause.
  • A separate and therefore independent infrastructure for monitoring. I did not want to add this monitoring to any of my existing devices and servers.
  • Cheap: I am not willing to spend a lot of money on hardware, software, or running costs (power)

The solution

To summarize the solution:
  • I have a Raspberry Pi running Raspbian as Operating System
  • It runs Icinga (a Nagios compatible monitoring solution)
  • Notifications are pushed via Pushover to my iPhone, also available for Android.
  • Nagify on the iPhone is a front end for Nagios and Icinga. I am sure there is an alternative for Android. Nagify has some nifty features that I will explain later.

Raspberry Pi

The Raspberry Pi is a nice piece of hardware, because it costs only about 40€ and is a complete computer. It uses an SD card as storage and needs only about 4 W of power. I would have preferred another Alix board, which I have run successfully and without any problems for the last 5 years, but with the rack casing that solution is much more expensive. The Raspberry Pi is not very fast, but absolutely fine for running Icinga (with some exceptions). The setup of the Raspberry Pi is very easy:
  • Buy an SD card with at least 8 GB capacity. I have 32 GB, but this is far more than ever needed.
  • Download and install Raspbian. There is no need to use NOOBS.
  • Enable a DHCP server in the local network (if not already present) and connect the Pi to the local network using an Ethernet cable.
  • Stick in the SD-card
  • Power up the Pi
  • Your DHCP server should be able to show you the IP address of the last assigned IP. Make it static and give it a nice DNS entry like "icinga" or "monitor" or whatever.
  • Restart the Pi to get the new IP address
  • You should be able to log in with "ssh pi@icinga" and the password "raspberry". In my case no password is needed anymore, because I added my public keys for SSH.


Icinga

I evaluated multiple products:
  • Nagios, the grandfather
  • Icinga, a more modern approach
  • Shinken, a very modern approach
  • Zabbix
  • Hyperic
Reasons for Icinga are:
  • It is fully compatible with Nagios, so thousands of existing scripts, e.g. from Nagios Exchange, can be used.
  • There are already packages for Raspbian
  • It has a nicer interface than Nagios
  • It has a large community
  • It is lightweight enough to run on the Pi
The basic steps for setting up Icinga are:
  • Install Icinga, MySQL, IDO2DB and Nagios-Plugins. This page offers a nice description.
  • Learn how to configure Nagios(!) objects like hosts, host groups, services, and service groups. The original Nagios documentation is very good. Understand that there is inheritance in the object definitions, which can save a lot of time. The configuration basically happens by editing the *.cfg files. There are some tools that can help you, but I learned the format and how inheritance works, so I edit the files directly myself.
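Such an object definition with inheritance might look like the following sketch; the host name, address, and template names are illustrative examples, not my actual configuration:

```
# Hypothetical host and service definitions in a *.cfg file.
# "use" pulls in defaults from a template, which is where
# inheritance saves a lot of repetition.
define host {
    use         generic-host
    host_name   nas
    address     192.168.1.10
}

define service {
    use                 generic-service
    host_name           nas
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}
```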
After that you should be able to log in as icingaadmin at http://icinga.yourdomain/icinga/ and see the login screen. Note: Icinga offers a new web frontend called Icinga-Web. It looks nice, but on the Pi it is so slow that it is no fun to use. So stick with the old Icinga frontend, which works well and offers all we need. Here is how mine looks after configuring everything:
The one warning that I currently have is the balance on my SIPgate account:
As final steps you should do the following:

  • Configure a provider for Dynamic DNS like DynDNS
  • Configure HTTPS instead of, or in addition to, HTTP so you are able to access Icinga at https://icinga.yourdomain/icinga/
  • Configure your router/firewall to forward the port from external, so you are able to access Icinga from the internet. You ask why? For Nagify...


Nagify

Nagify is an app for the iPhone that offers basically the same functionality as the original Icinga web frontend. As the name suggests, it was originally developed for Nagios, but it also supports Icinga. If you have configured HTTPS, Dynamic DNS, and the port forwarding, you can enter your external URL in Nagify and check Icinga remotely.

Overview of the hosts

List of warnings

Warning in detail with the possibility to acknowledge it


Pushover

Pushover is a nice service that offers an API for sending notifications, plus apps for iPhone and Android that display those notifications. The web frontend looks like this:
As you can see, I have registered two iPhones as devices and Icinga as an application. Pushover is a perfect solution that allows Icinga to send notifications to the iPhone. The following steps are needed:
  • Take the script notify_by_pushover.sh and put it into your /usr/local/bin directory. I prefer keeping my own data under /usr/local to have a clear separation and easier backups. As you can see, I found this script elsewhere, but I modified it to contain the "ngfy:" URL, which can be used by Pushover, as you will see later.
  • Add these command definitions to your configuration. Be sure to configure your correct application and user ID settings; I xxx-ed mine out to avoid spam :-)
  • Configure the contacts and contact groups to actually use the Pushover commands and script to send notifications.
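A command definition for such a notification script might look roughly like this; treat it as a hypothetical sketch, since the exact command_line arguments depend on the notify_by_pushover.sh variant you use (the $...$ placeholders are standard Nagios/Icinga macros):

```
# Hypothetical command definition for Pushover notifications.
define command {
    command_name    notify-service-by-pushover
    command_line    /usr/local/bin/notify_by_pushover.sh "$NOTIFICATIONTYPE$" "$HOSTNAME$" "$SERVICEDESC$" "$SERVICESTATE$" "$SERVICEOUTPUT$"
}
```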
If everything is configured correctly, a notification looks like this:
In detail:
As you can see, the modified script contains a URL that starts with ngfy:. This URL scheme is defined by Nagify, so tapping on the link brings you directly to the service screen in Nagify:
This nice URL feature is described on the Nagify homepage under "FAQ" and "Launch by URL".


Conclusion

I found the solution I was looking for. It is inexpensive and offers everything I needed. Since I activated it, I have received some notifications and was able to fix the underlying problems quickly. There are some open points I have not mentioned so far:
  • There is no monitoring of Icinga itself. A solution would be a secondary instance of Icinga just for monitoring the first one. I ordered a rack mount for my Pi that offers space for two, so maybe the second slot gets a Pi for exactly this.
  • Pushover uses Apple's Push Notifications, and Apple recommends not using this type of notification for business-critical messages, because Apple cannot guarantee that they will be delivered. I read into this, and there are circumstances where messages can be lost, but for this use case the reliability is absolutely sufficient.
  • The Icinga installation, and especially the object configuration, needs to be backed up. I can imagine that running a Unix on an SD card in read-write mode wears it out quickly, so there should be an easy way to set up Icinga again on a new SD card if the current one wears out. I found a nice solution with my NAS. If you are interested, contact me and I will write a blog post about it.
  • The notifications themselves require that the internet connection is available. If it drops, the notifications will not get through. A solution would be some device or old GSM phone that could send an SMS in such a case (or even instead of Pushover). I would take this into consideration if the internet line failed often, but it doesn't.
I did not write much about the scripts that do the actual monitoring. Tell me if you want to know more.