Tuesday, 31 May 2016

Open source Fortran parallel debugging

If you develop parallel code in Fortran, your options for parallel debuggers are not that many. There are some very good commercial parallel debuggers (mainly TotalView and DDT), and if you are using any decent-size supercomputer to run your code, chances are that these are already installed on the machine.

But from time to time I need to be able to debug code on my Linux workstation while developing new Fortran code. We do have a license for the Intel Fortran Compiler, and previous versions shipped with a graphical debugger (IDB) which was very nice for serial applications, but Intel stopped shipping it around 2013, so I decided to look for an alternative based on GDB.

Before we go for parallel debugging, let's first look at serial code debugging.

Fortran + GDB (serial code)

The issue with GDB is that it doesn't play nicely with Fortran. Let's see an example with the following code:
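The original listing is not reproduced here, but from the discussion below it involved a 3D allocatable array datos and a pointer pdatos to one of its slices. A minimal sketch along those lines (my reconstruction, not the original code):

```fortran
! Sketch of the serial test code: a 3D allocatable array "datos"
! and a 2D pointer "pdatos" aimed at one of its slices, matching
! the subarray expressions datos(1,:,:) and pdatos(1,:) used below.
program test
  implicit none
  real, dimension(:,:,:), allocatable, target :: datos
  real, dimension(:,:), pointer :: pdatos

  allocate(datos(2,3,4))
  call random_number(datos)
  pdatos => datos(1,:,:)

  print *, 'datos(1,2,3) = ', datos(1,2,3)
  print *, 'pdatos(2,3)  = ', pdatos(2,3)   ! a good spot for a breakpoint
end program test
```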

My current setup is:
* Distribution: Linux  4.2.3-200.fc22.x86_64
* gfortran: GNU Fortran (GCC) 5.1.1 20150618 (Red Hat 5.1.1-4)                                        
* gdb: GNU gdb (GDB) Fedora 7.9.1-20.fc22

I'm able to look at the array datos, but I cannot inspect subarrays like datos(1,:,:); the pointer pdatos is OK if viewed in full, but again I cannot inspect subarrays of it, like pdatos(1,:).

So we will need some modified version of gdb that plays nicely with Fortran. One possible solution is to use a gdb obtained from the Archer git repository (http://sourceware.org/gdb/wiki/ArcherBranchManagement), branch archer-jankratochvil-vla, though I haven't used that one and I don't know how well it works with Fortran.

Another solution is to use the modified version of gdb that comes with the Intel compiler: gdb-ia (I'm not sure if one can get gdb-ia as a standalone download, without the need to get an Intel compiler license).

With our current Intel Compiler version (2016.1.150), the versions of ifort and gdb-ia are:

* ifort: ifort (IFORT) 16.0.1 20151021
* gdb-ia: GNU gdb (GDB) 7.8-16.0.558

With these settings, if we compile with the Intel compiler and then debug with gdb-ia, things don't work properly. Access to the array "datos" seems OK, but if we try to access it via the pointer "pdatos" we don't get it to work:

In principle you can access any data if you know your way around pointers, using syntax like

(gdb) p *((real *)my_heap + 2)

(see http://numericalnoob.blogspot.com.es/2012/08/fortran-allocatable-arrays-and-pointers.html for examples and explanations), but this quickly becomes very cumbersome.

But if we compile with gfortran and then use gdb-ia to debug the code, then allocatable arrays, pointers to them, and subarrays of them all seem to work without problems:

Fortran + GDB (parallel code)

So now that we have a working environment for serial code, we need to make the jump to debugging parallel code. GDB is not designed to work in parallel, so we need some workaround to make it a viable platform for parallel debugging.

The usual advice is to run a variant of the following:

mpirun -np 4 xterm -e gdb ./program

so, for example, if we are running our program with 4 processes, then 4 xterms will open, and in each of them we will have a gdb session debugging one of the MPI ranks. The problem with this is, obviously, that we have to go through each xterm to advance through the code, which soon becomes very cumbersome: we have to change from window to window all the time, and all the xterms take up too much screen space.

So I wanted to find a solution that is more convenient (in terms of not having to replicate all the gdb commands in all windows) and also that can make better use of the available screen space.

First attempt

My first attempt involved the many-xterms method above, but with two improvements:

  1. I would use x-tile (http://www.giuspen.com/x-tile/) to automatically tile all the xterms and maximize their use of screen space.
  2. I would use keyboardcast (https://launchpad.net/keyboardcast) in order to control all the xterms from one single application.

This was more or less OK as I was testing this on a PC with Ubuntu, but for other distributions keyboardcast seems to have a lot of dependencies (the source code can be downloaded from archive.ubuntu.com/ubuntu/pool/universe/k/keyboardcast/keyboardcast_0.1.1.orig.tar.gz). Also, I could not use it for remote machines, since keyboardcast only knows about X applications running locally (or at least I couldn't find a way to control terminals launched on a remote server to which I had connected with ssh -X).

Second attempt

So I looked for another solution, one which I could use remotely and which didn't depend on installing packages with many external dependencies. A semi-decent solution that I found was to submit the mpirun job on a remote server, where every process is sent to its own screen session running gdb-ia (screen as in http://linux.die.net/man/1/screen), and then remotely use terminator (https://launchpad.net/terminator/, http://gnometerminator.blogspot.com.es/p/introduction.html) to connect to those running screens. This has the added benefit that I can control all the gdb sessions simultaneously and, thanks to screen, I can even stop debugging on one machine, go to another one and continue debugging where I left off.

So, let's see the details. Let's assume the following simple Fortran+MPI code, a variation on the serial code above:
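The original MPI listing is likewise not reproduced here; a hedged sketch, consistent with the variables mentioned later in the post (my_id, pdatos(1,:)), could be:

```fortran
! Sketch of the Fortran+MPI variation: each rank fills its own copy
! of "datos" with its rank id, so the per-process contents differ
! when inspected in the debugger.
program test_mpi
  use mpi
  implicit none
  integer :: my_id, num_procs, ierr
  real, dimension(:,:,:), allocatable, target :: datos
  real, dimension(:,:), pointer :: pdatos

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, my_id, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, num_procs, ierr)

  allocate(datos(2,3,4))
  datos = real(my_id)
  pdatos => datos(1,:,:)

  print *, 'rank ', my_id, ' pdatos(1,:) = ', pdatos(1,:)

  call MPI_Finalize(ierr)
end program test_mpi
```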

I compile it with gfortran 5.1.1 and its derived mpif90 (with OpenMPI 1.10.2 in this case, though the MPI version should not matter in principle) on the remote server "duna", which is the same FC22 machine where I was doing the serial tests above.

mpif90 -g -o test_mpi_gfortran test_mpi.F90

And (as suggested in https://bfroehle.com/2011/09/14/debugging-mpi-python/), I launch it as:

mpirun -np 4 screen -L -m -D -S mpi env LD_LIBRARY_PATH=$LD_LIBRARY_PATH gdb-ia -tui ./test_mpi_gfortran

(you can add an & at the end if you want to get the terminal back at the remote server, but I prefer it like this, so when I finish the debugging session I can just Ctrl-C this terminal and not leave any leftover processes hanging around).

That line has created 4 screen sessions, and in each one a gdb-ia process will be running. So now it is time to connect to them, which I can easily do from my client workstation (in this particular case running Ubuntu 14.04).

  • I start terminator and create 4 tabs. Then, from the dropdown menu I select "Broadcast all", and then ssh to the remote server (typing it in just one of the tabs replicates the keystrokes to the other tabs, so all four terminals connect to the remote server). 
  • Then we need to connect each of the terminals to one of the screen sessions. 
    • If I use gnome-terminal (as suggested in https://bfroehle.com/2011/09/14/debugging-mpi-python/), then I have the same issue as before: I will not be able to control all of them at the same time. 
    • If from terminator (while "broadcasting all" is still active) I type "screen -RR -p mpi" in one of the terminals, then it looks like all of them connect to the same screen session, which we obviously don't want.
    • For the moment, an ugly hack (let me know if you have a better idea) is to make each of the terminals wait some random seconds, which we can do in bash with:
         sleep $[ ( $RANDOM % 20 ) + 1 ] ; screen -RR -p mpi

This is obviously not very robust, so I should look for a better way, but for the moment it makes sure that each of the terminal tabs connects to a screen session with some interval of time between them, which works most of the time (if, when you start typing, you see a keystroke appear more than once, it means that some terminals tried to connect simultaneously to the same screen session, so you should start over, perhaps with a longer sleep time).
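The random sleep mostly works, but a slightly more deterministic variant (just a sketch of an idea, not something I have battle-tested with real screen sessions) would be to let each tab atomically grab a unique slot number and sleep for that many seconds, so no two tabs try to attach at the same moment:

```shell
# Sketch: each tab atomically grabs a unique slot number by creating a
# directory (mkdir succeeds for exactly one caller and fails for the
# rest), so the first tab gets 0, the next 1, and so on.
grab_slot() {
    local base=${1:-/tmp/mpi-slot} i=0
    while ! mkdir "$base-$i" 2>/dev/null; do
        i=$((i+1))
    done
    echo "$i"
}

# In each terminator tab (with "Broadcast all" active) one would then run:
#   sleep "$(grab_slot)" ; screen -RR -p mpi
```

The slot directories would have to be cleaned up between debugging sessions (e.g. rm -rf /tmp/mpi-slot-*).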

Now, terminator is very powerful, and if you prefer to have detached tabs to see simultaneously what is going on in each process, you can definitely do it. For example, see http://unix.stackexchange.com/questions/89339/how-do-i-run-the-same-linux-command-in-more-than-one-tab-shell-simultaneously for an example of running a grid of 8x4 terminals with terminator.

So now, if you know your way around the TUI interface, you can control all the processes at once, or one by one (by selecting "Broadcast none"), and you will be able to properly inspect allocatable arrays, pointers, etc.

With Emacs + GDB integration

I don't like the TUI interface that much, and I would like to use Emacs GDB mode instead, but this version of gdb-ia doesn't play very nicely with Emacs, and on calling gdb from within Emacs, I get the following error:

~$ Error: you did not specify -i=mi on GDB's command line!

To solve the issue (I've been told that this won't be necessary in future releases of gdb-ia) we need to create a wrapper script (let's call it gdb_wrap.sh):
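The original script is not reproduced here; a minimal sketch follows, under the assumption (mine, so treat it as such) that this gdb-ia build rejects the short "-i=mi" form that Emacs passes but accepts the long "--interpreter" form, so the wrapper just translates the flag before handing everything over to gdb-ia. The GDB_BIN override is a hypothetical addition, handy for testing the wrapper without gdb-ia installed:

```shell
# Create gdb_wrap.sh: rewrite Emacs's "-i=mi" into "--interpreter=mi"
# (assumed translation) and exec the real debugger with all other
# arguments untouched.
cat > gdb_wrap.sh <<'EOF'
#!/bin/bash
args=()
for a in "$@"; do
    if [ "$a" = "-i=mi" ]; then
        args+=("--interpreter=mi")
    else
        args+=("$a")
    fi
done
# GDB_BIN is a hypothetical override for testing; defaults to gdb-ia
exec "${GDB_BIN:-gdb-ia}" "${args[@]}"
EOF
chmod +x gdb_wrap.sh
```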

And now, for the final touch, in the remote server we just define another script (let's call it edbg):
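Again the original is not reproduced; the idea is simply to start Emacs and hand the program over to M-x gdb through the gdb_wrap.sh wrapper. A sketch (the wrapper path and the EMACS_BIN override are my assumptions; adjust to your setup):

```shell
# Create edbg: launch Emacs with a gdb session running the wrapped
# debugger on whatever program is passed as argument.
cat > edbg <<'EOF'
#!/bin/bash
exec "${EMACS_BIN:-emacs}" --eval "(gdb \"gdb_wrap.sh -i=mi $*\")"
EOF
chmod +x edbg
```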

So now in the remote server we can do:
mpirun -np 4 screen -L -m -D -S mpi env LD_LIBRARY_PATH=$LD_LIBRARY_PATH edbg ./test_mpi_gfortran

This will do the same as before, but instead of launching 4 gdb's with the TUI interface at the remote server, we will have four Emacs sessions (one per MPI process), each with its GDB interface (which is quite a usable interface if we run gdb-many-windows).

As an example, you can see a very simple debugging session in the following video. I start a 4-process job with mpirun on the remote server "duna"; then, at my "carro" workstation, I launch terminator with 4 terminals, all controlled at the same time thanks to the "Broadcast all" option. As you can see towards the end, each terminal is running a different process, and when I print "my_id" or the contents of the "pdatos(1,:)" pointer array, each process shows its own contents.

Any comments/suggestions to make the debugging session more comfortable/useful are very welcome.

Sunday, 8 May 2016

Solo Teide climb (from sea level, route 0-4)

Mount Teide is the highest point in Spain (3718m), and although I have climbed it several times, this was my first attempt from sea level.

My original plan was to do it on March 5, 2016, but just two weeks before that date it started to snow very heavily, and by March 5 the roads and paths were still closed, so I had to postpone it, though Teide looked very beautiful covered in snow.

Picture from "La Opinión de Tenerife" (http://www.laopinion.es/multimedia/fotos/sociedad/2016-02-21-41226-nieve-teide-febrero-2016.html)

My second attempt was for April 9, 2016. But the paths were still closed, so I had to postpone it again.

The third attempt was for May 7, 2016. Paths were open, and everything looked fine for that date, but just the day before I read this at the Teide National Park Facebook page:


which basically means that there would be hunters on the paths I had to use, in order to control the mouflon population.

I'm not sure what would scare me more, a hunter shooting near me or a close encounter with a mouflon...

Luckily for me the hunting would be only on Friday (May 6), so I could go ahead on May 7.


This is a long route and my plan was to do it on my own all the way to the peak, then take the cable car down to the main road where my family would pick me up to go back home. So before attempting the climb, there were some preparations to be done:

* In order to climb Mt. Teide all the way to the top, you need a permit, which you can get online at: https://www.reservasparquesnacionales.es/real/parquesnac/usu/html/detalle-actividad-oapn.aspx?ii=6ENG&cen=2&act=1

* It is obviously wise to check the weather. All was looking very good for May 7 (from http://www.meteoexploration.com/forecasts/Teide/?lang=en)

* I didn't want to go down on foot, so it is also wise to check whether the cable car is working, and I even bought a ticket online just in case. This can be done at: https://www.volcanoteide.com/en/teide_cable_car/prices_and_opening_times

* The last cable car on the way down departs the station at 16:45, so I had to make sure I would reach the station before that time. To estimate how long I would need to do the whole route I looked at Wikiloc and based my estimates on these two: http://www.wikiloc.com/wikiloc/view.do?id=5483810, http://www.wikiloc.com/wikiloc/view.do?id=3235182. Based on these routes and my statistics for previous hikes, I estimated I would need about 11 hours to do the whole climb. That meant to start walking at 05:00 to be at the peak at 16:00.

* Blisters have been a serious issue for me in previous hikes, and this was going to be a long one, so I got extra prepared and bought: special socks (two pairs, so I could change the wet ones for dry ones en route), blister prevention cream, blister prevention tape, and band-aids. I didn't want blisters to stop me, so the plan was to stop every two to three hours, inspect my feet for signs of blisters, and try to stop them before they could grow.

* The day before the climb I packed everything, paying special attention to water (5 liters, about 50cl/hour, plus about another liter to drink just before starting).

The kids didn't let me go to sleep until about 22:45 on Friday and, I guess due to anxiety, I was already awake by 03:00 on Saturday. My original plan was to wake up at 04:00, but it was clear I would not be able to sleep any more, so I got out of bed at 03:15. In the end this was very lucky, because breakfast and my anti-blister preparations took longer than I thought. I had to force myself to have a decent breakfast at that time, then I drove to Playa de El Socorro, where the route starts, and I started walking at around 05:10, just a few minutes after the original plan.

The whole route took me just below 11 hours, and the Wikiloc tracking can be seen at http://es.wikiloc.com/wikiloc/view.do?id=13233788:

Powered by Wikiloc

Some pictures taken during the climb:

Time to start...

 By dawn I was at the beautiful "Mirador El Asomadero"

The goal still looking pretty far!

Flowering season (1)

Flowering season (2)

La Fortaleza

The view at around lunch time

Down there it was pretty cloudy, but very sunny up here!

All lava rocks when getting closer to the peak.

And finally the summit!

The cable car on the way down.

If you like the scenery but don't want to do the hard work, you can see the whole route in the following video (also available at: https://youtu.be/bvaJrf7CqT4), made by uploading the GPS track to Google Earth, as explained in a previous post (the mobile phone crashed at least three times during the climb, so the GPS data had some gaps, which show up in the video as jumps, sorry!). Music track: Kostbar, from the album Lux by Afenginn (https://afenginn.bandcamp.com/album/lux)

This is obviously a hard route, and in my case I would have been happier going a bit slower, especially towards the end, when my legs were getting a bit tired, but I had to force myself to walk almost non-stop for the last few hours in order to be at the cable car station on time. 

Overall it was a perfect day: the weather was very good, so I didn't need to worry about rain, and it was not so cold that I needed extra layers of clothing, just a t-shirt and a wind-proof jacket for the coldest parts. My main worries were blisters and boredom (since I did this on my own). All my anti-blister precautions paid off, and surprisingly I had almost no blisters at all (only a couple of small ones on the little toes). Prevention for boredom was provided by my son, who lent me his MP3 player, which I filled with BBC podcasts to keep me entertained. 

A friend is teasing me with going now for the 0-4-0 route (i.e. the same I did here, but then going all the way back to the starting point again). Let's see.... stay tuned!

Monday, 29 June 2015

Stabilizing and slowing down videos

Those of us with a regular camera and no tripod are used to shaky videos. To fight this disease, we can use software to help stabilize them. On my current system, Ubuntu 14.04, the 'transcode' package already has everything you need, and it is extremely easy to get it working. Assuming you have a shaky video, just issue the following two commands:

transcode -J stabilize -i ORIGINAL.MOV
transcode -J transform -i ORIGINAL.MOV -y xvid -o STABILIZED.MOV

The result can be pretty good. As a comparison, see below (left: original; right: after stabilizing).

Another nice thing to do with videos is to slow them down in certain parts, creating interpolated frames between the original ones to allow for smooth transitions. This can be accomplished with 'slowmoVideo'. The user interface lets you select which parts you want faster or slower by changing the slope of the line describing the speed of the rendered video relative to the original one. This is how the GUI looks:

And a demo of the resulting video, where we slow down the part from second 2 to 5 approximately (again left: original; right: modified video).

Thursday, 7 May 2015

GPX hyperlapse

In the last post I described how to create a virtual hiking tour (http://angel-de-vicente.blogspot.com.es/2015/04/creating-visual-tour-of-hiking-tour.html), and then I wondered if I could do something similar, but at street level, taking images from Google Street View. The goal was to take GPS data (either from a route that I had followed myself on foot, by bike, etc., or from data points generated by the driving directions of Google Maps) and turn it into a Street View hyperlapse.

If we want driving directions from Google Maps, one easy way to generate GPS data points in a .GPX file is to:

  • use google maps to generate a route

  • grab the URL generated above in Google Maps and feed it to GPS Visualizer to get a .gpx file with the GPS data points following the given route

If, instead, we want GPS points from a route that we did previously, we can just, for example, download the GPX file directly from Endomondo.

At this point, we can try our luck with sites like http://gpxhyperlapse.com/ or http://alban.atchoum.fr/hyperlapse/, but if we want finer control, we will need to do some extra work.

If we use the .gpx files above as they are, the results will not be very good: there will not be many data points, and/or due to GPS receiver limitations the points can fall outside roads, resulting in shaky and jumpy Street View images. So we can do two things to improve this.

First, we are going to generate more data points by interpolating with GPSBabel (see http://www.gpsbabel.org/htmldoc-development/filter_interpolate.html). For example, to get data points every 5 meters:

gpsbabel -i gpx -f track.gpx -x interpolate,distance=0.005k -o gpx -F newtrack.gpx

Second, we can try to fit those GPS points to a proper road (assuming we were on a bike tour, running, etc. on roads). To do this, we can use the site https://mapmatching.3scale.net/, for which you need to apply for an API key. Once you have it, we can convert our original shaky GPS data with:

curl -X POST -H 'Content-Type: application/gpx+xml' --data-binary @newtrack.gpx "http://test.roadmatching.com/rest/mapmatch/?app_id=YOUR_APP_ID&app_key=YOUR_APP_KEY&output.waypoints=true" -o output.xml 

This works OK with the GPX file from Endomondo, which has time stamps, but it will break (the developer knows about this, so perhaps it will be fixed by the time you try it) for the Google Maps generated GPX file, which doesn't have time stamps. To fix it, we just have to add timestamps to the GPX file before sending it to test.roadmatching.com. So I turn all the trkpt elements from something like:
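The original snippet is not shown here, but a GPX track point without a time stamp looks roughly like this (coordinates hypothetical):

```xml
<trkpt lat="28.4660" lon="-16.2696"></trkpt>
```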

to something like (you can just put the same timestamp for all the trkpt's):
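With a dummy time stamp added it would become something like:

```xml
<trkpt lat="28.4660" lon="-16.2696">
  <time>2015-01-01T00:00:00Z</time>
</trkpt>
```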

The output from TrackMatching hopefully matches the roads much better than the original GPX data, but it comes in an unfamiliar format that I have not been able to convert easily to a GPX file (if you know how to do it with some GPS conversion software, please let me know). I was too lazy to write a script to do the necessary transformations automatically, so instead I used search/replace in my text editor: from the output.xml file I extract all the wpt entries and modify each waypoint of the form wpt y="28.465961" x="-16.269638" to a trkpt element (if you don't know how to do search/replace with regular expressions, this is a good time to learn!):
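For the example coordinates above, each wpt y="28.465961" x="-16.269638" becomes a track point of the form (y maps to lat, x to lon):

```xml
<trkpt lat="28.465961" lon="-16.269638"></trkpt>
```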

Now, in the .gpx file obtained with GPSBabel, delete all the trkpt entries (so there will be an empty section), and in their place put all these new trkpt definitions.

With the help of GPS Visualizer again, you can verify that the new .gpx file matches the roads better than the original one. Just use the options to convert a .gpx file to Google Maps as follows:

As an example, this is how the track looks with the .gpx file directly downloaded from Endomondo:

And this is how it looks after being massaged by TrackMatching:

Now that we have .gpx files with many data points and nicely following roads, it is time to get the Google Street View images to put everything together. The code I used is a minimal variation of the code available at http://pastebin.com/FnsY8QFR (discovered at http://www.cyclechat.net/threads/my-cycling-video-c-o-python-strava-google.135566/). In case the pastebin expires, I uploaded the code to GitHub (https://gist.github.com/cac3a4434c4bd5b756ea.git). You can download it with:

git clone https://gist.github.com/cac3a4434c4bd5b756ea.git

The file gpxhyper.py is still very crude, so you will have to change things by hand. By default it works on a file called input.gpx (either change that or make a symbolic link named input.gpx pointing to the file you want to work with). Next, leave uncommented the appropriate line. For the .gpx file coming from Endomondo:

gpx_trackseg = gpx_file.getroot()[1][3] # For Endomondo .gpx

For the .gpx file coming from Google Maps:

gpx_trackseg = gpx_file.getroot()[3][1] # For GPSBABEL .gpx

You should also have a Google Street View Image API Key (if you don't have one, you can get instructions at https://developers.google.com/maps/documentation/streetview/#api_key). Put it in gpxhyper.py (in place of YOUR_GOOGLE_API_KEY) and execute it as:

python gpxhyper.py

and all the corresponding images will be downloaded from Google Street View.

From these images use your favourite method to create a video. For example:

avconv -r 10 -i %5d.jpeg -b:v 1000k input.mp4

(Downloading the images takes a while, so make sure you calculate the appropriate rates beforehand. I found that around 10 FPS is a good rate (the images are taken at intervals, and if you try a normal video rate of 24 FPS, the video will be too shaky). Assuming the 5 meter interval given to GPSBabel above, this means a virtual speed of 50 m/s, or 180 km/h. Depending on the effect you want to create and the place where the images were taken (for example a city vs. a very open road with nothing nearby), this might be too fast or too slow. You will have to experiment a bit.)

As an example, here it is the result for an Endomondo generated track (a bicycle route):

And here for a .gpx file generated via Google Maps as explained above (for this one, at the beginning there were some 'bad' frames that I removed by hand with Kdenlive video editor):

Thursday, 30 April 2015

Creating a virtual hiking tour

Lately I'm back to hiking and when I do a route I use my mobile phone with a GPS application (at the moment I use Wikiloc) to record the track. Last week I climbed Teide, and Wikiloc offers the possibility of easily embedding the data from their site, which is nice:

But you shouldn't stop there: you can also create a guided tour of the route. Doing so is quite simple, and I list below the steps I took to create the video tour of the Teide climb:

  • First, download the file from Wikiloc in KML format and open it with Google Earth. Then select the route, and you will see an icon in the right part of the left panel that says "Play Tour" (play it a couple of times first to cache all the necessary images). Options to modify the speed, camera angles, etc. are available at Tools::Options::Touring.
  • Then, when we are ready, we just play it and at the same time record it with, for example, Kazam (30 FPS gives pretty good quality).
  • The resulting file is pretty big. To make it smaller we can use WinFF with output as MPEG-4 (very high quality). As an example, this turned a 656MB file into 71MB, which can now be mildly edited with Kdenlive and rendered all together for Web site::YouTube 1280x720. The resulting video is 51MB, and we can upload it directly to YouTube: https://youtu.be/HgzXkAG7XjM 

Thursday, 12 February 2015

Classical guitar progress logging (Feb'15)

With many obvious mistakes... plenty to improve yet, but I want to move on... (This time I did find the webcam, but synchronizing audio and video is really tough sometimes...):

Tuesday, 16 December 2014

Classical guitar progress logging (Dec'14)

After almost a year without playing the guitar. For this one, I couldn't find the webcam, so it's audio only, at SoundCloud: