Friday, 30 April 2010

FlightGear on a test 2x3 display wall.

FlightGear's support for multiple computers and multiple cameras is impressive. As a test, I managed to run FlightGear on a 2x3 display wall. One of the rows shows the view from the cockpit, while the other row shows views of the plane from outside at different angles. This was pretty easy to set up, although I didn't bother to align the monitors or the cameras very carefully. The setup runs on 3 PCs, each controlling two monitors (one column) with the NVidia drivers and TwinView. Hopefully I will post the details next week...

Sunday, 25 April 2010

2x2 Demo Tiled Display Wall

I wrote in a previous post about my experience with the Viz roll and Rocks Cluster for building a tiled display wall. Since that looked like quite a lot of work, I decided to start afresh and build a tiled display wall from "scratch" (i.e. without the help of the Viz roll), so that I could better control the whole process. During this week I hope to do a write-up of what I did, but for the moment I have recorded a mini-demo of the wall, which can be seen on Vimeo.

Monday, 12 April 2010

Adding Virtual Linux boxes (running in a Windows host) to a Condor Pool

As the manager of a Condor pool, I've decided to increase the number of CPUs available to Condor. Until now Condor at our site ran only on Linux machines, but there are also quite a lot of Windows boxes around which could be useful. Our users develop code on Linux, though, so my goal was to provide the end users with more Linux machines (even if these would be virtual machines running inside Windows hosts).

Condor provides a Virtual Machine universe, which would let us run a Linux virtual machine inside a Windows host, but on Windows the support is limited to VMware. I'm not sure whether we could use the free VMware Player to avoid licensing costs, but for the moment I have tried a different route: POVB.

POVB stands for Pools of Virtual Boxes and the project "is focused on creating Linux-based VirtualBox virtual machines to deploy Condor pools in an Windows environment".

The code can be downloaded from SourceForge, and installation on a single machine is very easy. I'm now running version 1.4.3 on a Windows XP (Service Pack 3) PC (AMD Athlon 64 X2 Dual Core Processor 6000+ at 3.01 GHz, with 3.25 GB of RAM), and these notes reflect that version. To install POVB on a single machine you just need to run the script INSTALL.BAT inside the povb-1.4.3 folder. This installs VirtualBox, a CentOS virtual machine, and the Windows services needed to get everything running. The script takes some time, since it downloads quite a lot of stuff from the Internet, so patience helps. When all is done, you will see a povb directory in C:\

Before rebooting the computer, you need to change some basic settings in the file C:\povb\condor_status\personal_config.txt (I personalized DOMAIN, CM_FULLNAME, CM_SHORTNAME and CM_IPADDRESS, and didn't touch the rest).

Now, after rebooting the machine, everything necessary will be started automatically. By default you get a 32-bit machine running CentOS, although you can create your own VM and modify it to your heart's content (as we will see later).

To verify that everything is working correctly, you can first check on the Windows host (with the Windows Task Manager) that the process VBoxHeadless.exe is running (with a previous version of POVB I got stuck here due to a problem with correctly detecting the number of CPUs in my PC). If it is not running, you can start VirtualBox manually and try to start the povb VM to figure out possible errors.

Assuming that VBoxHeadless.exe is running on the Windows PC, you should then check that the VM got registered with the Condor pool. The machine's name is worker with (part of) the MAC address appended. For instance, for my newly added VM, I get:

angelv@vaso:/etc/condor$ condor_status | grep -i worker
slot1@worker_EEFF0 LINUX      INTEL  Unclaimed Idle     1.000   821  0+00:00:04
slot2@worker_EEFF0 LINUX      INTEL  Unclaimed Idle     0.800   821  0+00:00:05

The name of the actual machine is: 

angelv@vaso:/etc/condor$ condor_status -l | grep -i worker | grep -i machine
Machine = ""

So, the last step is just to verify that the VM can actually run jobs. Once you have found the name of the machine, you can check its attributes. In particular we are interested in HOSTINFO_HostOsLoad and HOSTINFO_POVBLoad, since these were problematic in my case. You can check whether they show up with the following command:

angelv@vaso:/etc/condor$ condor_status -l | grep -i hostinfo
CpuBusy = ((HOSTINFO_HostOsLoad - HOSTINFO_POVBLoad) >= 0.500000)
Start = ((HOSTINFO_HostOsKeyboardIdle > 15 * 60) && (((HOSTINFO_HostOsLoad - HOSTINFO_POVBLoad) <= 0.500000) || (State != "Unclaimed" && State != "Owner")))
HOSTINFO_HostOsLoad = 0.010000
HOSTINFO_POVBLoad = 0.010000
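
In words, the Start expression above accepts jobs only when the host keyboard has been idle for more than 15 minutes and the host's "own" load (total host load minus the part caused by POVB itself) stays at or below 0.5, unless the slot is already claimed. As a rough illustration of the idle/load part of that policy (sample values of my own, not real Condor output), you could evaluate it like this:

```shell
host_os_load=0.65   # HOSTINFO_HostOsLoad   (sample value)
povb_load=0.60      # HOSTINFO_POVBLoad     (sample value)
keyboard_idle=1200  # HOSTINFO_HostOsKeyboardIdle, in seconds (sample value)

# Print START when idle > 15 min and (host load - POVB load) <= 0.5
awk -v l="$host_os_load" -v p="$povb_load" -v k="$keyboard_idle" \
  'BEGIN { print ((k > 15*60 && (l - p) <= 0.5) ? "START" : "NOSTART") }'
# -> START (1200 s idle, and 0.65 - 0.60 = 0.05 <= 0.5)
```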

If you cannot see them, you might have hit the same problem I did, which looks like it is related to regional settings. If you open the file C:\povb\condor_status\machine_stats.txt and HostOsLoad and POVBLoad are written with a decimal comma (e.g. 0,04), then you have the same problem. The developers of POVB are aware of it, but until they have a chance to fix it, the following workaround did the trick.

In Windows, stop the POVB service, open VirtualBox, change the virtual hard disk povb_primary_hd.vdi from "Immutable" to "Normal", and start the povb VM manually (the default root password is YouReallyNeedToChangeMe!). Once it starts, you can switch to the condor user (su - condor), where you will find all the Condor files. Its home directory, /home/condor/, contains the Condor software together with the configuration files, logs, etc. The main configuration file is /home/condor/etc/condor_config, with a secondary config file in /home/condor/condor_config_local. Logs and the execute and spool directories are located in /home/condor/local.localhost.

Of particular interest here is the script that generates the machine statistics. If you run it and HostOsLoad and POVBLoad come out with decimal commas, you can easily solve the problem by renaming the script and creating in its place a new wrapper:

$ cat
/home/condor/ | sed 's/,/./' -
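
The wrapper's job is simply to pipe the original script's output through sed, converting decimal commas to decimal points. As a standalone illustration of the sed filter (not the actual POVB script):

```shell
# A line with a decimal comma, as produced under the affected
# regional settings:
echo "HostOsLoad = 0,04" | sed 's/,/./'
# -> HostOsLoad = 0.04
```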

Once this is in place, HOSTINFO_HostOsLoad and HOSTINFO_POVBLoad will start appearing in the VM information reported by condor_status, and you will be able to use these VMs as regular Linux PCs in your Condor pool.

If you had to change the file, you can then substitute the povb_primary_hd.vdi file that comes with the POVB distribution with the modified one in C:\povb (just in case, copy it while VirtualBox is not running; you can stop the POVB service via the Control Panel).

Another issue in our setting is that I only want to run the Linux VMs after hours: even if the VM is not being used by Condor, VirtualBox can consume quite a lot of RAM, and I don't want our users to notice it. For this I just created two scripts in C:\povb, one with net start povb_service and the other with net stop povb_service, and scheduled them according to our needs, so that the POVB service is not running during working hours.
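
The two scripts are trivial one-liners; something like the following (the file names are my own choice, and the scheduling itself is done with the Windows Task Scheduler, e.g. via the Control Panel on XP):

```
REM C:\povb\povb_start.bat -- scheduled to run after working hours
net start povb_service

REM C:\povb\povb_stop.bat -- scheduled to run before working hours
net stop povb_service
```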

With this in place, I have started spreading Linux VMs to a few Windows test PCs. If all goes well, the next step will be to create my own VMs. For this, there is a guide in:

Skype in Ubuntu Karmic Koala (9.10)

It had been some time since I last tried to install Skype in Ubuntu. On my workstation I had no problems at all, following the instructions on the following page

* First install the required 32-bit libraries:
sudo apt-get install ia32-libs lib32asound2 libqt4-core libqt4-gui
* Then download and install the current Skype .deb package from the Skype website:
wget -O skype_ubuntu-current_amd64.deb
sudo dpkg -i skype_ubuntu-current_amd64.deb

I will now have to try on the laptop and the netbook to see if they give me any trouble...

Wednesday, 7 April 2010

Hack to get Chromium working with Rocks 5.3

Rocks is "an open-source Linux cluster distribution that enables end users to easily build computational clusters, grid endpoints and visualization tiled-display walls".

Recently I've been using it to set up a tiled-display wall, but found that it was not working as expected. Some software that is supposed to come with SAGE is missing, and Chromium does not work except for the simplest of applications (I only managed to get glxgears working). A fix to the Viz Roll is coming, but in the meantime I figured out how to get things working by installing my own version of Chromium. Since I've been asked to share this, I'm putting these quick notes here (please let me know if something is not working or missing, so I can improve them) in case they are of interest to anyone, but remember that this is just a quick hack. Hopefully a patched version of the Viz roll will be available soon.

For my setting I have four monitors and two nodes.
I created the file layout.xml:


With this, in simple mode I got glxgears working OK, but when I tried to run anything else (I tried Stellarium), I got messages like:

CR Warning(tile-0-0:18331): __tcpip_read_exact() error: Bad address
CR Warning(tile-0-0:18331): Bad juju: 10485760 197100 on socket 0xc
CR Warning(tile-0-0:18331): CRServer: Last client disconnected - exiting.

Hence this hack... The Viz roll is supposed to work in mode=simple for Chromium and mode=meta for SAGE. I didn't want to have to switch modes, so I did the following to get Chromium working in mode=meta.

[root@vaiven ~]# rocks remove tile
[root@vaiven ~]# rocks add tile layout layout.xml
[root@vaiven ~]# rocks sync tile mode=meta

I downloaded Google Earth (a 32-bit application), installed its library dependencies, and made sure it worked OK with Chromium disabled. Then it was a matter of downloading Chromium and compiling both the 32-bit and the 64-bit versions (I put copies in my home directory, at /home/angelv/cr-1.9 and /home/angelv/cr-1.9-32).

Then I created two scripts, to enable and disable my Chromium:

[angelv@vaiven ~]$ cat enable_cr
cd /home/angelv/cr-1.9/lib/Linux
ln -s
ln -s

cd /home/angelv/cr-1.9-32/lib/Linux
ln -s
ln -s

[angelv@vaiven ~]$ cat disable_cr
cd /home/angelv/cr-1.9/lib/Linux

cd /home/angelv/cr-1.9-32/lib/Linux
[angelv@vaiven ~]$

and added PATH and LD_LIBRARY_PATH stuff to my .bashrc:

export PATH="/home/angelv/cr-1.9/bin/Linux:$PATH"
export LD_LIBRARY_PATH="/home/angelv/cr-1.9/lib/Linux:/home/angelv/cr-1.9-32/lib/Linux:$LD_LIBRARY_PATH"

I also had to change the .crconfigs file in my home directory, so that it picks up my own Chromium configuration file:

[angelv@vaiven ~]$ cat .crconfigs
old-rocks* /opt/rocks/bin/rocks start chromium %p %m
* /home/angelv/first_angel.conf %p
[angelv@vaiven ~]$

And I created the file /home/angelv/first_angel.conf as shown below. Note that this file has a hard-coded configuration for my two compute nodes in meta mode, and that the node.AutoStart lines have been changed, since for some reason launching the crserver directly via ssh gave the errors mentioned above. In its place I created a basic wrapper script, which is also included below.

[angelv@vaiven ~]$ cat first_angel.conf
import sys
from mothership import *


appnode = CRApplicationNode('')
tilesortspu = SPU('tilesort')
appnode.Conf( 'show_cursor', 1 )
cr = CR()

renderspu = SPU('render')
renderspu.Conf('fullscreen', 1)
#renderspu.Conf('window_geometry', [0, 0, 512, 512])
renderspu.Conf( 'show_cursor', 1 )

node = CRNetworkNode('tile-0-0.local')
node.AddTile(0, 0, TILE_WIDTH, TILE_HEIGHT)
#node.AutoStart( ["/usr/bin/ssh","tile-0-0", "DISPLAY=:0.0 /bin/sh -c
'/home/angelv/cr-1.9/bin/Linux/crserver -mothership vaiven:10000 -port 7000'"] )
node.AutoStart( ["/usr/bin/ssh",'-x',"tile-0-0", "/home/angelv/
0.0 crserver vaiven 10000 7000"] )

tilesortspu.AddServer(node, protocol='tcpip', port=7000)


renderspu = SPU('render')
renderspu.Conf('fullscreen', 1)
#renderspu.Conf('window_geometry', [0, 0, 512, 512])
renderspu.Conf( 'show_cursor', 1 )

node = CRNetworkNode('tile-0-1.local')
node.AddTile(2560, 0, TILE_WIDTH, TILE_HEIGHT)
#node.AutoStart( ["/usr/bin/ssh","tile-0-1", "DISPLAY=:0.0 /bin/sh -c
'/home/angelv/cr-1.9/bin/Linux/crserver -mothership vaiven:10000 -port 7000'"] )
node.AutoStart( ["/usr/bin/ssh",'-x',"tile-0-1", "/home/angelv/
0.0 crserver vaiven 10000 7000"] )
#node.AutoStart( ["/usr/bin/ssh",'-x',"tile-0-1", " 0.0
/home/angelv/cr-1.9/bin/Linux/crserver vaiven 10000"] )

tilesortspu.AddServer(node, protocol='tcpip', port=7000)

demo = sys.argv[1]


[angelv@vaiven ~]$

[angelv@vaiven ~]$ cat
export DISPLAY=:$1
$2 -mothership $3:$4 -port $5

[angelv@vaiven ~]$

With these modifications, after running enable_cr, Google Earth (32-bit), atlantis (64-bit) and Stellarium (64-bit) all worked fine.

These were quick notes taken while I was doing the modifications, so most probably something is missing. If you find anything missing, please let me know. For the time being I'm trying another software stack for the display wall, so I'm not planning to improve these notes, but I might come back to the Viz Roll in the near future...

I've got a plan (a swim plan)

I love swimming, but motivation sometimes dwindles. To keep going there is nothing like a good coach, but at my age and my swimming level there is no chance of that. Alternatively, one can get personalized swim plans that make the lap routine much more enjoyable. I've been using SwimPlan for a couple of months now, and I can really recommend it. It keeps you in good habits with warm-up and cool-down exercises, and the plans provide a wide variety of exercises, which makes the pool time more productive and enjoyable.

Getting ready to get back to formal classical guitar education

As a way to impose some routine on my classical guitar learning, I have decided to go back to formal music education, and I'm getting ready for the entrance exams at our local Conservatory, which will take place sometime in June. I'm hoping to get into the 3rd year (if it were only about playing the guitar I think I could perhaps try for the 4th year, but that would involve a music harmony exam as well, and I'm not ready for that just yet...). Now, where do I find the time to practice????