XFCE moving icons around when changing monitor sizes

XFCE saves desktop icon positions in a file whose name depends on the desktop size. This means that icon positions are not preserved when switching from laptop-only to laptop + external screen.

Now I run this at the beginning of my XFCE session. Whenever a new positions file is created, I move it back to the default filename (which doesn't depend on the desktop size).

    #!/usr/bin/perl -w
    use strict;

    my $dir = "/home/bgoglin/.config/xfce4/desktop/";

    chdir $dir
      or die "Failed to chdir to $dir: $!\n";

    # Watch the directory for files being renamed into place
    open WATCH, "inotifywait -m -e moved_to . |"
      or die "Failed to run inotifywait: $!\n";

    while (my $line = <WATCH>) {
      next unless $line =~ m/(.*) MOVED_TO (.*)/;
      my $file = $2;
      next if $file =~ m/\.new$/;
      next unless $file =~ m/icons\.screen0-.*\.rc/;
      # Keep a backup of the previous default file, then promote
      # the size-specific file to the default name
      unlink "icons.screen0.rc.bak";
      rename "icons.screen0.rc", "icons.screen0.rc.bak";
      print "Moving $file to icons.screen0.rc\n";
      rename $file, "icons.screen0.rc";
    }
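
To have the watcher start with every XFCE session, one option is an XDG autostart entry. A minimal sketch, assuming the script above is saved as ~/bin/xfce-icons-watch.pl (the name and path are hypothetical):

```shell
# Create an XDG autostart entry that launches the watcher at session start.
# XDG_CONFIG_HOME defaults to ~/.config when unset.
autostart_dir="${XDG_CONFIG_HOME:-$HOME/.config}/autostart"
mkdir -p "$autostart_dir"
cat > "$autostart_dir/xfce-icons-watch.desktop" << 'EOF'
[Desktop Entry]
Type=Application
Name=XFCE icon position watcher
Exec=/home/bgoglin/bin/xfce-icons-watch.pl
EOF
```

Alternatively, XFCE's own Session and Startup settings dialog can add the same entry.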

Need a GPS? Avoid TomTom, it's a rip-off

I am the happy owner of a car with a built-in GPS. While it's convenient at first glance (notably the joystick on the center console), things get much more complicated once you dig a little.

First of all, the TomTom map is only updated every 3 months, you have to pay for each update, and it's actually never really up to date! How many times do you end up in the middle of a field because TomTom doesn't know about roadwork done a year ago... Every website offering maps on the Internet manages to keep up, but not TomTom.
Then, their famous Live service works well, provided you wait 5 minutes before setting off. It is so slow fetching traffic information that you're already stuck in a jam by the time it warns you. Not practical at all in the city. And that's without mentioning the many times Live doesn't respond at all.
Until recently, it offered a Google search to find just about anything. All the customers who pay for this service recently saw it replaced by TomTom Places, which is just incredibly useless. In the end, you search on your smartphone and then type the result into the GPS. Great... If you have to use the smartphone anyway, you might as well run a GPS app on it...
In short, you wonder what the point is of paying for their Live service or for map updates.

Worse, their software distribution model is terrible. There is no way to roll back to an earlier version when the latest one is buggy. And when the latest one is buggy to the point of permanently bricking your GPS (many users recently had to go to their car dealer for repairs), they leave it online without warning that it may brick your GPS too. Between late November 2012 and mid-January 2013, hundreds of customers got caught even though TomTom knew in which cases the bug occurred. But they refused to say so on the page where the updates are downloaded. Well done.
And don't bother asking for compensation. They consider themselves not responsible for the repair delays supposedly imposed by our car dealers. Except that the dealers don't know how to repair these units (of course: a car dealer repairs cars, not GPS units) and TomTom gives them no information, so it takes weeks. In short, a month and a half with a bricked GPS and a customer service that ignores you. Nice.

When a free GPS app like Waze on a smartphone has a more up-to-date map and equally good traffic information, there is no reason to give TomTom a single euro.

Remote Console Access with IPMI on Dell R710

Our local servers are moving from the Dell PowerEdge 2950 to the R710. A couple of years ago, I wrote a guide for Remote Console Access through IPMI 2.0 on the 2950. Some notable changes are needed for the R710, so here's an updated guide. I also added some notes about the R815 and R720 at the end, since they are very similar.

You should first choose a new sub-network for IPMI. Although the IPMI network traffic goes through a regular physical interface, it has a different MAC address and should use a different IP address. If your boxes have 10.0.0.x regular IP addresses, you may for instance use 10.0.99.x for IPMI. Adding corresponding hostnames (for instance xxx-ipmi for host xxx) to your DNS or /etc/hosts file might be a good idea too.
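
For instance, the /etc/hosts entries for one node might look like this (the hostnames and addresses are just examples):

```
# /etc/hosts (excerpt): the IPMI address alongside the regular one
10.0.0.12    node12
10.0.99.12   node12-ipmi
```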

At the end of the BIOS boot, press Ctrl-e to enter the Remote Access Setup and enable actual IPMI Remote Access (note that some models can also be configured from Linux using ipmitool after loading some ipmi kernel modules).

  • Set IPMI over LAN to on (requires iDRAC6 LAN)
  • Enter the LAN parameters menu:
    • Keep NIC Selection on Shared
    • Set IPv4 Address source to Static
    • Set IPv4 Address to 10.0.99.x
    • Set Subnet Mask to 255.255.255.0 and set Default Gateway if needed
    • You may set Host Name string to something like xxx-ipmi but it does not seem that useful anyway
  • Enter the LAN User Configuration menu:
    • Set Account User Name to some login
    • Enter a password in Enter Password and again below in Confirm Password
  • By the way, while you're there, you may enter LCD Configuration and set your user-defined string in LCD Line 1

IPMI is now configured correctly. You should be able to ping the IPMI IP addresses from the master node (assuming you properly enabled the 10.0.99.x network there).

    $ ping 10.0.99.x

Now, you may for instance reboot a node using the following line. Replace cycle with status to query the power state, with off to shut the node down, or with on to power it up.

    $ ipmitool -I lan -H 10.0.99.x -U login -P passwd chassis power cycle
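
If you manage several nodes, a tiny shell wrapper saves some typing. A minimal sketch (the login/passwd credentials are placeholders, and the leading echo makes it a dry run; remove the echo to actually execute the command):

```shell
# Build the ipmitool chassis power command line for one node.
# The leading "echo" turns this into a dry run that prints the command.
ipmi_power() {
    node="$1"    # IPMI IP address or hostname, e.g. 10.0.99.12
    action="$2"  # status | on | off | cycle
    echo ipmitool -I lan -H "$node" -U login -P passwd chassis power "$action"
}

ipmi_power 10.0.99.12 status
```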

Now we need to configure console redirection, which makes it possible to send the BIOS, GRUB, and kernel output over the network through IPMI. Note that the second serial port should be used, so you will usually use COM2/ttyS1. After booting, press F2 to enter the BIOS and go into the Serial Communication menu:

  • Set Serial Communication to On with Console Redirection via COM2
  • Keep External Port Address at Serial Device1=COM1,Serial Device2=COM2 (some models enforce Device2 for the serial redirection)
  • Set External Serial Connector to Serial Device1 (some models don't allow using the same device here and for console redirection)
  • Keep Failsafe Baud Rate at 115200
  • Keep Remote Terminal Type at VT100/VT220
  • Set Redirection After Boot to Enabled

With this configuration, you should see the BIOS and GRUB output remotely using:

    $ ipmitool -I lanplus -H 10.0.99.x -U login -P password sol activate

Then we want to see the kernel booting remotely. This is done by adding the following to the kernel command line:

    console=tty0 console=ttyS1,115200n8

With GRUB2 on Debian, you should open /etc/default/grub and add these options to GRUB_CMDLINE_LINUX. By the way, you probably want to uncomment GRUB_TERMINAL=console and remove the quiet option nearby. Everything will be propagated to /boot/grub/grub.cfg when running update-grub.
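
Concretely, the relevant part of /etc/default/grub would look like this (a sketch; merge these with whatever your file already contains):

```shell
# /etc/default/grub (excerpt)
# Send kernel messages to both the local console and the IPMI serial port.
# 'quiet' is removed so boot messages actually show up on the redirected console.
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS1,115200n8"
# Make GRUB itself use the text console
GRUB_TERMINAL=console
```

Remember to run update-grub afterwards so the changes reach /boot/grub/grub.cfg.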

And finally, you might want to get a console login remotely through IPMI. To do so, add the following line to /etc/inittab:

    T0:23:respawn:/sbin/getty -L ttyS1 115200n8 vt100

With all this set up, the above ipmitool sol activate line will display the same thing as the physical console on the machine, which makes it very convenient to configure the BIOS, change the kernel, debug, ... Note that ~ is the control character when using the console redirection, and ~. may be used to leave the console. Also, ipmitool sol deactivate may help if somebody did not leave the console correctly.

Update for R815 (2012/05/30): The configuration for the R815 is very similar. I ran into some stricter constraints on serial device configuration in the BIOS, but everything is already explained above.

Update for R720 (2012/05/31): On recent PowerEdge models, the IPMI config is directly available in the BIOS setup menus, no need to hit Ctrl-e during boot anymore. Just go in the BIOS with F2 as usual, then enter the iDRAC config. The following menus are similar to those described above. The other difference is that the R720 doesn't seem to work well with the IPMI lan interface. Always passing lanplus instead of lan to ipmitool -I seems to work fine.

Encrypting part of /home

I am preparing my switch to a new laptop at work in the next few weeks. I am considering encrypting part of the hard drive, but I don't want to dramatically decrease performance. Encrypting the swap device or some .foo directories in $HOME looks like a good idea to protect private keys, keyrings, ... But encrypting git clones of large projects is probably useless.

So I am thinking of just having a small /home encrypted partition (a couple GB). I'd keep .foo directories in $HOME and only have symlinks to another non-encrypted partition where all my actual source code and other non-sensitive files would be.

Does this make any sense?

Debian/X.org notes - Radeon KMS in unstable, enabled by default

Now that we have DRM from 2.6.33 in the latest 2.6.32 kernel in unstable, I just uploaded Radeon KMS and DRI2 to unstable. xserver-xorg-video-radeon 1:6.12.192-2 even enables KMS by default. Please test it.

In case of problems, you may for instance disable KMS by changing modeset to 0 in /etc/modprobe.d/radeon-kms.conf. You may also downgrade to testing where xserver-xorg-video-radeon 1:6.12.6-1 does not enable/support KMS.

Make sure you run linux-image-2.6.32-4-$arch or later so that you actually have DRM from 2.6.33 and the radeon kernel module gets loaded early by udev. Otherwise, you may experience problems like this. You may need to add radeon to /etc/modules as a temporary fix.
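
The temporary fix amounts to two one-line configuration entries (Debian paths; a sketch):

```
# /etc/modules: add this line so the radeon module is loaded early at boot
radeon

# /etc/modprobe.d/radeon-kms.conf: keep kernel modesetting enabled
options radeon modeset=1
```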

Debian/X.org notes - Bug triaging while waiting for DRM 2.6.33

Almost nothing interesting happened recently in X.org in Debian. But interesting things are coming soon.


First, radeon KMS and DRI2 will enter unstable soon. xserver-xorg-video-radeon 1:6.12.191-1 is currently in experimental. People seem to be happy with it so far, and upstream is taking very good care of bug reports as usual.

The next 2.6.32 kernel will contain DRM from 2.6.33, which first means that the radeon KMS driver is not in staging anymore. Once this new kernel is uploaded, I'll put the new xserver-xorg-video-radeon in unstable (6.13.0 is expected soon, but 6.12.191 already looks good so far).

DRM from 2.6.33 will also bring nouveau support. This means that we will build libdrm-nouveau and upload a new xserver-xorg-video-nouveau. However, it also means that we need somebody to maintain it. And nobody in the team has an nvidia board to test packages, so... If you want nouveau in Debian, please help.


While waiting for all this, we have been triaging the BTS a bit. Kibi is helping a lot by triaging recent intel bugs (many regressions fixed in recent kernels). I spent some time during the week-end triaging old bugs. I closed more than a hundred of them, and pinged another hundred. We still have more than 1100 bugs open. That is not so bad compared to the 1500-2000 we reach when nobody maintains X (i.e. quite often), but still way too many.

Some of my bug closing might look a bit rude. But many bug reports from a couple of years ago are simply irrelevant today, and keeping them open would be meaningless. For instance, many input problems are obsolete since a lot of the input code was rewritten, we switched to input-hotplug, and then from hal to udev. Another example is intel lockups (we had a lot of them after driver 2.2 arrived). But XAA and EXA were dropped in favor of UXA, DRI1 was dropped for DRI2, and KMS arrived. So it's useless to keep these obsolete and irrelevant bugs that cannot be debugged nowadays.


As usual, the Debian X team needs a lot of help. Again, if you want nouveau in Debian, please help.

Debian/X.org notes - Radeon KMS and DRI2 in experimental

Now that libdrm-radeon1 is in unstable, I just uploaded a snapshot of the radeon driver to experimental (xserver-xorg-video-radeon package version 1:6.12.99+git20100201.a887818f-1). So you may now get KMS and DRI2 working, assuming you have a recent kernel (I am running 2.6.32-trunk-686 here). This driver even contains some early support for r8xx boards.

To check whether KMS is working, look for radeon kernel modesetting in dmesg. To check whether DRI2 is working, look for DRI2 in /var/log/Xorg.0.log.

Make sure the radeon kernel module is loaded early (which means: don't let X load it late in the boot, otherwise you may experience this bug). I had to add radeon to /etc/modules and put options radeon modeset=1 in /etc/modprobe.d/. In the past, I also needed agpmode=-1 there but it didn't seem to make any difference with latest packages.

Then, actual DRI2 support requires Mesa packages rebuilt against libdrm-radeon1. This is in experimental as well now. Look for libgl1-mesa-dri and other Mesa packages version 7.7-3.

Don't forget that these packages are in experimental for a good reason, they may not work. But at least basic things seem to work fine on my Radeon X300 (rv370). And don't forget that the X team needs help, otherwise these packages may never make it to unstable...

Fun with SuperMicro BIOS and PCI-NUMA

We have a SuperMicro machine with an X8DAH motherboard at work. It contains 2 Intel Xeon Nehalem X5550 processors (8 cores, 16 threads total) and 3 GPUs. As on several Nehalem motherboards, there are actually 2 IO hubs, one near each socket.

  ---------   ------------   ------------   ---------
  | Mem#0 |===| Socket#0 |===| Socket#1 |===| Mem#1 |
  ---------   ------------   ------------   ---------
                   ||             ||
               -----------   -----------
               | IOHub#0 |===| IOHub#1 |
               -----------   -----------
                   ||             ||
                 GPU#0         GPU#1+2

So PCI devices behind one IO hub are closer to one socket than to the other, and DMA performance therefore depends on where the target memory is located: in the memory near one socket, or in the other memory node. The motherboard manual tells us which PCI slots are behind which IO hub (and thus near which socket/memory), and benchmarking our GPUs confirms the actual position of each PCI device in the above picture. But we want to find this information automatically to ease deployment and portability of applications. Linux may report it through sysfs:

  $ cat /sys/bus/pci/devices/0000:{02:00.0,84:00.0,85:00.0}/local_cpulist
  0,2,4,6,8,10,12,14
  0,2,4,6,8,10,12,14
  0,2,4,6,8,10,12,14

However, this is wrong since 0,2,4,6,8,10,12,14 means near socket #0, while 2 of the GPUs are actually near socket #1 (CPUs 1,3,5,7,9,11,13,15). This could have been a bug in the Linux kernel, but it's actually a bug in the BIOS (Linux just reports what the BIOS tells it). So we talked to SuperMicro about it and tried upgrading the BIOS.
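
A small shell loop makes it easy to dump this locality information for every PCI device at once. A minimal sketch (the sysfs base path is parameterized only so the function can also be exercised on a fake tree):

```shell
# Print "<PCI address> <local cpulist>" for every device under a sysfs base.
pci_local_cpus() {
    base="${1:-/sys/bus/pci/devices}"
    for dev in "$base"/*; do
        # Skip entries without locality information
        [ -r "$dev/local_cpulist" ] || continue
        printf '%s %s\n' "$(basename "$dev")" "$(cat "$dev/local_cpulist")"
    done
}
```

On a real machine, calling pci_local_cpus with no argument walks /sys/bus/pci/devices directly.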


The first BIOS upgrade (from 1.0 to 1.0b) went rather badly: the machine didn't boot at all anymore, without even a BIOS message on screen. Fortunately, it booted again after we removed the GPUs. But Linux then had no NUMA information at all: it reported a single NUMA node instead of 2. So we forgot about the whole mess and downgraded back to the older BIOS.

Another BIOS update came out recently (1.0c) so I contacted SuperMicro to know if it was worth upgrading. At some point, they asked me to try disabling NUMA in the current BIOS. The machine didn't boot anymore... except after removing some GPUs. Exactly as above. It seems that there is an incompatibility between disabling NUMA in the BIOS and having multiple GPUs in the machine. And the first BIOS upgrade apparently disabled NUMA by default, causing all the above problems with BIOS 1.0b.


So we had to try upgrading again, and make sure NUMA wasn't left disabled by default again. Instead of going back to 1.0b, I upgraded the BIOS to the latest release (1.0c) directly. And now the machine finally reports the right PCI-NUMA information!

  $ cat /sys/bus/pci/devices/0000:{02:00.0,84:00.0,85:00.0}/local_cpulist
  0-3,8-11
  4-7,12-15
  4-7,12-15

You might have noticed that the CPU numbering changed in the meantime (the CPU number interleaving is different), but I don't care since we have hwloc (Hardware Locality) to deal with it. The development version of our lstopo tool now reports the whole machine topology, including PCI, as expected.


In short, if you have a X8DAH motherboard, don't disable NUMA in the BIOS (why would you do that anyway?) since it causes boot failures in some cases (when 3 GPUs are connected here), and upgrade to 1.0c if you care about memory/PCI locality/performance (which is probably the case anyway).

Debian/X.org notes - i865 fixed, Xserver 1.6 entering testing soon

No X.org update has entered testing since Lenny was released. The last big remaining bug in unstable was the Intel driver locking up on i865 when the UXA/GEM acceleration is used (and 2.8.x only supports UXA, so there is no workaround). See #541307.

Fortunately, Eric Anholt found out that it was caused by a kernel bug in the intel-agp driver. The fix is not in vanilla 2.6.31, so you'll have to apply the patch or wait for an updated 2.6.31.x kernel to be released.

Anyway, the Intel driver 2.8.1 as well as Xserver 1.6 and Mesa 7.5 will enter testing soon. If you have an i865, make sure your kernel contains the above fix, or you'll likely experience lockups soon after X startup.

Update: If building the intel-agp driver as a module, you will also need another small patch to export the clflush_cache_range() function to modules.

Update: Everything just entered testing for real.

Debian/X.org notes - Quick Updates

Another round of quick notes about X in unstable while the XSF team is on vacation.

Intel Driver and PAE kernels

If upgrading xserver-xorg-video-intel broke X or made it very slow, make sure you are not using a PAE/bigmem kernel. The Intel driver now enforces UXA for acceleration. But UXA requires GEM support in the kernel, and GEM is not compatible with PAE before 2.6.31. So if you have PAE in your kernel (CONFIG_HIGHMEM64G), for instance if you are using a -bigmem kernel, your Xorg.0.log will say:

  (EE) intel(0): [drm] Failed to detect GEM.  Kernel 2.6.28 required.
  (EE) intel(0): Failed to become DRM master.

Obviously, 2.6.30 should be enough when 2.6.28 is required. But 2.6.30 with PAE is not.

Return of the DRI2 breakage

It looks like the DRI2 breakage in Xserver 1.6.1.901-3 wasn't entirely fixed in 1.6.2. According to #538637, Xserver 1.6.2 didn't work with KDE 4.2 effects when built against Mesa 7.4.4. Fortunately, Mesa 7.5 is in unstable now, and the new Xserver 1.6.2.901-1 was built against it.