
np237
03 May 2011 @ 01:12 pm

Since this has been a major request from users for a long time, I can only welcome the idea of seeing the Debian project support a rolling release. However, I’m not pleased with the proposed ideas, since they don’t actually include any serious plan to make this happen. Sorry guys, but a big GR that says « We want a pony rolling release to happen » doesn’t achieve anything.

Let me elaborate. First of all, discussions have focused a lot on what to do when we’re in a freeze phase. Numerous cool ideas have been proposed, including PPAs (which, again, won’t happen until someone implements them). This is all good, but it is only the tip of the iceberg. Before wondering what can happen during a freeze that lasts 20% of the time, let’s wonder what can happen during the remaining 80%. Once you have something that works in the regular development phase, you can tune it to keep working, even if less optimally, when the distribution is frozen. So let’s not put the cart before the horse.

There are three options if you want to make a rolling release happen.

  1. Make unstable usable. To make it happen, you have to prevent the disasters that rarely but unavoidably happen there. You don’t want to make all rolling systems unusable because someone broke grub or uploaded a new version of udev that doesn’t work with the kernel.
  2. Make testing usable. This sounds easy, since RC-buggy packages are already prevented from migrating, but actually it is not. A large number of RC bugs are discovered at the time of testing migration, when some packages migrate and others don’t. Worse, they take several days to be fixed, and very often several months, when one of the packages gets entangled in a transition.
  3. Create a new suite for rolling usage.

The proponents of the CUT project obviously believe in option 2. Unfortunately, I haven’t seen much that could make it happen. A possible way to fix the situation would be to run large-scale regression testing on several upgrade paths. I don’t know if there are volunteers for this, but that won’t be me. It would also imply making a lot of important bugs RC, since they can have a major effect on usability, but the release team will not be keen on that.

Because of the testing situation, when someone asks me for a rolling release, I point her to unstable with apt-listbugs. As of today, this is the closest thing we have to a rolling release, so we should probably examine option 1 more deeply. Is it that complicated to write a tool to prevent upgrades to broken packages? A 2-day delay in mirror propagation and a simple list of broken packages/versions (like the #debian-devel topic) would be enough. Add an overlay archive that works like experimental, and you can now handle freezes smoothly. Wait… isn’t that aptosid? We would probably gain a lot of insight from the people who invented this, instead of trying to reinvent the wheel.
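Such a tool does not exist yet, but the building blocks are already in APT. A minimal sketch, assuming a blocklist of “package version” pairs like the one in the #debian-devel topic (the file names and list contents here are made up), would translate it into pinning entries that APT refuses to install:

```shell
# Sketch only, not an existing Debian tool: turn a list of
# known-broken package/version pairs into an APT preferences file.
# Pin-Priority: -1 prevents APT from ever installing that version.
cat > broken-packages.txt <<'EOF'
udev 167-1
grub-pc 1.99-1
EOF

: > blocked.pref
while read -r pkg ver; do
  [ -n "$pkg" ] || continue
  printf 'Package: %s\nPin: version %s\nPin-Priority: -1\n\n' \
    "$pkg" "$ver" >> blocked.pref
done < broken-packages.txt
```

A real tool would drop the generated file into /etc/apt/preferences.d/ and refresh it from the mirror on every update, so that APT skips the listed versions while still upgrading everything else.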

Finally, option 3 could open new horizons. There’s a risk that it might drive users away from the testing and unstable suites, which raises the question of how we could keep proper testing coverage for our packages. Still, build a process that would (and that’s really only an example) freeze unstable every month, give people 10 days to fix the most blatant issues, add a way to make security updates flow in from unstable, and you have a really nice rolling distribution.

So overall, it only requires people to make things happen. You want option 2 to happen? Instead of working on GR drafts, start working with maintainers and release managers on ways to avoid breakage in testing. You want option 3 to happen? Start it as a new .debian.net service and see how it works. Personally, I’d be in favor of inviting aptosid developers to become DDs and offer their solution as a Debian service. It would bring in new people rather than driving existing developers away from working on our releases.

np237

Today we gathered the representatives of the different distributions present at GNOME.Asia to discuss what GNOME could do to improve its support for the distributions that ship it, especially in matters of long-term support.

It is kind of sad that there weren’t any representatives from Canonical or Red Hat, but the discussion turned out really interesting and we learned a lot about each other’s packaging habits. Furthermore, several concrete leads were explored, which will lead to proposals from the GNOME Foundation to all distributions.

Helping with long-term support

The most widespread GNOME version in the recent LTS releases is 2.30, which is used by Debian squeeze, Ubuntu 10.04 LTS, RHEL 6, and Solaris 11. It looks like an accident, but on the other hand:

  • GNOME 2.32 isn’t really suitable as is for an enterprise distribution;
  • Linux distributions agreed on a kernel version to support long-term, so this had an impact on their release schedules, and this might well happen again for the next release.

In the future, a decision to use a common GNOME release could, anyway, only come from the distributions themselves, not from GNOME.

A proposal that many people agreed upon was to give distribution maintainers commit access to old branches that GNOME module maintainers don’t touch anymore. This way they could share their patches more easily and make new releases of these old branches. This would imply, of course, setting up rules about what changes are allowed, that distributions would have to agree upon (how to treat feature additions for example).

Managing bugs

Currently it is hard for a distributor to tell whether a bug affects other distributions too, and whether they have released a fix for it. It was agreed that Launchpad’s feature of linking bugs between distributions, including version tracking, would exactly fill that need.

One solution would be to add such a feature to Bugzilla, but that is a lot of work, since it currently doesn’t have any kind of version tracking. Another proposal was to deploy a new Launchpad instance to serve as a hub between downstream bug systems and the GNOME Bugzilla. The condition for this to work would be to make it extremely easy to clone bugs between it and Bugzilla, and if possible from the downstream bug systems as well.

On the related topic of how not to drown under bugs, it might be possible to get bugs forwarded with a single command from the Debian BTS to Bugzilla, using the XML-RPC interface. Upstream also considers that bugs sent to Debian are generally of higher quality than those from e.g. Ubuntu, and would be OK with us routing some of them directly to upstream (like we already do for Evolution).

Communicating about the availability of patches

Currently distributors are hardly ever informed that patches relevant for their distribution have been committed. They often learn of them by sheer luck while lurking on Bugzilla.

The distributors-list ML is clearly the relevant medium for that purpose, but it is clearly not used enough. It would need to be advertised more, both among GNOME module maintainers and among downstream maintainers.

On this matter, the disappearance of the x.y.3 GNOME releases (starting with 2.28) was evoked. The problem was that most of those releases contained too few changes to justify e.g. stable updates in distributions. The proposed solution is to encourage maintainers of modules with bugs to fix to release new versions (through an announcement on desktop-devel-announce), and to send a list of modules with new versions to downstream distributors so that they can integrate them. This spares the GNOME release team the hassle of making a new release, while still giving distributions that use these modules some bugfixes.

Providing a new service to LTS distributions

The idea of having the GNOME foundation employ a person to gather, on the GNOME side, all changes that are relevant to older GNOME versions, and prepare new stable versions, was discussed. This would be a new service for which commercial distributions would need to pay a fee.

It’s not clear how this information would be privately disclosed, nor what the impact on non-commercial distributions would be. And it doesn’t seem likely that e.g. Red Hat would be interested, since they employ a lot of core GNOME hackers who are already doing this job.


I don’t know what impact these proposals can have on GNOME packaging in Debian, but apart from the last one that I find dubious, it seems that they could greatly improve our support of GNOME in stable Debian releases, be it by having more versions to upload during the freeze, or by having more stuff in point releases. Frédéric Muller promised to come back to us with more concrete stuff.

np237
01 April 2011 @ 03:51 pm

For the whole week, I’ve been in Bangalore for the GNOME.Asia 2011 hackfest. I’ve been delegated by Stefano to represent Debian here, and my employer EDF has agreed to cover the travel costs, since they are very interested in first-hand information on the future of the Linux desktop and in sharing our work on scientific computing.

It’s been a really exciting week; I’ve spent quite some time packaging missing pieces of GNOME 3.0 (well, the release candidate versions of course) in experimental, together with Fred Peters. I think it’s reaching a usable state now, so we’ll probably soon provide metapackages to make it easily installable.

The latest developments of the Shell make it a very exciting piece of software, with a strong focus on usability. Many things have been written about it, but in the end my main criticism would be that it lacks some functionality - for example, the combined clock/weather/locations applet will be greatly missed. The good news is that it is extremely customizable, and with all the libraries made accessible through GObject introspection, many features are within easy reach. If you know how to write JavaScript, now is the time to write your favorite extension.

On the good news front, Vincent Untz also spent a lot of time improving the so-called “legacy mode”, which more and more looks like the Shell without special effects, and with all the features from gnome-panel 2.x that are still there. In Debian we will try to cover all the use cases there were for GNOME 2 with GNOME 3 technology, so that panel lovers are not left behind.

I’ve also proposed an update to the dh_gsettings proposal, which will provide the same functionality as dh_gconf and make it easy to set distribution-specific overrides. It is still missing a way to set mandatory settings, which might be a problem for some corporate users, but this is planned for a future version of GSettings.
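For readers who haven’t met the mechanism: a distribution override is just a keyfile dropped next to the installed schemas and compiled in. A minimal sketch follows; the schema, key and value are only examples, and it writes to a local directory instead of the real /usr/share/glib-2.0/schemas/:

```shell
# Example of a distribution-specific GSettings override file.
# The schema path and value are illustrative; written locally
# instead of /usr/share/glib-2.0/schemas/.
mkdir -p schemas
cat > schemas/10_mydistro.gschema.override <<'EOF'
[org.gnome.desktop.interface]
gtk-theme='MyDistroTheme'
EOF
# On a real system, this would be followed by:
#   glib-compile-schemas /usr/share/glib-2.0/schemas/
```

The point of dh_gsettings would be to generate and install such a file automatically from the packaging, the way dh_gconf does for GConf defaults.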

Today, we’re having a business track where I and representatives of other companies (Oracle, Lanedo, Dexxa) are sharing experiences about making money with free software. Unfortunately the local organizers didn’t manage to gather many people, despite our being in a city with an incredible number of IT companies.

Tomorrow, the public conference starts, and this should be the opposite: we’re expecting around 1000 people, which is a great achievement for a free software conference.

For an unrelated topic, being around so many GNOME hackers has some interesting side effects; I’ve been added to Planet GNOME. So, hey, hello Planet GNOME readers!

np237

There have been a lot of web browsers embedding the Gecko engine, especially through the gtkmozembed “library” (it was not really a proper library, but let’s call it that). I remember being a happy user of galeon, which lived on as epiphany, but there were also all these small applications that just need a good HTML renderer in one of their widgets, like yelp, or several Python applications using python-gtkmozembed.

Anyone who has had to deal with these applications, especially the most complex ones, could tell you a few things:

  • the Mozilla developers never gave a friggin’ damn about embedding;
  • they don’t know how to develop an easily embeddable library;
  • the notion of a stable interface is a very arcane concept that they don’t wish to grasp.

So, today, it is official: Mozilla is dropping gtkmozembed from their codebase.

I don’t think this will come as a surprise to anyone. You can’t develop a new version of a behemoth, monolithic application every 3 months while still caring about the interfaces underneath. Applications that embed it have been migrating to WebKit over recent years, and those that don’t do it really soon will die.

The interesting part of the announcement is not here. It can be found hidden in a bug report: a stable and versioned libmozjs will just never happen.

What does it mean?

First of all, it means that Debian and Ubuntu will have to go on maintaining their own versioning of libmozjs, so that applications using the SpiderMonkey JS engine can link to it in a decent way. It also means that this version will have to be bumped more often.

But it also puts into question the whole future of SpiderMonkey as a separate library. With a shortened release cycle, the Mozilla developers will be tempted to add more specific interfaces to SpiderMonkey, reducing its genericity in favor of its use in Firefox itself. This will produce less and less useful libmozjs versions, until we reach the point when they’ll make the same announcement as above, with s/gtkmozembed/libmozjs/.

This is especially relevant in the context of the GNOME Shell, which is at the core of the GNOME 3 experience. The developers deliberately chose to avoid using JavaScriptCore (the JS library inside WebKit) through the Seed engine, and used GJS instead, which relies on libmozjs. In my opinion this was done for frivolous reasons (being able to use more language extensions); not only did this put the GNOME developers in an awkward situation where 2 JS interpreters compete on the same desktop, but it now puts at risk a technology which is at the core of the desktop.

One of the reasons for the limited adoption of JSCore is that it currently lives in the same library as WebKit, which is a huge dependency. I’ve been very glad to learn that Gustavo is considering the idea of splitting it out. We need to provide an escape route for applications using libmozjs, and this looks like more than a decent one. I hope the GNOME Shell follows it sooner rather than later.

np237
24 March 2011 @ 08:54 am
np237

A few weeks ago, at work, we were looking for a solution to a tricky printing problem: how to manage, in a centralized infrastructure, a large number of locations, workstations and printers?

One of the consultants working for us came up with a great idea. With only a 20-line patch to CUPS, workstations would be able to find which printers are in the same location. 20 lines of code, instead of a complex virtualisation solution? This is exactly the kind of reason why we use free software: when there’s something wrong, you can fix it. When you need something more, you can code it.

Now, many others could benefit from such an improvement, and we don’t want to maintain a forked version of CUPS, so we forwarded it upstream, who looked interested. But upstream now being Apple, they requested a stupid copyright assignment agreement.

I will leave to the reader’s imagination the complexity of getting such a document signed in a Fortune 500 company with no business with Apple. This will, of course, not happen - and if the decision was mine, the answer would have been a clear “No.” No, because I want to improve free software, not to contribute to Apple’s proprietary version. No, because copyleft is about giving as much as you take.

How many contributions are being left out of CUPS because of this stupid copyright assignment? It looks to me that such software is doomed to remain crippled as long as companies like Apple are in charge of their maintenance.

There is free software. And there is free software by Apple. And Oracle. And Canonical.

np237
07 March 2011 @ 01:39 pm

At first, it looked nice:

But then, it was more like:

np237
25 December 2010 @ 11:09 am

My only contribution will be: merry FSMas to all!

np237
01 December 2010 @ 08:12 pm

We’ve come a long way since the times when you needed to configure 2 X servers in XDM just to be able to use 2 X sessions at once. However, until recently there was still some way to go. A number of bugs that could be wrongly attributed to the X server or to the desktop environment were actually caused by the display manager doing crap.

GDM up to 2.20

Since the introduction of the “flexible X servers” feature, GDM hadn’t evolved much on the matter of user switching. What it used to do was pretty straightforward:

  • a specific protocol can be invoked by the gdmflexiserver command;
  • the gdm daemon spawns a new X server on an empty console;
  • it initiates another login process in it;
  • when the session exits, or if the user clicks on “Quit” instead of logging in, the X server exits.

It is interesting to note that VT (console) switching is purely handled by the X server. When starting, the new server switches the current VT to where it is. When exiting, it automatically switches back to the VT from which it was launched.

While very simple, this idea fails to work correctly whenever you try to do something more complicated than starting a temporary session for a guest and exiting it. For example, if you start two of them, there is a chance that, when the X server switches back to the console it was run from, there is nothing left running on that console, leaving you with the funny Control-Alt-Fn shortcuts to find your way back to an X server. You will also meet interesting race conditions when trying to switch back to an existing session from the login window.

GDM 2.28 and above

In the process of rewriting the code entirely, the GDM developers tried to address a number of those shortcomings, making use of D-Bus and ConsoleKit. The new design is slightly more complicated, however.

  • The gdmflexiserver tool will first try to look for an existing login window in another X server, and just switch to the VT it is in if it finds one.
  • Otherwise, the daemon starts a new slave process with a new X server and a new login window, in a very similar way to what older versions did.
  • When logging in as a user with an existing session, it switches to the VT it is in, but leaves the login window and its X server running.
  • When going into a new session, the X server is simply left to die at the end of the session, and to switch back to the VT from which it was launched.

Not killing the X server in some cases partly addresses the problems caused by letting it switch back to the original VT when exiting. However in several ways the cure is worse than the disease.

  • First of all, it will leave unused X servers, with all processes used by the login window - and that makes quite a number of them, with GDM now using a minimal GNOME session.
  • When there is such a login window remaining, ConsoleKit will refuse to let you shut down your computer, being lured into thinking there is someone else using it.
  • It doesn’t solve the inconsistency issue. When you leave a session, you can find either of: a login window, a screensaver unlock dialog, or a black screen.

Getting it to work

The modular architecture of GDM makes it possible to improve the situation. (Possible, but not easy, because of the millefeuille of classes.) However, any improvement is merely a band-aid unless you fix the root issue: the X server thinking it knows better than you which VT it should switch to when exiting.

Fortunately Xorg now features an option to avoid that behavior: -novtswitch. So the first step is to enable it, and let the GDM daemon (or slave) handle VT switching through ConsoleKit. With that, the following changes are possible.

  • When switching to an existing session, don’t leave a X server behind. You can now kill it safely without risking a VT switch.
  • Conversely, when exiting a session, always respawn a login window on the same VT.
  • The last step is to stop making a difference between the first launched X server (called a static server) and the flexible servers. The only remaining difference between a static display and a flexible one is that the static one honors automatic login/timed login settings.
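The VT handling underlying all of the above can be sketched as follows. This is a rough sketch, not actual GDM code; the display and VT numbers are arbitrary examples:

```shell
# Rough sketch, not actual GDM code: the X server is pinned to a
# VT and told with -novtswitch not to switch VTs by itself; the
# display manager then switches explicitly (GDM does it through
# ConsoleKit rather than chvt).
DISPLAY_NUM=:1
VT=vt8
XORG_ARGS="$DISPLAY_NUM $VT -novtswitch"
echo "slave would run: /usr/bin/Xorg $XORG_ARGS"
echo "slave would run: chvt 8"
```

With the server no longer switching VTs on exit, killing or respawning it becomes safe at any point, which is what makes the three changes above possible.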

The result

With all these changes the behavior of the display manager is finally completely consistent.

  • When exiting a session, regardless of which one, you always find the same login window.
  • There is never an unused process left.
  • You will never find yourself facing a black screen with only keyboard shortcuts to get you out of it.

Interestingly enough, this is very similar to what user switching looks like on Vista or MacOS X.

So what now? These changes are stabilized for Debian squeeze, but of course getting them accepted upstream is long overdue, along with the very large number of Debian-specific changes that still lie in our packages.

np237
13 November 2010 @ 12:29 pm

If you use pbuilder, you probably already use cowbuilder too, in order to save on chroot instantiation time. You also probably use ccache in order to save on compilation time.

If you do that, the longest part of your build is, by far, the time needed to install the build-dependencies, because dpkg likes to fsync() every file it writes. It’s a good thing it does that on your main system, but in a disposable chroot you really, really don’t care what happens to it if the system crashes. Thanks to Mike, I discovered eatmydata, and tried it with cowbuilder.

If you want to try it out, add this to your pbuilderrc file:

EXTRAPACKAGES="eatmydata"

if [ -z "$LD_PRELOAD" ]; then
  LD_PRELOAD=/usr/lib/libeatmydata/libeatmydata.so
else
  LD_PRELOAD="$LD_PRELOAD":/usr/lib/libeatmydata/libeatmydata.so
fi
export LD_PRELOAD

You will also need to install eatmydata in your chroot, unless you want to regenerate it from scratch. And now you can enjoy your super-fast builds.
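The LD_PRELOAD juggling above can also be factored into a small idempotent helper, safe to source more than once (add_preload is my own name, not something pbuilder provides):

```shell
# Small helper (my own, not part of pbuilder): append a library to
# LD_PRELOAD only if it is not already listed, so sourcing the
# file twice does not duplicate the entry.
unset LD_PRELOAD   # start clean for this demonstration
add_preload() {
  lib=$1
  case ":$LD_PRELOAD:" in
    *":$lib:"*) ;;  # already present, nothing to do
    *) LD_PRELOAD=${LD_PRELOAD:+$LD_PRELOAD:}$lib ;;
  esac
  export LD_PRELOAD
}
add_preload /usr/lib/libeatmydata/libeatmydata.so
add_preload /usr/lib/libeatmydata/libeatmydata.so   # no-op the second time
```

This avoids the growing-variable problem if pbuilder ends up sourcing your pbuilderrc several times in the same environment.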