03 May 2011 @ 01:12 pm
Rolling release  

Since this has been a major request from users for a long time, I can only welcome the idea of seeing the Debian project support a rolling release. However, I'm not pleased with the proposed ideas, since they don't actually include any serious plan to make this happen. Sorry guys, but a big GR that says « We want a pony rolling release to happen » doesn't achieve anything.

Let me elaborate. First of all, discussions have focused a lot on what to do when we're in a freeze phase. Numerous cool ideas have been proposed, including PPAs (which, again, won't happen until someone implements them). This is all good, but it is only the tip of the iceberg. Above all, before wondering what can happen during a freeze that lasts 20% of the time, let's wonder what can happen during the remaining 80% of the time. Once you have something that works in the regular development phase, you can tune it so that it keeps working, even if less optimally, when the distribution is frozen. So let's not put the cart before the horse.

There are three options if you want to make a rolling release happen.

  1. Make unstable usable. To make this happen, you have to prevent the disasters that rarely but unavoidably happen there. You don't want to make all rolling systems unusable because someone broke grub or uploaded a new version of udev that doesn't work with the kernel.
  2. Make testing usable. This sounds easy since RC-buggy packages are already prevented from migrating, but it actually is not. A large number of RC bugs are discovered at the time of testing migration, when some packages migrate and others don't. Worse, they often take several days to fix, and quite often several months, when one of the packages gets entangled in a transition.
  3. Create a new suite for rolling usage.

The proponents of the CUT project obviously believe in option 2. Unfortunately, I haven't seen many things that could make it happen. A possible way to fix the situation would be to run large-scale regression testing on several upgrade paths. I don't know if there are volunteers for this, but it won't be me. That would also imply making a lot of important bugs RC, since they can have a major effect on usability, but the release team will not be keen on that.

Because of the testing situation, when someone asks me for a rolling release, I point her to unstable with apt-listbugs. As of today, this is the closest thing we have to a rolling release, so we should probably examine option 1 more deeply. Is it that complicated to write a tool to prevent upgrades to broken packages? A 2-day delay in mirror propagation and a simple list of broken packages/versions (like the #debian-devel topic) would be enough. Add an overlay archive that works like experimental, and you can handle freezes smoothly. Wait… isn't that aptosid? We would probably gain a lot of insight from the people who invented it, instead of trying to reinvent the wheel.
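
Just to give an idea of the scale of such a tool, here is a minimal sketch (an assumption of mine, not an existing Debian tool; the input file name, its format and the example versions are made up) that turns a plain-text list of known-broken package/version pairs into an APT preferences file, so that APT refuses to install or upgrade to those versions:

    #!/usr/bin/env python3
    # Sketch only: generate APT pins from a list of known-broken packages.
    #
    # Assumed input format (one "package version" pair per line, '#' comments):
    #     udev 167-1
    #     grub-pc 1.99~rc1-2
    import sys

    INPUT = sys.argv[1] if len(sys.argv) > 1 else "broken-packages.txt"
    OUTPUT = "/etc/apt/preferences.d/99-known-broken"

    def main():
        stanzas = []
        with open(INPUT) as src:
            for line in src:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                package, version = line.split()
                # A negative Pin-Priority tells APT never to install this version.
                stanzas.append(
                    "Package: %s\nPin: version %s\nPin-Priority: -1\n"
                    % (package, version)
                )
        with open(OUTPUT, "w") as dst:
            dst.write("\n".join(stanzas))

    if __name__ == "__main__":
        main()

Feed it a list maintained in the spirit of the #debian-devel topic, combine it with the delayed mirror, and rolling systems would simply skip the known-broken uploads until fixed versions show up.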

Finally, option 3 could open new horizons. There is a risk that it might drive users away from the testing and unstable suites, which makes us wonder how we would still get proper testing for our packages. Still, build a process that (and this is really only an example) freezes unstable every month, gives people 10 days to fix the most blatant issues, and adds a way for security updates to flow in from unstable, and you have a really nice rolling distribution.
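
To make that example a bit more concrete, here is a minimal sketch of the snapshot-and-publish mechanics only (all paths, the layout and the exact schedule are assumptions; letting fixes and security updates flow into the snapshot during the 10-day window would need its own upload queue, which is not shown):

    #!/usr/bin/env python3
    # Sketch only, not an existing Debian service: snapshot a local mirror of
    # unstable on the 1st of each month, then publish that snapshot as the
    # rolling suite on the 11th, after the 10-day stabilization window.
    import datetime
    import os
    import subprocess

    MIRROR = "/srv/mirror/unstable"       # assumed local mirror of unstable
    SNAPSHOTS = "/srv/rolling/snapshots"  # assumed snapshot area
    CURRENT = "/srv/rolling/current"      # what users' sources.list points at

    def snapshot_name(day):
        return day.strftime("%Y-%m")

    def take_snapshot(day):
        dest = os.path.join(SNAPSHOTS, snapshot_name(day))
        # cp -al makes a cheap hard-linked copy of the pool and indices.
        subprocess.run(["cp", "-al", MIRROR, dest], check=True)

    def publish_if_stabilized(day):
        # The snapshot taken on the 1st becomes "current" on the 11th.
        if day.day != 11:
            return
        target = os.path.join(SNAPSHOTS, snapshot_name(day))
        tmp = CURRENT + ".new"
        if os.path.lexists(tmp):
            os.remove(tmp)
        os.symlink(target, tmp)
        os.replace(tmp, CURRENT)  # atomically switch the published suite

    if __name__ == "__main__":
        today = datetime.date.today()
        if today.day == 1:
            take_snapshot(today)
        publish_if_stabilized(today)

Run daily from cron, this is all the infrastructure the publication side would need; the hard part, as always, is the people fixing the blatant issues during those 10 days.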

So overall, it only requires people to make things happen. You want option 2 to happen? Instead of working on GR drafts, start working with maintainers and release managers on ways to avoid breakage in testing. You want option 3 to happen? Start it as a new .debian.net service and see how it works. Personally, I'd be in favor of inviting aptosid developers to become DDs and to offer their solution as a Debian service. It would bring in new people rather than drive existing developers away from working on our releases.

 
 
 
(Anonymous) on May 3rd, 2011 07:58 pm (UTC)
Re: Care to elaborate what's needed for option 2?
I've been running stable on my server, testing on my laptop for about four years now. I started by installing Etch on both machines just after its release, then I upgraded the laptop to Lenny. Is it true that testing is broken for months at a time? Maybe in the sense that you couldn't get a working system if you installed from the servers six months after the last release; but in my experience it works pretty well starting from stable and updating the packages every few days.

Yes, I do get stuck with RC bugs in some packages for an extended period, and I get impatient when a new Gnome version is half-installed (thanks for all your great work, by the way). But I wonder whether the real solution to the problems with testing isn't to improve the transitions.

Would it be possible to calculate the dependencies more automatically? Could some packages be built against testing, to avoid getting tangled in transitions? Would it make sense to decouple architectures for testing, so that an FTBFS on i386 wouldn't stop a successfully built package from making it into testing-amd64? As some people have suggested, maybe some RC bugs should still block a release, but shouldn't block a package from getting into testing outside of a freeze.

I'm not a programmer; I'm not sure what's possible. Renaming testing wouldn't make any difference to me, and the creation of a new rolling suite might or might not improve things. But I think there's been too much emphasis on rolling as a solution to developers being frustrated with the freeze, and not enough attention to the problems users have with testing.

I'm sure Debian will eventually find a good solution.

mgregoire