pointless pontifications of a back seat driver
jwboyer

Fedora 22 and Kernel 4.0 [Apr. 16th, 2015|10:36 am]
jwboyer

As Ryan noted yesterday, Fedora 22 is on track to ship with the 4.0 kernel release. So what does that mean in the grand scheme of things? In short, not much.

The major version change wasn't done because of any major feature or change in process or really anything exciting at all. Linus Torvalds changed it because he felt the minor version number was getting a bit large and he liked 4.0 better. It was really a whim more than anything contained within the kernel itself. The initial merge window builds of this kernel in Fedora were even called 3.20.0-rc0.gitX until the 4.0-rc1 release came out.

In fact, this kernel release is one of the more "boring" releases in a while. It has all the normal fixes and improvements and new hardware support one would normally expect, but overall it was a fairly quiet release. So version number aside, this is really more of the same from our beloved kernel development community.

However, there is one feature that people (and various media sites) seem to have keyed in on and that is the live patching functionality. This holds the promise of being able to patch your running kernel without rebooting. Indeed, this could be very useful, but it isn't quite there yet. And it also doesn't make a whole lot of sense for Fedora at this time. The Fedora kernels have this functionality disabled, both in Fedora 22 and Rawhide.

What was merged for 4.0 is the core functionality that is shared between two live patching projects, kpatch and kGraft. kpatch is being led by a development team at Red Hat, whereas kGraft is being developed by a team at SUSE. They both accomplish the same end result, but they do so via different approaches internally. The two teams met at the Linux Plumbers Conference last year and worked on some common ground to make the code easier to merge into mainline rather than compete with each other. This is absolutely awesome and an example of how new features should be developed upstream. Kudos to all those involved on that front.

The in-kernel code can accept patches from both methods, but the process and tools to create those patches are still being worked on in their upstream communities. Neither set is in Fedora itself, and likely won't be for some time, as it is still fairly early in the life of these projects. After discussing this a bit with the live patching maintainer, we decided to keep this disabled in the Fedora kernels for now. The kernel-playground COPR does have it enabled for those that want to dig in and generate their own patches and are willing to break things and support themselves.
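If you're curious whether a given kernel was built with the feature, the kernel config is the place to look. Here's a minimal sketch, assuming the upstream CONFIG_LIVEPATCH option name and the usual /boot config location; it runs against throwaway files so the snippet stands on its own:

```shell
# livepatch_enabled FILE — succeed if a kernel config file enables the
# upstream live patching option.
livepatch_enabled() {
    grep -q '^CONFIG_LIVEPATCH=y' "$1"
}

# On a real system you would point this at /boot/config-$(uname -r).
# Throwaway fragments show both outcomes:
printf 'CONFIG_LIVEPATCH=y\n' > /tmp/enabled-config
printf '# CONFIG_LIVEPATCH is not set\n' > /tmp/disabled-config
livepatch_enabled /tmp/enabled-config  && echo "enabled"
livepatch_enabled /tmp/disabled-config || echo "disabled"
```

On a Fedora 22 kernel you should see the "disabled" case, per the above.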

In reality, we might not ever really leverage the live patching functionality in Fedora itself. It is understandable that people want to patch their kernel without rebooting, but the mechanism is mostly targeted at small bugfixes and security patches. You cannot, for example, live patch from version 4.0 to 4.1. Given that the Fedora kernel rebases both to new stable kernels (e.g. 3.19.2 to 3.19.3) and to new major releases over the lifetime of a Fedora release, we don't have much opportunity to build the live patches. Our update shipping infrastructure also isn't really geared towards quick, small fixes. Really, the only viable target for this functionality in Fedora is likely the oldest Fedora release towards the end of its lifecycle, and even then it's questionable whether it would be worth the effort. So I won't say that Fedora will never have a live patch enabled kernel, but there is a lot of work and process stuff to be figured out before that ever really becomes an option.

So that's the story around the 4.0 kernel. On the one hand, it sounds pretty boring and is likely to disappoint those hoping for some amazing new thing. On the other hand, it's a great example of the upstream kernel process just chugging along and delivering pretty stable quality kernels. As a kernel maintainer, I like this quite a bit. If you have any questions about the 4.0 kernel, or really any Fedora kernel topics, feel free to email us at the Fedora kernel list. We're always happy to discuss things there.

Fedora kernel position [Jan. 27th, 2015|10:22 am]
jwboyer

As you might have seen Paul blog about, Red Hat has an immediate opening for a Fedora kernel maintainer position on my team. This is actually a fairly rare thing, as we don't have a lot of churn in our department and most of the engineering positions we hire for are primarily RHEL roles. If you have kernel experience and love working on fast-paced and frequently updated kernels, then this might be a good role for you.

The job writeup is accurate in terms of what we expect, but it is also kind of broad. That is primarily because the role is too. Yesterday davej wrote a bit about how working on a Fedora kernel is like getting a 10,000ft view of everything. It's actually a really good analogy, and Dave would know as he did it longer than anyone. We deal with a lot of varied issues, on an even more varied set of hardware. This isn't a traditional development job. Being curious and willing to learn is key to enjoying a distro kernel maintainer role.

That being said, we're also looking at ways to make a bigger impact both upstream and in Fedora itself. Filling this position is a key part of that and I'm excited to see how it plays out. If you're interested in it, please don't hesitate to send me questions via email or on IRC. Also be sure to apply via the online job posting here:

http://jobs.redhat.com/jobs/descriptions/fedora-kernel-engineer-westford-massachusetts-job-1-5076703

32-bit is the zombie of kernels. It needs your help. [Jan. 20th, 2015|09:49 am]
jwboyer

TLDR: Use 32-bit x86 kernels? Want to keep using them? Want to make sure they continue to work? Please help!

My kids recently convinced me to get them Minecraft. This in turn has caused lots of discussions about Minecraft. In an effort to be able to have anything close to resembling a coherent conversation with them, I've been playing the game myself a little bit. If you've been living under a rock like me, you might not know anything about it. I'll spare you all the never-ending details, but there is one part that recently got me thinking about Fedora and 32-bit and kernels.

See, in the game there are zombies. They're not particularly dangerous by themselves. They're slow, and they kind of moan and come after you but you can usually deal with them without really any effort. They only come out at night, and if you catch them outside at sunrise they burst into flames. Unless there's a large group of them, you basically ignore them.

In Fedora, there are x86 machines running 32-bit kernels. They're not particularly dangerous by themselves. They're slow, and they kind of stumble around a lot. If you shine a light on the dim corners of the kernel dealing with that code, it usually bursts into flames. Most upstream developers ignore them. Clearly they're a lot like Minecraft zombies, except there are always lots of them and they are never, ever the same.

This makes dealing with 32-bit kernels in Fedora actually fairly difficult. With upstream focusing almost entirely on x86_64, there isn't a massive amount of interest in solving 32-bit x86 kernel problems. That isn't to say that huge issues won't be fixed, but they are clearly not a priority (fortunately, they are also rare). There are other cases where the standard advice is to not use 32-bit kernels for things. Have more than 2GiB of DRAM? Don't use 32-bit kernels because PAE is a hack. Want to run VMs? Don't use 32-bit kernels. Transparent hugepages (or hugepages in general)? 32-bit is not your friend.

Then there is the variety of workloads people are using 32-bit kernels for. Some of them are old laptops that have crusty ACPI implementations. Some of them are 32-bit servers that are running constantly and stressing various things. Crazily enough, some people even run 32-bit kernels on 64-bit CPUs. That last one is a pet peeve of mine, but I won't dive into that here. The ISA variety is a headache as well, with some CPUs not supporting PAE, so that we have to build different kernels for i686 and PAE-capable i686 machines.
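To illustrate that split, the flags line in /proc/cpuinfo is what decides which package a machine can run. A sketch (the pick_kernel helper and the flag strings fed to it are made up for the example; "pae" and "lm" are the real flag names):

```shell
# pick_kernel FLAGS — suggest a kernel package from a CPU flags string,
# as found on the "flags" line of /proc/cpuinfo.
pick_kernel() {
    case " $1 " in
        *" lm "*)  echo "x86_64" ;;       # 64-bit capable CPU; just use it
        *" pae "*) echo "kernel-PAE" ;;   # i686 with PAE support
        *)         echo "kernel" ;;       # plain non-PAE i686
    esac
}

pick_kernel "fpu vme de pae mce"   # PAE-capable i686
pick_kernel "fpu vme de mce"       # non-PAE i686
pick_kernel "fpu pae lm"           # 64-bit CPU (my pet peeve case)
```

On a real machine you would feed it `grep ^flags /proc/cpuinfo`.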

When you take the above, add in the bug backlog we get from the just-as-varied x86_64 machines, and factor in that our 32-bit hardware for testing is rather limited, the 32-bit x86 kernels in Fedora end up pretty low on the priority list. We build them, we make sure we grab any fixes we see or are pointed to for them, but in the larger picture the time we spend on 32-bit specific issues isn't benefiting the majority of Fedora users. So the kernels linger on.

Not surprisingly, I'm not the first person to notice this. Just today I've had 2 discussions on what to do about i686 in Fedora, and Smooge posted his idea for a way forward. Others have had similar ideas. RHEL 7 does not include 32-bit kernels at all. I'm not going to comment on those proposals yet, but it does seem to at least confirm a bit of what we see on the kernel side of things.

So what can be done here? Should we kill the 32-bit x86 kernels? Should we kill one of them? Do we spend time on a solution I previously thought about a long time ago? I don't have answers for all of that at the moment. However, in listening to a very detailed dissertation on Minecraft zombie solutions from my son, his last solution was "or you could just cure the ones that are villagers". Apparently some of the zombies in the game can be cured. Some of them, the ones that are legitimately useful otherwise, can be saved.

Can we accomplish that in Fedora for 32-bit x86 kernels? There are most certainly sane uses of 32-bit, albeit on a reduced scale overall. So in the face of all the other challenges we have in dealing with this, I'm asking for help. We're asking for help. The kernel team has asked for help before, but it is understandably daunting for us to say "Hey! The whole kernel could use help! Should be fun!" This is a call to action on a much more limited scale. So if you use x86 32-bit kernels and you want to see them better supported, then speak up. Send us an email on the Fedora kernel list, dig through bugzilla for 32-bit kernel issues, find us on IRC.

Who knows, with a little community help and elbow grease, we could get some of these issues resolved. We could cure some of the kernel zombies. The alternative is the status quo, where we're waiting for the proverbial sun to rise.

It's not always technical [Oct. 9th, 2014|10:45 am]
jwboyer

Anyone that has been around Fedora long enough knows that FESCo is the technical steering committee. As Fedora committees go, I think it's one of the best. It works well, it remains on task, and it has typically been a good example of how to get things done in Fedora. But it's not infallible. Like everyone, sometimes even FESCo can be blinded by its own purpose. Yesterday was, in my opinion, one of those times.

We had a ticket on the agenda that really shouldn't have been there yet. A community member opened it and suggested that the FPC (Fedora Packaging Committee) wasn't functioning well and needed fixing. If you attended the meeting or read the logs, you probably saw me use words like "angry" and "absurd". However, it wasn't about the subject of the ticket itself. There's nothing wrong with someone opening a ticket on that topic and it's arguably under FESCo's responsibility. I was upset about how FESCo was (mis)handling it.

See, the ticket was opened about the FPC, but it didn't include any of the FPC members on CC. Then further proposals were made on how to fix the FPC, still without discussing them with the FPC. The proposals also suggested moving work to the Fedora Product working groups, and only one of those WGs was contacted. So by the time it got to the meeting, we had a proposal to fix another committee without bringing them into the discussion, and push work to other groups without talking to all of them to see if they even wanted to do this.

When I pointed this out in the meeting, the collective reaction was a shrug. It was very much (paraphrased) "oops, yeah we should have CC'd more people. That was bad. But we have a proposal so we're going to talk about it anyway". And that, again in my opinion, is not acceptable. When I asked another member privately what was going on, they said this was just normal FESCo. That I found flat out scary. FESCo doesn't govern from top down. It can't. We don't have actual resources to direct or money to pay people, so we very much rely on collaboration across the project.

Yes, there was a proposal. Yes, it was present in the ticket. Yes, even if we closed the ticket we'd likely have to copy a lot of it to a new ticket (or just reopen it), but the order of operations surrounding this was entirely backwards. It serves no purpose to have a technical discussion about something that hasn't been discussed with the stakeholders themselves. Even from a technical point of view it's wrong, because you don't have all the relevant information.

Some of this may stem from the fact that as Fedora has grown, our process isn't holding up. We have 5 more governing bodies now with the various Fedora.next working groups. We have more complicated technical challenges that have many moving parts and groups. With all of the additional layers, it's easy to forget someone. It's also easy to forget that things are going to be naturally slower when you have to pull more people into a discussion. But you can't forget to have that discussion in the first place.

In the end, FESCo decided to do the proper thing and ask the proposal submitter to start a broader discussion with the relevant parties. I'm glad that was the case, but it should have been a prerequisite before we even discussed it. I think we learned a lesson and we instituted a process change for tickets that should help with this going forward. That's good because I think we at times focus too much on the engineering side of things and ignore the fact that it impacts people. We just need to remember that in a technical committee things are not always technical.

Fedora kernel git tree [Oct. 1st, 2014|06:48 pm]
jwboyer

As mentioned in my last post, a number of people over the years have asked about an exploded source git tree for the Fedora kernel. We've never done this, primarily because our build tool wouldn't be able to use it, and it didn't really have any perceived value for the maintainers. However, it's come up enough times that I thought I'd give it a shot. Creating it was easy enough, but whether it winds up being useful to anyone is something that you'll need to help with. This post will cover what it is, how it works, where it is, and possible future plans.

For the TLDR, see the bottom.

What is it: This is maybe the git tree you are looking for

If you've ever worked on packaging software with RPM (or apt or a number of other package managers), there are two basic ways to do it. You can use the upstream release tarballs and apply patches via a spec file, or you can use a snapshot of the upstream development repository as the "tarball". The former is how the Fedora kernel is packaged, and it presents a nice separation of upstream and the patches Fedora has added. However, that means that every time a new upstream kernel is released we wind up rebasing our patch set on top of it. When doing the initial creation of the git tree, I thought about a number of ways to handle this.

We could use feature branches for each patch/patchset and do a merge in the main branch. That would be how upstream development works for patchsets, but some of our patches are rather long-lived for various reasons. Doing merges like this over time would lead to a very tangled history, and it wouldn't really provide much benefit to anyone anyway.

Another way would be to start with a base version, add our patches, and then merge in the upstream kernel changes on top of that. This is essentially the opposite of the scenario above. It still results in a rather tangled history over time, and it actually sends the wrong message: this git tree isn't an upstream. It's supposed to be a downstream representation of what we've added/changed. We don't want people developing new kernel drivers/features against the Fedora git tree. We want them to develop against upstream.

Another factor in figuring this all out was our build system. As I mentioned, koji isn't set up to take random exploded source git trees as build input. It expects things from particular places and in particular ways, which is perfectly fine. To make this git tree a "source" for koji, we'd have to use it as a snapshot tree and not list out the patches and such in the spec file. There are distros that do this, and maybe that's suitable for them, but I find it valuable to look at the SRPMs from a distro and be able to easily see what's changed.

So in the end, that means the git tree became an exact mirror of what happens to the kernel RPM in Fedora. Namely, it rebases on top of whatever upstream is doing. Now, typically this is a terrible thing to do to downstream consumers. It means they need to force update their tracking branches and such. I'd be very hesitant about it if not for the fact that this tree is mostly an ease-of-use thing. Another point is that it's analogous to how linux-next operates (albeit for different reasons). Overall, I think this is the best trade-off between "git tree exists" and "maintainer remains sane". The following section should help illustrate why as well.

How it works: all your rebase are belong to us

How it works is pretty simple. The tree tracks the main upstream git repo in the master branch, and the Fedora content for a release is contained on the "rawhide" branch. As of right now, only the rawhide kernels are tracked here, mostly due to time but also somewhat to see if this whole experiment is worthwhile. So when we go to update the Fedora kernel package in rawhide (which is typically a daily build), we do:

git remote update
git checkout master
git merge origin/master
git checkout rawhide
git rebase master

That sequence pulls down the latest from Linus onto the master branch, then we rebase the rawhide branch on top of the changes we just pulled. We fix up any issues during the rebase, and then we're set.
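If you want to see the effect of that sequence without touching a real kernel tree, here is a toy reproduction in a scratch repository. The commit messages and file names are invented for illustration; only the branch dance is the real workflow:

```shell
set -e
d=$(mktemp -d) && cd "$d"
git init -q
git config user.email demo@example.com && git config user.name demo
git symbolic-ref HEAD refs/heads/master   # pin the branch name used below

# "Upstream" history lives on master:
echo v1 > base && git add base && git commit -qm "Linux 4.0-rc1"

# Fedora's patches live on the rawhide branch:
git checkout -qb rawhide
echo fix > fedora.patch && git add fedora.patch && git commit -qm "fedora: local patch"

# Upstream moves on...
git checkout -q master
echo v2 > base && git commit -qam "Linux 4.0-rc2"

# ...and the daily update replays the Fedora patches on the new base:
git checkout -q rawhide
git rebase -q master
git log --format=%s
```

The log ends up with the Fedora patch sitting on top of the new upstream commits, which is exactly the shape the real tree has after every update.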

At this point we could call it good and just push it to the remote repository, but that didn't really feel worthwhile. I wanted to gain something out of this as well, so I started thinking about how I could actually use the tree. I came up with a couple of things.

Since we're rebasing the patches in git, we don't need to do it separately in the Fedora package repo. There's no sense in doing the work twice. After some initial renaming of some patches and such, I now use the git tree to generate the patches we add to the spec file by using git format-patch master.. and a script to copy them to the working dir on my machine. This means we always have a nice fresh copy of the patches for that specific upstream base. It does mean that each patch typically gets one line of change (the sha hash of the commit) every day, but I don't think that's a big deal. This actually saves me time now and it helps keep our patches fairly "clean". They all apply with git-am and most of them have changelogs and such. An overall improvement from before.

I thought the tree itself could still be more useful to people as well. As the last commit of each update, I include the generated kernel configs that we build the kernel with. Our config setup in the package is somewhat confusing to people that don't work with it daily, and it's not obvious how all the fragments go together. This has them all in one place.

Then I realized that isn't particularly helpful without a way to track things (remember, we rebase every time). To fix that, I add an annotated git tag for every build we do in koji. The annotation points to the build task URL in koji itself. Someone who wanted to could browse to a particular tag and have a direct link to the Fedora package the source represents. You can also diff between tags, etc. I added the tags starting around the 3.17-rc4 builds.

(IMPORTANT: the link does not mean that package was built from this git tree. It simply points to the build that best represents it. Official builds are done from the Fedora package repo via SPEC file and tarballs/patches as mentioned before.)
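The tagging scheme itself is easy to replicate in a scratch repository. A sketch follows; the tag name and koji URL are invented here, just shaped like the real ones:

```shell
set -e
d=$(mktemp -d) && cd "$d"
git init -q
git config user.email demo@example.com && git config user.name demo
echo src > file && git add file && git commit -qm "kernel build"

# Annotate the tag with the koji build task URL (made-up task ID):
git tag -a kernel-3.17.0-0.rc4.git1.1.fc22 \
    -m "http://koji.fedoraproject.org/koji/taskinfo?taskID=1234567"

# Anyone can then recover the link from the tag itself:
git tag -n1 kernel-3.17.0-0.rc4.git1.1.fc22
```

The last command prints the tag name alongside the first line of its annotation, so the build URL is right there in the listing.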

With those things in place, we push out the results to the master and rawhide branches for the day, and call it good.

But what about when other Fedora maintainers add, remove, or modify a patch? Thankfully that happens less frequently than most people think. When it does, git rebase -i is my friend. It lets me rebase the patches while editing or amending, etc. If you haven't used it before I would highly recommend learning how it works.

Whither yon git tree?

OK, OK. You've probably had enough of my blabbering so you can find the git tree here:

https://git.kernel.org/cgit/linux/kernel/git/jwboyer/fedora.git

What's next?

To be honest, I'm not sure. The obvious thing would be to add the release branches (F20 and F21. F19 is dead to me.) so that will probably happen. Beyond that, I don't see much changing at this point. I'm very curious to see if people use this, and if they use it for something more than a web viewer of the tree. Time will tell!

Overall I'm glad I went through the exercise. It's been informative and beneficial, even if only in a small way.

TLDR:
What: exploded source git tree for Fedora kernel package (rawhide only for now)
How: rebases with every push; annotated git tags pointed to corresponding builds
Where: https://git.kernel.org/cgit/linux/kernel/git/jwboyer/fedora.git
Next: Up to you mostly!

Flock 2014: Over but the work is just starting [Aug. 10th, 2014|02:23 pm]
jwboyer

It is traditional when you go to a Fedora conference to do blog posts about your experience there. I think they're great, and I've enjoyed reading those posted by others. There are some great talk recaps and it's always interesting to see how people are spending their time.

For one reason or another, I'm not one of those that manages to do this on a daily basis, though. That usually means I try to do a week-long recap of what happened, and the information is largely stale or redundant, which doesn't make for great reading or communicating. I thought about this a bit, and I think instead I'm going to talk about what I'm planning on doing now that Flock is over, as a direct result of things that occurred while I was there. Hopefully that will be a bit more entertaining and will really keep me honest with those plans.

Custom Kernel builds

A number of people came up to me and asked how to build custom kernels. The instructions we have on our wiki page are probably mostly valid, but I suspect they are stale in certain areas that lead people to have trouble. The kernel maintainers never really build kernels the way the wiki page suggests, so it might be time for a revamp. I'll look this over and give some examples of common cases (enabling config options, adding patches), and hopefully this will lead to people having fewer issues.

A Fedora kernel git tree

We've often talked about having an exploded source tree. The problem with that is that it's of little benefit to the Fedora kernel maintainers. Koji builds from the SRPM and that's from the tarball, patches, and spec file. However, getting a kernel tree created is likely only difficult from the start, and keeping it maintained could probably be automated. It would have to rebase the branches, as that is literally what we do in the SRPM, but if people find it useful and it isn't overly difficult to create, we might as well try. Plus, it's a good way to spend a day or two when I need a break from the daily grind.

Packaging changes

A couple people spoke with me about some possible packaging changes for the kernel. Things like the kernel-source package that Fedora shipped long ago. To be honest, not all of these will probably happen but there are a few minor things we'll likely change.

Bug workflow

Unsurprisingly, this is one of the larger topics I left with things to work on. For starters, we want to try to automate some of the triage we do. It shouldn't be massively difficult to write something that parses an oops and figures out the correct part of the kernel it came from. We'll likely start with ABRT bugs, since those are mostly formatted in a uniform way, and work from there (we might even be able to get ABRT to do this at some point as well). We can also pull from the retrace server once that is able to filter out results. This is a pretty broad description of what to do, and that's because there are many options and directions we can take. Hopefully some automation will really help us get time back to work on different things.
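As a flavor of what the first triage step might look like, pulling the faulting function out of an oops is basically a one-liner. The oops text below is fabricated, and real triage would then map the function to a subsystem and maintainer, but the extraction itself is this simple:

```shell
# oops_func FILE — print the function named on the IP/RIP line of an oops.
oops_func() {
    sed -n 's/.*\[<[0-9a-f]*>\] \([A-Za-z0-9_]*\).*/\1/p' "$1" | head -n1
}

# A fabricated oops fragment for demonstration:
cat > /tmp/oops.txt <<'EOF'
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: [<ffffffff812a4b10>] btrfs_search_slot+0x40/0x990
EOF

oops_func /tmp/oops.txt
```

That prints btrfs_search_slot, which is enough to know this one goes to the btrfs folks rather than, say, the ACPI folks.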

Rawhide kernel handling

In my kernel talk, Owen asked me "Is the rawhide kernel the right kernel for rawhide?" He was referring to the fact that we build with debug options enabled, and this has performance impacts. Those impacts are less noticeable to a human now that we leave SLUB debugging off, but if you're using perf or some automated performance metrics it is going to be easy to see it as slower than a normal build. Even ignoring the performance aspect, there are still days where we'd like to get a build out for "adventurer" testing before it hits the main rawhide compose, but we don't have a great way to do that. I'm going to think about how we can use the kernel-tests repo, the autotest harness we have, and some of the other tools to see if there's a way we can improve both the rawhide kernel quality and the number of people willing to test rawhide kernels. Nothing concrete yet, but I think some of the work we've done over the course of this past year will really help.

Fedora Governance

This really has nothing to do with the kernel. The discussions I had over the course of the conference around governance were really good. The dedicated session Toshio and Haikel held was well attended, and participation was excellent. I'm looking forward to working with the broader Fedora community to hammer out the rest of the details in the plan and start executing on that soon. There's a lot of work to do, but out of everything at Flock it's the one thing I am most eager to jump into. I honestly wish I could spend more dedicated time on working on broader Fedora issues in general.

I'm sure there are things I've forgotten. If I talked with you about something that isn't on this list, send me a reminder. After a week of meetings and conference, my brain can get a little full.

As always, the best part of Flock was seeing old friends again and meeting new people. I had good conversations with everyone I spoke with, and it was a really great time. If you haven't already, be sure to thank the Flock organizers. Having pitched in a bit over the past 2 years, I can tell you that it isn't at all easy to get this conference planned, and it can be exhausting while it's running.

Looking forward to working with everyone over the course of this year to see what awesome things we can bring to the next Flock!

At the playground [Jul. 8th, 2014|03:25 pm]
jwboyer

A while ago, we had a thread on the Fedora kernel list where people were expressing a desire for easily accessible kernels that contained new and shiny technologies still under development. We've had a number of discussions like that over the years, and none of them ever seem to pan out to something remotely feasible. The patches in question tend to be larger, with no clear upstream path. The one thing we've learned from uprobes, execshield, and now Secure Boot is that carrying non-upstream patches is kind of a nightmare. It makes us different from upstream, creates a maintenance burden, and actually leads to those patches taking even longer to get upstream because there's no impetus once they're "in-distro". So we've been hit with the cluebat enough to say "no major non-upstream patches", and to be honest that approach has been working well.

Except that precludes people from working on them, getting userspace ready for them, and improving them. There's a catch-22 there. The features aren't making upstream progress because nobody is using them, but nobody is using them because they aren't easily accessible. Not everyone is willing to build their own kernel just to play with a new toy. Fedora hasn't been willing to take the hit so they can easily use something that might not actually get upstream (see aufs). So what can anyone do?

The Fedora kernel team has thought about this for a while. How do we deliver something people are asking for without impacting the rest of the distro? The rawhide nodebug repo has shown that simply delivering a side-band kernel doesn't drastically increase the maintenance burden, but that isn't really a great comparison. That is literally the same kernel, just with the debug options disabled. However, the introduction of Coprs in Fedora gives us another option.

The kernel team first used a Copr to work on the kernel-core/kernel-modules packaging split before landing it in rawhide. The Cloud team was able to point their instances at that repo and test it out. This actually worked out really well, and we fixed a number of bugs before they hit rawhide. The content of the kernel was still a 100% match to the mainline Fedora kernel, but the packaging difference was enough to prove the mechanism worked well enough.

So with that in mind, I've decided to create a kernel-playground Copr. The idea behind this is to provide easily consumable kernel builds with some of the things people have been requesting for a while now. Before we get to specifics, here's the playground rules:

* This is not a "supported" kernel. Bugs filed against the Fedora kernel component with the kernel-playground kernel will be closed. If the bug clearly lands in one of the features being carried, we'll make attempts to forward the information to the relevant upstream developers. If there are issues we can still discuss them, but we want to keep the Fedora distro and its process/tracking isolated from this, like we would with any other non-standard kernel.

* The kernel-playground build will roughly track rawhide, with selected features/patches added on top. I say roughly because it will not receive a build for every rawhide build done like the nodebug repo does. The tentative plan is to update it once per upstream -rc and final release.

* x86_64 only. In the future, other 64-bit architectures might be possible.

* There are no guarantees around feature/patch additions. We'll do our best to keep things integrated, but we might have to drop additions if they become too much work. There are also no plans to "graduate" features from the playground kernel to the standard Fedora kernel. "When it gets upstream" is still the default answer there.

* In all likelihood, this kernel will crash on your machine at some point. It will probably even eat data. DO NOT RUN IT ON ANYTHING YOU RELY ON HEAVILY. Make sure whatever data you care about is backed up, etc. We provide this AS-IS, with no warranty, and are not liable for anything. (The same is true of Fedora kernels as well, but doubly so for kernel-playground.)

OK, now that we have that out of the way, let's talk about what is actually in kernel-playground. At the moment there are two additions on top of the standard rawhide kernel; overlayfs (v22) and kdbus.

Overlayfs is one of the top competing "union" filesystems out there, and has actually been posted for review for the past few releases. It has the best chance of landing upstream sometime this decade, and there has been interest in it for quite a while. I believe things like Docker would also be able to make use of it as a backend. I'll track upstream submissions and update accordingly.
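To give a feel for how an overlay is assembled, its mount configuration can be sketched as an fstab line. The paths are made up, and the option names are taken from the upstream submission, so the v22 patchset may differ slightly:

```
# /etc/fstab entry (hypothetical paths): merge a read-only lowerdir
# with a writable upperdir; workdir must be an empty directory on
# the same filesystem as upperdir
overlay  /merged  overlay  lowerdir=/lower,upperdir=/upper,workdir=/work  0  0
```

Writes land in upperdir while lowerdir stays untouched, which is exactly the property that makes it attractive as a copy-on-write backend for things like Docker.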

kdbus is of course the thing that Lennart Poettering and Kay Sievers have been talking about at various conferences for a while now. It is the in-kernel D-Bus replacement. It has not been submitted for upstream review yet, but systemd already has support for it and things seem to be progressing well there.

I'm open to adding more content if there's a reasonable enough purpose to do so. Some interest in the live-patching kernel solutions (kpatch/kGraft) has been expressed, though we won't be producing patch updates, so that still requires heavy user involvement to actually be useful. Backported btrfs updates from the latest development tree have also been floated as an idea. If you have a feature/technology you'd like to see, shoot an email to the Fedora kernel list and we can chat about it.

Apr/May Fedora kernel release overview [May. 20th, 2014|10:42 am]

A brief release overview of the past several weeks below. If you have any questions, please chime in!

F19:

Currently at 3.13.11, but should rebase very soon to the 3.14.4 kernel that has been in updates-testing.

F20:

Currently at 3.14.4. Nothing pending. The upstream stable maintainer has been behind in doing stable releases and still has several hundred patches to wade through. More 3.14.y updates will be coming as he works through the backlog.

Hans has been working on several backlight issues in F20 (and F19). There are a number of models fixed, but several still remain. If you are having backlight issues, take a look at his blog post on how to debug these:

http://hansdegoede.livejournal.com/13889.html

Many thanks to the reporters and testers on the existing bugs, and especially to Hans!

Rawhide (F21):

Currently at 3.15.0-0.rc5.git3.1. Continuing towards 3.15 final.

We've had several packaging changes in rawhide over the past few weeks. The kernel-core/kernel-modules split landed and has, from all indications, been largely a non-event after the first few days. That is good to see, since it was meant to have minimal impact.

We also enabled two features that Kyle McMartin provided. The first is that the kernel packages will now have Provides(foo.ko) included for every kernel module the package provides. That lets us switch Requires from specific kernel packages (like kernel-modules-extra) to Requires on the specific kernel modules needed. With that we can move modules between packages more freely without breaking dependencies in the userspace packages that need certain modules.
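As an illustration of what this enables, a userspace package's spec file can depend on the module it needs rather than on a particular kernel subpackage. The exact provide syntax below is my recollection of the convention, and the module name is just an example:

```
# Hypothetical spec-file fragment: pull in whichever kernel package
# currently ships the needed module, instead of hardcoding a
# dependency on kernel-modules-extra
Requires: kmod(cifs.ko)
```

If the module later moves between kernel-core, kernel-modules, and kernel-modules-extra, the dependency keeps resolving without any change to the userspace package.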

Kyle's other feature was to compress the installed modules with xz. The kmod utility has been able to handle compressed modules for quite some time. This yields significant savings in installed size, and the performance impact on module loading is pretty negligible. Combined with the kernel packaging split, this should make the Cloud people rather happy. Thanks Kyle!
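The space win is easy to see with xz itself. Here's a toy illustration; the file below is a compressible stand-in, not a real kernel module, and the exact xz flags the package build uses are an assumption on my part:

```shell
# Create a stand-in "module" and compress it the way the package
# build does. kmod/modprobe can load the resulting .ko.xz directly,
# so nothing in userspace needs to decompress it by hand.
dd if=/dev/zero of=sample.ko bs=1K count=64 2>/dev/null
xz -k sample.ko        # -k keeps the original around for comparison
ls -l sample.ko sample.ko.xz
```

On a real module tree the ratio is less dramatic than on a zero-filled file, but the aggregate savings across a few thousand modules add up quickly.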

DevConf 2014 [Feb. 17th, 2014|01:52 pm]

I recently had the privilege of attending DevConf 2014, and it was really impressive. This was the first time I had ever traveled to the Czech Republic and I found the people to be extremely pleasant and helpful. The city of Brno was really easy to get around and the conference accommodations were nice.

The conference itself was more than I expected it to be, both in terms of scale and quality. That's not to say that I expected it to be poor quality, but at all conferences there are some talks or tracks that wind up being less worthwhile than you expect. That wasn't the case here. Every talk I attended was well done and very informative. Despite the conference organizers warning that the presenters were engineers and not public speakers, I thought everyone did a great job. Clearly this is a continuing trend because the conference was packed with people. It was almost to the point where a larger venue would be needed.

Day One

The first talk I attended was Thomas Graf's Linux networking stack overview. He took a very complex topic and presented it in a clear manner without overwhelming people. It was very well done, and I'll probably refer back to his slides when looking at networking issues for quite a while. I really like these kinds of overview talks, because while I stare at kernel code all day long, it's hard to get a bigger picture of the whole stack if you don't work on it daily. Very helpful.

After that I attended Alex Larsson's talk on Docker. Docker was everywhere at the conference and the hype around it was pretty evident. Being a kernel guy not very much in tune with the higher level userspace stacks, I wanted to understand why people were excited and what usecases might be present for Fedora. Alex did a great job of talking to that hype and then was brave enough to give a good demo that worked. His talk was great and while I might not see myself using Docker in the future, it was definitely informative enough for me to explain Docker to someone else later on.

It was interesting to go from the Docker talk to Colin Walters' talk on OSTree. Where Docker seems designed to make it easy to swap out "stacks" of things, OSTree is very focused on building an immutable tree and having that be the basis for testing and deployment. The ability to build the trees on the server and deploy them on both bare-metal and virtual machines is interesting to me. I can certainly see this being very helpful as Fedora is making progress towards a release. It removes the ambiguity around what "version" of Fedora you're testing, since the trees are named and immutable whereas a typical system on Branched is a collection of packages that may or may not match the latest releases. I think this could also provide a great base for things like cloud-images, where you don't really focus on running an install for a great duration and upgrading it as you go. I really appreciated Colin's approach to the presentation as well. He presented both benefits and downsides to his ideas, which is not something you often see from someone presenting a new technology.

After that I did a bit of "hallway tracking", catching up with people I hadn't seen in a while and meeting several others in person for the first time. As is typical of most conferences, the hallway track was very useful, and this was no exception.

I went to the Continuous Integration with OBS talk, given by Adrian Schröter from SUSE. It was a very interesting presentation. Being able to see what OBS does differently from, and in addition to, koji is always good. I had looked at OBS somewhat in depth a few years ago, and the SUSE team has really continued to improve and polish it since then. Adopting some of the same features within Fedora would be beneficial, but whether that will be done is unknown to me.

The last talk of the day for me was Jeff Scheel's Linux on POWER. He did a great overview of the hardware advances in POWER recently, and what IBM is looking to accomplish with Linux on POWER. The push to get KVM running on POWER will hopefully help adoption there, as it gives others more familiar with Linux on x86 a common set of tools and interfaces to work with. To hear that the traditional IBM value adds of RAS features were also being worked on for Linux enablement was somewhat surprising and refreshing as well.

After that it was off to a dinner and then back to the hotel for sleep.

Day Two

Day two started with Thorsten Leemhuis's kernel overview talk. For those that have read Thorsten's online summaries of what goes into each kernel release, this was much the same, but with great additional detail around it. I really enjoyed the presentation. Even working on the kernel all day, I still found a few things that I had missed myself. Plus, having someone else's perspective on things is always refreshing.

Immediately after I stayed for Lukas Czerner's advanced filesystem/storage features talk. This was a great overview of some of the new and complex capabilities various filesystems can accomplish, as well as some of the new trends in storage coming down the pipe. It also served as a good lead-in to a later talk from Ric Wheeler on the storage side of things. Very well presented, and worth viewing the video for anyone that is interested in these areas.

I had intended to go to the kdbus talk next, but I had already seen the video from LCA on that subject and figured the talk would be much the same. So it was outside for a bit of hallway track again. After that, I went to Ric's aforementioned talk on the new storage technology coming out, specifically shingled drives (SMR) and persistent memory. It's interesting to see how PM is driving changes to the block and fs layers, as the IOPS numbers for those devices are very high. Combining PM with SMR seems like an interesting way to balance the performance and capacity trade-offs. It will be interesting to see how things play out in this space.

Later in the afternoon I attended Dan Walsh's Secure Containers talk. As usual, Dan had a great amount of information on the work he and others are doing to secure Linux containers, why you'd use containers, and what that entails. Of course, this naturally included mention of Docker. I'm curious how the adoption of container technology will drive changes in the marketplace beyond what the adoption of virtualization has done already. Thus far it seems containers are primarily used for scaling and sandboxing at a lower cost than virt, while virt is needed for full isolation and areas with higher security needs. OpenShift and Docker certainly have a lot of hype around them, so I'm wondering if the security issue will be less of a factor in which technology gets deployed going forward.

Right after Dan's talk I stayed for Kyle McMartin's Linux porting talk. This gave a great overview of the difficulties in porting free software (not just the Linux kernel) to a new architecture. The number of assumptions that have been made in various chunks of code is what really drives a lot of the work, and Kyle presented a good summary of the bulk of those issues. He finished off with some "war stories", which are always entertaining. I had seen bits and pieces of this talk at Flock last summer, but not the whole thing. It was well worth it, and had some good updates to it. I would recommend watching the video if you're interested in the 64-bit ARM work being done, or porting work in general.

That evening was the conference party, which was really nice. The food provided was tasty, and it was good to catch up with people. I retired a bit early as jet lag seems to catch up to me on the second day for some reason, but I still had a great time.

Day Three

Fedora day! To say this was Fedora day is somewhat misleading. There was talk about Fedora, implications of things in Fedora, and using Fedora for things throughout the conference. It was ever present and this was great. However, the final day of the conference was dedicated to Fedora and this is the main reason I came.

I didn't actually give a "state of the Fedora kernel" talk as I normally do at conferences. That talk is mostly the same, and frankly the presentations throughout the first two days on kernel topics were far more interesting. Instead, I was there to discuss Fedora.next and the related Working Groups.

Matt Miller kicked off the Fedora.next overview. This was a great way to start, since much of the following day revolved around what we're doing and why. When the video is available, I would highly recommend that those trying to catch up on this watch it in its entirety. Very informative. The questions at the end were actually one of my favorite parts. There were some great questions and it's clear that we need to do a better job of communicating reasoning, status, and goals in the future.

Following Matt's overview, we had a panel discussion with representatives from all of the various Working Groups. It was the first time the liaisons were all in one physical location, and it was very interesting to hear the questions and the answers each gave to them. Overall, I would say we're all on basically the same page, which is good given that a lot of things are still not clearly defined and we've been doing most of this work via email and IRC. Nice to see that distributed... er... development? works well even for things that don't relate to code. I was somewhat surprised that there weren't more specific questions on the Workstation product, particularly given some of the more energetic discussions we've had on the mailing lists so far. Was the summary of the product I gave sufficient to calm many fears, or were people so entirely skeptical of the whole thing that they think it will fail and doesn't matter? I have no idea. The questions we did get were well thought out and I hope that our answers were as well reasoned. Again, I highly recommend watching the video of this session. I really enjoyed participating and would definitely do it again, perhaps at Flock.

The last session of the conference I attended was the Meet Your FESCo session. I've been to several of these, and they're usually pretty interesting. After introductions they took questions and nobody seemed to want to break the ice. So I asked FESCo how they see the interaction between themselves and the Fedora Board working with the Fedora.next change being driven primarily through FESCo. The answers were interesting to me, in that most people seemed to want to stick with the status quo here. FESCo driving technical change/decisions and the Board acting as a more general project wide body. That differs somewhat from my opinion, but that's probably a discussion for another day.

Overall I thought DevConf was very worthwhile. I wish I could have gotten to some of the lightning talks on the first day, and some of the other talks in the afternoon of the second day, but there's only so much time. I hope to have the same opportunity next year, and I'm looking forward to being back in the Czech Republic for Flock in Prague this summer.

Saying NO less [Jan. 14th, 2014|02:30 pm]

This post isn't about my kids, but they're the prompt for it so bear with me.

My kids participate in an after school program that is designed to foster creative thinking. They do spontaneous problem solving and "outside-the-box" kind of thinking. Watching them do this and hearing about the ideas they come up with has been pretty interesting so far. Their solutions to problems aren't always practical, but it's their interaction with their team members that I really noticed.

They are the youngest two kids in the group, by at least 2 years and in some cases 4. Sometimes when the coach gives a problem, the older kids say "oh, we can't do that because..." or "that won't work because..." etc. The coach is great and works really hard to get them to think harder, but she said my kids very rarely do that to begin with. Instead they just throw out all kinds of ideas. Need to simulate water? Use lots of blue M&Ms. Need to make something look smelly without actually making it stink? Use pipe cleaners to create comic-style scent waves coming off of your body. Some just aren't feasible to do, but then the group discusses the ideas and figures out _why_ they aren't really practical. And a lot of the time the ideas are great. All it took was an unjaded (and sometimes naive) point of view.

That got me thinking about Fedora and open source in general. Fedora is 10 years old now, and despite the Fedora.next stuff we're currently working through, we're pretty entrenched in our ways. We have RPM and packaging guidelines and processes. They've worked to produce 20 versions of Fedora, and there is a lot of momentum behind them. However, that doesn't mean they're the best long term thing going forward.

We have developers writing applications that bundle things. We have people wanting to make app containers that are portable so they're distro-agnostic. We have things flying in the face of "Unix tradition". The response to a lot of this stuff within Fedora (and other distros) has been mixed, but there is quite a bit of "we can't do that", or "it's too hard", or similar. Even the Fedora.next stuff is met with a lot of resistance on the grounds that it's a rather large and significant undertaking. There are lots of people saying "it can't be done" or that it won't matter. They may be right. They may also be wrong.

Even within the kernel team, I tend to have knee-jerk "no" answers. Usually with good intentions, but often without really exploring it in detail. It doesn't really help expand my knowledge base and it probably leaves proposers of things with a sour taste in their mouths. I don't know anyone that likes being told a blanket "no" without an explanation.

Thinking about all of this made me wonder if we're missing some great ideas because we've lost that unjaded point of view. So I think from now on I'm going to try pretty hard to consider suggestions with a more open mind. If I think something isn't really practical or feasible, I'll try and respond with reasoning as to why or I'll respond with what it would cost to do and let the proposer work that out. Maybe this won't really lead to anything new, or maybe it will lead to doing something new that ultimately fails. But at least we have the possibility of making something better or learning from a mistake. I guess I'd rather have that than sit around being stagnant and eventually irrelevant. Here's to thinking outside the box.
