Since gitorious' shutdown I decided it was time to start hosting my own git repositories for my own little projects (although the company which took over gitorious has a Free software offering, it seems that their hosted offering is based on the proprietary version; in any case, once bitten, twice shy and all that).

After a bit of investigation I settled on using gitolite and gitweb. I did consider (and even had a vague preference for) cgit but it wasn't available in Wheezy (not even in backports, and backporting it looked tricky) and I haven't upgraded my VPS yet. I may reconsider cgit once I switch to Jessie.

The only wrinkle was that my VPS is shared with a friend and I didn't want to completely take over the gitolite and gitweb namespaces in case he ever wanted to set up git.hisdomain.com, so I needed something which was at least somewhat compatible with vhosting. gitolite doesn't appear to support such things out of the box but I found an interesting/useful post from Julius Plenz which included sufficient inspiration that I thought I knew what to do.

After a bit of trial and error here is what I ended up with:

Install gitolite

The gitolite website has plenty of documentation on configuring gitolite. But since gitolite is in Debian it's even more trivial than the quick install makes out.

I decided to use the newer gitolite3 package from wheezy-backports instead of the gitolite (v2) package from Wheezy. I already had backports enabled so this was just:

# apt-get install gitolite3/wheezy-backports

I accepted the defaults and gave it the public half of the ssh key which I had created to be used as the gitolite admin key.
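
For reference, the key had been created along these lines (the filename matches the IdentityFile used in the .ssh/config entry later in this post; the comment is just a label):

$ ssh-keygen -t rsa -f ~/.ssh/id_rsa_gitolite -C "gitolite admin key"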

By default this added a user gitolite3 with a home directory of /var/lib/gitolite3. Since the username forms part of the URL used to access the repositories I didn't want it to include the 3, so I edited /etc/passwd, /etc/group, /etc/shadow and /etc/gshadow to say just gitolite, but leaving the home directory as gitolite3.
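
A less fiddly alternative (a sketch, assuming nothing is running as the user at the time) would have been to rename the user and group with the standard tools, which also leave the home directory untouched:

# usermod -l gitolite gitolite3
# groupmod -n gitolite gitolite3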

Now I could clone the gitolite-admin repo and begin to configure things.

Add my user

This was as simple as dropping the public half of the key into the gitolite-admin repo as keydir/ijc.pub, then git add, commit and push.
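
i.e. something like the following from within the gitolite-admin clone (the source path of the public key is hypothetical):

$ cp ~/.ssh/id_rsa.pub keydir/ijc.pub
$ git add keydir/ijc.pub
$ git commit -m "Add ijc"
$ git push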

Setup vhosting

Between the gitolite docs and Julius' blog post I had a pretty good idea what I wanted to do here.

I wasn't too worried about making the vhost transparent from the developer's (ssh:// URL) point of view, just from the gitweb and git clone side. So I decided to adapt things to use a simple $VHOST/$REPO.git schema.

I created /var/lib/gitolite3/local/lib/Gitolite/Triggers/VHost.pm containing:

package Gitolite::Triggers::VHost;

use strict;
use warnings;

use File::Slurp qw(read_file write_file);

sub post_compile {
    my %vhost = ();
    # The global projects.list contains one "$VHOST/$REPO.git" entry per line.
    my @projlist = read_file("$ENV{HOME}/projects.list");
    for my $proj (sort @projlist) {
        # Split into vhost (must contain a dot) and repo; skip anything
        # without a vhost prefix (e.g. testing.git).
        $proj =~ m,^([^/\.]*\.[^/]*)/(.*)$, or next;
        my ($host, $repo) = ($1,$2);
        $vhost{$host} //= [];
        push @{$vhost{$host}}, $repo;
    }
    # Write a per-vhost projects.$VHOST.list for gitweb to use.
    for my $v (keys %vhost) {
        write_file("$ENV{HOME}/projects.$v.list",
                   { atomic => 1 }, join("\n",@{$vhost{$v}}));
    }
}
1;

I then edited /var/lib/gitolite3/.gitolite.rc and ensured it contained:

LOCAL_CODE                =>  "$ENV{HOME}/local",

POST_COMPILE => [ 'VHost::post_compile', ],

(The first I had to uncomment, the second to add).

All this trigger does is take the global projects.list, in which gitolite lists any repo which is configured to be accessible via gitweb, and split it into several vhost-specific lists.

Create first repository

Now that the basics were in place I could create my first repository (for hosting qcontrol).

In the gitolite-admin repository I edited conf/gitolite.conf and added:

repo hellion.org.uk/qcontrol
    RW+     =   ijc

After adding, committing and pushing I now have "/var/lib/gitolite3/projects.list" containing:

hellion.org.uk/qcontrol.git
testing.git

(the testing.git repository is configured by default) and /var/lib/gitolite3/projects.hellion.org.uk.list containing just:

qcontrol.git

For cloning the URL is:

gitolite@${VPSNAME}:hellion.org.uk/qcontrol.git

which is rather verbose (${VPSNAME} is quite long in my case too), so to simplify things I added this to my .ssh/config:

Host gitolite
Hostname ${VPSNAME}
User gitolite
IdentityFile ~/.ssh/id_rsa_gitolite

so I can instead use:

gitolite:hellion.org.uk/qcontrol.git

which is a bit less of a mouthful and almost readable.

Configure gitweb (http:// URL browsing)

Following the documentation's advice I edited /var/lib/gitolite3/.gitolite.rc to set:

UMASK                           =>  0027,

and then:

$ chmod -R g+rX /var/lib/gitolite3/repositories/*

This arranges that members of the gitolite group can read anything under /var/lib/gitolite3/repositories/*.

Then:

# adduser www-data gitolite

This adds the user www-data to the gitolite group so it can take advantage of those relaxed permissions. I'm not super happy about this, but since gitweb runs as www-data:www-data this seems to be the recommended way of doing things. I'm consoling myself with the fact that I don't plan on hosting anything sensitive... I also arranged things such that members of the group can only list the contents of directories from the vhost directory down, by setting g=x rather than g=rx on the higher level directories. Potentially sensitive files do not have group permissions at all either.
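
Concretely that last part amounts to something like this (a sketch; the exact set of higher level directories depends on your layout):

# chmod g=x /var/lib/gitolite3 /var/lib/gitolite3/repositories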

Next I created /etc/apache2/gitolite-gitweb.conf:

die unless $ENV{GIT_PROJECT_ROOT};
$ENV{GIT_PROJECT_ROOT} =~ m,^.*/([^/]+)$,;
our $gitolite_vhost = $1;
our $projectroot = $ENV{GIT_PROJECT_ROOT};
our $projects_list = "/var/lib/gitolite3/projects.${gitolite_vhost}.list";
our @git_base_url_list = ("http://git.${gitolite_vhost}");

This extracts the vhost name from ${GIT_PROJECT_ROOT} (it must be the last element) and uses it to select the appropriate vhost specific projects.list.

Then I added a new vhost to my apache2 configuration:

<VirtualHost 212.110.190.137:80 [2001:41c8:1:628a::89]:80>
        ServerName git.hellion.org.uk
        SetEnv GIT_PROJECT_ROOT /var/lib/gitolite3/repositories/hellion.org.uk
        SetEnv GITWEB_CONFIG /etc/apache2/gitolite-gitweb.conf
        Alias /static /usr/share/gitweb/static
        ScriptAlias / /usr/share/gitweb/gitweb.cgi/
</VirtualHost>

This configures git.hellion.org.uk (don't forget to update DNS too) and sets the appropriate environment variables to find the custom gitolite-gitweb.conf and the project root.

Next I edited /var/lib/gitolite3/.gitolite.rc again to set:

GIT_CONFIG_KEYS                 => 'gitweb\.(owner|description|category)',

Now I can edit the repo configuration to be:

repo hellion.org.uk/qcontrol
    owner   =   Ian Campbell
    desc    =   qcontrol
    RW+     =   ijc
    R       =   gitweb

That R permission for the gitweb pseudo-user causes the repo to be listed in the global projects.list and the trigger which we've added causes it to be listed in projects.hellion.org.uk.list, which is where our custom gitolite-gitweb.conf will look.

Setting GIT_CONFIG_KEYS allows those options (owner and desc are syntactic sugar for two of them) to be set here and propagated to the actual repo.

Configure git-http-backend (http:// URL cloning)

After all that this was pretty simple. I just added this to my vhost before the ScriptAlias / /usr/share/gitweb/gitweb.cgi/ line:

        ScriptAliasMatch \
                "(?x)^/(.*/(HEAD | \
                                info/refs | \
                                objects/(info/[^/]+ | \
                                         [0-9a-f]{2}/[0-9a-f]{38} | \
                                         pack/pack-[0-9a-f]{40}\.(pack|idx)) | \
                                git-(upload|receive)-pack))$" \
                /usr/lib/git-core/git-http-backend/$1

This (which I stole straight from the git-http-backend(1) manpage) causes anything which git-http-backend should deal with to be sent there and everything else to be sent to gitweb.

Having done that access is enabled by editing the repo configuration one last time to be:

repo hellion.org.uk/qcontrol
    owner   =   Ian Campbell
    desc    =   qcontrol
    RW+     =   ijc
    R       =   gitweb daemon

Adding R permission for daemon causes gitolite to drop a stamp file (git-daemon-export-ok) in the repository which tells git-http-backend that it should export it.
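
With that done, anonymous cloning over http works for the repository configured above, via the vhost configured earlier:

$ git clone http://git.hellion.org.uk/qcontrol.git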

Configure git daemon (git:// URL cloning)

I actually didn't bother with this: git-http-backend supports the smart HTTP mode, which should be as efficient as the git protocol, so I couldn't see any reason to run another network-facing daemon on my VPS.

FWIW it looks like vhosting could have been achieved using git daemon's --interpolated-path option.

Conclusion

There are quite a few moving parts, but they all seem to fit together quite nicely. In the end, apart from adding www-data to the gitolite group, I'm pretty happy with how things ended up.

Posted Sat 16 May 2015 18:52:03 BST

Since gitorious has now shut down I've (finally!) moved the qcontrol homepage to: http://www.hellion.org.uk/qcontrol.

Source can now be found at http://git.hellion.org.uk/qcontrol.git.

Posted Sun 10 May 2015 14:01:30 BST

For work I recently wrote a blog post on using grub 2 as a Xen PV bootloader. See Using Grub 2 as a bootloader for Xen PV guests over on https://blog.xenproject.org.

Rather than repeat the whole thing here I'll just briefly cover the stuff which is of interest for Debian users (if you want the full background and the stuff on building grub from source etc then see the original post).

TL;DR: With Jessie, install grub-xen-host in your domain 0 and grub-xen in your PV guests. Then in your guest configuration, depending on whether you want a 32- or 64-bit PV guest, write either:

kernel = "/usr/lib/grub-xen/grub-i386-xen.bin"

or

kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"

(instead of bootloader = ... or any other kernel = ...; also omit ramdisk = ... and any command line related stuff such as root = ..., extra = ... or cmdline = ...) and your guests will boot using Grub 2, much like on native.
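
For illustration, a complete 64-bit guest configuration might then look something like this (a sketch; the name and disk path are made up):

name = "jessie-guest"
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
memory = 512
disk = [ "phy:/dev/VG/jessie-guest,xvda,rw" ]
vif = [ "" ]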

In slightly more detail:

The forthcoming Debian 8.0 (Jessie) release will contain support for both host and guest pvgrub2. This was added in version 2.02~beta2-17 of the package (bits were present before then, but -17 ties it all together).

The package grub-xen-host contains grub binaries configured for the host; these will attempt to chainload an in-guest grub image (following the Xen x86 PV Bootloader Protocol) and fall back to searching for a grub.cfg in the guest filesystems. grub-xen-host is Recommended by the Xen meta-packages in Debian or can be installed by hand.

The package grub-xen-bin contains the grub binaries for both the i386-xen and x86_64-xen platforms, while the grub-xen package integrates these into the running system by providing the actual pvgrub2 image (i.e. running grub-install at the appropriate times to create an image tailored to the system) and integration with the kernel packages (i.e. running update-grub at the right times). So it is grub-xen which should be installed in Debian guests.

At this time the grub-xen package is not installed in a guest automatically so it will need to be done manually (something which perhaps could be addressed for Stretch).
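
i.e. inside the guest something like this, after which the package takes care of running grub-install and update-grub as described above:

# apt-get install grub-xen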

Posted Sun 18 Jan 2015 09:23:35 GMT

After becoming a DM at Debconf12 in Managua, Nicaragua and entering the NM queue during Debconf13 in Vaumarcus, Switzerland I received the mail about 24 hours too late to officially become a DD during Debconf14 in Portland, USA. Nevertheless it was a very pleasant surprise to find the mail in my INBOX this morning confirming that my account had been created and that I was officially ijc@debian.org. Thanks to everyone who helped/encouraged me along the way!

I don't imagine much will change in practice, I intend to remain involved in the kernel and Debian Installer efforts as well as continuing to contribute to the Xen packaging and to maintain qcontrol (both in Debian and upstream) and sunxi-tools. I suppose I also still maintain ivtv-utils and xserver-xorg-video-ivtv but they require so little in the way of updates that I'm not sure they count.

Posted Tue 02 Sep 2014 18:58:27 BST

It's taken a while but all of the pieces are finally in place to run successfully through Debian Installer on ARM64 using the Debian ARM64 port.

So I'm now running nightly builds locally and uploading them to http://www.hellion.org.uk/debian/didaily/arm64/.

If you have CACert in your CA roots then you might prefer the slightly more secure version.

Hopefully before too long I can arrange to have them building on one of the project machines and uploaded to somewhere a little more formal like people.d.o or even the regular Debian Installer dailies site. This will have to do for now though.

Warning

The arm64 port is currently hosted on Debian Ports which only supports the unstable "sid" distribution. This means that installation can be a bit of a moving target and sometimes fails to download various installer components or installation packages. Mostly it's just a case of waiting for the buildd and/or archive to catch up. You have been warned!

Installing in a Xen guest

If you are lucky enough to have access to some 64-bit ARM hardware (such as the APM X-Gene, see wiki.xen.org for setup instructions) then installing Debian as a guest is pretty straightforward.

I suppose if you had lots of time (and I do mean lots) you could also install under Xen running on the Foundation or Fast Model. I wouldn't recommend it though.

First download the installer kernel and ramdisk onto your dom0 filesystem (e.g. to /root/didaily/arm64).
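
For example (a sketch; the dailies tree layout isn't shown here, so <build> is a placeholder for the appropriate subdirectory):

# mkdir -p /root/didaily/arm64 && cd /root/didaily/arm64
# wget http://www.hellion.org.uk/debian/didaily/arm64/<build>/vmlinuz
# wget http://www.hellion.org.uk/debian/didaily/arm64/<build>/initrd.gz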

Second create a suitable guest config file such as:

name = "debian-installer"
disk = ["phy:/dev/LVM/debian,xvda,rw"]
vif = [ '' ] 
memory = 512
kernel = "/root/didaily/arm64/vmlinuz"
ramdisk= "/root/didaily/arm64/initrd.gz"
extra = "console=hvc0 -- "

In this example I'm installing to a raw logical volume /dev/LVM/debian. You might also want to use randmac to generate a permanent MAC address for the Ethernet device (specified as vif = ['mac=xx:xx:xx:xx:xx:xx']).

Once that is done you can start the guest with:

xl create -c cfg

From here you'll be in the installer and things carry on as usual. You'll need to manually point it to ftp.debian-ports.org as the mirror, or you can preseed by appending to the extra line in the cfg like so:

mirror/country=manual mirror/http/hostname=ftp.debian-ports.org mirror/http/directory=/debian

Apart from that there will be a warning about not knowing how to set up the bootloader, but that is normal for now.

Installing in Qemu

To do this you will need a version of qemu (http://www.qemu.org) which supports qemu-system-aarch64. The latest release doesn't yet, so I've been using v2.1.0-rc3 (it seems upstream are now up to -rc5). Once qemu is built and installed and the installer kernel and ramdisk have been downloaded to $DI you can start with:

qemu-system-aarch64 -M virt -cpu cortex-a57 \
    -kernel $DI/vmlinuz -initrd $DI/initrd.gz \
    -append "console=ttyAMA0 -- " \
    -serial stdio -nographic --monitor none \
    -drive file=rootfs.qcow2,if=none,id=blk,format=qcow2 -device virtio-blk-device,drive=blk \
    -net user,vlan=0 -device virtio-net-device,vlan=0

That's using a qcow2 image for the rootfs; I think I created it with something like:

qemu-img create -f qcow2 rootfs.qcow2 4G

Once started installation proceeds much like normal. As with Xen you will need to either point it at the debian-ports archive by hand or preseed by adding to the -append line and the warning about no bootloader configuration is expected.

Installing on real hardware

Someone should probably try this ;-).

Posted Tue 29 Jul 2014 20:36:50 BST

I've recently packaged the sunxi tools for Debian. These are a set of tools produced by the Linux Sunxi project for working with the Allwinner "sunxi" family of processors. See the package page for details. Thanks to Steve McIntyre for sponsoring the initial upload.

The most interesting components of the package are the tools for working with the Allwinner processors' FEL mode. This is a low-level processor mode which implements a simple USB protocol, allowing for initial programming and recovery of the device; it can be entered on boot (usually by pressing a special 'FEL button' somewhere on the device). It is thanks to FEL mode that most sunxi based devices are pretty much unbrickable.

The most common use of FEL is to boot over USB. In the Debian package the fel and usb-boot tools are named sunxi-fel and sunxi-usb-boot respectively but otherwise can be used in the normal way described on the sunxi wiki pages.
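
For example, with a device connected in FEL mode, this should be a quick sanity check that the tool can see it (the version subcommand reports the SoC; see the wiki for the full command set):

$ sunxi-fel version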

One enhancement I made to the Debian version of usb-boot is to integrate with the u-boot packages to allow you to easily FEL boot any sunxi platform supported by the Debian packaged version of u-boot (currently only Cubietruck, more to come I hope). To make this work we take advantage of Multiarch to install the armhf version of u-boot (unless your host is already armhf of course, in which case just install the u-boot package):

# dpkg --add-architecture armhf
# apt-get update
# apt-get install u-boot:armhf
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  u-boot:armhf
0 upgraded, 1 newly installed, 0 to remove and 1960 not upgraded.
Need to get 0 B/546 kB of archives.
After this operation, 8,676 kB of additional disk space will be used.
Retrieving bug reports... Done
Parsing Found/Fixed information... Done
Selecting previously unselected package u-boot:armhf.
(Reading database ... 309234 files and directories currently installed.)
Preparing to unpack .../u-boot_2014.04+dfsg1-1_armhf.deb ...
Unpacking u-boot:armhf (2014.04+dfsg1-1) ...
Setting up u-boot:armhf (2014.04+dfsg1-1) ...

With that done FEL booting a cubietruck is as simple as starting the board in FEL mode (by holding down the FEL button when powering on) and then:

# sunxi-usb-boot Cubietruck -
fel write 0x2000 /usr/lib/u-boot/Cubietruck_FEL/u-boot-spl.bin
fel exe 0x2000
fel write 0x4a000000 /usr/lib/u-boot/Cubietruck_FEL/u-boot.bin
fel write 0x41000000 /usr/share/sunxi-tools//ramboot.scr
fel exe 0x4a000000

Which should result in something like this on the Cubietruck's serial console:

U-Boot SPL 2014.04 (Jun 16 2014 - 05:31:24)
DRAM: 2048 MiB


U-Boot 2014.04 (Jun 16 2014 - 05:30:47) Allwinner Technology

CPU:   Allwinner A20 (SUN7I)
DRAM:  2 GiB
MMC:   SUNXI SD/MMC: 0
In:    serial
Out:   serial
Err:   serial
SCSI:  SUNXI SCSI INIT
Target spinup took 0 ms.
AHCI 0001.0100 32 slots 1 ports 3 Gbps 0x1 impl SATA mode
flags: ncq stag pm led clo only pmp pio slum part ccc apst 
Net:   dwmac.1c50000
Hit any key to stop autoboot:  0 
sun7i# 

As more platforms become supported by the u-boot packages you should be able to find them in /usr/lib/u-boot/*_FEL.

There is one minor inconvenience which is the need to run sunxi-usb-boot as root in order to access the FEL USB device. This is easily resolved by creating /etc/udev/rules.d/sunxi-fel.rules containing either:

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", OWNER="myuser"

or

SUBSYSTEMS=="usb", ATTR{idVendor}=="1f3a", ATTR{idProduct}=="efe8", GROUP="mygroup"

to enable access for myuser or mygroup respectively. Once you have created the rules file, reload the rules to activate it:

# udevadm control --reload-rules

As well as the FEL mode tools the packages also contain a FEX (de)compiler. FEX is Allwinner's own hardware description language and is used with their Android SDK kernels and the fork of that kernel maintained by the linux-sunxi project. Debian's kernels follow mainline and therefore use Device Tree.

Posted Mon 21 Jul 2014 19:10:23 BST

For my local backup regimen I use flexbackup to create a full backup twice a year and differential/incremental backups on a weekly/monthly basis. I then upload these to a new Amazon S3 bucket for each half year (so each bucket corresponds to a full backup plus the associated differentials and incrementals).

I then set the bucket's lifecycle to archive to Glacier (cheaper offline storage) from the month after that half year has ended (reducing costs) and to delete it a year after the half ends. It used to be possible to do this via the S3 web interface but the absolute date based options seem to have been removed in favour of time-since-last-update ones, which is not what I want. However the UI will still display such lifecycles if they are configured, and directs you to the REST API to set them up.

I had a look around but couldn't find any existing CLI tools to do this directly; I figured it must be possible with curl though. A little bit of reading later I found that it was possible but it involved some faff calculating signatures etc. Luckily EricW has written Amazon S3 Authentication Tool for Curl (AKA s3curl) which automates the majority of that faff. The tool is "New BSD" licensed according to that page, or Apache 2.0 licensed according to the included LICENSE file and code comments.

Setup

Following the included README, set up ~/.s3curl containing your id and secret key (I called mine personal, which I then use below).
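
As described in the s3curl README the file is a small Perl fragment along these lines (id and key truncated, obviously):

%awsSecretAccessKeys = (
    personal => {
        id  => 'AKIA...',
        key => '...',
    },
);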

Getting the existing lifecycle

Retrieving an existing lifecycle is pretty easy. For the bucket which I used for the first half of 2014:

$ s3curl --id=personal -- --silent http://$bucket.s3.amazonaws.com/?lifecycle | xmllint --format -
<?xml version="1.0" encoding="UTF-8"?>
<LifecycleConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2014-07-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-01-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

See GET Bucket Lifecycle for details of the XML.

Setting a new lifecycle

The desired configuration needs to be written to a file. For example to set the lifecycle for the bucket I'm going to use for the second half of 2014:

$ cat s3.lifecycle
<LifecycleConfiguration>
  <Rule>
    <ID>Archive and Expire</ID>
    <Prefix/>
    <Status>Enabled</Status>
    <Transition>
      <Date>2015-01-31T00:00:00.000Z</Date>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Date>2015-07-31T00:00:00.000Z</Date>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
$ s3curl --id=personal --put s3.lifecycle --calculateContentMd5 -- http://$bucket.s3.amazonaws.com/?lifecycle

See PUT Bucket Lifecycle for details of the XML.

Posted Sun 06 Jul 2014 11:45:27 BST

Update: Closely followed by 0.5.4 to fix an embarrassing brown paper bag bug:

  • Correct argument handling for system-status command

Get it from gitorious or http://www.hellion.org.uk/qcontrol/releases/0.5.4/.

I've just released qcontrol 0.5.3. Changes since the last release:

  • Reduce spaminess of temperature control (Debian bug #727150).
  • Support for enabling/disabling RTC on ts219 and ts41x. Patch from Michael Stapelberg (Debian bug #732768).
  • Support for Synology Diskstation and Rackstation NASes. Patch from Ben Peddell.
  • Return correct result from direct command invocation (Debian bug #617439).
  • Fix ts41x LCD detection.
  • Improved command line argument parsing.
  • Lots of internal refactoring and cleanups.

Get it from gitorious or http://www.hellion.org.uk/qcontrol/releases/0.5.3/.

The Debian package will be uploaded shortly.

Posted Fri 11 Apr 2014 16:04:01 BST

At the Mini-debconf in Cambridge back in November there was an ARM Sprint (which Hector wrote up as a Bits from ARM porters mail). During this there was a brief discussion about using GRUB as a standard bootloader, particularly for ARM server devices. This has the advantage of providing a more "normal" (which in practice means "x86 server-like") as well as more flexible solution compared with the existing flash-kernel tool which is often used on ARM.

On ARMv7 devices this will more than likely involve chain loading from the U-Boot supplied by the manufacturer. For test and development it would be useful to be able to set up a similar configuration using Qemu.

Cross-compilers

Although this can be built and run on an ARM system I am using a cross compiler here. I'm using gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux from Linaro, which can be downloaded from the linaro-toolchain-binaries page on Launchpad. (It looks like 2013.10 is the latest available right now, I can't see any reason why that wouldn't be fine).

Once the cross-compiler has been downloaded unpack it somewhere, I will refer to the resulting gcc-linaro-arm-linux-gnueabihf-4.8-2013.08_linux directory as $CROSSROOT.

Make sure $CROSSROOT/bin (which contains arm-linux-gnueabihf-gcc etc) is in your $PATH.
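
e.g. for a bash-style shell:

$ export PATH=$CROSSROOT/bin:$PATH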

Qemu

I'm using the version packaged in Jessie, which is 1.7.0+dfsg-2. We need both qemu-system-arm for running the final system and qemu-user to run some of the tools. I'd previously tried an older version of qemu (1.6.x?) and had some troubles, although they may have been of my own making...

Das U-boot for Qemu

First thing to do is to build a suitable u-boot for use in the qemu emulated environment. Since we need to make some configuration changes we need to build from scratch.

Start by cloning the upstream git tree:

$ git clone git://git.denx.de/u-boot.git
$ cd u-boot

I am working on top of e03c76c30342 "powerpc/mpc85xx: Update CONFIG_SYS_FSL_TBCLK_DIV for T1040" dated Wed Dec 11 12:49:13 2013 +0530.

We are going to use the Versatile Express Cortex-A9 u-boot but first we need to enable some additional configuration options:

  • CONFIG_API -- This enables the u-boot API which Grub uses to access the low-level services provided by u-boot. This means that grub doesn't need to contain dozens of platform specific flash, mmc, nand, network, console drivers etc and can be completely platform agnostic.
  • CONFIG_SYS_MMC_MAX_DEVICE -- Setting CONFIG_API needs this.
  • CONFIG_CMD_EXT2 -- Useful for accessing EXT2 formatted filesystems. In this example I use a VFAT /boot for convenience but in a real system we would want to use EXT2 (or even something more modern).
  • CONFIG_CMD_ECHO -- Just useful.

You can add all these to include/configs/vexpress_common.h:

#define CONFIG_API
#define CONFIG_SYS_MMC_MAX_DEVICE   1
#define CONFIG_CMD_EXT2
#define CONFIG_CMD_ECHO

Or you can apply the patch which I sent upstream:

$ wget -O - http://patchwork.ozlabs.org/patch/304786/raw | git apply --index
$ git commit -m "Additional options for grub-on-uboot"

Finally we can build u-boot:

$ make CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
$ make CROSS_COMPILE=arm-linux-gnueabihf-

The result is a u-boot binary which we can load with qemu.

GRUB for ARM

Next we can build grub. Start by cloning the upstream git tree:

$ git clone git://git.sv.gnu.org/grub.git
$ cd grub

By default grub is built for systems which have RAM at address 0x00000000. However the Versatile Express platform which we are targeting has RAM starting from 0x60000000 so we need to make a couple of modifications. First in grub-core/Makefile.core.def we need to change arm_uboot_ldflags, from:

-Wl,-Ttext=0x08000000

to

-Wl,-Ttext=0x68000000

and second we need make a similar change to include/grub/offsets.h changing GRUB_KERNEL_ARM_UBOOT_LINK_ADDR from 0x08000000 to 0x68000000.

Now we are ready to build grub:

$ ./autogen.sh
$ ./configure --host arm-linux-gnueabihf
$ make

Now we need to build the final grub "kernel" image. Normally this would be taken care of by grub-install but because we are cross building grub we cannot use this and have to use grub-mkimage directly. However the version we have just built is for the ARM target and not for the host we are building things on. I've not yet figured out how to build grub for ARM while building the tools for the host system (I'm sure it is possible somehow...). Luckily we can use qemu to run the ARM binary:

$ cat load.cfg
set prefix=(hd0)
$ qemu-arm -r 3.11 -L $CROSSROOT/arm-linux-gnueabihf/libc \
    ./grub-mkimage -c load.cfg -O arm-uboot -o core.img -d grub-core/ \
    fat ext2 probe terminal scsi ls linux elf msdospart normal help echo

Here we create load.cfg, which is the setup script which will be built into the grub kernel; our version just sets the root device so that grub can find the rest of its configuration.

Then we use qemu-arm to invoke grub-mkimage. The "-r 3.11" option tells qemu to pretend to be a 3.11 kernel (which is required by the libc used by our cross compiler; without this you will get a fatal: kernel too old message) and "-L $CROSSROOT/..." tells it where to find the basic libraries, such as the dynamic linker (luckily grub-mkimage doesn't need much in the way of libraries so we don't need a full cross library environment).

The grub-mkimage command passes in the load.cfg and requests an output kernel targeting arm-uboot; core.img is the output file and the modules are in grub-core (because we didn't actually install grub in the target system; normally these would be found in /boot/grub). Lastly we pass in a list of default modules to build into the kernel, including filesystem drivers (fat, ext2), disk drivers (scsi), partition handling (msdospart), loaders (linux, elf), the menu system (normal) and various other bits and bobs.

So after all that we now have our grub kernel in core.img.

Putting it all together

Before we can launch qemu we need to create various disk images.

Firstly we need some images for the two 64M flash devices:

$ dd if=/dev/zero of=pflash0.img bs=1M count=64
$ dd if=/dev/zero of=pflash1.img bs=1M count=64

We will initialise these later from the u-boot command line.

Secondly we need an image for the root filesystem on an MMC device. I'm using a FAT formatted image here simply for the convenience of using mtools to update the images during development.

$ dd if=/dev/zero of=mmc.img bs=1M count=16
$ /sbin/mkfs.vfat mmc.img

Thirdly we need a kernel, device tree and grub configuration on our root filesystem. For the first two I extracted them from the standard armmp kernel flavour package. I used the backports.org version 3.11-0.bpo.2-armmp version and extracted /boot/vmlinuz-3.11-0.bpo.2-armmp as vmlinuz and /usr/lib/linux-image-3.11-0.bpo.2-armmp/vexpress-v2p-ca9.dtb as dtb. Then I hand coded a simple grub.cfg:

menuentry 'Linux' {
        echo "Loading vmlinuz"
        set root='hd0'
        linux /vmlinuz console=ttyAMA0 ro debug
        devicetree /dtb
}

In a real system the kernel and dtb would be provided by the kernel packages and grub.cfg would be generated by update-grub.

Now that we have all the bits we need copy them into the root of mmc.img. Since we are using a FAT formatted image we can use mcopy from the mtools package.

$ mcopy -v -o -n -i mmc.img core.img dtb vmlinuz grub.cfg ::

Finally after all that we can run qemu passing it our u-boot binary and the mmc and flash images and requesting a Cortex-A9 based Versatile Express system with 1GB of RAM:

$ qemu-system-arm -M vexpress-a9 -kernel u-boot -m 1024m -sd mmc.img \
    -nographic -pflash pflash0.img -pflash pflash1.img

Then at the VExpress# prompt we can configure the default bootcmd to load grub and save the environment to the flash images. The backslash escapes (\$ and \;) should be included as written here so that, for example, the variables are only evaluated when bootcmd is run rather than immediately when it is set, and so that bootm becomes part of bootcmd instead of being executed immediately:

VExpress# setenv bootcmd fatload mmc 0:0 \${loadaddr} core.img \; bootm \${loadaddr}
VExpress# saveenv

Now whenever we boot the system it will automatically load grub from the mmc and launch it. Grub in turn will load the Linux binary and DTB and launch those. I haven't actually configured Linux with a root filesystem here so it will eventually panic after failing to find root.

Future work

The most pressing issue is the hard coded load address built in to the grub kernel image. This is something which needs to be discussed with the upstream grub maintainers as well as the Debian package maintainers.

Now that the ARM packages have hit Debian (in experimental in the 2.02~beta2-1 package) I also plan to start looking at debian-installer integration as well as updating flash-kernel to setup the chain load of grub instead of loading a kernel directly.

Posted Thu 26 Dec 2013 13:20:51 GMT

Until now qcontrol has mostly supported only ARM (kirkwood) based devices (upstream has a configuration example for the HP Media Vault too, but I don't know if it is used).

Debian bug #712191 asked for at least some basic support for x86 based devices. They mostly don't use the QNAP PIC used on the ARM devices, so much of the qcontrol functionality is irrelevant, but at least some of them do have a compatible A125 LCD.

Unfortunately I don't have any x86 QNAP devices and I've been unable to figure out a way to detect that we are running on a QNAP box as opposed to any random x86 box, so I've not been able to implement the hardware auto-detection used on ARM to configure qcontrol for the appropriate device at installation time. I don't want to include a default configuration which tries to drive an LCD on some random serial port, since I have no way of knowing what will be on the other end or what the device might do if sent random bytes of the LCD control protocol.

So I've implemented debconf prompting for the device type which is used only if auto-detection fails, so it shouldn't change anything for existing users on ARM. You can find this in version 0.5.2-3~exp1 in experimental (see DebianExperimental on the Debian wiki for how to use experimental).

Currently the package only knows about the existing set of ARM platforms and a default "unknown" platform, which has an empty configuration. If you have a QNAP device (ARM or x86) which is not currently supported then please install the package from experimental and tailor /etc/qcontrol.conf for your platform (e.g. by uncommenting the a125 support and giving it the correct serial port). Then send me the result along with the device's name. If the device is an ARM one please also send me the contents of /proc/cpuinfo so I can implement auto-detection.

If you know how to detect a particular x86 QNAP device programmatically (via DMI decoding, PCI probing, sysfs etc, but make sure it is 100% safe on non-QNAP platforms) then please do let me know.

Posted Sat 05 Oct 2013 12:22:49 BST

This blog is powered by ikiwiki.