Posts Tagged ‘lvm’
Ubuntu 13.10 stuck on initramfs on boot
I finally decided to write my first post. If it were down to my contributions so far, this would be the very first one of the blog. So… a big “thank you” goes to Roberto for “keeping it real”! I promise I will contribute more from now on.
The problem I would like to talk about is a very “peculiar” issue I encountered yesterday evening when turning on my Ubuntu 13.10 x64 HP ProLiant N40L micro-server via wake-on-lan: the boot process got stuck at an initramfs prompt, just after the following lines:
[    3.956857]  sdd: sdd1 sdd2 < sdd5 >
[    3.957496] sd 5:0:0:0: [sdd] Attached SCSI disk
[    3.977678]  sda: sda1
[    3.977713]  sdc: sdc1
[    3.978048] sd 4:0:0:0: [sdc] Attached SCSI disk
[    3.980252]  sdb: sdb1
[    3.980607] sd 1:0:0:0: [sdb] Attached SCSI disk
[    3.987741] sd 0:0:0:0: [sda] Attached SCSI disk
[    4.120846] bio: create slab <bio-1> at 1
It was clear from the beginning that the problem was a failure while mounting /root.
Initially I thought that the issue could be related to a hardware problem, but it turned out to be a stray RAID signature on the boot disk.
It took me a while to figure out how to boot without having to reinstall the OS or lose data.
These are the steps I followed (note that my OS is installed on LVM, so look elsewhere if your boot is stuck at initramfs AND you don’t have LVM on the boot disk):
- Create an Ubuntu live USB and boot from that one
- Select “Try Ubuntu”
- Open a Terminal, become root and execute the following commands (execute and analyze the output of each command individually)
lvdisplay #displays the logical volumes
modprobe dm-mod #loads the device-mapper kernel module
lvm vgscan #scans all disks for volume groups
lvm vgchange -ay #activates the logical volumes
ls /dev/mapper #lists the contents of /dev/mapper
Your logical volumes should now be listed by the last command above. They should appear both under /dev/mapper/ and under /dev/YourVolumeGroupName/ (the names should be clear from the LV Name properties in the lvdisplay output).
In my case I have 2 logical volumes:
- [servername]-root
- [servername]-swap
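For reference, the listing at this point should look roughly like the following (where [servername] is a placeholder for the actual volume group name and control is the device-mapper control node):

ls /dev/mapper
control  [servername]-root  [servername]-swap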
Run the command below on each of the logical volumes listed in /dev/mapper:
fsck /dev/mapper/LogicalVolumeName #runs a filesystem check on the unmounted volume
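In my case, with the placeholder names used above, that meant running:

fsck /dev/mapper/[servername]-swap
fsck /dev/mapper/[servername]-root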
Fsck on the swap logical volume completed without errors.
Fsck on the root logical volume failed, identifying the type as “silicon_medley_raid_member” (while it is actually ext4).
I then tried to force an ext4 filesystem check with the command below, and it ran without errors:
fsck.ext4 /dev/mapper/[servername]-root #runs an ext4 filesystem check on the unmounted volume
So the problem is that during the boot process the logical volume that holds the root filesystem is detected as silicon_medley_raid_member instead of ext4; the boot is interrupted and the initramfs console is displayed to the user. To see exactly which signatures are present on the volume, I used wipefs:
wipefs /dev/mapper/[servername]-root
The above command showed something similar to:
offset               type
----------------------------------------------------------------
0x438                ext4   [filesystem]
                     UUID:  3fb6d498-f2a3-4f12-af65-316896d37b24

0x4e1fffe60          silicon_medley_raid_member   [raid]
The offset for silicon_medley_raid_member seems to be quite high.
I don’t have any RAID on the OS disk, so I decided to get rid of the unwanted signature with the following command:
wipefs -o 0x4e1fffe60 /dev/mapper/[servername]-root
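As a side note (and not something I actually ran), depending on your util-linux version wipefs should also support a dry run and a backup of the erased bytes; check man wipefs on your system. A more cautious variant could look like this:

wipefs -n /dev/mapper/[servername]-root #dry run: only reports the signatures, erases nothing
wipefs --backup -o 0x4e1fffe60 /dev/mapper/[servername]-root #erases the signature, keeping a backup file in your home directory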
The wipefs -o command above seems to have done the trick: running fsck now properly detects the volume as ext4 and performs the filesystem check. Let’s wrap up!
- Close the Terminal
- Shut down Ubuntu
- Remove the Ubuntu live USB
- The system should now boot successfully (a quick sanity check is sketched below)
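Once the system is back up, something along these lines (a suggestion rather than part of my original steps) should confirm that the root logical volume is now seen as ext4:

sudo blkid /dev/mapper/[servername]-root #should now report TYPE="ext4"
sudo wipefs /dev/mapper/[servername]-root #should list only the ext4 signature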
Unfortunately I wasn’t able to get to the root cause of the issue (i.e. how the silicon_medley_raid_member signature was added in the first place and what triggered it), but it seems that I am not the only one!
HTH,
Edmondo