
mdadm + lvm2: destroyed all superblocks accidentally


I accidentally overwrote all my RAID1 superblocks with garbage. I think this happened because I pressed Ctrl+Alt+Del to reboot while Ubuntu had dropped me into some kind of hard disk recovery mode. My display wasn't working, so I did this "blind".

Looking at my RAID partitions, I see this:

# mdadm --examine /dev/sdc2
/dev/sdc2:
          Magic : a92b4efc
        Version : 0.90.00
           UUID : 00000000:00000000:00000000:00000000
  Creation Time : Fri Nov  1 18:59:05 2013
     Raid Level : -unknown-
   Raid Devices : 0
  Total Devices : 1
Preferred Minor : 127

    Update Time : Fri Nov  1 18:59:05 2013
          State : active
 Active Devices : 0
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 1
       Checksum : 6b1f0d22 - correct
         Events : 1

      Number   Major   Minor   RaidDevice State
this     0       8       34        0      spare   /dev/sdc2

   0     0       8       34        0      spare   /dev/sdc2

Both partitions (sdb2 and sdc2) look the same: the UUID is all zeros, the RAID level is unknown, the RAID device count is 0, and so on. It is apparent that the superblocks have been completely overwritten with garbage.
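
To check my assumption that only the md superblocks were hit (with 0.90 metadata they sit at the end of the partition) while the LVM data at the start of the partition survived, I was thinking of something like these read-only checks; I haven't run them yet:

file -s /dev/sdb2    # should still report "LVM2 PV ..." if the PV label is intact
file -s /dev/sdc2
pvck /dev/sdb2       # LVM's own metadata check, does not write anything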

The best bet I have is this:

mdadm --create --assume-clean --metadata=0.90 /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

Can I re-create my RAID array using mdadm --create like this?
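To be safer, I was also considering doing the re-create in two steps, using only one member plus "missing" so the second disk stays untouched until I have verified the data; I am not certain this reproduces the exact original layout, so treat it as a sketch:

mdadm --create /dev/md0 --assume-clean --metadata=0.90 --level=1 --raid-devices=2 /dev/sdb2 missing   # degraded array, sdc2 untouched
mdadm --add /dev/md0 /dev/sdc2                                                                        # only after the data checks out; triggers a resync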

On top of the RAID array I have an LVM2 physical volume. Can I somehow access my LVM2 data from the individual disks or from backup disk images?
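
My rough idea for this is that, since the 0.90 superblock sits at the end of the partition, the PV should begin right at the start of the partition, so something like the following on a read-only copy of one member might work (vg0 and root below are placeholders for my actual VG and LV names):

dd if=/dev/sdb2 of=/backup/sdb2.img bs=4M conv=noerror   # image one member
losetup --find --show --read-only /backup/sdb2.img       # prints e.g. /dev/loop0
pvscan                                                   # should now find the PV on the loop device
vgchange -ay vg0                                         # activate the volume group
mount -o ro /dev/vg0/root /mnt                           # mount read-only to inspect the data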

GRUB is able to find my initrd and kernel image on the disks; /boot lives on the ext4 root filesystem on top of LVM2, not on a separate partition. So I believe the data is mostly intact and only the superblocks are gone.
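
Assuming the re-create works, my plan for verifying everything before writing anything would be roughly this (again, vg0/root are placeholder names):

mdadm --detail /dev/md0          # confirm both members and the array size look right
pvs; vgs; lvs                    # LVM should see the PV/VG/LVs again
vgchange -ay vg0
fsck.ext4 -n /dev/vg0/root       # read-only filesystem check, makes no changes
mount -o ro /dev/vg0/root /mnt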

Edit: added --assume-clean to the mdadm command line.

