My setup was as follows. I had Ubuntu 9.04 x64 installed on a single 1TB SATA drive. Everything worked fine - I think I had grub as the boot loader. I later upgraded to 10.04. At some stage my hard drive failed and I had to fall back on a backup I had. So I bought two new 1TB drives, connected them to the SATA2 ports on my Gigabyte X58A-UD3R motherboard and configured RAID1 in the BIOS.
I do know that the RAID controller on this motherboard does not implement true hardware RAID, but it was the only way I could get the machine to dual-boot with Windows and still have RAID. So I configured dmraid in Ubuntu and set up LVM on top of it - the so-called FakeRAID. Everything worked fine. However, when I then upgraded to 11.04, I decided to upgrade grub to grub2. That was a fatal mistake: my machine no longer booted. I got the grub boot loader, but nothing was listed. When I typed exit, it chain-loaded another grub instance showing me some bootable OS entries. About 1 time in 10 it would boot. This was NOT ideal, but I left the issue as it was.
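For anyone attempting the same, the setup boiled down to something like the sketch below. The isw_* array name is an example (Intel fakeRAID sets show up under /dev/mapper with names like that), and the waldopcl/root names match my /dev/mapper/waldopcl-root device - yours will differ:

sudo apt-get install dmraid
sudo dmraid -ay                                    # activate the BIOS RAID set
ls /dev/mapper/                                    # e.g. isw_xxxxxxxx_Volume0

sudo pvcreate /dev/mapper/isw_xxxxxxxx_Volume0p1   # first partition on the array; the exact suffix varies
sudo vgcreate waldopcl /dev/mapper/isw_xxxxxxxx_Volume0p1
sudo lvcreate -n root -l 100%FREE waldopcl         # logical volume for the root filesystem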
A couple of days ago I added an awesome OCZ Vertex 3 240GB SSD to the SATA3 port on the motherboard. It was detected as /dev/sdc, with the two FakeRAID HDDs as /dev/sda and /dev/sdb, same as before. This time, however, the system refused to boot at all. I would get the initial grub screen, type exit, then see a purple screen and nothing more - nothing booted.
So began my seven-hour struggle. I first tried to boot from the rescue CD and reinstall grub2 and/or grub. Installing to /dev/mapper/waldopcl-root did not work. Eventually I got tired of struggling and decided to break the FakeRAID: I rebooted and, in the RAID controller's BIOS, removed /dev/sda from the RAID1 set, leaving the set in degraded mode. Booting into my rescue disk, the system recognised both the RAID LVM and /dev/sda. I tried installing grub2 to /dev/sda, but the installer kept showing the root mount point as the LVM volume instead of the /dev/sda device I had selected. I presume this was because the LVM signature was still present on /dev/sda (see the check further down). To remove it, I deactivated and removed the volume group:
vgchange -a n waldopcl    # deactivate the volume group (vgchange takes the VG name, not the mapper path)
vgremove waldopcl         # remove the volume group and its logical volumes
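To confirm whether an LVM signature is still present on the disk before and after this, pvs and pvdisplay will show it, and pvremove wipes the leftover label. A minimal sketch, assuming the physical volume sits on the first partition of /dev/sda - adjust to your layout:

sudo pvs                  # list physical volumes and the volume groups they belong to
sudo pvdisplay /dev/sda1  # check whether /dev/sda1 still carries a PV label
sudo pvremove /dev/sda1   # wipe the leftover LVM label once the VG is gone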
I also changed the partition type from Linux LVM to plain Linux.
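That is done interactively in fdisk; assuming the first partition on /dev/sda, the session looks like this:

sudo fdisk /dev/sda
# t  - change a partition's system type
# 1  - select partition 1
# 83 - Linux (it was 8e, Linux LVM)
# w  - write the partition table and quit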
That killed the LVM. A warning, though: after this I could no longer access the data on /dev/sdb either, neither through that partition nor through the LVM I had just killed. I did make a full backup before starting out, so do not rely on being able to use the degraded RAID disk for data recovery. ALL my data on BOTH disks was gone.
Once I had deleted the LVM I was in business: I could install grub2 and restore my backup. In hindsight, the quickest route would probably have been to make a full backup, repartition both disks and start over from there. My idea was to keep the data intact, but it did not work out that way.
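For completeness, the final grub2 reinstall from the rescue CD went roughly like this - the device names and mount points assume my layout, with the new root filesystem on /dev/sda1:

sudo mount /dev/sda1 /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt
grub-install /dev/sda     # install to the MBR of the whole disk, not a partition
update-grub               # regenerate /boot/grub/grub.cfg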