While setting up my new PC I also set up a new RAID 1 with two drives and LUKS on top. After copying all the data to it I made sure everything was usable, then shredded the old drive.
But now the RAID is gone. I've found out that it's most likely because I used whole disks when creating the RAID instead of partitions. Is there any way for me to recover the RAID and the data on it? I have saved the exact commands used to create the RAID, but I don't want to do anything until I'm certain I won't irreversibly mess something up.
Output of fdisk -l for both drives:
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: WDC WD40EFAX-68J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 068E27EE-055B-A24A-B51B-D0B79E3DEA00
Disk /dev/sdc: 2.73 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: TOSHIBA HDWD130
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F4ADCB83-B715-9B4A-A6A0-96687568611E
The RAID was created with the following command:
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --examine gives the same output for both disks:
/dev/sdb:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
/dev/sdc:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
So it seems like the partition data is shot, and that's why the disks are no longer visible as RAID members (type fd). Would it be possible to rewrite the partition data and restart the RAID?
The line in my mdadm.conf is as follows:
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=WORKSTATION:0 UUID=fe2547a6:3296c156:303989ac:febb5051 devices=/dev/sdb,/dev/sdc
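For reference, these are the non-destructive checks I was planning to try first. Nothing here writes to the disks, so I assume it's safe even in my current state (please correct me if not):

```shell
# Look for any surviving md superblocks; --examine only reads.
sudo madm_examine() { :; }  # (placeholder removed below; plain commands follow)
sudo mdadm --examine /dev/sdb /dev/sdc

# Attempt a read-only assembly; with --readonly mdadm starts the
# array without writing to the member devices.
sudo mdadm --assemble --readonly /dev/md0 /dev/sdb /dev/sdc
```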
And could I otherwise start the RAID with just one member and recover the data that way? The LUKS header should be the same on both disks, right? Or should I back it up before something overwrites it again?
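Before trying anything that writes, I was thinking of saving the start of each disk and, once the array is up again, the LUKS header itself. The file names here are just placeholders I made up:

```shell
# Raw copy of the first 16 MiB of each disk, which covers the GPT,
# any md superblock, and (for a whole-disk array) the LUKS header.
sudo dd if=/dev/sdb of=/root/sdb-head.img bs=1M count=16
sudo dd if=/dev/sdc of=/root/sdc-head.img bs=1M count=16

# Once /dev/md0 exists again, a proper LUKS header backup:
sudo cryptsetup luksHeaderBackup /dev/md0 \
    --header-backup-file /root/md0-luks-header.img
```

My understanding is that the dd images alone would let me restore the on-disk state if a later step goes wrong.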
I would greatly appreciate your help; there was around 1500 GB of data on it before it failed.
P.S. I'm aware that the 2 disks are a different size, it used to be 2 3TB drives but one failed so I replaced it with a 4TB drive. The RAID has worked and was fully synced before this happened.
Edit: The LUKS container was created with sudo cryptsetup luksFormat /dev/md0, and inside the LUKS device I created one exFAT filesystem, because the RAID also has to be usable from a Windows system. I've read that using whole disks isn't recommended because other programs/OSes will change the header information without confirmation; I suspect booting Windows was the culprit, even though it's set to never mount these disks. Luckily the solution provided by frostschutz worked, and I'm currently backing up all the data.
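For anyone who lands here later: as I understand it, the general shape of the fix (my own reconstruction, not the literal commands from the answer) is to re-create the array over the same disks with exactly the original parameters plus --assume-clean, which rewrites only the md metadata and leaves the data area, including the LUKS header, untouched. Metadata version, RAID level, and device order must all match the original creation, so verify against your own records first:

```shell
# Re-create the array in place. --assume-clean skips the initial
# resync; metadata, level, and device order must match the
# original mdadm --create exactly or the data offset will differ.
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 \
    --metadata=1.2 --assume-clean /dev/sdb /dev/sdc

# If the LUKS header survived, this should prompt for the passphrase.
sudo cryptsetup open /dev/md0 recovered
```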