
I have 4x 1.8 TB HDDs (2 TB) in an existing RAID0 array, /dev/md127:

/dev/md127:
           Version : 1.2
     Creation Time : Fri Mar 31 21:34:58 2017
        Raid Level : raid0
        Array Size : 7813533696 (7.28 TiB 8.00 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

I recently added 2x 3.7 TB (4 TB) NVMe drives in RAID0 as /dev/md126, then created /dev/md0, my intended RAID1 device, from it:

mdadm --create /dev/md0 --force --level=1 --raid-devices=1 /dev/md126
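
For reference, the state of the new single-member mirror can be confirmed with something like the following (device names as above):

cat /proc/mdstat                 # should show something like: md0 : active raid1 md126[0]
mdadm -D /dev/md0                # should report Raid Devices : 1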

I formatted it with mkfs.ext4 -F /dev/md0 and used rsync to copy over the contents of my mounted md127 stripe.
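
The copy step looked roughly like the following; /mnt/old and /mnt/new are placeholder mount points and the exact rsync flags may have differed:

mount /dev/md127 /mnt/old        # existing HDD stripe holding the data
mount /dev/md0 /mnt/new          # freshly formatted mirror
rsync -aAXH --info=progress2 /mnt/old/ /mnt/new/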

Then I added the second RAID0 stripe to the mirror with:

mdadm --grow /dev/md0 --raid-devices=2 --add /dev/md127
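
In hindsight, comparing the member sizes before this step would have flagged the problem the comments below point out (md127 is smaller than md126); a quick check would be something like:

blockdev --getsize64 /dev/md126          # NVMe stripe, ~8.19 TB
blockdev --getsize64 /dev/md127          # HDD stripe, ~8.00 TB, i.e. smaller
mdadm -D /dev/md0 | grep 'Array Size'    # compare before vs. after the --grow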

Once the sync completed, I attempted to mount the /dev/md0 RAID1 device, and it failed with: "mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error."
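
The generic mount error doesn't say much on its own; the kernel log and the ext4 superblock are more informative. Read-only checks along these lines (none of them modify the array):

dmesg | tail -n 20               # kernel's actual reason for rejecting the mount
dumpe2fs -h /dev/md0             # ext4 superblock summary, if it is still readable
e2fsck -n /dev/md0               # check only; -n answers "no" to all fixes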

Interestingly, if I stop the RAID1 and try to mount either of the stripes directly, neither will mount; the error says they need cleaning, and dmesg shows "Block bitmap for group 0 not in group (block XXXXXX), group descriptors corrupted".
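
(As the comments below note, mounting a member stripe directly doesn't account for the outer RAID1's metadata offset. A read-only way to look at the filesystem inside a member, assuming 1.2 metadata and using placeholder names, would be roughly:)

mdadm --examine /dev/md126 | grep 'Data Offset'   # offset of md0's data area, in 512-byte sectors
losetup --find --show --read-only --offset $((OFFSET_SECTORS * 512)) /dev/md126
mount -o ro /dev/loopX /mnt/check                 # loop device and mount point are examples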

I was under the impression that when adding a device with existing data to an mdadm RAID1, that data would be preserved. Is that not the case?

What is the "proper way" to go about this? I do have a backup of my data; can I simply rsync it back onto the mounted md0 device, or do I need to wipe the arrays and start over?

Additional info follows:

cat /proc/mdstat
Personalities : [raid0] [raid1] 
md0 : active raid1 md126[1] md127[0]
      7813401600 blocks super 1.2 [2/2] [UU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

md126 : active raid0 nvme1n1[1] nvme0n1[0]
      8001308672 blocks super 1.2 512k chunks
      
md127 : active raid0 sdd[1] sdb[0] sda[3] sdc[2]
      7813533696 blocks super 1.2 512k chunks
       
mdadm -D /dev/XXX
/dev/md0:
           Version : 1.2
     Creation Time : Sun Oct 29 17:34:29 2023
        Raid Level : raid1
        Array Size : 7813401600 (7.28 TiB 8.00 TB)
     Used Dev Size : 7813401600 (7.28 TiB 8.00 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Oct 30 05:13:32 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : bbox-arch:0  (local to host bbox-arch)
              UUID : ddec046f:e66d65b9:9c08802e:ef314054
            Events : 7846

    Number   Major   Minor   RaidDevice State
       0       9      127        0      active sync   /dev/md/nas:0
       1       9      126        1      active sync   /dev/md/bbox-arch:0
/dev/md126:
           Version : 1.2
     Creation Time : Sun Oct 22 17:53:50 2023
        Raid Level : raid0
        Array Size : 8001308672 (7.45 TiB 8.19 TB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Sun Oct 22 17:53:50 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : bbox-arch:0  (local to host bbox-arch)
              UUID : 8f382f5f:ac064177:81ebd680:bdcc03ea
            Events : 0

    Number   Major   Minor   RaidDevice State
       0     259        5        0      active sync   /dev/nvme0n1
       1     259        6        1      active sync   /dev/nvme1n1
/dev/md127:
           Version : 1.2
     Creation Time : Fri Mar 31 21:34:58 2017
        Raid Level : raid0
        Array Size : 7813533696 (7.28 TiB 8.00 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Fri Mar 31 21:34:58 2017
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : -unknown-
        Chunk Size : 512K

Consistency Policy : none

              Name : nas:0
              UUID : c24bdcd4:4df06194:67f69cea:60916fc8
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       48        1      active sync   /dev/sdd
       2       8       32        2      active sync   /dev/sdc
       3       8        0        3      active sync   /dev/sda



Output of lsblk

NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   1.8T  0 disk  
└─md127       9:127  0   7.3T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1 
sdb           8:16   0   1.8T  0 disk  
└─md127       9:127  0   7.3T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1 
sdc           8:32   0   1.8T  0 disk  
└─md127       9:127  0   7.3T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1 
sdd           8:48   0   1.8T  0 disk  
└─md127       9:127  0   7.3T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1 
...
nvme0n1     259:5    0   3.7T  0 disk  
└─md126       9:126  0   7.5T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1 
nvme1n1     259:6    0   3.7T  0 disk  
└─md126       9:126  0   7.5T  0 raid0 
  └─md0       9:0    0   7.3T  0 raid1
  • I don't like this setup at all (no partition tables, raid on raid, raid0 …) but it should have worked regardless. I assume you unmounted md127 before adding it to md0? How did you try to mount the stripes? You have to take offsets into account. Alternatively if you suspect that md127 returns wrong data, you can also --fail it directly, reassemble md0 from md126 only and see if that works. But if it went corrupt in between somewhere, then that's that... Commented Oct 31, 2023 at 8:25
  • Your md127 is a bit smaller than md126, did you perchance shrink the raid1 when adding md127, without shrinking the filesystem first? That would be a reason for the mount to fail (and in dmesg it would say something about block device size not matching). In this case, after failing md127 you'd also have to grow it back to max size. Commented Oct 31, 2023 at 8:30
  • @frostschutz Ahhh! I should have thought of the filesystems.... LOL, I would have figured the smaller one would have overwritten and taken precedence over the larger md127 when syncing. At this point can the setup be formatted as ext4 and used, or is it untrustworthy? Commented Nov 1, 2023 at 4:39
  • In this case I assume mdadm --manage /dev/md0 --fail /dev/md127 , mdadm --stop /dev/md0 , mdadm --assemble --force /dev/md0 /dev/md126 ? Commented Nov 1, 2023 at 4:43
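
Putting the comment suggestions together, the full recovery sequence would look roughly like this (a sketch, not verified on this setup; the final filesystem check is an added assumption, not something from the comments):

mdadm --manage /dev/md0 --fail /dev/md127      # drop the smaller HDD stripe from the mirror
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/md126   # bring the mirror back up from the NVMe stripe only
mdadm --grow /dev/md0 --size=max               # undo the shrink that truncated the filesystem
e2fsck -f /dev/md0                             # check the ext4 filesystem before mounting again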
