To back up a remote server, I ran this command on my local machine: ssh user@remote "dd if=/dev/vda | gzip -1 -" | dd of=image.gz

I then decompressed image.gz and tried to mount it by double clicking it (i.e. via DiskImageMounter.app) to make sure all the contents are there.

The size of the decompressed image file is about right (the server had a 50 GB disk), but when I mount the image, all I see is a "UEFI" directory in file explorer, which is substantially smaller than the disk of the server and does not contain e.g. the home directory of the server.

This is the info I get from Disk Utility in macOS (I presume what I want is in "disk10s1", but all I can access so far is "UEFI"):

[screenshot: Disk Utility GUI]

  • The UEFI partition is a FAT, which MacOS knows how to read. What filesystem(s) are on the other partitions? If MacOS doesn't know how to read them, that'd explain why you don't see them in File Explorer. Commented Nov 30, 2023 at 14:40
  • FYI: This is not a consistent, reliable backup - you cannot reliably clone a system's disk while it is running. Commented Nov 30, 2023 at 15:21
  • Also, /dev/vda points to this being a VM. The "easy" way here is to just use the tools of the virtualizer to export an image. Commented Nov 30, 2023 at 15:55
  • @AndyDalton the server is running Ubuntu, so probably something MacOS can read. Commented Nov 30, 2023 at 21:48
  • @MarcusMüller Agreed, but unfortunately the virtualizer does not allow downloading of disk images, so I would have to leave the image hosted with them, meaning A) I have to continue paying for hosting the image and B) I have to count on the virtualizer staying in business for my lifetime - neither of which I want to do. :( Commented Nov 30, 2023 at 21:52

1 Answer

I then decompressed image.gz and tried to mount it by double clicking it (i.e. via DiskImageMounter.app) to make sure all the contents are there.

What you have is a full-disk image. You didn't do this, but had you taken it while /dev/vda was mounted read-write, it would be at least slightly inconsistent; and it would probably be heavily broken if image.gz had also resided on /dev/vda, as you'd then have been modifying the file system heavily during the backup itself.

This means the image contains not just the partition holding your data-bearing file system: it starts with a partition table, and it appears to include a UEFI (EFI system) boot partition.

Upon double-clicking, your DiskImageMounter seems to recognize at the very least that partition; that's good.

The others might or might not have been recognized; your operating system simply might not have file system drivers for them (an Ubuntu server's root file system is typically ext4, which macOS can't mount out of the box).
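
If you want to check what's actually in the image before going further, you can inspect it on a Linux machine. Just a sketch - "image" here is assumed to be the name of your decompressed file:

fdisk -l image   # show the partition table inside the raw image

or, to see the file system on each partition:

sudo losetup --find --show -P image   # attaches it as e.g. /dev/loop0
lsblk -f /dev/loop0                   # lists vfat, ext4, ... per partition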

I'd say: if you want to access the data on that image, simply boot a VM using that disk locally.
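
For example, a minimal QEMU invocation might look like this (only a sketch: the image file name and the OVMF firmware path are assumptions and depend on your system; you want UEFI firmware since the disk boots via an EFI system partition):

qemu-system-x86_64 -m 4096 -bios /usr/share/OVMF/OVMF.fd -drive file=image,format=raw,if=virtio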

If you want a more useful backup, I'd say you should simply back up the files, not the raw disk image (unless you really just want to replicate the whole VM). There's nothing outside of files and their attributes on a file system – so you don't lose anything just backing up the files.

And that can be achieved quite trivially, either directly using

ssh user@remote tar cpf - --xattrs --acls --zstd / > backup.tar.zst

which will run tar on the remote system to archive all files while preserving file attributes, xattrs and ACLs, compress the stream on the fly with zstd (which needs to be installed on the remote system), and pipe the result through ssh to your local machine, where it is written directly to the local file backup.tar.zst.
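
To restore from such an archive later, a sketch (assuming GNU tar with zstd support on the extracting machine; run as root if ownership, xattrs and ACLs should be restored):

mkdir restore
tar xpf backup.tar.zst --zstd --xattrs --acls -C restore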

Alternatively, you can make a smaller archive of all files on the remote host, stored as a file on the remote itself (while excluding the backup file from its own contents), using mksquashfs (needs to be installed), for later download:

mksquashfs / backup-$(date --utc -Ihours).squashfs -wildcards -exclude '... backup-*.squashfs' -comp zstd

The elegant part here is that it's easy to exclude any previous backup files from being included in the backup itself, and that mksquashfs inherently saves a lot of space by compressing well and storing the content of identical files only once.

Another elegant aspect is that these squashfs archives can be mounted directly (at least on Linux) – no decompression necessary. On macOS, you can install squashfs-tools to unpack them, and I think (never tried it) you can use macFUSE together with squashfuse to mount them as well. Have fun!
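
For example (just a sketch; substitute the actual archive name produced above for backup.squashfs):

sudo mkdir -p /mnt/backup
sudo mount -t squashfs -o loop,ro backup.squashfs /mnt/backup   # Linux: browse it in place

unsquashfs -d restored backup.squashfs   # macOS or anywhere with squashfs-tools: unpack instead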

  • The image was taken with the system NOT in operation, via a recovery console. image.gz does not reside on /dev/vda; the output of dd was piped directly to my local machine. Commented Nov 30, 2023 at 21:55
  • @JacksonHunt I'm glad to hear that! Is it OK if I leave that parenthetical in my answer, to warn future users (many of whom might not be as experienced as you) about this? Commented Nov 30, 2023 at 21:57
  • (By the way, that's not an optimal use of dd; you could just have used gzip -1 < /dev/vda directly. And since I think this is helpful even if slightly beyond the scope of the answer: gzip -1 doesn't compress very well, yet is still pretty slow – gzip is single-threaded! pigz generates the same compressed data but can use multiple CPU cores. zstd can do that out of the box, and zstd -4 is still faster than pigz -1 while compressing about as well as gzip --best; see the sketch after these comments.) Commented Nov 30, 2023 at 22:01
  • @JacksonHunt I defused the statement regarding things being mounted significantly. Commented Nov 30, 2023 at 22:04
  • yep, as said in the answer :) Commented Nov 30, 2023 at 22:39
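
A sketch of that zstd-based pipeline (assuming zstd is installed on the remote system; host and device names as in the question):

ssh user@remote "zstd -4 -T0 -c < /dev/vda" > image.zst   # multithreaded compression instead of dd | gzip -1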
