Just a quick note. When my faulty hard drive was replaced, all of the drives were swapped for larger models at the same time. I love upgrades. Only after the data migration did I notice that the usable size of the pool had not grown with them. This can be remedied in one small step.

Before

The EXPANDSZ column indicates unused capacity on the devices that the pool could be expanded into.

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data   14.5T  10.9T  3.65T        -     14.5T     0%    74%  1.00x    ONLINE  -
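
If you want to see where that expandable space sits, a per-device view helps; zpool list -v data should break the numbers down by vdev and disk:

# show SIZE/ALLOC/FREE/EXPANDSZ per vdev and device
zpool list -v data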

Normally this happens automatically, but I forgot to set the property autoexpand=on when setting up the disks. For the future, the property is set with zpool set autoexpand=on data (data being my pool), and I have added it to the article.
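
A minimal sketch of how to enable and then verify the property on my pool data (standard zpool commands, nothing specific to this setup):

# enable automatic pool expansion for 'data'
zpool set autoexpand=on data

# verify the property; the value should now read 'on'
zpool get autoexpand data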
So the expansion has to be done manually. First, zpool status data lists the pool members.

  pool: data
 state: ONLINE
  scan: resilvered 37.0G in 00:06:47 with 0 errors on Sat Dec 21 20:45:42 2024
config:

    NAME          STATE     READ WRITE CKSUM
    data          ONLINE       0     0     0
      raidz2-0    ONLINE       0     0     0
        gpt/HTYM  ONLINE       0     0     0
        gpt/HDSM  ONLINE       0     0     0
        gpt/8VZK  ONLINE       0     0     0
        gpt/56WK  ONLINE       0     0     0

The decisive disk is the last one in the list: gpt/56WK.
The command zpool online -e data gpt/56WK expands the pool into the unused space.
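
For reference, a minimal sketch of this step, using the device names from the zpool status output above; if you are unsure which disks still need expanding, running the command for the other raidz2 members as well should do no harm:

# expand the last member into the previously unused space
zpool online -e data gpt/56WK

# optionally repeat for the other raidz2 members
zpool online -e data gpt/HTYM
zpool online -e data gpt/HDSM
zpool online -e data gpt/8VZK

# check the new pool size afterwards
zpool list data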

After

NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
data   29.1T  10.9T  18.2T        -         -     0%    37%  1.00x    ONLINE  -

That's it; the entire disk space is now available to the ZFS pool data.

If you find this content valuable and useful, or just want to say hello, I'd love to hear from you: reach out via Matrix, follow me on Mastodon, or send me an email.
