Hey there,
we have a strange effect: after some time (weeks) the ALLOC rises far above the real file usage (here by more than 50GB). System commands like du and df always show (nearly) the correct usage of the dataset, but the ALLOC and AVAIL values are completely wrong. See the outputs below:
vsd -- /home/vsd is the 'old' dataset, a few weeks old. vsdnew -- /home/vsdnew is the 'new' dataset, created with rsync from the 'old' dataset and containing exactly the same data.
Rsync command used:
rsync -aHAX --fileflags --delete /home/vsd/ /home/vsdnew
Code:
# df -h
Filesystem Size Used Avail Capacity Mounted on
vsd 126G 2,4G 124G 2% /home/vsd
vsd/xxxxxx 239G 115G 124G 48% /home/vsd/xxxxxx
vsdnew 184G 4,3G 180G 2% /home/vsdnew
vsdnew/xxxxxx 236G 56G 180G 24% /home/vsdnew/xxxxxx
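To put a number on the gap visible in the df output: both datasets hold exactly the same files, yet the old one reports 115G used and the new one only 56G, so roughly

115G - 56G ≈ 59G

of the old dataset's usage cannot be attributed to files, which matches the >50GB mentioned above.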
Code:
# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
vsd 249G 118G 131G - - 12% 47% 1.00x ONLINE -
vsdnew 248G 60,1G 188G - - 4% 24% 1.00x ONLINE -
Code:
# zfs list -t all -o space
NAME AVAIL USED USEDSNAP USEDDS USEDREFRESERV USEDCHILD
vsd 124G 118G 0 2,38G 0 115G
vsd/xxxxxx 124G 115G 0 115G 0 0
vsdnew 180G 60,1G 0 4,26G 0 55,9G
vsdnew/xxxxxx 180G 55,9G 0 55,9G 0 0
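For completeness, this is roughly how the usage of the affected dataset could be broken down further (standard ZFS properties, nothing specific to our setup; the exact property list may vary):
Code:
# zfs list -t snapshot -r -o name,used,referenced,creation vsd/xxxxxx
# zfs get used,referenced,logicalused,logicalreferenced,usedbysnapshots,usedbydataset,usedbyrefreservation,compressratio vsd/xxxxxx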
Code:
# zdb
vsdnew:
version: 5000
name: 'vsdnew'
state: 0
txg: 209774
pool_guid: 16748974042122959608
hostid: 1527396945
hostname: 'xxxxxxxxxxxxxxxxxxxxxx'
com.delphix:has_per_vdev_zaps
vdev_children: 1
vdev_tree:
type: 'root'
id: 0
guid: 16748974042122959608
create_txg: 4
children[0]:
type: 'disk'
id: 0
guid: 16956873139552582793
path: '/dev/xbd5'
whole_disk: 1
metaslab_array: 67
metaslab_shift: 31
ashift: 12
asize: 268430737408
is_log: 0
create_txg: 4
com.delphix:vdev_zap_leaf: 65
com.delphix:vdev_zap_top: 66
features_for_read:
com.delphix:hole_birth
com.delphix:embedded_data
Code:
# zpool get all vsdnew
NAME PROPERTY VALUE SOURCE
vsdnew size 248G -
vsdnew capacity 24% -
vsdnew altroot - default
vsdnew health ONLINE -
vsdnew guid 16748974042122959608 default
vsdnew version - default
vsdnew bootfs - default
vsdnew delegation on default
vsdnew autoreplace off default
vsdnew cachefile - default
vsdnew failmode wait default
vsdnew listsnapshots off default
vsdnew autoexpand off default
vsdnew dedupditto 0 default
vsdnew dedupratio 1.00x -
vsdnew free 188G -
vsdnew allocated 60,2G -
vsdnew readonly off -
vsdnew comment - default
vsdnew expandsize - -
vsdnew freeing 0 default
vsdnew fragmentation 4% -
vsdnew leaked 0 default
vsdnew bootsize - default
vsdnew checkpoint - -
vsdnew feature@async_destroy enabled local
vsdnew feature@empty_bpobj active local
vsdnew feature@lz4_compress active local
vsdnew feature@multi_vdev_crash_dump enabled local
vsdnew feature@spacemap_histogram active local
vsdnew feature@enabled_txg active local
vsdnew feature@hole_birth active local
vsdnew feature@extensible_dataset enabled local
vsdnew feature@embedded_data active local
vsdnew feature@bookmarks enabled local
vsdnew feature@filesystem_limits enabled local
vsdnew feature@large_blocks enabled local
vsdnew feature@large_dnode enabled local
vsdnew feature@sha512 enabled local
vsdnew feature@skein enabled local
vsdnew feature@device_removal enabled local
vsdnew feature@obsolete_counts enabled local
vsdnew feature@zpool_checkpoint enabled local
vsdnew feature@spacemap_v2 active local
Possibly of interest:
- There is a cron job that creates a new daily snapshot and removes the oldest one (keeping 7 days); a simplified sketch of the rotation is shown after this list.
- When we transfer the dataset with zfs send/receive into another ZFS pool, the same problem appears on the new pool. So ZFS apparently 'thinks' the lost space is correct and transfers it into the new pool as well. Whatever it is that gets transferred...
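The snapshot rotation mentioned above is essentially the following (simplified sketch; the dataset name and the exact script in the real cron job may differ):
Code:
#!/bin/sh
# hypothetical sketch of the daily rotation cron job
DATASET="vsd/xxxxxx"
KEEP=7

# create today's snapshot, named by date
zfs snapshot "${DATASET}@$(date +%Y-%m-%d)"

# destroy the oldest snapshots until only $KEEP remain
COUNT=$(zfs list -H -t snapshot -o name -s creation -r "${DATASET}" | wc -l)
if [ "$COUNT" -gt "$KEEP" ]; then
    zfs list -H -t snapshot -o name -s creation -r "${DATASET}" | \
        head -n "$((COUNT - KEEP))" | \
        while read -r SNAP; do
            zfs destroy "${SNAP}"
        done
fi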
So, as this is a huge amount of lost space: does anyone have an idea why all this space is lost? WHERE is it?
Many thanks in advance for your help!
jimmy