I am stumped. I've been running a FreeBSD 9 server with ZFS for five years and have now done a completely fresh install of FreeBSD 11. Everything (meaning the pool layout) is configured as it was on the old system (as far as possible), but created from scratch.
The problem I face is that neither the log nor the cache device is ever touched, according to zpool iostat -v:
Code:
logs             -      -      -      -      -      -
  gpt/log        0  1008M      0      0      0      0
cache            -      -      -      -      -      -
  gpt/cache      0   191G      0      0      0      0
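One thing worth ruling out first: a separate log device is only used for synchronous writes, so an idle "logs" row is expected if the workload is mostly asynchronous. A quick way to force traffic through the SLOG is to create a throwaway dataset with sync=always and write to it (the dataset name zstore/slogtest below is just an example):

```shell
# Scratch dataset whose writes are all forced through the ZIL.
zfs create zstore/slogtest
zfs set sync=always zstore/slogtest

# Generate some write traffic.
dd if=/dev/zero of=/zstore/slogtest/testfile bs=128k count=2000

# In a second terminal, the "logs" row should now show write activity:
zpool iostat -v zstore 1

# Clean up afterwards.
zfs destroy zstore/slogtest
```

If the log device stays at zero even under sync=always, something really is wrong; if it lights up, the original zeros were just the absence of sync I/O.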
Installing zfs-stats and running zfs-stats -a proved that something was off: L2ARC is shown as disabled:
Code:
------------------------------------------------------------------------
ZFS Subsystem Report Thu May 11 20:12:49 2017
------------------------------------------------------------------------
System Information:
Kernel Version: 1100122 (osreldate)
Hardware Platform: i386
Processor Architecture: i386
ZFS Storage pool Version: 5000
ZFS Filesystem Version: 5
FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 03:40:55 UTC 2016 root
8:12PM up 22:52, 3 users, load averages: 0.33, 0.37, 0.33
------------------------------------------------------------------------
System Memory:
0.75% 25.19 MiB Active, 88.63% 2.93 GiB Inact
8.30% 280.48 MiB Wired, 0.00% 0 Cache
2.33% 78.63 MiB Free, 0.00% 0 Gap
Real Installed: 3.50 GiB
Real Available: 95.96% 3.36 GiB
Real Managed: 98.30% 3.30 GiB
Logical Total: 3.50 GiB
Logical Used: 14.20% 509.00 MiB
Logical Free: 85.80% 3.00 GiB
Kernel Memory: 112.07 MiB
Data: 74.31% 83.28 MiB
Text: 25.69% 28.79 MiB
Kernel Memory Map: 412.00 MiB
Size: 39.77% 163.85 MiB
Free: 60.23% 248.15 MiB
------------------------------------------------------------------------
ARC Summary: (THROTTLED)
Memory Throttle Count: 7
ARC Misc:
Deleted: 35.20m
Recycle Misses: 0
Mutex Misses: 216.28k
Evict Skips: 186.37m
ARC Size: 23.24% 59.84 MiB
Target Size: (Adaptive) 12.50% 32.19 MiB
Min Size (Hard Limit): 12.50% 32.19 MiB
Max Size (High Water): 8:1 257.50 MiB
ARC Size Breakdown:
Recently Used Cache Size: 42.19% 25.24 MiB
Frequently Used Cache Size: 57.81% 34.59 MiB
ARC Hash Breakdown:
Elements Max: 32.44k
Elements Current: 13.45% 4.36k
Collisions: 437.98k
Chain Max: 3
Chains: 17
------------------------------------------------------------------------
ARC Efficiency: 13.56m
Cache Hit Ratio: 49.01% 6.65m
Cache Miss Ratio: 50.99% 6.91m
Actual Hit Ratio: 48.61% 6.59m
Data Demand Efficiency: 62.19% 3.07m
CACHE HITS BY CACHE LIST:
Most Recently Used: 46.64% 3.10m
Most Frequently Used: 52.54% 3.49m
Most Recently Used Ghost: 21.17% 1.41m
Most Frequently Used Ghost: 39.48% 2.62m
CACHE HITS BY DATA TYPE:
Demand Data: 28.72% 1.91m
Prefetch Data: 0.00% 0
Demand Metadata: 70.37% 4.68m
Prefetch Metadata: 0.92% 60.84k
CACHE MISSES BY DATA TYPE:
Demand Data: 16.78% 1.16m
Prefetch Data: 0.00% 0
Demand Metadata: 82.37% 5.70m
Prefetch Metadata: 0.85% 58.86k
------------------------------------------------------------------------
L2ARC is disabled
------------------------------------------------------------------------
------------------------------------------------------------------------
VDEV cache is disabled
------------------------------------------------------------------------
ZFS Tunables (sysctl):
kern.maxusers 384
vm.kmem_size 432013312
vm.kmem_size_scale 3
vm.kmem_size_min 12582912
vm.kmem_size_max 432013312
vfs.zfs.trim.max_interval 1
vfs.zfs.trim.timeout 30
vfs.zfs.trim.txg_delay 32
vfs.zfs.trim.enabled 1
vfs.zfs.vol.unmap_enabled 1
vfs.zfs.vol.recursive 0
vfs.zfs.vol.mode 1
vfs.zfs.version.zpl 5
vfs.zfs.version.spa 5000
vfs.zfs.version.acl 1
vfs.zfs.version.ioctl 6
vfs.zfs.debug 0
vfs.zfs.super_owner 0
vfs.zfs.sync_pass_rewrite 2
vfs.zfs.sync_pass_dont_compress 5
vfs.zfs.sync_pass_deferred_free 2
vfs.zfs.zio.exclude_metadata 0
vfs.zfs.zio.use_uma 0
vfs.zfs.cache_flush_disable 0
vfs.zfs.zil_replay_disable 0
vfs.zfs.min_auto_ashift 9
vfs.zfs.max_auto_ashift 13
vfs.zfs.vdev.trim_max_pending 10000
vfs.zfs.vdev.bio_delete_disable 0
vfs.zfs.vdev.bio_flush_disable 0
vfs.zfs.vdev.write_gap_limit 4096
vfs.zfs.vdev.read_gap_limit 32768
vfs.zfs.vdev.aggregation_limit 131072
vfs.zfs.vdev.trim_max_active 64
vfs.zfs.vdev.trim_min_active 1
vfs.zfs.vdev.scrub_max_active 2
vfs.zfs.vdev.scrub_min_active 1
vfs.zfs.vdev.async_write_max_active 10
vfs.zfs.vdev.async_write_min_active 1
vfs.zfs.vdev.async_read_max_active 3
vfs.zfs.vdev.async_read_min_active 1
vfs.zfs.vdev.sync_write_max_active 10
vfs.zfs.vdev.sync_write_min_active 10
vfs.zfs.vdev.sync_read_max_active 10
vfs.zfs.vdev.sync_read_min_active 10
vfs.zfs.vdev.max_active 1000
vfs.zfs.vdev.async_write_active_max_dirty_percent 60
vfs.zfs.vdev.async_write_active_min_dirty_percent 30
vfs.zfs.vdev.mirror.non_rotating_seek_inc 1
vfs.zfs.vdev.mirror.non_rotating_inc 0
vfs.zfs.vdev.mirror.rotating_seek_offset 1048576
vfs.zfs.vdev.mirror.rotating_seek_inc 5
vfs.zfs.vdev.mirror.rotating_inc 0
vfs.zfs.vdev.trim_on_init 1
vfs.zfs.vdev.cache.bshift 16
vfs.zfs.vdev.cache.size 0
vfs.zfs.vdev.cache.max 16384
vfs.zfs.vdev.metaslabs_per_vdev 200
vfs.zfs.txg.timeout 5
vfs.zfs.space_map_blksz 4096
vfs.zfs.spa_slop_shift 5
vfs.zfs.spa_asize_inflation 24
vfs.zfs.deadman_enabled 1
vfs.zfs.deadman_checktime_ms 5000
vfs.zfs.deadman_synctime_ms 1000000
vfs.zfs.debug_flags 0
vfs.zfs.recover 0
vfs.zfs.spa_load_verify_data 1
vfs.zfs.spa_load_verify_metadata 1
vfs.zfs.spa_load_verify_maxinflight 10000
vfs.zfs.ccw_retry_interval 300
vfs.zfs.check_hostid 1
vfs.zfs.mg_fragmentation_threshold 85
vfs.zfs.mg_noalloc_threshold 0
vfs.zfs.condense_pct 200
vfs.zfs.metaslab.bias_enabled 1
vfs.zfs.metaslab.lba_weighting_enabled 1
vfs.zfs.metaslab.fragmentation_factor_enabled 1
vfs.zfs.metaslab.preload_enabled 1
vfs.zfs.metaslab.preload_limit 3
vfs.zfs.metaslab.unload_delay 8
vfs.zfs.metaslab.load_pct 50
vfs.zfs.metaslab.min_alloc_size 33554432
vfs.zfs.metaslab.df_free_pct 4
vfs.zfs.metaslab.df_alloc_threshold 131072
vfs.zfs.metaslab.debug_unload 0
vfs.zfs.metaslab.debug_load 0
vfs.zfs.metaslab.fragmentation_threshold 70
vfs.zfs.metaslab.gang_bang 16777217
vfs.zfs.free_bpobj_enabled 1
vfs.zfs.free_max_blocks -1
vfs.zfs.no_scrub_prefetch 0
vfs.zfs.no_scrub_io 0
vfs.zfs.resilver_min_time_ms 3000
vfs.zfs.free_min_time_ms 1000
vfs.zfs.scan_min_time_ms 1000
vfs.zfs.scan_idle 50
vfs.zfs.scrub_delay 4
vfs.zfs.resilver_delay 2
vfs.zfs.top_maxinflight 32
vfs.zfs.zfetch.array_rd_sz 1048576
vfs.zfs.zfetch.max_distance 8388608
vfs.zfs.zfetch.min_sec_reap 2
vfs.zfs.zfetch.max_streams 8
vfs.zfs.prefetch_disable 1
vfs.zfs.delay_scale 500000
vfs.zfs.delay_min_dirty_percent 60
vfs.zfs.dirty_data_sync 67108864
vfs.zfs.dirty_data_max_percent 10
vfs.zfs.dirty_data_max_max 4294967296
vfs.zfs.dirty_data_max 360617574
vfs.zfs.max_recordsize 1048576
vfs.zfs.mdcomp_disable 0
vfs.zfs.nopwrite_enabled 1
vfs.zfs.dedup.prefetch 1
vfs.zfs.l2c_only_size 0
vfs.zfs.mfu_ghost_data_lsize 12904960
vfs.zfs.mfu_ghost_metadata_lsize 11940864
vfs.zfs.mfu_ghost_size 24845824
vfs.zfs.mfu_data_lsize 0
vfs.zfs.mfu_metadata_lsize 65536
vfs.zfs.mfu_size 350208
vfs.zfs.mru_ghost_data_lsize 65024
vfs.zfs.mru_ghost_metadata_lsize 8585728
vfs.zfs.mru_ghost_size 8650752
vfs.zfs.mru_data_lsize 0
vfs.zfs.mru_metadata_lsize 32768
vfs.zfs.mru_size 24934400
vfs.zfs.anon_data_lsize 0
vfs.zfs.anon_metadata_lsize 0
vfs.zfs.anon_size 9734144
vfs.zfs.l2arc_norw 1
vfs.zfs.l2arc_feed_again 1
vfs.zfs.l2arc_noprefetch 1
vfs.zfs.l2arc_feed_min_ms 200
vfs.zfs.l2arc_feed_secs 1
vfs.zfs.l2arc_headroom 2
vfs.zfs.l2arc_write_boost 8388608
vfs.zfs.l2arc_write_max 8388608
vfs.zfs.arc_meta_limit 67502080
vfs.zfs.arc_free_target 6050
vfs.zfs.arc_shrink_shift 7
vfs.zfs.arc_average_blocksize 8192
vfs.zfs.arc_min 33751040
vfs.zfs.arc_max 270008320
------------------------------------------------------------------------
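As far as I understand, zfs-stats derives the "L2ARC is disabled" verdict from the kernel's ARC kstat counters, so the raw values can be inspected directly to see whether the cache device is being fed at all:

```shell
# Raw L2ARC counters exported by the kernel; if the l2_* values are
# absent or stay at zero, nothing is ever written to the cache device.
sysctl kstat.zfs.misc.arcstats | grep l2
```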
Output from zfs get secondarycache zstore shows:
Code:
NAME PROPERTY VALUE SOURCE
zstore secondarycache all default
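Since secondarycache is inherited per dataset, I also checked that no child dataset overrides it (standard zfs(8) usage; if any dataset showed "none", the L2ARC would never be fed for it):

```shell
# Recursively list the cache policies; every dataset should show "all"
# (or at least not "none") for both properties.
zfs get -r primarycache,secondarycache zstore
```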
And for completeness, the output from zpool status:
Code:
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0 in 0h1m with 0 errors on Wed May 10 21:36:03 2017
config:

	NAME           STATE     READ WRITE CKSUM
	zroot          ONLINE       0     0     0
	  mirror-0     ONLINE       0     0     0
	    gpt/main0  ONLINE       0     0     0
	    gpt/main1  ONLINE       0     0     0

errors: No known data errors
  pool: zstore
 state: ONLINE
  scan: none requested
config:

	NAME            STATE     READ WRITE CKSUM
	zstore          ONLINE       0     0     0
	  raidz1-0      ONLINE       0     0     0
	    label/v1d1  ONLINE       0     0     0
	    label/v1d2  ONLINE       0     0     0
	    label/v1d3  ONLINE       0     0     0
	    label/v1d4  ONLINE       0     0     0
	  raidz1-1      ONLINE       0     0     0
	    label/v2d1  ONLINE       0     0     0
	    label/v2d2  ONLINE       0     0     0
	    label/v2d3  ONLINE       0     0     0
	    label/v2d4  ONLINE       0     0     0
	logs
	  gpt/log       ONLINE       0     0     0
	cache
	  gpt/cache     ONLINE       0     0     0
And from gpart show -l ada6:
Code:
=> 40 468862048 ada6 GPT (224G)
40 1024 1 boot (512K)
1064 33554432 2 main0 (16G)
33555496 33554432 3 main1 (16G)
67109928 2097152 4 log (1.0G)
69207080 399655000 5 cache (191G)
468862080 8 - free - (4.0K)
I have already tried removing the cache device from the pool and re-adding it.
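For reference, the remove/re-add cycle I used was the standard one (device name gpt/cache as shown in the pool above):

```shell
# Detach the L2ARC device and add it back; this is safe at runtime,
# since cache devices hold no data that exists nowhere else.
zpool remove zstore gpt/cache
zpool add zstore cache gpt/cache
```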
What could be the reason for the L2ARC being disabled, and for the ZIL never being written to?