Hi, everyone!
There are two hard drives in the system:
Code:
% uname -a
FreeBSD desktop.freebsd.lan 12.0-RELEASE-p2 FreeBSD 12.0-RELEASE-p2 r343203 OptiPlex amd64
Description of the connected disks:
KINGSTON - SSD with a newly installed FreeBSD 12.0-RELEASE-p2
WD - old SATA disk with FreeBSD 11.2-STABLE (also with ZFS)
Code:
% camcontrol devlist
<KINGSTON SUV400S37240G 0C3FD6SD> at scbus0 target 0 lun 0 (pass0,ada0)
<WDC WD3200AAJS-56B4A0 01.03A01> at scbus2 target 0 lun 0 (pass2,ada1)
Code:
% dmesg | grep ada
ada0 at ata2 bus 0 scbus0 target 0 lun 0
ada0: <KINGSTON SUV400S37240G 0C3FD6SD> ACS-4 ATA SATA 3.x device
ada0: Serial Number 50026B726701BDFF
ada0: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada0: 228936MB (468862128 512 byte sectors)
ada1 at ata4 bus 0 scbus2 target 0 lun 0
ada1: <WDC WD3200AAJS-56B4A0 01.03A01> ATA8-ACS SATA 2.x device
ada1: Serial Number WD-WCAT1D555825
ada1: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada1: 305245MB (625142448 512 byte sectors)
Initial preconditions
Code:
% gpart status
Name Status Components
ada0p1 OK ada0
ada0p2 OK ada0
ada0p3 OK ada0
ada1p1 OK ada1
ada1p2 OK ada1
ada1p3 OK ada1
Code:
% gpart show
=> 40 468862048 ada0 GPT (224G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 8388608 2 freebsd-swap (4.0G)
8390656 460470272 3 freebsd-zfs (220G)
468860928 1160 - free - (580K)
=> 40 625142368 ada1 GPT (298G)
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 16777216 2 freebsd-swap (8.0G)
16779264 608362496 3 freebsd-zfs (290G)
625141760 648 - free - (324K)
Code:
% zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 65,5G 146G 88K /zroot
zroot/ROOT 15,0G 146G 88K none
zroot/ROOT/default 15,0G 146G 15,0G /
zroot/tmp 17,6M 146G 17,6M /tmp
zroot/usr 50,5G 146G 88K /usr
zroot/usr/home 47,6G 146G 47,6G /usr/home
zroot/usr/ports 1,55G 146G 1,55G /usr/ports
zroot/usr/src 1,31G 146G 1,31G /usr/src
zroot/var 1,17M 146G 88K /var
zroot/var/audit 88K 146G 88K /var/audit
zroot/var/crash 88K 146G 88K /var/crash
zroot/var/log 668K 146G 668K /var/log
zroot/var/mail 176K 146G 176K /var/mail
zroot/var/tmp 88K 146G 88K /var/tmp
Code:
% mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
Code:
% zpool status
pool: zroot
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
errors: No known data errors
Very likely, the second disk (ada1) contains a pool with the same name as the first one (i.e. zroot). When the system was installed on that disk, in the guided "ZFS Configuration" menu the pool type was accepted as stripe and the swap size was changed from the default to 8 GB; all other settings were left at their defaults.
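If that is true, the old pool on ada1p3 should show up when scanning for pools that are not currently imported. A minimal check (just a sketch; zpool import without a pool name only scans and lists, it changes nothing):
Code:
# Scan attached devices for pools that are not currently imported and list
# their names, numeric ids and vdevs.
zpool import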
Code:
% df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/default 161G 15G 146G 9% /
devfs 1,0K 1,0K 0B 100% /dev
linprocfs 4,0K 4,0K 0B 100% /compat/linux/proc
tmpfs 1,4G 4,0K 1,4G 0% /compat/linux/dev/shm
fdescfs 1,0K 1,0K 0B 100% /dev/fd
procfs 4,0K 4,0K 0B 100% /proc
zroot/tmp 146G 18M 146G 0% /tmp
zroot/usr/home 193G 48G 146G 25% /usr/home
zroot/usr/ports 147G 1,5G 146G 1% /usr/ports
zroot/usr/src 147G 1,3G 146G 1% /usr/src
zroot/var/audit 146G 88K 146G 0% /var/audit
zroot/var/crash 146G 88K 146G 0% /var/crash
zroot/var/log 146G 668K 146G 0% /var/log
zroot/var/mail 146G 176K 146G 0% /var/mail
zroot/var/tmp 146G 88K 146G 0% /var/tmp
zroot 146G 88K 146G 0% /zroot
I have created a new pool:
Code:
# zpool create -f wdpool ada1p3
And after that I got:
Code:
# zpool history
History for 'wdpool':
2019-01-27.19:55:59 zpool create -f wdpool ada1p3
History for 'zroot':
2019-01-20.01:37:52 zpool create -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot ada0p3
2019-01-20.01:37:52 zfs create -o mountpoint=none zroot/ROOT
2019-01-20.01:37:52 zfs create -o mountpoint=/ zroot/ROOT/default
2019-01-20.01:37:52 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
2019-01-20.01:37:52 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
2019-01-20.01:37:52 zfs create zroot/usr/home
2019-01-20.01:37:52 zfs create -o setuid=off zroot/usr/ports
2019-01-20.01:37:52 zfs create zroot/usr/src
2019-01-20.01:37:52 zfs create -o mountpoint=/var -o canmount=off zroot/var
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/audit
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/crash
2019-01-20.01:37:52 zfs create -o exec=off -o setuid=off zroot/var/log
2019-01-20.01:37:52 zfs create -o atime=on zroot/var/mail
2019-01-20.01:37:52 zfs create -o setuid=off zroot/var/tmp
2019-01-20.01:37:52 zfs set mountpoint=/zroot zroot
2019-01-20.01:37:52 zpool set bootfs=zroot/ROOT/default zroot
2019-01-20.01:37:52 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
2019-01-20.01:37:57 zfs set canmount=noauto zroot/ROOT/default
Code:
# mount
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
linprocfs on /compat/linux/proc (linprocfs, local)
tmpfs on /compat/linux/dev/shm (tmpfs, local)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
wdpool on /wdpool (zfs, local, nfsv4acls)
Code:
% zfs list
NAME USED AVAIL REFER MOUNTPOINT
wdpool 268K 281G 88K /wdpool
zroot 65.5G 146G 88K /zroot
zroot/ROOT 15.0G 146G 88K none
zroot/ROOT/default 15.0G 146G 15.0G /
zroot/tmp 17.8M 146G 17.8M /tmp
zroot/usr 50.5G 146G 88K /usr
zroot/usr/home 47.6G 146G 47.6G /usr/home
zroot/usr/ports 1.55G 146G 1.55G /usr/ports
zroot/usr/src 1.31G 146G 1.31G /usr/src
zroot/var 1.17M 146G 88K /var
zroot/var/audit 88K 146G 88K /var/audit
zroot/var/crash 88K 146G 88K /var/crash
zroot/var/log 668K 146G 668K /var/log
zroot/var/mail 176K 146G 176K /var/mail
zroot/var/tmp 88K 146G 88K /var/tmp
Code:
% zpool status
pool: wdpool
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
wdpool ONLINE 0 0 0
ada1p3 ONLINE 0 0 0
errors: No known data errors
pool: zroot
state: ONLINE
scan: none requested
config:
NAME STATE READ WRITE CKSUM
zroot ONLINE 0 0 0
ada0p3 ONLINE 0 0 0
errors: No known data errors
Code:
% df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/default 161G 15G 146G 9% /
devfs 1.0K 1.0K 0B 100% /dev
linprocfs 4.0K 4.0K 0B 100% /compat/linux/proc
tmpfs 1.2G 4.0K 1.2G 0% /compat/linux/dev/shm
fdescfs 1.0K 1.0K 0B 100% /dev/fd
procfs 4.0K 4.0K 0B 100% /proc
zroot/tmp 146G 18M 146G 0% /tmp
zroot/usr/home 193G 48G 146G 25% /usr/home
zroot/usr/ports 147G 1.5G 146G 1% /usr/ports
zroot/usr/src 147G 1.3G 146G 1% /usr/src
zroot/var/audit 146G 88K 146G 0% /var/audit
zroot/var/crash 146G 88K 146G 0% /var/crash
zroot/var/log 146G 672K 146G 0% /var/log
zroot/var/mail 146G 176K 146G 0% /var/mail
zroot/var/tmp 146G 88K 146G 0% /var/tmp
zroot 146G 88K 146G 0% /zroot
wdpool 281G 88K 281G 0% /wdpool
I notice that the output of df -h and zfs list provides wrong information about the actually used size. In fact, the file system contains more than 250 GB of data (see also the label check sketched after the listing below). As far as I understand, the final output should look something like this:
Code:
% df -h
....
wdpool/tmp
wdpool/usr/home
wdpool/usr/ports
wdpool/usr/src
wdpool/var/audit
wdpool/var/crash
wdpool/var/log
wdpool/var/mail
wdpool/var/tmp
wdpool
...
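To see which pool the partition actually belongs to now, the on-disk ZFS label can be inspected; this is only a sketch, but zdb -l prints the label of a given device, including the pool name and guid:
Code:
# Print the ZFS label(s) stored on ada1p3; the pool name and guid in the
# output show which pool currently owns this partition.
zdb -l /dev/ada1p3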
How can I get full access to the file system tree on the second disk with ZFS? I mean the partition ada1p3... Maybe I missed something? Please correct me.
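For completeness, this is what I expected to be able to do, assuming the old pool were still importable (the numeric id and the name oldzroot below are only placeholders, not values from this system):
Code:
# Import the old pool by its numeric id (placeholder) under a new name so it
# does not clash with the running zroot; -R /mnt keeps its mountpoints under
# /mnt, -N skips mounting for now, -f overrides the "pool was last used by
# another system" check.
zpool import -f -N -R /mnt 1234567890123456 oldzroot

# Then browse the old file system tree and mount datasets selectively.
zfs list -r oldzroot
zfs mount oldzroot/ROOT/default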