Out of curiosity I used
sysutils/zfs-stats
which printed a lot of ZFS-related values. Two of them caught my eye:
Code:
vm.kmem_size: 4045062144
vm.kmem_size_max: 1319413950874
Both of those are above my RAM, I think. Or are they not?
Code:
real memory = 4299161600 (4100 MB)
avail memory = 4028526592 (3841 MB)
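For what it's worth, a quick way to see those numbers side by side is plain sysctl; as far as I understand, vm.kmem_size_max is only the upper cap used for auto-sizing, not memory actually in use:
Code:
sysctl hw.physmem hw.usermem vm.kmem_size vm.kmem_size_max
# wired pages the kernel is actually holding right now:
sysctl vm.stats.vm.v_wire_count hw.pagesize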
I could run top with that sorting, but whenever I run git without limits the machine goes down fast. Maybe I could run git under limits that are just about right.
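Something like this is what I have in mind, just a sketch with made-up numbers and a made-up checkout path; if I read limits(1) right, -v caps the process address space so git should get an allocation failure instead of eating the whole box:
Code:
# cap git's virtual memory at ~2 GB (numbers invented)
limits -v 2g git -C /usr/ports pull --ff-only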
Also, git worked before; it stopped working suddenly. I don't think a major change happened in FreeBSD, definitely not in a -p* patch release; it was git that got upgraded.
But should git even be able to do this? And should the kernel be able to allocate nearly all the RAM? I saw the kernel allocating memory until less than 400 MB was left free. I doubt many machines need that.
I also don't know whether I can limit kmem. If it fills up, will the kernel panic, or just return an error? And an error to what? To ZFS? To git?
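If it can be pinned down at all, I guess it would be a loader tunable, something like this in /boot/loader.conf; the values are invented, and I honestly don't know whether hitting the cap gives a clean error or a panic:
Code:
# /boot/loader.conf -- hypothetical values, needs a reboot
vm.kmem_size="3G"
vm.kmem_size_max="3G"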
Right now, whatever git does internally, if I configure git with
Code:
> cat /usr/local/etc/gitconfig
[core]
packedGitWindowSize = 128m
packedGitLimit = 1g
preloadIndex = false
[diff]
renameLimit = 16384
that helps me.
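The same settings can also be tried one-off, without touching the system gitconfig, by passing them on the command line:
Code:
git -c core.packedGitWindowSize=128m -c core.packedGitLimit=1g \
    -c core.preloadIndex=false -c diff.renameLimit=16384 pull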
Also, I wonder if I could do those tests in a VM, but it would be even worse, since the VM would obviously have less than 4 GB of RAM. It's annoying that every test brings the machine down; after I traced the issue to git, I don't really want to bring it down deliberately. I guess I could take another box and let it fail, if that's what it takes to trace the problem down.
The whole problem here is the amount of RAM, I guess. Or half of the problem. Many machines have loads of RAM; even if git allocates a ton, and ZFS too, they won't fail there.
But should it fail at all, even with little memory?
By googling I found a lot of git issues, and not only on FreeBSD; people say git eats resources. I knew that before too, but I assumed it would just use 100% CPU, swap whatever it needs, and therefore simply be slow on this machine.
Actually, git is not the only thing that takes the machine down. I can't recall if I was running git at the same time, back before git by itself started taking the machine down, but I also have a tool here that scans images. There are over 1 TB of images, about 5 MB each. If I read the EXIF data from all of them as fast as the CPU and disk allow, it also seems like I'm on the edge. I never thought to look at how much wired memory was in use back then.
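Next time I run that scanner I could at least watch wired memory and the ARC grow, with a crude loop like this:
Code:
# print wired page count, page size and ARC size every 5 seconds
while :; do
    sysctl vm.stats.vm.v_wire_count hw.pagesize kstat.zfs.misc.arcstats.size
    sleep 5
done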
So it's not just git. I know ZFS should not be used on low-memory machines, but should it just fail? It could just as well allocate 128 GB or 1 TB, whatever RAM your machine happens to have.
I find it really hard to understand what happens internally, but I have a feeling this should not happen.
In a VM, unless virtualization screws things up, it would be easier to capture, but it looks like I would run out of memory there too. Not surprising, since wired was at 3.6 GB with 4 GB of RAM.
You tell me whether that should happen.
It's also clearly not just my problem, since I didn't even start this topic.
Funnily, this escapes the ARC limits. Maybe there should be more ZFS limits, or kmem limits. How many machines are there that don't need sshd or a shell running? Or other things, like init.
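For what it's worth, the ARC cap itself is just a loader tunable; the value below is only an example for a 4 GB box, and on newer OpenZFS the knob is spelled vfs.zfs.arc.max:
Code:
# /boot/loader.conf -- example value only
vfs.zfs.arc_max="1G"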
In the end, it looks like ZFS brings the machine down.
I have a vague idea that this is some cache, but wouldn't limiting the cache just make the filesystem slow?
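To at least see whether it really is the ARC, the arcstats sysctls split it up a bit; zfs-stats reads the same counters:
Code:
sysctl kstat.zfs.misc.arcstats.size \
       kstat.zfs.misc.arcstats.c_max \
       kstat.zfs.misc.arcstats.mru_size \
       kstat.zfs.misc.arcstats.mfu_size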
It seems like ZFS simply leaks away all the kernel memory here. Or should I say ZFS just consumes every bit of RAM, and userland gets nothing.
Or is it outside of ZFS? I find that area really complex too. The generic (v)fs cache?
Unless someone else gets there first, I could take another physical machine and use it solely to help make this problem disappear from FreeBSD.
Maybe some test could be written here? IIRC ZFS has a test suite. I'm not sure what git does exactly, but it seems to perform its job very well; lately git has managed to take the machine down on every pull of ports main.
Now, if only we could turn this into a deliberate test. Maybe take the RAM size of the machine and just go over it somehow, to see what happens; a rough sketch of what I mean is below.
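Something along these lines, purely hypothetical and only for a disposable machine or pool: create far more small files than RAM can cache, then read them all back so ZFS metadata and the ARC get pushed past the machine's memory, and log the wired counters at the end:
Code:
#!/bin/sh
# hypothetical stress sketch -- do not run on a machine you care about
dir=/tank/stress                     # made-up dataset path
mkdir -p "$dir"
i=0
while [ "$i" -lt 2000000 ]; do       # ~2M tiny files, adjust to exceed RAM
    jot -b x 512 > "$dir/f$i"        # roughly 1 KiB per file
    i=$((i + 1))
done
find "$dir" -type f -exec stat -f '%z' {} + > /dev/null
sysctl vm.stats.vm.v_wire_count kstat.zfs.misc.arcstats.size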
I kind of refuse to believe that I'm the only person in the world hitting this; actually, there were others earlier. ZFS should be in wide use, on FreeBSD too. Are all those people tuning their systems, or just being really careful? Why does git trigger it? Funnily, I think there are others too.
I'm wondering how to test this on this machine without running kmem out, just to see. Or I don't know. I find kernel internals hard to grasp, and filesystems are black magic; they still are. I'm just pulling ideas out of my ass here.
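One non-destructive thing I can do is keep an eye on how close kmem is to full; if I'm not mistaken these sysctls exist for that, though the names may vary a bit between versions:
Code:
sysctl vm.kmem_size vm.kmem_map_size vm.kmem_map_free
vmstat -z | head -n 20    # per-zone kernel memory usage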
Also, I kind of don't want to run commands here that lead to a known crash. Maybe someone else can help; this is just a generic low-RAM amd64 box. Or I could eventually put up an actual test machine, which I should probably already have, considering the first FreeBSD I installed was 4.6 and I have been running it ever since. In the meantime I had other things to do, and that caused me to lose part of my home lab; the hardware just failed, eh. Right now I have only one good machine here, and it also runs the network and has to keep working, hence the reluctance to do tests right now, unless they are non-destructive.
By the way, I had an old 10.x machine here on which I managed to permanently corrupt ZFS; I hope that bug is fixed by now? After many unclean shutdowns it just panics 100% of the time. I haven't yet tried to take its pool and import it into a newer machine to see if it is OK. I didn't know this could happen at all; wasn't ZFS supposed to be the thing where fsck is not needed? There it felt like a zfs-fsck was needed. Or was it just a kernel bug? Strange, eh. I hope those issues, catastrophic loss of a pool, are gone now?
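When I get around to that old pool, I guess the safe way to poke at it is a read-only import on the newer machine, something like this (pool name made up):
Code:
# read-only import first, so nothing gets written
zpool import -o readonly=on -f oldpool
# if that fails, see what rollback recovery would do (-n = dry run)
zpool import -F -n oldpool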
So in the end, if you wait, I could get another machine and test things there. But isn't it faster if you try to replicate it on your own, if you can? I don't have exotic hardware.
By the way, even though this is just a Core 2 Duo with 4 GB of RAM, ZFS here seems reasonably fast. I recall ZFS being bad on low-end hardware before, so I can't really complain; it's just this unexpected issue that bothers me.