9-STABLE, ZFS, NFS, ggatec - suspected memory leak



Hi all,

Setup:

I'm running 2 machines (amd64, 16 GB RAM) with FreeBSD 9-STABLE (Mar 14 so
far) acting as NFS servers. Each serves 3 zpools (each holding a single
ZFS filesystem with hourly snapshots). Each zpool is a 3-way mirror of
2 TB ggate devices, so 2 TB per zpool. Compression is "on" (to save
bandwidth to the backend; compressratio around 1.05 to 1.15), atime is
off.

There is no special tuning in loader.conf (except that I recently tried
limiting the ZFS ARC to 8 GB, which didn't change much). sysctl.conf has:

kern.ipc.maxsockbuf=33554432
net.inet.tcp.sendspace=8388608
net.inet.tcp.recvspace=8388608
kern.maxfiles=64000
vfs.nfsd.maxthreads=254

Without the first three, ZFS+ggate goes bad after a short time (checksum
errors, stalls); the latter two are mainly for NFS and some regular local
cleanup jobs.
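As a quick sanity check on those numbers (a sketch; the kernel's internal
overhead factor is not accounted for here), the 32 MB maxsockbuf cap leaves
room for one 8 MB send and one 8 MB receive buffer per socket:

```shell
# Sanity check (sketch): kern.ipc.maxsockbuf caps the total buffer space a
# single socket may use; with 8 MB sendspace and 8 MB recvspace the 32 MB
# cap still leaves headroom for window scaling and kernel bookkeeping.
maxsockbuf=33554432     # 32 MB, from sysctl.conf above
sendspace=8388608       # 8 MB
recvspace=8388608       # 8 MB
headroom=$(( (maxsockbuf - sendspace - recvspace) / 1024 / 1024 ))
echo "headroom: ${headroom} MB"
```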

The machines have 4 em and 2 igb network interfaces. Three of them are
dedicated links (with no switches) to the ggate servers, one is a
dedicated link to a third machine which is fed incremental snapshots via
zfs send (as backup and fallback of last resort), one interface is for
management tasks, and one connects to an internal network with the NFS
clients.

The NFS clients are mostly FreeBSD 6, 7 and 9-STABLE machines (migration
to 9 is in progress), no NFSv4 (yet), all NFS mounts over TCP, hardly any
locking.
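For reference, a client mount of the kind described might look like this
in /etc/fstab (a sketch; server name and paths are placeholders, not from
the actual setup):

```
# /etc/fstab on a client (hypothetical names): NFSv3 over TCP
filer:/tank/mail   /var/mail   nfs   rw,tcp,nfsv3   0   0
```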

The data consists of a lot of files, mainly mailboxes (IMAP with dovecot,
incoming mail with exim, some simple web stuff with apache), so lots of
small files and only a few bigger ones. Directory structures are of
reasonable depth.

The system is on an SSD (UFS, TRIM); additionally there are 3 (actually 4,
1 unused for now) 120 GB SSDs serving as cache devices for the zpools. I
first used the whole devices, but in the hope of changing something I
limited the caches to 32 GB partitions, with no change in behaviour.


Problem:

After about a week of runtime under normal workload the system starts to
swap (with about 300 to 500 MB of RAM free). There is lots of swapping in
and out, but only very little swap space is actually used (30 to 50 MB).
The ZFS ARC at that point has shrunk to its minimum (after using up to its
configured maximum before). Most of the RAM is wired. L2ARC headers,
according to zfs-stats, eat about 1 GB; the ARC is at 1.8 GB at this time.
No userland processes are using a lot of RAM.
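A back-of-the-envelope check on the L2ARC header cost (a sketch; the
~180-byte per-buffer header size and the 8 KB average block size are my
assumptions, chosen to match a small-mail-file workload):

```shell
# Each buffer cached in L2ARC keeps a header in RAM (in the ARC proper).
# With small files the blocks are small, so the header count gets large.
l2arc_bytes=$((3 * 32 * 1024 * 1024 * 1024))  # three 32 GB cache partitions
avg_block=8192                                 # assumed: small mail files
hdr_bytes=180                                  # assumed per-header RAM cost
bufs=$((l2arc_bytes / avg_block))
hdr_mb=$((bufs * hdr_bytes / 1024 / 1024))
echo "~${bufs} L2ARC buffers, ~${hdr_mb} MB of RAM for their headers"
```

With all 96 GB of cache populated this estimate comes out above the ~1 GB
observed, which is plausible if the cache devices are not yet full.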

After some time the system becomes unresponsive; the only fix I have found
so far is to reboot the machine (which of course means a service
interruption).

Between the onset of swapping and unresponsiveness I have about 2 to 4
hours to check things (if only I knew what to check!).
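One thing worth doing in that window is ranking the UMA zones by the RAM
they hold, since wired memory usually hides there. A sketch of how that
could look; the capture below is fabricated sample data, not output from
the machine (on the live box you would pipe `vmstat -z` in instead):

```shell
# Rank UMA zones by memory held (item size * items used, in MB).
# The here-document stands in for real `vmstat -z` output.
top_zones=$(awk -F'[:,]' 'NF >= 4 {
    gsub(/ /, "", $2); gsub(/ /, "", $4)
    printf "%s %.1f\n", $1, $2 * $4 / 1048576   # MB per zone
}' <<'EOF' | sort -k2 -rn | head -3
ITEM                     SIZE  LIMIT     USED   FREE  REQ  FAIL
arc_buf_hdr_t:            208,     0, 6000000,  1000, 0, 0
dmu_buf_impl_t:           224,     0, 2000000,   500, 0, 0
zio_cache:                912,     0,   10000,   100, 0, 0
EOF
)
echo "$top_zones"
```

`vmstat -m` (look for the "solaris" malloc type) and the
kstat.zfs.misc.arcstats sysctls would be the companions to this.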

The workload is not evenly distributed over the day; from my munin graphs
I can see that wired memory grows during times of higher workload. At
night, with lower workload (but far from none; say about 1/3 to 1/2 of
the weekday writes but probably <1/4 of the reads), I can barely see any
growth in the wired graph.

So where is my memory going? Any ideas what to change?

The kernel is stripped down from GENERIC, with everything I need loaded
as modules.

Kernel config: http://sysadm.in/zprob/ZSTOR
loader.conf : http://sysadm.in/zprob/loader.conf
dmesg.boot : http://sysadm.in/zprob/dmesg.boot


--
| Oliver Brandmueller http://sysadm.in/ ob@xxxxxxxxx |
| I am the Internet. So help me God. |
_______________________________________________
freebsd-stable@xxxxxxxxxxx mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@xxxxxxxxxxx"


