New disk schedulers available for FreeBSD
- From: Luigi Rizzo <rizzo@xxxxxxxx>
- Date: Mon, 12 Jan 2009 23:00:58 +0100
Fabio Checconi and I have developed a GEOM-based disk scheduler
for FreeBSD. The scheduler is made of a GEOM kernel module, the
corresponding userland class library, and other loadable kernel
modules that implement the actual scheduling algorithm.
At the URL below you can find a tarball with full sources and
also a set of pre-built modules/libraries for RELENG_7, to ease testing.
Below you can find the README file that comes with the distribution.
I would encourage people to try it and submit feedback, because the
initial results are extremely interesting. While I have only tried
the code under RELENG_7/i386, it should build and work on all versions
that have GEOM (but read below).
The code is also quite robust, because most of the difficult tasks
(data moving, synchronization, etc.) are handled by GEOM; the
scheduler only decides which requests to serve, and when.
NOTE: The scheduler is designed to be distributed as a port, but
it needs an extra field in 'struct bio' and a small change in
function g_io_request() to work. Both changes are trivial but need
a kernel rebuild.
To try this code on AMD64 you do need to patch and rebuild the kernel.
On i386, and purely to ease evaluation, we avoid the need for a kernel
rebuild by patching one function in-memory (and patching it back
when the module is unloaded).
luigi and fabio
A copy of the README file follows:
--- GEOM BASED DISK SCHEDULERS FOR FREEBSD ---
This code contains a framework for GEOM-based disk schedulers and a
couple of sample scheduling algorithms that use the framework and
implement two forms of "anticipatory scheduling" (see below for more
details).
As a quick example of what this code can give you, try to run "dd",
or "tar", or some other code with highly SEQUENTIAL access patterns,
together with "cvs" or "cvsup" or other highly RANDOM access patterns
(this is not a made-up example: it is pretty common for developers
to have one or more apps doing random accesses, and others that do
sequential accesses, e.g., loading large binaries from disk, checking
the integrity of tarballs, watching media streams and so on).
These are the results we get on a local machine (AMD BE2400 dual
core CPU, SATA 250GB disk):
/mnt is a partition mounted on /dev/ad0s1f
(or /dev/ad0-sched-s1f when used with the scheduler)
cvs: cvs -d /mnt/home/ncvs-local update -Pd /mnt/ports
dd-read: dd bs=128k of=/dev/null if=/dev/ad0 (or ad0-sched-)
dd-write: dd bs=128k if=/dev/zero of=/mnt/largefile
                 NO SCHEDULER           RR SCHEDULER
                 dd        cvs          dd        cvs
dd-read only     72 MB/s   ---          72 MB/s   ---
dd-write only    55 MB/s   ---          55 MB/s   ---
dd-read+cvs       6 MB/s   ok           30 MB/s   ok
dd-write+cvs     55 MB/s   slooow      14 MB/s   ok
As you can see, when a cvs is running concurrently with dd, the
performance drops dramatically, and depending on read or write mode,
one of the two is severely penalized. The use of the RR scheduler
in this example makes the dd-reader go much faster when competing
with cvs, and lets cvs progress when competing with a writer.
To try it out:
1. PLEASE READ CAREFULLY THE FOLLOWING:
To avoid the need to rebuild a kernel, and just for testing
purposes, we implemented a hack that consists of patching one
kernel function (g_io_request) so that it executes the marking
of "bio's" (I/O requests). The classification info is also
stored in a rarely used field of struct bio. See details in the
file g_sched.c .
At the moment the 'patch' hack is only for i386 kernels built
with standard flags. For other configurations, you need to
manually patch sys/geom/geom_io.c as indicated by the error
message that you will get.
If you don't like the above, don't run this code.
Also note that these hacks are only for testing purposes. If
this code ever goes in the tree, it will use the correct approach,
which is to add a field to 'struct bio' to store the classification
info, and to modify g_io_request() to call a function that
initializes it.
2. PLEASE MAKE SURE THAT THE DISK THAT YOU WILL BE USING FOR TESTS
DOES NOT CONTAIN PRECIOUS DATA.
This is experimental code and may fail, especially at this stage.
3. EXTRACT AND BUILD THE PROGRAMS
A 'make install' in the directory should work (with root privs),
or you can even try the binary modules.
If you want to build the modules yourself, look at the Makefile.
4. LOAD THE MODULE, CREATE A GEOM NODE, RUN TESTS
# --- configure the scheduler on device ad0
geom sched create -a rr ad0
# -- now you will have entries /dev/ad0-sched-
For tests you can do the same as I did above, i.e. run concurrent
programs that access the disk without the scheduler (/dev/ad0...)
and with the scheduler (/dev/ad0-sched-...) and see the difference.
--- NOTES ON THE SCHEDULERS ---
The important contribution of this code is the framework to experiment
with different scheduling algorithms. 'Anticipatory scheduling'
is a very powerful technique based on the following reasoning:
The disk throughput is much better if it serves sequential requests.
If we have a mix of sequential and random requests, and we see a
non-sequential request, do not serve it immediately but instead wait
a little bit (2..5ms) to see if there is another one coming that
the disk can serve more efficiently.
There are many details that should be added to make sure that the
mechanism is effective with different workloads and systems, to gain a
few extra percent in performance, to improve fairness, insulation
among processes etc. A discussion of the vast literature on the subject
is beyond the purpose of this short note.
freebsd-stable@xxxxxxxxxxx mailing list