Re: upd: 7.2->8.1 & many networks trouble & flowtable

I am the main author of the paper you referenced in your email.

The main focus of my paper is the design and work done to separate L2 and L3 for both IPv4 and IPv6, which facilitates eliminating the Giant lock in the networking subsystem and thus achieves high parallelism.

This redesign, managing the L2 ARP/ND6 and L3 routing tables separately, already shows performance gains on multicore systems.

The flow-table enhancement is just one additional component, described toward the end of the paper. Yes, it is experimental, and it was discussed as such both in the paper and on the mailing list.

I did not know the flow-table feature was enabled by default; I would not have done that myself.

So help me understand you better: are you complaining about the general L2/L3 separation work, or are you angry about the flow-table enhancement in particular?


-- Qing

On Nov 24, 2010, at 1:54 AM, "Andrey Groshev" <greenx@xxxxxxxx> wrote:

Hi, PPL!

A couple of days ago I decided to upgrade from 7.2-STABLE to 8.1-STABLE (amd64).
As usual, I expected some pitfalls.
But damn, not to this degree!

The server hardware:
Motherboard: Intel SE7520JR23S
CPUs: 2 x Xeon 3 GHz
RAM: 4 GB

Software used: openospfd, openbgpd, bind, and so on.
In general, the machine is used as a border router.

I updated ... and it began:
1. The server died a few minutes after boot, not even reacting to the keyboard, after issuing an "em0 watchdog ....." warning. I thought to myself: the driver is broken; so I connected a different network card. At least the server stopped hanging.
2. The nearest switch did not like the OSPF traffic from the server and shut down the port or VLAN.
3. openbgpd loaded the CPU to nearly 100%.
4. bind did not respond, even though it kept the CPU busy.

In the end, I turned off everything that did not work as it should.
The only process remaining was FLOWCLEANER, which kept a CPU at 100%.

I started googling this flowcleaner.
And what did I find? A report entitled "Optimizing the BSD Routing System for Parallel Processing" (1). Roughly speaking, flowtable is a new approach to routing: by separating layers 2 and 3, more parallelism can be achieved, and network performance thereby increased. OK, everything looks great!

And now I ask: who is all this for? IMHO, ISPs, for example. Or, as stated in the above-mentioned report:

"The main goals for redesigning the kernel routing infrastructure
was to reduce the scope of the customization necessary when deriving products from FreeBSD, and to offer a generic solution that could be an integral part of the kernel." <<<

Which ultimately matters only for equipment used at an ISP.
The average user, with his tiny routing table, does not need it.
But this is where the problems begin.
Almost everywhere the FLOWCLEANER problem is mentioned, the recommendation is to remove the FLOWTABLE option from the kernel.
And one of the co-authors wrote in his blog (2):
"One oversight that come up shortly afterwards
is that it adversely impacts performance for systems
with many routing prefixes to a greater degree than I had expected." <<<
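For anyone hitting the same thing, the workaround looks roughly like this. This is a sketch, not gospel: I assume a GENERIC-derived kernel config, "MYKERNEL" is a placeholder name, and the sysctl OID is how I remember it from 8.x, so verify it on your own system first.

```shell
# Runtime workaround (if your 8.x revision exposes it): disable the flow
# table via sysctl. Check the exact OID first, it may differ:
#   sysctl -a | grep flowtable
sysctl net.inet.flowtable.enable=0

# Permanent workaround: build a kernel without the option. In your kernel
# config file (copied from GENERIC; "MYKERNEL" is a placeholder name),
# delete or comment out the line:
#   options  FLOWTABLE  # per-flow routing cache
# then rebuild and install the kernel as usual:
cd /usr/src && make buildkernel KERNCONF=MYKERNEL && make installkernel KERNCONF=MYKERNEL
```

After rebooting into the new kernel, the FLOWCLEANER process should be gone.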

When did you last see an ISP without a full-view BGP table?
It turns out that a technology designed to increase network performance actually kills most networks, which implies it is unfit for use.
And it was not merely committed to the source tree: it is enabled by default in the GENERIC kernel!
And do not say there were no PRs: there are (3)!


Sorry for such a long message; its main point is this:
Why introduce new features into the kernel if they are good only on paper?
Could this option be excluded from the GENERIC kernel?


1. -
2. -
3. -

freebsd-stable@xxxxxxxxxxx mailing list
To unsubscribe, send any mail to "freebsd-stable-unsubscribe@xxxxxxxxxxx"