Re: Using a "special" proxy for ports

On Mon, 27 Jun 2011, Damien Fleuriot wrote:

On 6/27/11 4:27 PM, Dennis Glatting wrote:

On Mon, 27 Jun 2011, Damien Fleuriot wrote:

On 6/27/11 4:52 AM, Dennis Glatting wrote:

I have a requirement to archive the ports used across twenty hosts for
a year or more. I've decided to do this using Squid and to take
advantage of Squid's cache when updating common ports across those
hosts.

(BTW, at another site I used rsync to sync /usr/ports/distfiles across
the hosts to a local master site, then pointed _MASTER_SITES_DEFAULT in
make.conf at an FTP server on the local site. That method works when the
file has previously been cached; however, if the file isn't in the cache
and I simultaneously install the port across ten hosts, it is fetched
ten times. Sigh.)
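
For reference, that setup looked roughly like this; hostnames and paths
are placeholders, not the actual site's:

    # On each host (e.g., from cron), push newly fetched distfiles to
    # the site-local master; "master.internal" is a placeholder
    rsync -a /usr/ports/distfiles/ master.internal:/usr/ports/distfiles/

    # make.conf on every host: try the site-local FTP mirror first
    _MASTER_SITES_DEFAULT=  ftp://master.internal/pub/distfiles/

Since fetch simply moves on to the next site when the local mirror is
unreachable, the hosts stay autonomous.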

I have a Squid proxy installed that isn't meant for everyday/every-user
use and requires authentication. (Users either go through another Squid
proxy or direct.) The special Squid proxy works. No surprise there.
Authentication works. No surprise there.

What I need is a method to embed a proxy specification for fetch into
make.conf. Setting the environment variable HTTP_PROXY from the login
shell /is not/ preferred: the account is used by different
administrators, I don't want the special proxy's cache accidentally
polluted with non-ports traffic, and it would only create confusion.

Setting http_proxy in make.conf does not work, and .netrc doesn't appear
to be a viable method (if it were, I could specify FETCH_ARGS in
make.conf).
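
One possibility, sketched here and untested: pass the proxy through the
ports framework's FETCH_ENV knob instead, since fetch(1) honors the
HTTP_PROXY and HTTP_PROXY_AUTH environment variables. Host, port, and
credentials below are placeholders:

    # make.conf -- proxy host, port, and credentials are placeholders
    FETCH_ENV=  HTTP_PROXY=http://squid.internal:3128 \
                HTTP_PROXY_AUTH=basic:*:someuser:somepass

Because FETCH_ENV applies only to the fetch command run by the ports
framework, the login environment stays clean for the administrators.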

What about using an NFS share for /usr/ports/distfiles ?

Many of these servers provide network/system services across a WAN. If a
link goes down or is congested, NFS may hang them all. NFS also presents
certain security challenges.
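
(For what it's worth, a soft, interruptible, background mount softens,
though does not eliminate, the hang risk; hostname and paths are
placeholders:

    # /etc/fstab -- "master.internal" is a placeholder
    master.internal:/usr/ports/distfiles /usr/ports/distfiles nfs rw,soft,intr,bg 0 0

With a soft mount, a dead server eventually returns an I/O error rather
than hanging the client indefinitely.)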

What about using an SSHFS share for /usr/ports/distfiles ?

I don't know much about that file system and will have to look into it.
I have had problems with FUSE code as recently as last week (e.g., with
very large files).

How does SSHFS handle multiple systems simultaneously downloading and
caching ports? I assume much the same as any shared file system: there
is a reasonable risk of content corruption (e.g., a download aborts and
leaves a partial file, or a lack of file locking lets multiple processes
write to the same file simultaneously, with unpredictable content).
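
The usual mitigation on any shared distfiles store is to download to a
temporary name under a lock and rename the finished file into place. A
hypothetical wrapper, purely as a sketch (this is not what the ports
framework itself does):

    #!/bin/sh
    # $1 = URL, $2 = target file on the shared distfiles mount
    url="$1"; target="$2"
    export url target
    # lockf(1) serializes concurrent writers; mv(1) renames the
    # completed file into place atomically, so readers never see a
    # partial download.
    lockf -k "${target}.lock" /bin/sh -c '
        [ -e "$target" ] && exit 0   # another host already fetched it
        fetch -o "${target}.part" "$url" &&
            mv "${target}.part" "$target"
    '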

Many of my servers provide network/system services over a dodgy AT&T
MPLS link. As such, the servers must be as autonomous as possible. In
the _MASTER_SITES_DEFAULT technique I used at another site, if my
site-local FTP server is unavailable then fetch does the normal thing
(i.e., it fails over to the next site in the list). The compromise with
a proxy technique is to disable the proxy specification when there is a
network problem. This works because I have three independent Internet
exit points across my WAN, linked together with BGP local preference.
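
One way to automate that compromise, purely as a sketch (the flag file,
proxy host, and probe interval are all assumptions): have cron maintain
a flag file by probing the proxy, and test for it in make.conf, which is
ordinary make(1) syntax:

    # /etc/crontab -- squid.internal:3128 is a placeholder
    */5 * * * * root /usr/bin/nc -z -w 5 squid.internal 3128 && touch /var/run/proxy_up || rm -f /var/run/proxy_up

    # make.conf -- use the special proxy only while the flag exists
    .if exists(/var/run/proxy_up)
    FETCH_ENV=  HTTP_PROXY=http://squid.internal:3128
    .endif

When the proxy is unreachable, the flag disappears and fetch goes
direct, falling back to the normal MASTER_SITES list.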
