Strykar wrote:
Hi,
Could some of the long-time users of HAVP in production environments here share their experiences on hardware specifications for a 1U rack running 32 bit Linux /w Gigabit LAN for medium enterprise use up to 250 users?
Well, not so surprisingly, I've used it for a long time... I don't know how many others there are.
About 1200 users (IP addresses) use my Squid (~200 req/s), and 600 of them go through HAVP+ClamAV. That makes 1.2 million hits to HAVP each day. If you look at the busiest hours, the average is 40 req/s. On my simple 2 x 2.8 GHz Opteron the average CPU load is just 10%, and that includes Squid itself.
You can calculate that one scanner process takes some 10-20 MB of resident memory, so my ~100 processes eat at most about 2 GB.
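The sizing above is just arithmetic; here is a quick sanity check of those numbers (a hypothetical Python sketch, nothing HAVP-specific - the constants are the figures from this post):

```python
# Back-of-the-envelope sizing from the numbers quoted above.
HITS_PER_DAY = 1_200_000   # requests reaching HAVP per day
SCANNER_PROCESSES = 100    # approximate number of HAVP scanner processes
MB_PER_PROCESS = 20        # upper bound on resident memory per process

avg_req_per_sec = HITS_PER_DAY / 86_400            # seconds in a day
max_mem_gb = SCANNER_PROCESSES * MB_PER_PROCESS / 1024

print(f"average load: {avg_req_per_sec:.1f} req/s (peak hours: ~40 req/s)")
print(f"worst-case scanner memory: ~{max_mem_gb:.1f} GB")
```

So the daily average works out to roughly 14 req/s, well below the 40 req/s peak, and the 2 GB figure is the pessimistic upper bound if every process hits 20 MB at once.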
I do use heavy whitelisting of the most popular sites like newspapers etc.; there is no need to pass them through HAVP. You must also whitelist streaming users/sites, as the current HAVP design doesn't handle lots of streaming clients well - they tie up a scanning slot for a long time.
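For reference, HAVP's whitelist is a plain text file of URL patterns, one per line, with * as a wildcard (the path and hostnames below are illustrative examples, not my actual list):

```
# Example /etc/havp/whitelist entries
*.cnn.com/*
*.bbc.co.uk/*
# Streaming hosts that would otherwise hold a scanner slot open:
*.youtube.com/*
streaming.example.com/*
```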
Quote:
Does intelligent caching in Linux mean that using TmpFS/RAMdisk vs SATA/SCSI would not make a noticeable difference?
If you think about it, scanning mostly 1-100 kB files in fractions of a second doesn't leave much time for the data to ever hit the disk from the cache, does it?
Only some metadata is written, depending on the filesystem.
I do use Solaris tmpfs, since it's there by default and works well. Some people have had problems with tmpfs on Linux, so it might be safer not to use it there.