HTTP Anti-Virus Proxy
http://havp.hege.li/forum/

Problem with the number of child processes
http://havp.hege.li/forum/viewtopic.php?f=3&t=313

Author:  trckh [ 30 Nov 2007 04:05 ]
Post subject:  Problem with the number of child processes

I am having a bit of a problem with HAVP spawning new child processes. I have been experimenting to find the right values for SERVERNUMBER and MAXSERVERS, but no matter what I use, the pool runs out and the program stops working.

I have plenty of hardware; it just seems like the child processes keep accumulating until the maximum is reached. I'm beginning to wonder if old child processes are never reaped, but I really don't know what the problem is.

I even thought it might be a problem with the Linux installation, so I reinstalled it from scratch. Info as follows:

OS: CentOS 5 with all current patches as of this date
Processor: 3GHz
Memory: 3GB
Number of users pointing at this proxy: 15
Setup: HAVP > Squid
Note: Squid works with no problems.

Final numbers tried (yes, I know these are way too high, but it still ran out):
SERVERNUMBER 600
MAXSERVERS 750

The error I get in error.log:
All childs busy, spawning new (now: 700) - SERVERNUMBER might be too low

The child count in the log entries just keeps climbing until HAVP no longer works.

What can I do to solve this? And by the way, even if I don't get this fixed, thank you for the program. It seems others have great success with it, and I can certainly see that it would be very useful.

Thank you in advance

Author:  hege [ 30 Nov 2007 09:48 ]
Post subject: 

If you really have only 15 users, then yes, something is broken.

Did you try setting both to the same, small value?

SERVERNUMBER 30
MAXSERVERS 30

Does it still not work? Are there delays?

If it doesn't work, most likely your scanner is broken and taking too long, so every child stays busy.
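
One quick way to rule the scanner in or out is to time a scan by hand; a rough sketch (the file path is only an example):

Code:
# time a manual scan of any local file; if this takes many seconds,
# the scanner itself is the bottleneck (the path is just an example)
time clamscan /tmp/testfile.zip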

Author:  trckh [ 30 Nov 2007 17:29 ]
Post subject: 

Yep, I really do only have 15 users. By the way, I forgot to mention that I am running the latest version as of this writing, v0.86.

I set SERVERNUMBER and MAXSERVERS both back to 30 and tried again. It works for a couple of minutes and then access just stops. The only errors it logs are the normal "Could not read server header" messages. I then stop HAVP and set Squid back to listen on that port and everything works fine, so I know the problem isn't with Squid.

The strange thing is that I just installed this server from scratch (twice), so it can't have anything out of the ordinary on it, and the hardware has been working fine.

Any additional help would be greatly appreciated as I would really like to use the application.

Thank you.

Author:  trckh [ 04 Dec 2007 19:38 ]
Post subject:  Any Other Ideas?

Any other ideas on this? I would really like to get it working, and the strange thing is that this is a fresh install of CentOS 5. I reinstalled it and can still reproduce the problem. I know ClamAV is working, as I can use it just fine on its own.

Any other suggestions?

Thank you.

Author:  hege [ 04 Dec 2007 19:40 ]
Post subject: 

Are you using clamd or clamlib?

Author:  trckh [ 05 Dec 2007 00:23 ]
Post subject:  Answer: Clamlib

I'm using clamlib.

Thanks again.

Author:  hege [ 05 Dec 2007 00:58 ]
Post subject: 

What does top say when it hangs? Is something taking a lot of CPU?

Maybe try compiling the newest ClamAV yourself.
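
For reference, a from-source build of that era typically looks like this (the version number is only illustrative):

Code:
# standard autotools build; substitute whatever release is current
tar xzf clamav-0.92.tar.gz
cd clamav-0.92
./configure
make
make install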

Author:  trckh [ 05 Dec 2007 01:05 ]
Post subject: 

Nothing is anywhere close to maxed out when I run top. I am also using the newest ClamAV, compiled myself.

I had another machine here that I could do some testing on, so I installed CentOS 5 on it from scratch. Then I ran yum to install all the updates as well as Squid. Next I installed ClamAV (not from an RPM, but compiled myself) and finally HAVP (again compiled from source).

I am now pretty sure the problem is that the number of child processes just keeps growing until it can grow no more, and then things stop working.

Here is what I just saw in my log before it stopped working (it works fine for about 10 minutes):

05/12/2007 01:00:07 All childs busy, spawning new (now: 100) - SERVERNUMBER might be too low

Below are the only settings I have changed in my config file; I have tried numerous variations, all with the same results:
SERVERNUMBER 40
MAXSERVERS 100
LOGLEVEL 1
PARENTPROXY 192.168.4.20
PARENTPORT 3128
X_FORWARDED_FOR true
BIND_ADDRESS 192.168.4.20
ENABLECLAMLIB true

I think this should be pretty easy to recreate using the info above. There is definitely a problem somewhere, as I can now reproduce it on two machines.

Thanks again.

Author:  hege [ 05 Dec 2007 01:23 ]
Post subject: 

Can you try running strace on HAVP, something like this (set DAEMON false in havp.config):

strace -ff -o havp.log /usr/local/bin/havp

I can't test the exact command right now, but I think that should create a trace for every process. You can compress them all and send a URL or the files to havp@hege.li.
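
For what it's worth, with -ff strace appends the PID to the output name, so there should be one havp.log.<pid> file per child; something like this should bundle them (the archive name is just an example):

Code:
# each traced process writes its own havp.log.<pid> file
strace -ff -o havp.log /usr/local/bin/havp
# after reproducing the hang, bundle all the traces
tar czf havp-traces.tar.gz havp.log.*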

Author:  trckh [ 05 Dec 2007 20:08 ]
Post subject: 

I just sent the tarball to the address specified. Please let me know when you receive it. I am curious what you will find.

Thank you again.

Author:  samueldg [ 14 Dec 2007 23:21 ]
Post subject:  HAVP stops working

I have the same problem described in this thread and have tried all of the above.
The only workaround I have found so far is to restart the HAVP process four times a day.
My configuration is:

squid1->havp (with clamd)->squid2

squid1 and HAVP are on one machine, and squid1 is a transparent proxy for the users. squid2 is the parent proxy for HAVP and is on another machine.
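
For anyone trying to reproduce this chain, here is a minimal sketch of the relevant settings; all addresses and ports are placeholders, not the actual values from this setup:

Code:
# squid1.conf - send all client traffic through HAVP on the same box
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest default
never_direct allow all

# havp.config - listen for squid1, forward to squid2 on the other machine
PORT 8080
PARENTPROXY 192.168.0.2
PARENTPORT 3128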

The top command shows nothing interesting; ps shows there are many HAVP processes running (apparently more than MAXSERVERS).

Any other suggestions?

Author:  hege [ 15 Dec 2007 00:02 ]
Post subject: 

It might well be the same problem: clients flooding requests to some URL that does not respond, so all HAVP processes are busy waiting for a response until the timeout, which is 60-120 seconds by default. Are there any errors in your havp.log?
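
A quick way to check whether that is happening; a rough sketch (assumes the processes show up as "havp" and that you run netstat as root so the -p column is filled in):

Code:
# count the running havp children; [h] keeps grep from matching itself
ps ax | grep -c '[h]avp'
# show which remote hosts the havp children are stuck waiting on
netstat -tnp | grep havp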

Author:  samueldg [ 15 Dec 2007 02:06 ]
Post subject: 

hege wrote:
It might well be the same problem: clients flooding requests to some URL that does not respond, so all HAVP processes are busy waiting for a response until the timeout, which is 60-120 seconds by default. Are there any errors in your havp.log?


The usual errors are:
Code:
14/12/2007 23:12:11 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:12:12 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:12:30 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:14:30 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:15:03 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:16:31 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:17:04 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:18:31 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:20:00 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:22:01 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)


Before that, looking through my log by hand, I can see many log entries about clamd being down.

I have erased those logs; I will try to reproduce them the next time users call me saying that browsing has stopped, and will post them here.

Author:  hege [ 15 Dec 2007 08:42 ]
Post subject: 

So it seems 62.178.230.210 is unreachable. Though if it's only one client connecting there, it shouldn't eat all the processes. You might want to block HAVP from reaching it anyway.

Also, are you sure that MaxThreads in clamd.conf is at least as high as MAXSERVERS?
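
The idea is that clamd must accept at least as many parallel scans as HAVP can hand it; a minimal sketch (the value simply mirrors the MAXSERVERS examples above):

Code:
# clamd.conf - keep this >= MAXSERVERS in havp.config
MaxThreads 100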

Author:  samueldg [ 15 Dec 2007 16:35 ]
Post subject: 

hege wrote:
So it seems 62.178.230.210 is unreachable. Though if it's only one client connecting there, it shouldn't eat all the processes. You might want to block HAVP from reaching it anyway.

Also, are you sure that MaxThreads in clamd.conf is at least as high as MAXSERVERS?


I'll run for a few days with MaxThreads equal to MAXSERVERS and report the result here.

Thanks
