HTTP Anti-Virus Proxy

Official HAVP Support Forum
Registration disabled, I'm tired of spambots. E-mail havp@hege.li if you have questions.
HAVP project is pretty much frozen/abandoned at this time anyway.

Posted: 30 Nov 2007 04:05

Joined: 30 Nov 2007 04:03
Posts: 8
I am having a bit of a problem with HAVP spawning new child processes. I have been experimenting to try to find the correct numbers to use for SERVERNUMBER and MAXSERVERS, and it seems that no matter what I use, I end up running out and then the program stops working.

I have plenty of hardware; it just seems like the child processes keep piling up until the maximum is reached. I'm beginning to wonder if old child processes are never stopped, but I really don't know what the problem is.

I even thought it might be a problem with the Linux load so I reloaded it from scratch. Info as follows:

OS: CentOS 5 with all current patches as of this date
Processor: 3GHz
Memory: 3GB
Number of users pointing at this proxy: 15
Setup: HAVP > Squid
Note: Squid works with no problems.

Final numbers tried (yes, I know these are way high but it still ran out)
SERVERNUMBER 600
MAXSERVERS 750

Error I get in the error.log
All childs busy, spawning new (now: 700) - SERVERNUMBER might be too low

The child number in the log entries just keeps climbing until it will no longer work.

What can I do to solve this? And by the way, even if I don't get this fixed, thank you for the program. It seems others have great success with it, and I can certainly see that it would be very useful.

Thank you in advance


Posted: 30 Nov 2007 09:48
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
If you really have only 15 users, then yes, something is broken.

Did you try just setting both to the same small value?

SERVERNUMBER 30
MAXSERVERS 30

Does it not work at all, or are there delays?

If it doesn't work, most likely your scanner is broken and taking too long, so everything stays busy.
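If you want to rule the scanner out by hand, one quick sanity check (just a sketch; it assumes clamscan is on your path and that you have saved the EICAR test file as /tmp/eicar.com, which is only an example path) is to time a single scan:

Code:
time clamscan /tmp/eicar.com

clamscan reloads the whole signature database on every run, so most of the reported time is database loading; the point is only that it finishes and flags the test file instead of hanging.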


Posted: 30 Nov 2007 17:29

Joined: 30 Nov 2007 04:03
Posts: 8
Yep, I really do only have 15 users. By the way, I forgot to mention that I am using the latest version as of this writing, v0.86.

I set SERVERNUMBER and MAXSERVERS both back to 30 and tried it again. It works for a couple of minutes and then access just stops. The only errors it logs are the normal "Could not read server header" ones. I then stop HAVP and set Squid back to listen on that port, and everything works just fine, so I know the problem isn't with Squid.

The thing I find strange is that I just loaded this server (twice), so it can't have anything out of the ordinary on it, and the hardware has been working fine.

Any additional help would be greatly appreciated as I would really like to use the application.

Thank you.


Subject: Any Other Ideas?
Posted: 04 Dec 2007 19:38

Joined: 30 Nov 2007 04:03
Posts: 8
Any other ideas on this? I would really like to get it working, and the strange thing is that this is a fresh load of CentOS 5. I reloaded it and can still reproduce the problem. I know ClamAV is working, as I can use it just fine.

Any other suggestions?

Thank you.


Posted: 04 Dec 2007 19:40
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Are you using clamd or clamlib?


Subject: Answer: Clamlib
Posted: 05 Dec 2007 00:23

Joined: 30 Nov 2007 04:03
Posts: 8
I'm using clamlib

Thanks again.


Posted: 05 Dec 2007 00:58
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
What does top say when it hangs? Is something taking a lot of CPU?

Maybe try compiling the newest ClamAV yourself.
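A source build of that era is just the usual autotools routine; a rough sketch (the tarball name and prefix are only placeholders):

Code:
# placeholder version; use whatever the current release is
tar xzf clamav-x.y.z.tar.gz
cd clamav-x.y.z
# configure may complain if the clamav user/group does not exist yet
./configure --prefix=/usr/local
make
make install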


Posted: 05 Dec 2007 01:05

Joined: 30 Nov 2007 04:03
Posts: 8
Nothing is anywhere close to maxing out when I run top. I am also using the newest ClamAV, which I compiled myself.

I had another machine here that I could do some testing on, so I loaded CentOS 5 on it from scratch. Then I ran yum to install all the updates as well as Squid. Then I installed ClamAV (not from an RPM; I compiled it myself) and finally HAVP (again compiled from source).

I am now pretty sure that the problem is that the number of child processes just keeps growing until it can grow no more, and then things stop working.

Here is what I just saw in my log before it stopped working (it works fine for about 10 minutes):

05/12/2007 01:00:07 All childs busy, spawning new (now: 100) - SERVERNUMBER might be too low

Below are the only settings I have changed in my config file; I have tried numerous ways to configure it all, with the same results:
SERVERNUMBER 40
MAXSERVERS 100
LOGLEVEL 1
PARENTPROXY 192.168.4.20
PARENTPORT 3128
X_FORWARDED_FOR true
BIND_ADDRESS 192.168.4.20
ENABLECLAMLIB true

I think this should be pretty easy to recreate using the info above. There is definitely a problem somewhere as I can now recreate it on two machines.
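If it helps to confirm, the growth is easy to watch from a second terminal (assuming only stock ps, grep, and watch; nothing HAVP-specific here):

Code:
# refresh every 5 seconds; [h]avp keeps grep from counting itself
watch -n 5 'ps ax | grep "[h]avp" | wc -l'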

Thanks again.


Posted: 05 Dec 2007 01:23
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Can you try running strace on havp, something like this (set DAEMON to false first):

strace -ff -o havp.log /usr/local/bin/havp

I can't test the exact command right now, but I believe that should create a trace for every process. You can compress them all and send a URL or the files to havp@hege.li.
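With -ff and -o, strace writes one file per process, named havp.log.PID, so once it has hung you can stop it and bundle everything up, roughly:

Code:
# collect every per-process trace file into one archive
tar czf havp-strace.tar.gz havp.log.*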


Posted: 05 Dec 2007 20:08

Joined: 30 Nov 2007 04:03
Posts: 8
I just sent the tarball to the address specified. Please let me know when you receive it. I am curious what you find out.

Thank you again.


Subject: HAVP stops working
Posted: 14 Dec 2007 23:21

Joined: 14 Dec 2007 23:13
Posts: 4
I have the same problem described in this thread and have tried all of this.
The only workaround I have found so far is to restart the havp process four times a day (for the moment).
My configuration is:

squid1 -> havp (with clamd) -> squid2

squid1 and havp are on one machine, and squid1 acts as a transparent proxy for the users. squid2 is the parent proxy for havp and runs on another machine.

The top command shows nothing interesting; ps shows there are many havp processes running (it appears to be more than MAXSERVERS).

Any other suggestions?


Posted: 15 Dec 2007 00:02
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
It might well be the same problem: clients flooding many requests to some URL that does not respond, so all HAVP processes sit busy waiting for a response until the timeout, which is 60-120 seconds by default. Are there any errors in your havp.log?
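One way to see whether that is happening, assuming your log contains lines like the "Could not read server header (...)" ones and adjusting the path to wherever your error log actually lives, is to count which destinations show up most often:

Code:
# the last field of those lines is the host:port part
grep "Could not read server header" /var/log/havp/error.log | awk '{print $NF}' | sort | uniq -c | sort -rn | head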


Posted: 15 Dec 2007 02:06

Joined: 14 Dec 2007 23:13
Posts: 4
hege wrote:
It might well be the same problem: clients flooding many requests to some URL that does not respond, so all HAVP processes sit busy waiting for a response until the timeout, which is 60-120 seconds by default. Are there any errors in your havp.log?


The usual errors are:
Code:
14/12/2007 23:12:11 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:12:12 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:12:30 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:14:30 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:15:03 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:16:31 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:17:04 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:18:31 (10.1.1.6) Could not read server header (10.1.1.85/www.sumotracker.com:80)
14/12/2007 23:20:00 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)
14/12/2007 23:22:01 (10.1.1.6) Could not read server header (10.1.1.85/62.178.230.210:80)


Before that, looking back through the log by hand, I can see many entries about clamd being down.

I have erased those logs; I will try to reproduce the errors the next time users call me saying that browsing has stopped, and will post them here.


Posted: 15 Dec 2007 08:42
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
So it seems 62.178.230.210 is unreachable. Though if it's only one client connecting there, it shouldn't eat all the processes. You might want to stop it from being reachable through HAVP anyway.
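If you'd rather cut those requests off before they ever tie up a HAVP child, one option (only a sketch; it assumes you are willing to block it in the squid1 instance that sits in front of havp, and the ACL name is arbitrary) is a deny rule in that squid.conf:

Code:
# place above the existing http_access allow rules
acl deadhost dst 62.178.230.210
http_access deny deadhost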

Also, are you sure MaxThreads in clamd.conf is at least as high as MAXSERVERS?
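In other words, the two limits should line up, roughly like this (100 is only an example value; use whatever MAXSERVERS you actually run):

Code:
# clamd.conf
MaxThreads 100

# havp.config
MAXSERVERS 100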


Posted: 15 Dec 2007 16:35

Joined: 14 Dec 2007 23:13
Posts: 4
hege wrote:
So it seems 62.178.230.210 is unreachable. Though if it's only one client connecting there, it shouldn't eat all the processes. You might want to stop it from being reachable through HAVP anyway.

Also, are you sure MaxThreads in clamd.conf is at least as high as MAXSERVERS?


I'll try for a few days with MaxThreads set equal to MAXSERVERS and report the results here.

Thanks

