HTTP Anti-Virus Proxy

Official HAVP Support Forum
Registration is disabled; I'm tired of spambots. E-mail havp@hege.li if you have questions.
The HAVP project is pretty much frozen/abandoned at this point anyway.

PostPosted: 07 Aug 2007 13:16 
Offline

Joined: 07 Aug 2007 13:04
Posts: 3
Hi

I am attempting to set up Squid and HAVP together.

I have followed a sample config I got from this site, but I am getting an error about a forwarding loop.

HAVP is listening on port 13128 with parent port 3238, and the relevant parts of my squid.conf are:

Code:
http_port 3128
http_port 127.0.0.1:3238
acl HAVP_PORT myport 3238
no_cache deny !HAVP_PORT
cache_peer 127.0.0.1 parent 13128 0 no-query no-digest no-netdb-exchange default
cache_peer 127.0.0.2 parent 3238 0 no-query no-digest no-netdb-exchange
prefer_direct off
never_direct allow all
acl Proto_HTTPS proto HTTPS
cache_peer_access 127.0.0.1 allow !Proto_HTTPS
cache_peer_access 127.0.0.1 deny all
cache_peer_access 127.0.0.2 allow all

follow_x_forwarded_for deny all
forwarded_for off

http_reply_access allow all
icp_access allow all
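
For completeness, the HAVP side of this is roughly the following (the PARENTPROXY address is just my guess from the squid.conf above, so treat it as a sketch):

Code:
PORT 13128
PARENTPROXY 127.0.0.1
PARENTPORT 3238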




My problem is that I get the following in my logs:
Code:
WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:3238


This happens even if I connect a client directly to port 3238 (bypassing HAVP).

What is wrong with the above squid config?

Chris


PostPosted: 07 Aug 2007 14:15 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Oops, the example wasn't very well tested or polished..

I've rewritten it a bit, check it out again. It should work now, hopefully.

Cheers,
Henrik


PostPosted: 07 Aug 2007 17:12 
Offline

Joined: 07 Aug 2007 13:04
Posts: 3
Thanks. The example is a bit more flexible/extensible now.

I still get the same warning in my squid log file, though:

Code:
2007/08/07 16:13:23| WARNING: Forwarding loop detected for:
Client: 127.0.0.1 http_port: 127.0.0.1:3238
GET http://www.google.co.za/ HTTP/1.0
User-Agent: Wget/1.10.2
Accept: */*
Host: www.google.co.za
Cache-Control: max-age=259200
Via: 1.0 HAVP, 1.0 localhost.localdomain:3128 (squid/2.6.STABLE6)
Connection: keep-alive



The error is caused because both Squid listening ports report the same visible_hostname: when the request comes back through HAVP, Squid sees its own name in the Via header and reports a forwarding loop.

Is there a way to change this on a per-port basis, or to tell Squid not to perform this check?


PostPosted: 08 Oct 2007 07:51 
Offline

Joined: 07 Aug 2007 13:04
Posts: 3
Just to follow up: I am using the following config, and it is working correctly for me.

Code:
visible_hostname 172.17.17.1

# 3128 faces the clients; 127.0.0.2:3238 accepts requests coming back from HAVP
http_port 3128
http_port 127.0.0.2:3238
acl from_havp myport 3238

# Size of cache (4 Gigs)
cache_dir ufs /var/spool/squid 4096 16 256

redirect_program /usr/bin/squidguard

acl localnet src 172.17.17.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl localhost src 127.0.0.2/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 10000
acl Safe_ports port 80 21 443 563 1025-65535
acl CONNECT method CONNECT
acl Proto_HTTPS proto HTTPS

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all


# parents: HAVP on 127.0.0.1:13128 scans outgoing traffic;
# 127.0.0.2:3238 is this Squid's own second http_port, which HAVP uses on the way back
cache_peer 127.0.0.1 parent 13128 0 no-query no-digest no-netdb-exchange
cache_peer 127.0.0.2 parent 3238 0 no-query no-digest no-netdb-exchange  proxy-only
prefer_direct off

always_direct allow localhost
always_direct allow from_havp
always_direct allow CONNECT
never_direct allow all

cache_peer_access 127.0.0.2 allow from_havp
cache_peer_access 127.0.0.1 allow all

redirector_access deny from_havp
redirector_access allow all

#follow_x_forwarded_for allow localhost
#forwarded_for on
http_reply_access allow all
icp_port 0
icp_access deny all
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
no_cache deny QUERY
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern .               0       20%     4320
coredump_dir /var/spool/squid

# strip the Via header so Squid does not see its own hostname and flag a forwarding loop
header_access Via deny all

#log_access deny from_havp


Using Squid 2.5, I get two log entries: one logging the client IP and destination, and a second logging 127.0.0.2 as the source. My log parser ignores all 127.0.0.2 entries.

If you are using Squid 2.6, you can uncomment the last line (log_access deny from_havp) to drop the 127.0.0.2 entries.

FTP and HTTP are scanned via squid (3128) -> havp (13128) -> squid (3238).

HTTPS is proxied by the internal Squid (3128) and connects directly out from there.

The forwarding loops are fixed by "header_access Via deny all": Squid detects a loop by finding its own hostname in the Via header, so stripping the header stops the warning.

My havp.config has:

Code:
PARENTPROXY 127.0.0.2
PARENTPORT 3238
FORWARDED_IP true
X_FORWARDED_FOR true
PORT 13128
BIND_ADDRESS 127.0.0.1
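
A quick way to sanity-check the chain is to fetch the EICAR test file through the front-end port; HAVP should block the download (the EICAR URL below is just the usual location of the test file, adjust if it has moved):

Code:
http_proxy=http://127.0.0.1:3128 wget -O /dev/null http://www.eicar.org/download/eicar.com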


PostPosted: 21 Jan 2008 16:20 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
Hi,
is it possible to set up the first squid as a transparent proxy? i.e.:

http_port 3128 transparent

and then use iptables to redirect requests to port 80 to localhost port 3128?
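
Roughly something like this, I guess (eth0 standing in for whatever interface faces the LAN):

Code:
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128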

Do I still need "header_access Via deny all"?

Thank you.

Litin


PostPosted: 21 Jan 2008 16:41 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
litnoveweedle wrote:
Hi,
is it possible to set up the first squid as a transparent proxy? i.e.:


Of course; it has nothing to do with HAVP. Being transparent makes no difference to what "parents" Squid has or how they work.


 Post subject: wrong logging
PostPosted: 25 Jan 2008 19:05 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
Thank you for the reply. It is working with the first squid as a transparent proxy without any problem.

I've tried the config from Zirafarafa, but there is one problem with it: it doesn't log the way I need.

It logs only requests from the local network to the first (non-caching) squid -> these will always be TCP_MISS.
It omits logs from HAVP to the second (caching) squid -> these can be HITs if stored in the cache.
Also the HAVP log is unusable, because all IPs will be the IP of the first squid, i.e. 127.0.0.1.

So is it possible to set:

squid:

Code:
# allow the original client IP to be seen in the HAVP and caching-squid logs
# send X-Forwarded-For
forwarded_for on
# only trust X-Forwarded-For coming from our own proxies
follow_x_forwarded_for allow localhost
# BUT don't use it for ACLs (I want them to stay the same as in the original config)
acl_uses_indirect_client off

# logging
# use the IP from X-Forwarded-For in the logs
log_uses_indirect_client on
# log requests from havp (so requests to the caching squid are logged)
log_access allow from_havp
# do not log any other requests
log_access deny all

# I am not sure about this, but anyway:
# clear the X-Forwarded-For header before going to the origin server (anonymous proxy)
header_access X-Forwarded-For deny from_havp
# also clear the Via header
header_access Via deny all


havp:

Code:
FORWARDED_IP true
X_FORWARDED_FOR true




With this config:
- I can see the original IPs in the HAVP log (when a virus is found).
- I still have to test whether X-Forwarded-For is actually deleted (not validated yet).
- Strange thing: there are almost NO HITs in my squid log, only MISS -> DIRECT.

Is anything wrong with this configuration?
I am a little worried about "acl_uses_indirect_client off". I wanted to change the IP only for logging purposes and leave the ACLs as they are, but I am not quite sure it is really working as I suppose.

Thank You
Litin


PostPosted: 25 Jan 2008 19:25 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
If I log both squids to access.log, I get something like:

Code:
1201277834.416      4 10.98.226.154 TCP_MISS/200 1398 GET http://online.sport.cz/on-line/load/comments?id=3088&lastCommentId=0 - DIRECT/77.75.72.113 application/x-javascript
1201277834.417      6 10.98.226.154 TCP_MISS/200 1485 GET http://online.sport.cz/on-line/load/comments?id=3088&lastCommentId=0 - FIRST_UP_PARENT/127.0.0.1 application/x-java


The strange thing is that in the log, the first entry is from the cache that queried the DIRECT server (so it should be the squid requested by HAVP), and only after that comes the entry for the squid requested by the client from the local network.

I would expect these two records to be in the reverse order. ???
HAVP is working without any problem with this setup - tested with eicar...

Do you have any idea what could be wrong?

Thank you
Litin

