HTTP Anti-Virus Proxy

Official HAVP Support Forum
Registration disabled, I'm tired of spambots. E-mail havp@hege.li if you have questions.
HAVP project is pretty much frozen/abandoned at this time anyway.
PostPosted: 25 Jan 2008 18:48 
Offline

Joined: 25 Jan 2008 18:31
Posts: 3
Hello!

I have installed a squid+havp+clamav sandwich configuration. Everything works fine, except for one problem: Squid doesn't cache objects. The cache is empty.

This is my configuration; if anybody can help me, I'd appreciate it.

# ***************** SANDWICH CONFIG ********************
# USERS --> SQUID 1 (port 3128) --> HAVP (port 8090) --> SQUID 2 (port 8081) --> INTERNET

#Recommended minimum configuration:
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 1025-65535 # unregistered ports

acl CONNECT method CONNECT

# SQUID 1
http_port 192.168.0.1:3128

# SQUID 2
http_port 127.0.0.1:8081

# ACL for the port where HAVP requests are coming in
acl HAVP_PORT myport 8081

# We only cache requests for SQUID2
no_cache deny !HAVP_PORT

# HAVP running on port 8090
cache_peer localhost parent 8090 0 no-query no-digest no-netdb-exchange default

# Needed if we want to go directly to SQUID2 without HAVP
# We can't use the same peer name twice, so let's use 127.0.0.2
cache_peer 127.0.0.2 parent 8081 0 no-query no-digest no-netdb-exchange

always_direct allow SSL_ports

cache_peer_access 127.0.0.2 allow localhost
cache_peer_access localhost allow !SSL_ports

never_direct allow !SSL_ports
always_direct allow HAVP_PORT

# Allow Squid 2 to go out on the internet
http_access allow localhost Safe_ports

acl users src "/usr/local/etc/squid/access/users"

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

http_access allow users Safe_ports

cache_mem 32 MB
cache_dir ufs /usr/local/squid/cache 10000 32 512
# You probably don't care to log duplicate requests coming in from HAVP
access_log none HAVP_PORT
# After that you can add normal log files.. (these are matched in order)
access_log /usr/local/squid/logs/access.log squid
logfile_rotate 3

forwarded_for off
coredump_dir /usr/local/squid/cache

acl apache rep_header Server ^Apache
broken_vary_encoding allow apache

icp_port 0
htcp_port 0

shutdown_lifetime 10 seconds

# This makes sure ALL requests are sent to parent peers when needed
nonhierarchical_direct off

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

tcp_outgoing_address xxx.xxx.xxx.xxx

header_access Via deny all


PostPosted: 26 Jan 2008 11:23 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Your config is outdated. Look at the example.

viewtopic.php?t=11

You are missing "proxy-only" on cache_peer, among other changes.
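
That is, the peer line should look something like this (a sketch, reusing the ports from your config):

Code:
cache_peer localhost parent 8090 0 proxy-only no-query no-digest no-netdb-exchange default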


PostPosted: 01 Feb 2008 04:27 
Offline

Joined: 30 Nov 2007 04:03
Posts: 8
That seems backwards to me? I thought if proxy-only was included with cache_peer it just "proxied" the connection and did not cache the results?

Kyle


PostPosted: 01 Feb 2008 08:56 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
trckh wrote:
That seems backwards to me? I thought if proxy-only was included with cache_peer it just "proxied" the connection and did not cache the results?


Eh, that's the point, isn't it? ;)

Squid1 should not cache. Squid2 should.

(Yes it can be a bit hard to visualize the config in your mind)
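
To spell out the intended roles (a rough sketch, using the ports from the configs in this thread):

Code:
# Squid1 (3128): client-facing; the havp peer is marked proxy-only,
#                so replies coming back through HAVP are NOT stored here
# HAVP   (8090): scans the traffic and forwards it to Squid2
# Squid2 (8081): fetches from the Internet directly; this is where
#                objects are supposed to be stored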


PostPosted: 02 Feb 2008 16:51 
Offline

Joined: 25 Jan 2008 18:31
Posts: 3
Thank you for your reply, Hege! I made the changes in my squid.conf file and tried visiting some Internet sites from different computers on my network, but I don't see any "TCP_HIT" records in the Squid log file.

Here's my new configuration file:

=========================================
acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 4430 # https
acl Safe_ports port 1025-65535 # unregistered ports

acl HTTPS proto HTTPS

# SQUID 1
http_port 192.168.0.1:3128

# SQUID 2
http_port 127.0.0.1:8081

# ACL for the port where HAVP requests are coming in
acl HAVP_PORT myport 8081

# HAVP running on port 8090
cache_peer 127.0.0.1 parent 8090 0 name=havp proxy-only no-query no-digest no-netdb-exchange default

# Needed if we want to go directly to SQUID2 without HAVP
cache_peer 127.0.0.1 parent 8081 0 name=squid2 proxy-only no-query no-digest no-netdb-exchange

cache_peer_access havp deny HAVP_PORT
cache_peer_access havp deny HTTPS
cache_peer_access havp allow all
cache_peer_access squid2 deny HAVP_PORT
cache_peer_access squid2 allow all

always_direct allow HTTPS
never_direct allow !HAVP_PORT

# Allow Squid 2 to go out on the internet
http_access allow localhost Safe_ports

#Below, put your normal acl rules
acl users src "/usr/local/etc/squid/access/users"
http_access allow users Safe_ports

# Only allow cachemgr access from localhost
http_access allow manager localhost
http_access deny manager

cache_mem 32 MB
cache_dir ufs /usr/local/squid/cache 10000 32 512

# You probably don't care to log duplicate requests coming in from HAVP
access_log none HAVP_PORT
# After that you can add normal log files.. (these are matched in order)
access_log /usr/local/squid/logs/access.log squid
logfile_rotate 3

# This makes sure ALL requests are sent to parent peers when needed
prefer_direct off
nonhierarchical_direct off


PostPosted: 03 Feb 2008 00:32 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
Hi,
I think there is a problem with all sandwich configs, but I am not 100% sure, so I would like to verify:

1. both Squids are one process
2. they share both caches (in memory and on disk)
3. what configuration prevents the first Squid from looking up and taking an object from the cache if that object is already cached? The prefer_direct and nonhierarchical_direct directives have nothing to do with caching itself; they only handle requests that are not in the cache yet.

Am I right or not?

UPDATE:

The "cache" directive (previously "no_cache") SHOULD be used for this:

Synopsis
A list of ACL elements which, if matched, cause the request not to be satisfied from the cache and the reply not to be cached. In other words, use this to force certain objects to never be cached.

So I suggest using an ACL with the "cache" directive to DENY requests arriving at the first Squid from being satisfied from the cache.
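
For example (an untested sketch; it is essentially the no_cache line from the first config in this thread, using the renamed directive):

Code:
# Only requests arriving on the HAVP -> Squid2 port may be satisfied
# from the cache or stored in it
acl HAVP_PORT myport 8081
cache deny !HAVP_PORT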



BR
Litin


Last edited by litnoveweedle on 03 Feb 2008 01:40, edited 3 times in total.

PostPosted: 03 Feb 2008 00:39 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
trckh wrote:
That seems backwards to me? I thought if proxy-only was included with cache_peer it just "proxied" the connection and did not cache the results?

Kyle


From the Squid docs, that means objects fetched FROM this peer won't be stored locally - so that's fine.

see http://www1.jp.squid-cache.org/Versions ... _peer.html
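
The relevant wording there is roughly:

Code:
use 'proxy-only' to specify that objects fetched from this cache
should not be saved locally.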


PostPosted: 12 Feb 2008 14:32 
Offline

Joined: 25 Jan 2008 18:31
Posts: 3
I solved my problem.
I installed DansGuardian for better site and content blocking. Now I'm using this configuration:
users -> DansGuardian -> HAVP -> Squid -> Internet
And now:
1) DansGuardian can write log files in Squid format for Sarg (monthly reports).
2) Squid works as usual and caches requests.
3) When the antivirus databases are updated, viruses can no longer be served unscanned from Squid's cache, since HAVP sits between the clients and the cache.


 Post subject: Squid caching problem
PostPosted: 28 Feb 2008 01:17 
Offline

Joined: 21 Jan 2008 16:12
Posts: 9
I have the same problem as Apollo. I hadn't noticed it before, because my cache was already filled by my previous Squid configuration.
If you find any mistake I've made, I'd appreciate it if you let me know.

So the problems with the suggested sandwich config are:

1. Squid doesn't cache as it should. My cache previously filled with ~2M objects in about 5 days; now, with the current config based on the example, I get about 40k objects in a day. (I have a squid-rrd monitor, so I can easily compare the speed of the cache-filling process.)

2. If an object finally does get cached (don't ask me how, I have no clue), it is served from the cache of the first Squid. Please note that both Squids are in fact one instance, sharing the same in-memory cache and the same in-memory index of the on-disk cache, so if the "cache" (previously "no_cache") directive is not used and an object is already cached, it is served from the cache regardless of the peer's "proxy-only" directive.

3. Because of the previous problem I get almost NO HITS, and the majority of the HITS are MEM_HITS.

You can see the life of one object in the cache below: it took several requests before it was cached, but then it was served directly to the client, not via the havp peer.

Code:
betelgeuse:/var/log/squid# more access.log | grep adsWrapper.js
1204117693.351    499 10.98.218.30 TCP_MISS/200 11393 GET http://ar.atwola.com/file/adsWrapper.js - DIRECT/205.188.165.121 application/x-javascript
1204117693.358    507 10.98.218.30 TCP_MISS/200 11475 GET http://ar.atwola.com/file/adsWrapper.js - DEFAULT_PARENT/havp application/x-javascript
1204117718.421    494 10.98.238.150 TCP_MISS/200 11393 GET http://ar.atwola.com/file/adsWrapper.js - DIRECT/64.12.174.249 application/x-javascript
1204117718.428    502 10.98.238.150 TCP_MISS/200 11475 GET http://ar.atwola.com/file/adsWrapper.js - DEFAULT_PARENT/havp application/x-javascript
1204146580.259    502 10.98.239.218 TCP_MISS/200 11393 GET http://ar.atwola.com/file/adsWrapper.js - DIRECT/64.12.174.57 application/x-javascript
1204146580.267    510 10.98.239.218 TCP_MISS/200 11475 GET http://ar.atwola.com/file/adsWrapper.js - DEFAULT_PARENT/havp application/x-javascript
1204146644.335    493 10.98.239.218 TCP_MISS/200 11393 GET http://ar.atwola.com/file/adsWrapper.js - DIRECT/205.188.165.249 application/x-javascript
1204146644.343    502 10.98.239.218 TCP_MISS/200 11475 GET http://ar.atwola.com/file/adsWrapper.js - DEFAULT_PARENT/havp application/x-javascript
1204148573.829      0 10.98.239.218 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148598.990      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148611.238      0 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148622.282      8 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148659.973      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148672.429      1 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148684.264      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148728.991      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148734.238      0 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148745.645      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148753.592      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148795.255      1 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148797.970      0 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148815.714      2 10.98.251.146 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148859.261      1 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
1204148920.325      1 10.98.226.66 TCP_HIT/200 11396 GET http://ar.atwola.com/file/adsWrapper.js - NONE/- application/x-javascript
 



Code:
Squid Object Cache: Version 2.6.STABLE18
Start Time:     Tue, 26 Feb 2008 21:18:37 GMT
Current Time:   Wed, 27 Feb 2008 22:02:47 GMT
Connection information for squid:
        Number of clients accessing cache:      143
        Number of HTTP requests received:       8824158
        Number of ICP messages received:        0
        Number of ICP messages sent:    0
        Number of queued ICP replies:   0
        Request failure ratio:   0.00
        Average HTTP requests per minute since start:   5945.5
        Average ICP messages per minute since start:    0.0
        Select loop called: 209924902 times, 0.424 ms avg
Cache information for squid:
        Request Hit Ratios:     5min: 0.2%, 60min: 0.4%
        Byte Hit Ratios:        5min: 1.5%, 60min: 1.3%
        Request Memory Hit Ratios:      5min: 28.3%, 60min: 53.2%
        Request Disk Hit Ratios:        5min: 3.8%, 60min: 8.9%
        Storage Swap size:      381608 KB
        Storage Mem size:       185320 KB
        Mean Object Size:       9.68 KB
        Requests given to unlinkd:      0
Median Service Times (seconds)  5 min    60 min:
        HTTP Requests (All):   0.01847  0.01745
        Cache Misses:          0.03241  0.02592
        Cache Hits:            0.00000  0.00000
        Near Hits:             0.04277  0.01469
        Not-Modified Replies:  0.02190  0.00000
        DNS Lookups:           0.05078  0.03374
        ICP Queries:           0.00000  0.00000
Resource usage for squid:
        UP Time:        89050.901 seconds
        CPU Time:       4029.060 seconds
        CPU Usage:      4.52%
        CPU Usage, 5 minute avg:        5.57%
        CPU Usage, 60 minute avg:       6.85%
        Process Data Segment Size via sbrk(): 313128 KB
        Maximum Resident Size: 0 KB
        Page faults with physical i/o: 0
Memory usage for squid via mallinfo():
        Total space in arena:  311848 KB
        Ordinary blocks:       308544 KB      0 blks
        Small blocks:               0 KB      0 blks
        Holding blocks:             0 KB      0 blks
        Free Small blocks:       2771 KB
        Free Ordinary blocks:     532 KB
        Total in use:          308544 KB 99%
        Total free:              3303 KB 1%
        Total size:            311848 KB
Memory accounted for:
        Total accounted:       238049 KB
        memPoolAlloc calls: 1315268252
        memPoolFree calls: 1314518086
File descriptor usage for squid:
        Maximum number of file descriptors:   4096
        Largest file desc currently in use:   1622
        Number of file desc currently in use:  880
        Files queued for open:                   0
        Available number of file descriptors: 3216
        Reserved number of file descriptors:   100
        Store Disk files open:                   2
        IO loop method:                     epoll
Internal Data Structures:
         43120 StoreEntries
         42223 StoreEntries with MemObjects
         42134 Hot Object Cache Items
         39408 on-disk objects


Code:
#host
visible_hostname XXXXXX

#port
http_port 3128 transparent
http_port 127.0.0.1:3129
icp_port 0

#DNS
dns_nameservers 10.98.231.130 10.98.231.66
hosts_file /etc/hosts

#cache
cache_mem 200 MB
maximum_object_size 40 MB
maximum_object_size_in_memory 64 KB
cache_replacement_policy heap LFUDA
memory_replacement_policy heap GDSF
cache_dir aufs /var/spool/squid/cache1 30000 32 256
cache_dir aufs /var/spool/squid/cache2 30000 32 256
cache_dir aufs /var/spool/squid/cache3 30000 32 256

cache_swap_low 98
cache_swap_high 99

refresh_pattern ^ftp: 20160 50% 43200
refresh_pattern -i \.(jpe?g|gif|png|ico|tif?f|bmp)$ 43200 100% 43200
refresh_pattern -i \.(zip|gz|bz2|rar|arj|cab|exe)$ 43200 100% 43200
refresh_pattern -i \.(mp3|mpe?g|avi|wmv|wma|vqf|ogg|mov|qt|wav)$ 43200 100% 43200
refresh_pattern -i \.(pdf|ps)$ 43200 100% 43200
refresh_pattern windowsupdate.com/.*\.(cab|exe)$ 43200 100% 43200
refresh_pattern download.microsoft.com/.*\.(cab|exe)$ 43200 100% 43200
refresh_pattern -i \.(cgi|asp|php|fcgi)$ 0 20% 60
refresh_pattern (cgi-bin|\?) 0 0% 0
#refresh_pattern . 20160 50% 43200
refresh_pattern . 0 20% 4320

#redirect_program /usr/local/bin/squidguard

#do not use the follow_x_forwarded_for IP in ACLs,
#use the IP of the last requestor instead
acl_uses_indirect_client off

acl LOCALNET src 10.0.0.0/8
acl ALL src 0.0.0.0/0.0.0.0
acl MANAGER proto cache_object
acl MONITOR src 10.98.231.86/31
acl LOCALHOST src 127.0.0.1/32 10.98.231.142/32
acl SSL_ports port 443 563 10000
acl SAFE_ports port 80 21 443 563 1025-65535
acl CONNECT method CONNECT
acl HTTP_proto proto HTTP
acl HTTPS_proto proto HTTPS
acl SQUID2 myport 3129

acl NOSCAN1 urlpath_regex -i \.(jpe?g|gif|png|ico|tif?f|bmp)$
acl NOSCAN2 dstdomain .play.cz
acl NOSCAN2 dstdomain .stream.aol.com
acl NOSCAN2 dstdomain .youtube.com

acl NOCACHE1 dstdomain .dsl.cz
acl NOCACHE1 dstdomain .speedmeter.internetprovsechny.cz


http_access allow MANAGER LOCALHOST
http_access allow MANAGER MONITOR
http_access deny MANAGER
http_access deny !SAFE_ports
http_access deny CONNECT !SSL_ports
http_access allow LOCALHOST
http_access allow LOCALNET
http_access deny ALL

icp_access deny ALL

#default, not really needed
http_reply_access allow ALL

#only requests to squid2 can be satisfied
#from cache and cached if needed
#cache allow SQUID2
#cache deny !SQUID2

#havp proxy
cache_peer 127.0.0.1 parent 8080 0 name=havp no-query no-digest no-netdb-exchange proxy-only default
#second squid - caching
cache_peer 127.0.0.1 parent 3129 0 name=squid2 no-query no-digest no-netdb-exchange proxy-only

#default so not really needed
prefer_direct off
#not needed if always|never_direct is used
nonhierarchical_direct off


#allow squid2 to connect directly to server
#always_direct allow SQUID2
#there is no need to cache or scan https
always_direct allow HTTPS_proto
#anything that should be neither scanned NOR cached can be listed below
always_direct allow CONNECT
always_direct allow NOCACHE1
#nothing else may be processed directly
never_direct allow !SQUID2


#havp should not be used by squid2
cache_peer_access havp deny SQUID2
#havp should not be used for https
cache_peer_access havp deny HTTPS_proto
#we have something not to be scanned
cache_peer_access havp deny NOSCAN1
cache_peer_access havp deny NOSCAN2
#anything else will be scanned
cache_peer_access havp allow ALL

#squid2 should not be used by itself
cache_peer_access squid2 deny SQUID2
cache_peer_access squid2 allow ALL


#redirector_access deny SQUID2
#redirector_access allow ALL

acl APACHE rep_header Server ^Apache
broken_vary_encoding allow APACHE

#non-hierarchical requests are processed directly, not via peers
#overridden by: nonhierarchical_direct off
hierarchy_stoplist cgi-bin ?

#anonymous proxy
header_access Via deny ALL
header_access X-Forwarded-For deny SQUID2

#allow to see orig client IP in logs
forwarded_for on
follow_x_forwarded_for allow LOCALHOST

quick_abort_min 0 KB
quick_abort_max 0 KB
half_closed_clients off
client_db off
pipeline_prefetch on

ipcache_size 16384
fqdncache_size 16384
#needed: HAVP can process at most 20 KB of headers
request_header_max_size 20 KB
ie_refresh on

shutdown_lifetime 5 seconds
cache_effective_user squid
hosts_file /etc/hosts
coredump_dir /var/spool/squid
pid_filename /var/run/squid.pid
error_directory /usr/local/share/squid/errors/Czech
icon_directory /usr/local/share/squid/icons


#warnings
high_response_time_warning 200
high_page_fault_warning 10
high_memory_warning 2 GB


#logs
access_log none
#access_log /var/log/squid/access.log
cache_log /var/log/squid/cache.log
cache_store_log none
logfile_rotate 10
strip_query_terms off
buffered_logs on
log_uses_indirect_client on
log_icp_queries off



The cache box has a dual-core 3 GHz CPU, 8 GB RAM, 3x WD Raptor HDDs for the cache, and 2x HDDs in a mirror for the system. Under a load of at most 200 requests/s, CPU usage is about 10%.


PostPosted: 17 Apr 2008 20:31 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Yeah, it seems my config example doesn't work. No matter which way you try it, Squid won't cache inside the same process once it has decided not to. I'll try to find out if there is any way other than running separate processes.
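
Until then, running two real Squid instances would look roughly like this (an untested sketch with hypothetical paths and ports):

Code:
# squid1.conf - front instance: never caches, sends everything through HAVP
http_port 3128
pid_filename /var/run/squid1.pid
cache_dir ufs /var/spool/squid1 100 16 256
cache deny all
cache_peer 127.0.0.1 parent 8080 0 no-query no-digest no-netdb-exchange default
never_direct allow all

# squid2.conf - back instance: does the caching, goes direct
# (HAVP's parent proxy setting points here, port 3129)
http_port 127.0.0.1:3129
pid_filename /var/run/squid2.pid
cache_dir ufs /var/spool/squid2 10000 16 256

# start both: squid -f /etc/squid/squid1.conf ; squid -f /etc/squid/squid2.conf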


PostPosted: 18 Apr 2008 02:20 
Offline

Joined: 01 Sep 2007 01:02
Posts: 18
I had the exact opposite problem: Squid was caching content, but not scanning it. In other words, my "sandwich" looked like this: SQUID(acl+cache)->HAVP->SQUID(no cache)

I've been hacking around like crazy all day, and I'll post what I think is a truly working config.

The trick is to ensure that you never cache requests from the local net, only those from localhost (or wherever your parent HAVP proxy is located).
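
In config terms, something like this (a sketch, assuming a 10.0.0.0/8 local net and HAVP on localhost):

Code:
acl LOCALNET src 10.0.0.0/8
# replies fetched for local clients are not stored; HAVP's requests
# come from 127.0.0.1, don't match, and are cached as usual
cache deny LOCALNET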

The first problem I see here is that you have 'proxy-only' defined for your caching peer.

Code:
#second squid - caching
cache_peer 127.0.0.1 parent 3129 0 name=squid2 no-query no-digest no-netdb-exchange proxy-only


So I'm reasonably certain that it will, indeed, only proxy connections.

I also found this in one of the demo configs, and it ended up in mine as well. If you are running HAVP on the same machine, this stops Squid from caching too.

Code:
no_cache deny localhost


PostPosted: 18 Apr 2008 08:59 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
Yeah but it's not wise to cache possible virus content. ;)


PostPosted: 18 Apr 2008 19:39 
Offline

Joined: 01 Sep 2007 01:02
Posts: 18
hege wrote:
Yeah but it's not wise to cache possible virus content. ;)


Sigh.

Yes, you are right. Squid ACLs are a mindf*ck. :)

Anyways, last night I dreamt of a snake eating its tail and I think I figured out a better way to integrate HAVP with Squid.

The thing is, I don't see why you need two Squid processes. A better solution, in my opinion at least, is to have one Squid process with ACLs tuned to cache only requests for whitelisted content or from the associated HAVP process. Everything else gets forwarded to the HAVP parent.

So instead of SQUID1->HAVP->SQUID2 we have something like ...
Code:
      <HAVP <
      |     |
client>SQUID>server

Call it the squid loop, snake, piggyback, whatever.

This is my first stab at a revised template, based on hege's original:

Code:
# Squid external IP and port
http_port 10.0.0.1:3128

# havp.config has PARENTHOST 10.0.0.1, PARENTPORT 3128

# ACLs, defined up front since Squid wants them declared before use
# HTTPS traffic scanning not needed
acl HTTPS method CONNECT
# Requests coming back in from HAVP
acl FROM_HAVP src 127.0.0.1/32
# It's easier to create whitelists here than in HAVP
# Also, if there is a bug in HAVP, whitelisting there might not work
acl NOSCAN dstdomain trusted.site.net

# You probably don't care to log duplicate requests coming in from HAVP
log_access deny FROM_HAVP

# HAVP on localhost port 8080
cache_peer 127.0.0.1 parent 8080 0 name=havp proxy-only no-query no-digest no-netdb-exchange default

# This makes sure ALL requests are sent to parent peers when needed
prefer_direct off
nonhierarchical_direct off

always_direct allow HTTPS

# Don't forward HAVP traffic back to itself!
always_direct allow FROM_HAVP

always_direct allow NOSCAN

# I don't think these three rules are needed, but they shouldn't hurt.
# ...plus, it's a good idea to expressly prohibit a forwarding loop condition
cache_peer_access havp deny FROM_HAVP
cache_peer_access havp deny HTTPS
cache_peer_access havp deny NOSCAN
# Everything else, send to HAVP parent
cache_peer_access havp allow all


Note I haven't tested this yet! I'll try to do that over the weekend.


PostPosted: 18 Apr 2008 19:42 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
DrKewp wrote:
The thing is, I don't see why you need two Squid processes.


Ahem, I guess you haven't paid close enough attention. That's exactly what was in my example: a single Squid process looping. ;)

And it does not work if you want to cache stuff.


PostPosted: 18 Apr 2008 19:54 
Offline
HAVP Developer

Joined: 27 Feb 2006 18:12
Posts: 687
Location: Finland
And I'll clarify the problem here in a rough nutshell:

You might first get Squid to cache some URL.

But when another request comes in for the same URL, with "proxy-only" or "cache deny" in effect, Squid will actually remove the cached content from its memory. It won't be cached anymore. Then it goes the same way again and again.

I think it needs to be fixed in the Squid code, so that it does not remove cached content just because we don't want to receive the cached copy for one particular request.
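
In other words, the cycle looks roughly like this (a sketch of the behaviour just described):

Code:
# 1. client asks for a URL: MISS -> havp peer -> looped Squid -> origin; the reply gets stored
# 2. client asks again: the object IS in the cache, but "proxy-only" / "cache deny"
#    applies, so Squid releases the stored object and forwards the request to HAVP anyway
# 3. the reply may get stored again - and released again on the next request (back to 2)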

