zc.buildout recipe to configure a squid proxy for Zope
What is iw.recipe.squid?
It installs a squid proxy server and all the specific python scripts needed to work with a zope server or a zeo cluster.
Is there a buildout example?
Add a section to your buildout.cfg:
[buildout]
parts =
    ...
    squid

[squid]
recipe = iw.recipe.squid
squid_accelerated_hosts =
    www.mysite.org: 127.0.0.1:8080/mysite
Where the options are:
squid_accelerated_hosts : a list configuring your zope backends, following this pattern:
visible_host_name: <zope ip_or_host_name>:<zope listen port>/<zope path>
The optional options are (a combined example follows this list):
url : url used to download the squid source to compile (to be done)
squid_owner : owner of the squid process (defaults to the login user)
location : where squid is installed (defaults to the buildout parts-directory/squid)
squid_visible_hostname : hostname shown in error messages (defaults to the first visible host name of squid_accelerated_hosts, www.mysite.org)
squid_port : squid port number (defaults to 3128)
squid_version : squid version (defaults to 2.6)
squid_localisation : host or IP that apache uses to reach squid (defaults to 127.0.0.1, squid and apache on the same host)
squid_executable : location of the squid binary (defaults to /usr/sbin/squid)
squid_admin_email : squid administrator email (defaults to webmaster@www.mysite.org)
squid_cache_size_mb : disk cache size in MB (defaults to 1000 MB)
squid_config_dir : squid configuration directory (defaults to parts-directory/squid/etc)
squid_cache_dir : squid cache location (defaults to parts-directory/squid/cache)
squid_log_dir : squid log location (defaults to parts-directory/squid/log)
apache_conf_dir : apache configuration directory (defaults to parts-directory/squid/apache)
front_http : defaults to 1 (whether apache serves http requests)
front_https : defaults to 0 (whether apache serves https requests)
debug_redirector : defaults to 0 (whether to debug the squid redirector)
debug_squid_acl : defaults to 0 (whether to debug the squid access control list)
debug_squid_rewrite_rules : defaults to 0 (whether to debug the squid rewrite rules)
debug_apache_rewrite_rules : defaults to 0 (debug level of the apache rewrite engine, 9 for full debugging)
zope_cache_key : a list of zope cache keys (add specific acls if you want to cache particular areas of your site)
bind_apache_http : IP:port apache binds to for http (defaults to 80; the port alone can also be given)
bind_apache_https : IP:port apache binds to for https (defaults to 443; the port alone can also be given)
squid_extra_conf : extra configuration added to the generated squid.conf
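As an illustration, here is a sketch of a [squid] section combining several of these options; the values (host names, e-mail address, cache size) are made-up examples, not recommended settings:

# sketch only: adjust values to your own deployment
[squid]
recipe = iw.recipe.squid
squid_accelerated_hosts =
    www.mysite.org: 127.0.0.1:8080/mysite
squid_port = 3128
squid_visible_hostname = www.mysite.org
squid_admin_email = webmaster@www.mysite.org
squid_cache_size_mb = 2000
front_http = 1
front_https = 0
bind_apache_http = 80
squid_extra_conf =
    refresh_pattern . 0 20% 1440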
The buildout command creates a directory structure like this:
parts/squid/apache/vhost_www.mysite.org_80.conf : virtual host to include in apache
parts/squid/etc/ : all the config files for squid
parts/squid/etc/squid.conf : main squid conf
parts/squid/etc/iRedirector.py : launches squidRewriteRules
parts/squid/etc/squidAcl.py : prevents squid from caching pages of authenticated users
parts/squid/etc/squidRewriteRules.py : rewrite engine for squid
parts/squid/etc/squid_logrotate.conf : config for the logrotate system
parts/squid/cache/ : cache directory
parts/squid/log/ : logs directory
parts/squid/var/ : var directory, contains the pid file
bin/squidctl : squid controller shell script (for unix)
Usage: squidctl {start|stop|reload|restart|status|debug|purgecache|createswap|configtest|rotate}
What about squid and apache once the configuration is generated?
Apache
Activate the virtual host by creating a symbolic link.
On debian:
ln -s .../parts/squid/apache/vhost_www.mysite.org_80.conf /etc/apache2/sites-enabled/
Make sure the mod_rewrite and mod_proxy modules are enabled in apache.
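As a sketch, on a Debian-style system with the standard apache2 tooling this usually comes down to something like:

a2enmod rewrite       # rewrite engine used by the generated vhost
a2enmod proxy         # proxying towards squid
a2enmod proxy_http    # usually needed as well for http proxying
/etc/init.d/apache2 reload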
Logs are located in parts/squid/log.
Squid
Populate the squid cache directory:
/usr/sbin/squid -z -f parts/squid/etc/squid.conf OR bin/squidctl createswap
Start squid with the generated configuration:
/usr/sbin/squid -f parts/squid/etc/squid.conf OR bin/squidctl start
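Using the generated controller script, a typical first start could look like this (all the subcommands are listed in the squidctl usage above):

bin/squidctl createswap   # populate the cache directories
bin/squidctl configtest   # check the generated squid.conf
bin/squidctl start
bin/squidctl status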
That's it.
How to use iw.recipe.squid?
As for any recipe, you have to provide a section in your buildout file. Let's first test the simplest setup we can configure:
>>> import getpass
>>> owner = group = getpass.getuser()
>>> import os
>>> data_dir = os.path.join(test_dir, 'data')
>>> parts_dir = os.path.join(data_dir, 'parts')
>>> buildout = {'instance': {'location': test_dir},
...             'buildout': {'directory': test_dir,
...                          'parts-directory': test_dir}}
>>> name = 'squid'
>>> options = {'url': 'mypackage.tgz', #url where we download squid src
...            'squid_owner' : 'proxy',
...            }
>>> options['squid_accelerated_hosts'] = """
... www.mysite.com: 127.0.0.1:8080/mysite
... """
>>> options['squid_extra_conf'] = """
... refresh_pattern . 0 20% 1440
... """

Creating the recipe::

>>> from iw.recipe.squid import Recipe
>>> recipe = Recipe(buildout, name, options)

Test that zope conf is good::

>>> recipe.zope_confs
[{'zope_host': '127.0.0.1', 'zope_path': 'mysite', 'host_name': 'www.mysite.com', 'zope_port': '8080'}]
>>> recipe.options['squid_visible_hostname']
'www.mysite.com'
>>> recipe.options['front_https']
'0'
>>> recipe.options['squid_admin_email']
'webmaster@www.mysite.com'
>>> recipe.options['squid_version']
'2.6'
>>> recipe.options['binary_location']
'.../bin'
>>> paths = recipe.install()
Check the created files:
>>> paths.sort()
>>> paths
['...squid/tests/bin/squidctl', '...squid/tests/squid/apache/vhost_www.mysite.com_80.conf', '...squid/tests/squid/etc/iRedirector.py', '...squid/tests/squid/etc/squid.conf', '...squid/tests/squid/etc/squidAcl.py', '...squid/tests/squid/etc/squidRewriteRules.py', '...squid/tests/squid/etc/squid_logrotate.conf']
The default generated squid.conf:
>>> cfg = os.path.join(recipe.options['prefix'], 'etc', 'squid.conf')
>>> print open(cfg).read()
# squid configuration file
<BLANKLINE>
# BASIC CONFIGURATION
# ------------------------------------------------------------------------------
# TAG: visible_hostname
# If you want to present a special hostname in error messages, etc,
# define this. Otherwise, the return value of gethostname()
# will be used. If you have multiple caches in a cluster and
# get errors about IP-forwarding you must set them to have individual
# names with this setting.
visible_hostname www.mysite.com
<BLANKLINE>
cache_effective_user proxy
cache_effective_group proxy
<BLANKLINE>
# port on which to listen
<BLANKLINE>
http_port 3128 vhost defaultsite=www.mysite.com
<BLANKLINE>
<BLANKLINE>
cache_dir ufs .../squid/cache 1000 16 256
cache_mgr webmaster@www.mysite.com
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
# LOGS
# ------------------------------------------------------------------------------
log_icp_queries off
cache_access_log .../squid/log/access.log
cache_log .../squid/log/cache.log
cache_store_log .../squid/log/store.log
# emulate_httpd_log off
<BLANKLINE>
<BLANKLINE>
# RESOURCES
# ------------------------------------------------------------------------------
# amount of memory used for caching recently accessed objects - defaults to 8 MB
cache_mem 64 MB
maximum_object_size 10 MB # max cached object size
maximum_object_size_in_memory 300 KB # max cached-in-memory object size
<BLANKLINE>
<BLANKLINE>
# ACCESS CONTROL
# ------------------------------------------------------------------------------
<BLANKLINE>
# Basic ACLs
acl all src 0.0.0.0/0.0.0.0
acl localhost src 127.0.0.1/32
acl ssl_ports port 443 563
acl safe_ports port 80 443
<BLANKLINE>
<BLANKLINE>
acl zope_servers src 127.0.0.1
#acl zope_servers src 127.0.0.1
<BLANKLINE>
acl manager proto cache_object
acl connect method connect
<BLANKLINE>
# Assumes apache rewrite rule looks like this:
# RewriteRule ^/(.*)/$ http://127.0.0.1:3128/http/%{SERVER_NAME}/80/$1 [L,P]
<BLANKLINE>
acl accelerated_protocols proto http
acl accelerated_hosts dst 127.0.0.0/8
acl accelerated_ports myport 3128
acl accelerated_urls urlpath_regex __original_url__
acl accelerated_urls urlpath_regex __zope_cache_key__.*__cache_url__
<BLANKLINE>
<BLANKLINE>
http_access allow accelerated_hosts
http_access allow accelerated_ports
http_access allow accelerated_urls
http_access allow accelerated_protocols
<BLANKLINE>
always_direct allow accelerated_hosts
always_direct allow accelerated_ports
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
# Purge access - zope servers can purge but nobody else
acl purge method PURGE
http_access allow zope_servers purge
http_access deny purge
<BLANKLINE>
# Reply access
# http_reply_access allow all
<BLANKLINE>
# Cache manager setup - cache manager can only connect from localhost
# only allow cache manager access from localhost
http_access allow manager localhost
http_access deny manager
# deny connect to other than ssl ports
http_access deny connect !ssl_ports
<BLANKLINE>
# ICP access - anybody can access icp methods
icp_access allow localhost zope_servers
<BLANKLINE>
# And finally deny all other access to this proxy
http_access deny all
<BLANKLINE>
<BLANKLINE>
# CACHE PEERS
# ------------------------------------------------------------------------------
<BLANKLINE>
# CONFIGURE THE CACHE PEERS. FIRST PORT IS THE HTTP PORT, SECOND PORT
# IS THE ICP PORT. REMEMBER TO ENABLE 'icp-server' ON YOUR 'zope.conf'
# LISTENING ON THE ICP PORT YOU USE HERE.
# acl in_backendpool dstdomain backendpool
# cache_peer 127.0.0.1 parent 8080 9090 no-digest no-netdb-exchange
# cache_peer 192.168.0.3 parent 8081 9091 no-digest no-netdb-exchange
<BLANKLINE>
# cache_peer_access 127.0.0.1 allow in_backendpool
# cache_peer_access 127.0.0.1 deny all
<BLANKLINE>
# cache_peer_access 192.168.0.3 allow in_backendpool
# cache_peer_access 192.168.0.3 deny all
<BLANKLINE>
# IF YOU NEED TO FORWARD REQUESTS TO HOSTS NOT IN THE POOL THIS IS
# WHERE YOU ALLOW THE TARGET DOMAINS
# acl local_servers dstdomain some.mysite.com other.mysite.com
# always_direct allow local_servers
<BLANKLINE>
# THE FOLLOWING DIRECTIVE IS NEEDED TO MAKE 'backendpool' RESOLVE TO
# THE POOL OF CACHE PEERS.
# never_direct allow all
# icp_access allow all
<BLANKLINE>
# PROXY ON, NEEDED TO MAKE CACHE PEERS INTERCOMMUNICATE
# httpd_accel_with_proxy on
<BLANKLINE>
<BLANKLINE>
# REDIRECTOR PROGRAM
# ------------------------------------------------------------------------------
<BLANKLINE>
<BLANKLINE>
redirect_program .../squid/etc/iRedirector.py
url_rewrite_children 1
url_rewrite_concurrency 20
url_rewrite_host_header off
<BLANKLINE>
<BLANKLINE>
# SPECIFY WHAT REQUESTS SQUID SHOULD CACHE
# ------------------------------------------------------------------------------
<BLANKLINE>
# Control what squid caches. We want to have squid handle content that is not
# personalized and that does not require any kind of authorization.
# 1) Always cache static content in squid
<BLANKLINE>
acl static_content urlpath_regex -i \.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|css|js|vsd|doc|ppt|pps|xls|pdf|mp3|mp4|m4a|ogg|mov|avi|wmv|sxw|zip|gz|bz2|tgz|tar|rar|odc|odb|odf|odg|odi|odp|ods|odt|sxc|sxd|sxi|sxw|dmg|torrent|deb|msi|iso|rpm)$
no_cache allow static_content
<BLANKLINE>
# 2) (OPTIONAL) Prevent squid from caching an item that is the result of a POST
<BLANKLINE>
acl post_requests method POST
no_cache deny post_requests
<BLANKLINE>
# 3) (OPTIONAL) Prevent squid from caching items with items in the query string
# If this is uncommented, squid will treat a url with 2 different query strings
# as 2 different urls when caching.
<BLANKLINE>
# XXX: where did this example go?
<BLANKLINE>
<BLANKLINE>
acl zope_key_caching urlpath_regex d41d8cd98f00b204e9800998ecf8427e
no_cache allow zope_key_caching
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
# 4) Prevent squid from caching requests from authenticated users or conditional
# GETs with an If-None-Match header (since squid doesn't know about ETags)
# We use an external python method to check these conditions and pass in the
# value of the __ac cookie (two different ways to allow for different cookie
# delimiters), the HTTP Authorization header, and the If-None-Match header.
# Squid caches the results of the external python method, so for debugging, set
# the options ttl=0 negative_ttl=0 so you can see what is going on
<BLANKLINE>
# external_acl_type is_cacheable_type children=20 ttl=0 negative_ttl=0 %{Cookie:__ac} %{Cookie:;__ac} %{Authorization} %{If-None-Match} .../squid/etc/squidAcl.py
<BLANKLINE>
external_acl_type is_cacheable_type protocol=2.5 children=20 %{Cookie:__ac} %{Cookie:;__ac} %{Authorization} %{If-None-Match} .../squid/etc/squidAcl.py
acl is_cacheable external is_cacheable_type
no_cache allow is_cacheable
<BLANKLINE>
collapsed_forwarding on
<BLANKLINE>
# Explicitly disallow squid from handling anything else
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
<BLANKLINE>
no_cache deny all
<BLANKLINE>
<BLANKLINE>
# SPECIFY EFFECTS OF A BROWSER REFRESH
# ------------------------------------------------------------------------------
<BLANKLINE>
# RELOAD_INTO_IMS CAUSES WEIRD SQUID BEHAVIOR - IT APPEARS TO CAUSE FILES WITH
# INAPPROPRIATE HEADERS TO END UP IN THE CACHE, AND AS A RESULT BROWSERS END
# UP MAKING LOTS OF EXTRA (CONDITIONAL) REQUESTS WHEN THEY WOULD OTHERWISE MAKE
# NO REQUESTS. DO NOT USE!
<BLANKLINE>
# Tell squid how to handle expiration times for content with no explicit expiration
# Assume static content is fresh for at least an hour and at most a day
#refresh_pattern -i \.(jpg|jpeg|gif|png|tiff|tif|svg|swf|ico|css|js|vsd|doc|ppt|pps|xls|pdf|mp3|mp4|m4a|ogg|mov|avi|wmv|sxw|zip|gz|bz2|tar|rar|odc|odb|odf|odg|odi|odp|ods|odt|sxc|sxd|sxi|sxw|dmg|torrent|deb|msi|iso|rpm)$ 60 50% 1440 reload-into-ims
#refresh_pattern . 0 20% 1440
<BLANKLINE>
# Change force-refresh requests into conditional gets using if-modified-since
#reload_into_ims on
<BLANKLINE>
# DEBUGGING
# ------------------------------------------------------------------------------
# debug_options ALL,1 33,2
# use this for debugging acls
# debug_options ALL,8
<BLANKLINE>
<BLANKLINE>
# MISCELLANEOUS
# ------------------------------------------------------------------------------
# have squid handle all requests with ranges
# range_offset_limit -1
<BLANKLINE>
# amount of time squid waits for existing requests to be serviced before shutting down
shutdown_lifetime 1 seconds
<BLANKLINE>
# allow squid to process multiple requests simultaneously if client is pipelining
pipeline_prefetch on
<BLANKLINE>
# allow white spaces to be included in URLs
uri_whitespace allow
<BLANKLINE>
<BLANKLINE>
# OTHER PARAMETERS THAT MAY BE OF INTEREST
# ------------------------------------------------------------------------------
<BLANKLINE>
# logfile_rotate 0
# reload_into_ims off
# error_directory /usr/share/squid/errors/
<BLANKLINE>
pid_filename .../squid/var/squid.pid
<BLANKLINE>
<BLANKLINE>
refresh_pattern . 0 20% 1440
<BLANKLINE>
The default generated squidRewriteRules.py:
>>> cfg = os.path.join(recipe.options['prefix'], 'etc', 'squidRewriteRules.py')
>>> print open(cfg).read()
#!...
rewrites = (
(r'http://[^/]+/([^/]+)/([^/]+)/([^/]+)/([^/]+)/([^/]+)/(.*)/__original_url__/(.*)', r'http://\1:\2/VirtualHostBase/\3/\4:\5/\6/VirtualHostRoot/\7', 'P,L'),
(r'http://[^/]+/([^/]+)/([^/]+)/([^/]+)/([^/]+)/([^/]+)/(.*)/__zope_cache_key__/(.*)/__cache_url__/(.*)', r'http://\1:\2/VirtualHostBase/\3/\4:\5/\6/VirtualHostRoot/\7/\8', 'P,L'),
<BLANKLINE>
)
...
The default generated apache configuration:
>>> cfg = os.path.join(recipe.options['prefix'], 'apache', 'vhost_www.mysite.com_80.conf')
>>> print open(cfg).read()
NameVirtualHost *:80
<VirtualHost *:80>
ServerName www.mysite.com
<BLANKLINE>
<Proxy http://127.0.0.1:3128>
Allow from all
</Proxy>
<BLANKLINE>
<BLANKLINE>
RewriteEngine On
RewriteLog .../squid/log/rewrite_www.mysite.com.log
RewriteLogLevel 0
<BLANKLINE>
CustomLog .../squid/log/access_www.mysite.com.log common
ErrorLog .../squid/log/error_www.mysite.com.log
<BLANKLINE>
RewriteRule ^(.*)$ - [E=BACKEND_LOCATION:127.0.0.1]
RewriteRule ^(.*)$ - [E=BACKEND_PORT:8080]
RewriteRule ^(.*)$ - [E=BACKEND_PATH:mysite]
<BLANKLINE>
<BLANKLINE>
RewriteRule ^/(.*)/$ http://127.0.0.1:3128/%{ENV:BACKEND_LOCATION}/%{ENV:BACKEND_PORT}/http/%{SERVER_NAME}/80/%{ENV:BACKEND_PATH}/__original_url__/$1 [L,P]
RewriteRule ^/(.*)$ http://127.0.0.1:3128/%{ENV:BACKEND_LOCATION}/%{ENV:BACKEND_PORT}/http/%{SERVER_NAME}/80/%{ENV:BACKEND_PATH}/__original_url__/$1 [L,P]
<BLANKLINE>
<BLANKLINE>
</VirtualHost>
<BLANKLINE>
The bin/squidctl file:
>>> f = open(os.path.join(recipe.options['binary_location'],'squidctl'))
>>> print f.read()
#!/bin/sh
...
DAEMON=/usr/sbin/squid
CONFIG=.../squid/etc/squid.conf
CACHE_DIR=.../squid/cache
...
etc/squid_logrotate.conf
>>> cfg = os.path.join(recipe.options['prefix'], 'etc', 'squid_logrotate.conf')
>>> print open(cfg).read()
/.../squid/var/*.log {
weekly
compress
delaycompress
maxage 730
rotate 104
size=+4096k
notifempty
missingok
create 740 proxy proxy
postrotate
.../bin/squidctl rotate
endscript
}
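This file is only generated; wiring it into the system's logrotate is left to you. A possible (assumed, distribution-dependent) way to do it on a Debian-like box is:

ln -s .../parts/squid/etc/squid_logrotate.conf /etc/logrotate.d/squid
logrotate -d /etc/logrotate.d/squid   # dry run, only reports what would be rotated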
More options
Give more options to the recipe:
>>> options = {'url': 'mypackage.tgz', #url where we download squid src
...            'squid_owner': owner, #owner of squid process
...            'squid_group' : group, #group of squid process
...            'squid_port' : '3128', #listen port of proxy
...            'squid_version' : '2.5',
...            'squid_localisation': '127.0.0.1', #host or ip that apache use to request apache
...
...            'squid_admin_email' : 'myemail@mycompany.com', #name appear in error message
...            'squid_cache_size_mb' : '1000', #total cache in disk
...            'squid_visible_hostname' : 'mysite', #public name of your site, appear in error message
...            'front_https': '1', # does front server (apache, iis)
...                                # serve https url O by default
...            'front_http':'1', # does front server (apache, iis) serve http url
...            'bind_apache_http':'81', # change the default binding port of apache
...            'debug_redirector':'1', #debug iRedirector 0 by default
...            'debug_squid_acl' : '0', #debug squidacl 0 by default
...            'debug_squid_rewrite_rules' : '1', #debug squidtrewriterule 0 by default
...            'debug_apache_rewrite_rules' : '9', #debug apache rewrite engine
...            }
Your accelerated hosts (zeo clients or a pound load balancer): the urls to accelerate together with the corresponding zope urls, ports and paths:
>>> options['squid_accelerated_hosts'] = """
... www.mysite.com: 127.0.0.1:8080/mysite
... mysite.com: 127.0.0.1:8080/mysite
... www.mysecondsite.com: 127.0.0.2:9080/mysite2
... mysecondsite.com: 127.0.0.2:9080/mysite2
... """
This part is optional. With a specific configuration we can cache a part of the navigation in the proxy. This is sometimes useful to cache zope pages per group or according to specific cookie rules. This configuration is done through zope_cache_keys. Be warned: CMFSquid will not purge these urls without some intervention on your part. It assumes that the zope_cache_keys are cookies sent by zope, and that you have created a folder in the zodb named after your cache key so that the specific rewrite rules and squid can do their work.
>>> options['zope_cache_key'] = """
... my_key_one
... my_key_two
... my_key_three
... """
Creating the recipe:
>>> from iw.recipe.squid import Recipe
>>> recipe = Recipe(buildout, name, options)
Test that the zope configuration is correct:
>>> recipe.zope_confs
[{'zope_host': '127.0.0.1', 'zope_path': 'mysite', 'host_name': 'www.mysite.com', 'zope_port': '8080'}, {'zope_host': '127.0.0.1', 'zope_path': 'mysite', 'host_name': 'mysite.com', 'zope_port': '8080'}, {'zope_host': '127.0.0.2', 'zope_path': 'mysite2', 'host_name': 'www.mysecondsite.com', 'zope_port': '9080'}, {'zope_host': '127.0.0.2', 'zope_path': 'mysite2', 'host_name': 'mysecondsite.com', 'zope_port': '9080'}]
The zope acl is the set of IPs or hosts configured in squid that are authorized to purge the squid cache.
Test the zope acl:
>>> recipe.options['acl_zope_hosts']
'127.0.0.2 127.0.0.1'
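Only those hosts may purge: from one of them a cached entry can be invalidated with an HTTP PURGE request against the squid port. This is just a sketch; the URL must match the rewritten URL that squid actually cached:

curl -X PURGE http://127.0.0.1:3128/<url-as-cached-by-squid>   # run from a host listed in acl_zope_hosts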
Test the rewrite rules:
>>> recipe.cache_key
['my_key_one', 'my_key_two', 'my_key_three']
Run it:
>>> paths = recipe.install()
Check the created files:
>>> path = recipe.options['prefix']
Check the generated squid.conf file:
>>> cfg = os.path.join(path, 'etc', 'squid.conf')
>>> print open(cfg).read()
# squid configuration file
...
http_port 3128
...
httpd_accel_host virtual
httpd_accel_port 81
httpd_accel_uses_host_header on
...
redirect_children 20
redirect_rewrites_host_header off
...
Check that the iRedirector configuration has been generated:
>>> cfg = os.path.join(path, 'etc', 'iRedirector.py')
>>> print open(cfg).read()
#!... threaded = 0...
>>> cfg = os.path.join(path, 'etc', 'squidAcl.py')
>>> print open(cfg).read()
#!...
Test changing the default apache binding:
>>> cfg = os.path.join(recipe.options['prefix'], 'apache', 'vhost_www.mysite.com_81.conf')
>>> print open(cfg).read()
Listen *:81
NameVirtualHost *:81
...
Change the apache configuration again:
>>> options['bind_apache_http'] = '80'
>>> recipe = Recipe(buildout, name, options)
>>> paths = recipe.install()
>>> cfg = os.path.join(recipe.options['prefix'], 'apache', 'vhost_www.mysite.com_80.conf')
>>> print open(cfg).read()
NameVirtualHost *:80
...
>>> options['bind_apache_http'] = '192.168.2.1:80'
>>> recipe = Recipe(buildout, name, options)
>>> paths = recipe.install()
>>> cfg = os.path.join(recipe.options['prefix'], 'apache', 'vhost_www.mysite.com_80.conf')
>>> print open(cfg).read()
Listen 192.168.2.1:80
NameVirtualHost 192.168.2.1:80
...
Look at the cache key configuration generated in apache:
>>> print open(cfg).read()
Listen 192.168.2.1:80
...
<BLANKLINE>
RewriteRule ^(.*)$ - [E=BACKEND_LOCATION:127.0.0.1]
RewriteRule ^(.*)$ - [E=BACKEND_PORT:8080]
RewriteRule ^(.*)$ - [E=BACKEND_PATH:mysite]
<BLANKLINE>
RewriteRule ^(.*)$ - [E=have_cookie:1]
RewriteCond %{HTTP_COOKIE} my_key_one="([^"]+) [NC]
RewriteRule ^(.*)$ - [E=my_key_one:%1]
#test if have cookie
RewriteCond %{HTTP_COOKIE} !^.*my_key_one.*$ [NC]
RewriteRule ^(.*)$ - [E=have_cookie:0]
RewriteCond %{HTTP_COOKIE} my_key_two="([^"]+) [NC]
RewriteRule ^(.*)$ - [E=my_key_two:%1]
#test if have cookie
RewriteCond %{HTTP_COOKIE} !^.*my_key_two.*$ [NC]
RewriteRule ^(.*)$ - [E=have_cookie:0]
RewriteCond %{HTTP_COOKIE} my_key_three="([^"]+) [NC]
RewriteRule ^(.*)$ - [E=my_key_three:%1]
#test if have cookie
RewriteCond %{HTTP_COOKIE} !^.*my_key_three.*$ [NC]
RewriteRule ^(.*)$ - [E=have_cookie:0]
<BLANKLINE>
RewriteCond %{ENV:have_cookie} 1
RewriteRule ^/(.*)$ http://127.0.0.1:3128/%{ENV:BACKEND_LOCATION}/%{ENV:BACKEND_PORT}/https/%{SERVER_NAME}/80/%{ENV:BACKEND_PATH}/__zope_cache_key__/41d154089fd778d8efbd889dffc18dbd:%{ENV:my_key_one}:%{ENV:my_key_two}:%{ENV:my_key_three}/__cache_url__/$1 [L,P]
<BLANKLINE>
RewriteRule ^/(.*)/$ http://127.0.0.1:3128/%{ENV:BACKEND_LOCATION}/%{ENV:BACKEND_PORT}/https/%{SERVER_NAME}/80/%{ENV:BACKEND_PATH}/__original_url__/$1 [L,P]
RewriteRule ^/(.*)$ http://127.0.0.1:3128/%{ENV:BACKEND_LOCATION}/%{ENV:BACKEND_PORT}/https/%{SERVER_NAME}/80/%{ENV:BACKEND_PATH}/__original_url__/$1 [L,P]
...
Now test the 2.6 configuration:
>>> options['squid_version'] = '2.6'
>>> buildout = {'instance': {'location': test_dir},
...             'buildout': {'directory': test_dir,
...                          'parts-directory': test_dir}}
>>> name = 'squid'
>>> recipe = Recipe(buildout, name, options)
>>> recipe.options['squid_version']
'2.6'
>>> paths = recipe.install()
>>> cfg = os.path.join(path, 'etc', 'squid.conf')
Test that the redirector is threaded:
>>> cfg = os.path.join(path, 'etc', 'iRedirector.py')
>>> print open(cfg).read()
#!... threaded = 1...
>>> cfg = os.path.join(path, 'etc', 'squidAcl.py')
>>> print open(cfg).read()
#!... debug = 0... logfile = ...squid/log...
Change the default installation locations:
>>> options = {'url': 'mypackage.tgz', #url where we download squid src
...            'squid_owner': owner, #owner of squid process
...            'squid_group' : group, #group of squid process
...            'squid_port' : '3128', #listen port of proxy
...            'squid_version' : '2.5',
...            'squid_localisation': '127.0.0.1', #host or ip that apache use to request apache
...            'squid_log_dir' : '/var/log/dir',
...            'squid_config_dir' : '/usr/local/squid/etc',
...            'apache_conf_dir' : '/etc/apache2/conf',
...            'squid_admin_email' : 'myemail@mycompany.com', #name appear in error message
...            'squid_cache_size_mb' : '1000', #total cache in disk
...            'squid_visible_hostname' : 'mysite', #public name of your site, appear in error message
...            'front_https': '1', # does front server (apache, iis)
...                                # serve https url O by default
...            'front_http':'1', # does front server (apache, iis) serve http url
...            'debug_redirector':'1', #debug iRedirector 0 by default
...            'debug_squid_acl' : '0', #debug squidacl 0 by default
...            'debug_squid_rewrite_rules' : '1', #debug squidtrewriterule 0 by default
...
...            }
>>> from iw.recipe.squid import Recipe
>>> options['squid_accelerated_hosts'] = """
... www.mysite.com: 127.0.0.1:8080/mysite
... mysite.com: 127.0.0.1:8080/mysite
... www.mysecondsite.com: 127.0.0.2:9080/mysite2
... mysecondsite.com: 127.0.0.2:9080/mysite2
... """
>>> recipe = Recipe(buildout, name, options)
>>> recipe.options['apache_conf_dir']
'/etc/apache2/conf'
>>> recipe.options['squid_config_dir']
'/usr/local/squid/etc'
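In a real buildout.cfg the same relocation would be expressed roughly like this; the directories are just example values:

# sketch only: relocate the generated files outside the buildout tree
[squid]
recipe = iw.recipe.squid
squid_accelerated_hosts =
    www.mysite.com: 127.0.0.1:8080/mysite
squid_log_dir = /var/log/dir
squid_config_dir = /usr/local/squid/etc
apache_conf_dir = /etc/apache2/conf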