Channel: Nginx Forum - Nginx Mailing List - English

[ANN] ngx_openresty mainline version 1.5.8.1 released (no replies)

Hello folks!

I am happy to announce that the new mainline version of ngx_openresty,
1.5.8.1, is now released:

http://openresty.org/#Download

This is the first OpenResty release with the latest nginx 1.5.8 core
bundled. As usual, many components have been updated, reflecting the
ongoing active development of this project.

Special thanks go to all our contributors for making this happen!

This release still reflects our current focus on stability and
performance improvements. Getting things right and fast is always our
first priority. More speedups will come from both the ngx_lua module
and LuaJIT v2.1 soon, but we may also add more new features in the
near future to make more users happy :)

Below is the complete change log for this release, as compared to the
last (mainline) release, 1.4.3.9:

* change: now we default to LuaJIT instead of the standard Lua 5.1
interpreter. the "--with-luajit" option for "./configure" is now
the default. To use the standard Lua 5.1 interpreter, specify
the "--with-lua51" option explicitly. thanks smallfish for the
suggestion.
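
The two build modes now look like this (a sketch of the configure invocations; run from the unpacked source tree):

```sh
# default build now bundles LuaJIT
./configure

# opt back into the standard Lua 5.1 interpreter explicitly
./configure --with-lua51
```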

* bugfix: Nginx's built-in resolver did not accept fully qualified
domain names (with a trailing dot).

* optimize: shortened the "Server" response header string
"ngx_openresty" to "openresty".

* upgraded the Nginx core to 1.5.8.

* see the changes here: http://nginx.org/en/CHANGES

* upgraded LuaJIT to v2.1-20140109.

* bugfix: fixed ABC (Array Bounds Check) elimination. (Mike
Pall)

* bugfix: fixed MinGW build. (Mike Pall)

* bugfix: x86: fixed stack slot counting for IR_CALLA (affects
table.new). (Mike Pall) this could lead to random table
field missing issues in LuaRestyMySQLLibrary on i386. thanks
lhmwzy for the report.

* bugfix: fixed compilation of "string.byte(s, nil, n)". (Mike
Pall)

* bugfix: MIPS: Cosmetic fix for interpreter. (Mike Pall)

* upgraded LuaNginxModule to 0.9.4.

* feature: allow use of ngx.exit() in the context of
header_filter_by_lua* to perform a "filter finalization".
but in this context ngx.exit() is an asynchronous operation
and returns immediately.
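
A sketch of how this might be used (the header name and status code here are illustrative, not from the release notes):

```nginx
location /api {
    proxy_pass http://backend;
    header_filter_by_lua '
        -- finalize the response early based on an upstream header;
        -- in this context ngx.exit() is asynchronous and returns immediately
        if ngx.header["X-Abort"] then
            return ngx.exit(403)
        end
    ';
}
```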

* feature: added the optional 5th argument, "res_table", to
ngx.re.match() which is the user-supplied result table for
the resulting captures. This feature can give 12%+ speedup
for simple ngx.re.match() calls with 4 submatch captures.
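
A sketch of the new argument (assuming ngx_lua 0.9.4+, inside some *_by_lua context; the subject and pattern are illustrative):

```lua
-- reuse one result table across calls to avoid a per-call table allocation
local res_tab = {}
local m = ngx.re.match("order-1234", [[(\d+)]], "jo", nil, res_tab)
-- on a match, the captures are written into the user-supplied res_tab
```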

* feature: ngx.escape_uri() and ngx.unescape_uri() now accept
a "nil" argument, which is equivalent to an empty string.

* feature: added new pure C API,
"ngx_http_lua_ffi_max_regex_cache_size", for FFI-based
implementations like LuaRestyCoreLibrary.

* change: ngx.decode_base64() now only accepts string
arguments.

* bugfix: coroutines might incorrectly enter the "dead" state
even right after creation with coroutine.create(). thanks
James Hurst for the report.

* bugfix: segmentation fault might happen when aborting a
"light thread" pending on downstream cosocket writes. thanks
Aviram Cohen for the report.

* bugfix: we might try sending the response header again in
ngx.exit() when the header was already sent.

* bugfix: subrequests initiated by ngx.location.capture()
might send their own response headers more than once. this
issue might also lead to the alert message "header already
sent" and request aborts when nginx 1.5.4+ was used.

* bugfix: fixed incompatibilities with Nginx 1.5.8, which changed
the resolver API in the Nginx core.

* bugfix: fixed a compilation warning when PCRE is disabled in
the build. thanks Jay for the patch.

* bugfix: we did not set the shortcut fields in
"r->headers_in" for request headers in our subrequests
created by ngx.location.capture*(), which might cause
interoperability issues with other Nginx modules. thanks
Aviram Cohen for the original patch.

* optimize: we no longer clear the "lua_State" pointers for
dead "light threads", so that their coroutine context
structs can be reused by other "light threads" and user
coroutines. this leads to a smaller memory footprint.

* doc: documented that the coroutine.* API can be used in
init_by_lua* since 0.9.2. thanks Ruoshan Huang for the
reminder.

* upgraded LuaRestyMemcachedLibrary to 0.13.

* optimize: saved one cosocket receive() call in the get() and
gets() methods.

* bugfix: the Memcached connection might enter a bad state
when read timeout happens because LuaNginxModule's cosocket
reading calls no longer automatically close the connection
in this case. thanks Dane Knecht for the report.

* upgraded LuaRestyRedisLibrary to 0.18.

* optimize: eliminated one (potentially expensive)
"string.sub()" call in the Redis reply parser.

* bugfix: the Redis connection might enter a bad state when
read timeout happens because LuaNginxModule's cosocket
reading calls no longer automatically close the connection
in this case.

* upgraded LuaRestyLockLibrary to 0.02.

* bugfix: the lock() method accepted nil keys silently.

* upgraded LuaRestyDNSLibrary to 0.11.

* bugfix: avoided use of the module() built-in to define the
Lua module.

* bugfix: we did not reject bad domain names with a leading
dot. thanks Dane Knecht for the report.

* bugfix: error handling fixes in the query and tcp_query
methods.

* upgraded LuaRestyCoreLibrary to 0.0.3.

* feature: updated to comply with LuaNginxModule 0.9.4.

* bugfix: resty.core.regex: the ngx.re API did not honour the
lua_regex_cache_max_entries configuration directive.

* optimize: ngx.re.gsub used to pass the literal type string
"const char *" to ffi.cast(), which is expensive in
interpreter mode. now we use the ctype object directly,
which leads to an 11% speedup in interpreter mode.
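
The pattern behind this optimization (a general LuaJIT FFI sketch, not the actual module code):

```lua
local ffi = require "ffi"

-- parsing a type string on every cast is relatively costly in the interpreter:
--   ffi.cast("const char *", s)

-- constructing the ctype object once and reusing it avoids the repeated parse:
local const_char_ptr_t = ffi.typeof("const char *")
local p = ffi.cast(const_char_ptr_t, "hello")
```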

* upgraded EchoNginxModule to 0.51.

* bugfix: for Nginx 1.2.6+ and 1.3.9+, the main request
reference count might go out of sync when Nginx's request
body reader returned status code 300+. thanks Hungpu DU for
the report.

* bugfix: echo_request_body truncated the response body
prematurely when the request body was in memory (because the
request reader sets "last_buf" in this case). thanks Hungpu
DU for the original patch.

* bugfix: using $echo_timer_elapsed variable alone in the
configuration caused segmentation faults. thanks Hungpu DU
for the report.

* doc: typo fix in the echo_foreach_split sample code. thanks
Hungpu DU for the report.

* upgraded DrizzleNginxModule to 0.1.7.

* bugfix: fixed most of the warnings and errors from the
Microsoft Visual C++ compiler, reported by Edwin Cleton.

* upgraded HeadersMoreNginxModule to 0.25.

* bugfix: fixed a warning from the Microsoft C compiler.
thanks Edwin Cleton for the report.

* doc: documented the limitation that we cannot remove the
"Connection" response header with this module. thanks
Michael Orlando for bringing this up.

* upgraded SetMiscNginxModule to 0.24.

* bugfix: fixed the warnings from the Microsoft C compiler.
thanks Edwin Cleton for the report.

* upgraded SrcacheNginxModule to 0.25.

* feature: now the value specified in srcache_store_skip is
evaluated and tested again right after the end of the
response body data stream is seen. thanks Eldar Zaitov for
the patch.

The HTML version of the change log, with lots of helpful hyperlinks,
can be browsed here:

http://openresty.org/#ChangeLog1005008

OpenResty (aka. ngx_openresty) is a full-fledged web application
server that bundles the standard Nginx core, lots of 3rd-party Nginx
modules and Lua libraries, as well as most of their external
dependencies. See OpenResty's homepage for details:

http://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and
ensured that all the components (including the Nginx core) play well
together. The latest test report can always be found here:

http://qa.openresty.org

Enjoy and happy new year!

Best regards,
-agentzh

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

fastcgi_cache and 304 response (no replies)

Hi,

I use nginx + php-fpm (via FastCGI), and the responses I need from the PHP server are put into a cache. I have one thought: it could be better to send cached pages to clients with a 304 code instead of 200.
To do that, we must know the time when the response was cached (something like a variable) and send a 304 response for as long as the page remains in the cache (a time we do know).

Reading the source code, I have not found any appropriate variable.

Any ideas?



403 error for one of my server blocks (1 reply)

Hi,

I installed nginx on an Ubuntu server and then followed this tutorial (which I'll quote from) to set up server blocks: https://digitalocean.com/community/articles/how-to-configure-single-and-multiple-wordpress-site-settings-with-nginx (I don't do any of the WordPress config in that article). It uses a common.conf that the server blocks inherit from.

For one of my server blocks, I set up a basic index.html page and set my domain name as the server name in the server block file.

In the directory for my other server block, I set up a node app. That's the one where I'm getting the 403 error.

Anyone know why this might be happening and how I might debug and/or fix it?


server block

server {
# URL: Correct way to redirect URL's
server_name demo.com;
rewrite ^/(.*)$ http://www.demo.com/$1 permanent;
}
server {
server_name www.demo.com;
root /home/demouser/sitedir;
access_log /var/log/nginx/www.demo.com.access.log;
error_log /var/log/nginx/www.demo.com.error.log;
include global/common.conf;
}



common.conf

# Global configuration file.
# ESSENTIAL : Configure Nginx Listening Port
listen 80;
# ESSENTIAL : Default file to serve. If the first file isn't found, the next is tried.
index index.php index.html index.htm;
# ESSENTIAL : no favicon logs
location = /favicon.ico {
log_not_found off;
access_log off;
}
# ESSENTIAL : robots.txt
location = /robots.txt {
allow all;
log_not_found off;
access_log off;
}
# ESSENTIAL : Configure 404 Pages
error_page 404 /404.html;
# ESSENTIAL : Configure 50x Pages
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
# SECURITY : Deny all attempts to access hidden files .abcde
location ~ /\. {
deny all;
}
# PERFORMANCE : Set expires headers for static files and turn off logging.
location ~* ^.+\.(js|css|swf|xml|txt|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
access_log off; log_not_found off; expires 30d;
}

username mapping for imap/pop (no replies)

I need to map from username to login name: I wish to map a bare username to username@domain.org for Gmail and to windomain\username for an Exchange server. Is there a way for me to build in hooks to change the username before connecting? I can do that in my mailauth.pm module but don't know how to return the updated username.
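
If this is nginx's mail proxy with an auth_http backend, the standard auth protocol lets the auth server hand back a rewritten login in its response headers; a sketch of such a response (values are illustrative):

```
Auth-Status: OK
Auth-Server: 127.0.0.1
Auth-Port: 143
Auth-User: username@domain.org
Auth-Pass: secret
```

The Auth-User/Auth-Pass headers, when present, override the credentials nginx uses to log in to the backend.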

nginx port for socket.io (1 reply)

I have a node application that uses websockets, and I'm using a custom config file like the one below. However, when I post to the application, the post isn't appearing on the client side of the application. Since it's using websockets to communicate between client and server, I'm wondering if I have a problem with the port numbers. You can see in my config that the server is listening on 80, but proxy_pass is set to localhost:3000. Should these numbers be the same? If so, can I set nginx to listen on 3000?

/etc/nginx/conf.d/domainame.com.conf

server {
listen 80;

server_name your-domain.com;

location / {
proxy_pass http://localhost:3000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
}
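
The listen port and the proxy_pass port do not need to match: nginx accepts on 80 and forwards to the app on 3000. For WebSocket proxying, the commonly documented pattern also derives the Connection header from $http_upgrade, so that plain requests keep a normal connection while upgrade requests get "upgrade" (a sketch; the map block goes at http level):

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name your-domain.com;
    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
    }
}
```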

fastcgi_cache_path empty (no replies)

I wanted to try fastcgi_cache on my nginx 1.5.8 as shown here
http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php

In nginx conf, http section, I added:

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m
max_size=1000m inactive=60m;

In server section:
set $cache_uri $request_uri;

# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}

# Don't cache uris containing the following segments
if ($request_uri ~*
"(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)")
{
set $cache_uri 'null cache';
}

# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~*
"comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
include fastcgi.conf;
fastcgi_pass unix:/var/run/php5-fpm.sock;


##
# Fastcgi cache
##
set $skip_cache 1;
if ($cache_uri != "null cache") {
add_header X-Cache-Debug "$cache_uri $cookie_nocache
$arg_nocache$arg_comment $http_pragma $http_authorization";
set $skip_cache 0;
}
fastcgi_cache_bypass $skip_cache;
fastcgi_cache_key
$scheme$host$request_uri$request_method;
fastcgi_cache_valid any 8m;
fastcgi_cache_bypass $http_pragma;
fastcgi_cache_use_stale updating error timeout
invalid_header http_500;

}

I chowned /var/cache/nginx to the www-data user (and group) and chmodded it to
775. I restarted nginx, but the folder is always empty. Is this normal? How can
I test whether fastcgi_cache is working?
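
One detail worth checking (an observation from the quoted config, not a confirmed diagnosis): the http block declares the zone with fastcgi_cache_path, but no fastcgi_cache directive appears to activate that zone in the PHP location, so nothing would ever be stored. A sketch, with a debug header for testing:

```nginx
location ~ \.php$ {
    # ... existing fastcgi_* settings ...
    fastcgi_cache microcache;   # enable the zone declared by fastcgi_cache_path
    fastcgi_cache_key $scheme$host$request_uri$request_method;
    fastcgi_cache_valid 200 8m;
    # HIT/MISS/BYPASS in the response makes cache behavior easy to verify
    add_header X-Cache $upstream_cache_status;
}
```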

Thanks in advance

Bounty for #416 (no replies)

Like the ticket creator says in the description, always serving cached
versions of pages would be extremely cool, so I wanted to let people know I
just offered a $500 bounty for http://trac.nginx.org/nginx/ticket/416 at
Bountysource.

https://www.bountysource.com/issues/972735-proxy_cache_use_stale-run-updating-in-new-thread-and-serve-stale-data-to-all

Websocket tunnel broken with existing SSL session (1 reply)

We've been debugging this issue for 3 days now and even though we have a
temporary fix, we're still puzzled about it.

There is an iOS app, which opens a websocket connection to our server over
SSL. Our server runs SmartOS and has nginx 1.5.0 (also happens on 1.4.1)
proxying to a backend server running in NodeJS.

To reproduce, I start my app and a websocket connection is established and
works well; then I put the app to sleep for a while until nginx kills the
connection. When I reopen the app, the following happens:

1) App notices that the connection is dead and reconnects.
2) Behind the scenes, iOS reuses the SSL session from before and quickly
opens a new socket.
3) An HTTP upgrade request and response flow across with no problems.
4) With a successful websocket established on both sides, the client
starts sending frames. However, none of them is delivered to the backend
server.
5) After a minute, nginx kills the connection even though the client is
sending periodic pings.
6) Back to 1.

I haven't managed to reduce the test case or reproduce it in another
environment yet. This only happens when using SSL. In wireshark I see the
websocket frames being sent from the iPhone client and TCP acked properly.

What currently fixes the problem is to disable SSL session reuse in nginx.
Then every websocket connection works like it should.

Here is the config before the fix:
###
server {
### Server port and name ###
listen 80 default_server;
listen 443 default_server ssl;
server_name test.mydomain.com;

### SSL cert files ###
ssl_certificate /opt/local/etc/nginx/ssl/certificate.crt;
ssl_certificate_key /opt/local/etc/nginx/ssl/certificate.key;

### SSL specific settings ###
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;

keepalive_timeout 60;
client_max_body_size 10m;

location / {
access_log off;
proxy_pass http://localhost:3003;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

# WebSocket support (nginx 1.4)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
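
Separately from the session-reuse question, the one-minute disconnect in step 5 matches nginx's default proxy_read_timeout of 60s; long-lived WebSocket proxies usually raise it (a sketch; the one-hour value is illustrative):

```nginx
location / {
    proxy_pass http://localhost:3003;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # keep idle WebSocket connections open longer than the 60s default
    proxy_read_timeout 1h;
    proxy_send_timeout 1h;
}
```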


Best regards,
Eirikur Nilsson

Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java (no replies)

Hi!

Nginx-Clojure Module Release 0.1.0 came out several days ago!

It is a module for embedding Clojure or Java programs, typically Ring-based handlers.

It is an open source project hosted on GitHub: https://github.com/xfeep/nginx-clojure

With it we can develop high-performance Clojure/Java web apps on Nginx without any Java web server.

By the way, the results of a simple performance test with Nginx-Clojure are encouraging; more details can be found at:

https://github.com/ptaoussanis/clojure-web-server-benchmarks

src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings (5 replies)


In ngx_http_spdy_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t limit):

src/http/ngx_http_spdy_filter_module.c(682) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data

src/http/ngx_http_spdy_filter_module.c(701) : warning C4244: '-=' : conversion from 'off_t' to 'size_t', possible loss of data

src/http/ngx_http_spdy_filter_module.c(715) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data

src/http/ngx_http_spdy_filter_module.c(751) : warning C4244: '=' : conversion from 'off_t' to 'size_t', possible loss of data

src/http/ngx_http_spdy_filter_module.c(757) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data

Add proxy_next_upstream_action to distinguish different network actions (no replies)

Hi all:

The directive "proxy_next_upstream error timeout" takes effect on three
network actions: connect, send, and receive. In practice, we really want
to try the next upstream depending on which action we are in. For example,
I do not want to try the next upstream if an error or timeout occurs while
receiving the response from the upstream; otherwise it may duplicate my request.

The proxy_next_upstream_action directive is introduced to address this
problem. It takes one or more parameters (conn, send, recv) indicating in
which actions we should try the next upstream.

Usage:
proxy_next_upstream error timeout;
proxy_next_upstream_action conn;
Try the next upstream on errors or timeouts during connect.

Any suggestions?

imap connection to gmail closes connection (3 replies)

I am running nginx 1.1.19 on an Ubuntu 12.04.4 64-bit server.

I have nginx configured to accept connections on port 143 and forward them to 127.0.0.1:143, where stunnel carries them on to imap.gmail.com:993. If I talk directly to 127.0.0.1:143 (to stunnel), it works. If I talk to nginx, it authenticates, logs the correct username, target IP, and port, gets the capability list, registers a successful login to the remote (Gmail) IMAP server, and then closes the connection immediately. The following is a transcript of the telnet session:

telnet nginx:143
* OK IMAP4 ready
a1 LOGIN user@example.com password
* CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH
a1 OK user@example.com first_name Last_name authenticated (Success)
Connection closed by foreign host.

My nginx error.log shows the following:
*5 upstream sent invalid response: "* CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH
a1 OK test@example.com Test User authenticated (Success)" while reading response from upstream,...

It appears not to like Google's CAPABILITY line. Is it too long? Any suggestions?

Other connections through nginx/stunnel to Exchange work just fine.

Fast CGI module "multipart/mixed" problem (it only accepts 1 "Content-Type" header) (no replies)

It seems that when my FastCGI server responds to nginx with "Status: 200 OK\r\nContent-Type: multipart/mixed;boundary=whatever\r\n\r\nboundary=whatever\r\nContent-Type: image/jpeg\r\n\r\n<BINARY DATA>",

the FastCGI module takes only the second "Content-Type" header and uses it in the initial response with the 200.

The client gets confused when it later sees the boundaries and data.
If I remove the subsequent "Content-Type:" headers, the initial one with the boundary indicator is sent; however, the client then does not know how to interpret the <BINARY DATA>.

Is it possible to rewrite links on a webpage into an other format? (1 reply)

Hello Everyone! Is it possible to rewrite links on a webpage with the format "domain/file" into the format "file.domain"? Please can anyone help?
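
If "rewrite links on a webpage" means rewriting the HTML content itself (rather than mapping incoming request URLs), nginx's sub_filter module can do simple substitutions; a sketch (requires nginx built with the http_sub_module; the hostnames and upstream are illustrative):

```nginx
location / {
    proxy_pass http://backend;  # hypothetical upstream serving the pages
    # turn "domain/file"-style links into "file.domain"-style links
    sub_filter 'href="http://example.com/blog' 'href="http://blog.example.com';
    sub_filter_once off;        # replace every occurrence, not just the first
}
```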

question on some simple codes in ngx_buf.c (1 reply)

Hello there,

code snippet in the definition of ngx_chain_add_copy in ngx_buf.c:


ll = chain;

for (cl = *chain; cl; cl = cl->next) {
ll = &cl->next;
}


Why is ll assigned repeatedly? I'm sorry, but I have failed to think of any reason why this is necessary.

I modified the above as follows. Is it OK?


if (*chain) {
for (cl = *chain; cl->next; cl = cl->next) { /* void */ }
ll = &cl->next;
} else {
ll = chain;
}


Thank you very much.

Images Aren't Displaying When Perl Interpreter Is Enabled (1 reply)

I have awstats set up and working with nginx and Perl, but all images return a 404 error. The virtual host config is identical to other websites where images work fine, except for the added Perl section.

I think I know what's happening but I don't know how to fix it: images are being sent to the Perl interpreter instead of being served by nginx. Here's my config:

server {
listen 1.2.3.4:80;
server_name stats.example.com;
rewrite ^ https://$server_name$request_uri? permanent;
}

server {
listen 1.2.3.4:443;
server_name stats.example.com;

access_log /path/to/logs/stats/access.log;

error_log /path/to/logs/stats/error.log;

index awstats.pl index.html index.htm;
client_max_body_size 40M;

ssl on;
ssl_certificate /etc/nginx/ssl/ssl.crt;
ssl_certificate_key /etc/nginx/ssl/private.key;

location / {
root /path/to/the/awstats/wwwr;
index index.html index.htm;
try_files $uri $uri/ /index.html?$uri&$args;
auth_basic "Restricted";
auth_basic_user_file /path/to/the/awstats/htpasswd;
}

# Block Image Hotlinking
location /icon/ {
valid_referers none blocked stats.example.com;
if ($invalid_referer) {
return 403;
}
}

# Dynamic stats.
location ~ \.pl$ {
gzip off;
include /etc/nginx/fastcgi_params;
fastcgi_pass 127.0.0.1:8999;
fastcgi_index index.pl;
fastcgi_param SCRIPT_FILENAME /path/to/the/awstats/wwwroot/$fastcgi_script_name;
}
}
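
One thing that stands out in the quoted config (an observation, not a confirmed fix): root is set only inside location /, so requests matching location /icon/ fall back to the server-wide default document root and 404 there. A sketch, giving that location its own root (the awstats path is inferred from the fastcgi_param line and may need adjusting):

```nginx
location /icon/ {
    # without a root/alias here, icon requests resolve against the
    # compiled-in default root rather than the awstats directory
    root /path/to/the/awstats/wwwroot;
    valid_referers none blocked stats.example.com;
    if ($invalid_referer) {
        return 403;
    }
}
```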

proxy_cache purge details (1 reply)

Hello,

we've got a problem with the proxy_cache feature in nginx. To be more
precise, the problem occurs when the cache loader kicks in and starts
deleting the expired files that are stored on an LVM-striped (non-RAID)
ext4 partition across six huge SSD disks. The purge (sometimes?) takes
ages and completely kills reads from the partition.

That's hardly an nginx issue, but it is why we would like to know if
there's a possibility to force the cache purge so that small amounts of
files get deleted more often, rather than a lot of files being deleted
at once less often.

Also, it would help us to know exactly how (and where) nginx stores the
last-access-time information for each file (for the 'inactive' feature
of the proxy_cache_path directive) when the atime feature is off for
performance reasons. I'm guessing that it needs to store this so that it
knows when to delete the files. It's quite difficult for us to find in
the sources, so if you could point us in the right direction, it would
be awesome!

Thanks,
David


A 503 page gets written to my proxy cache, overwriting the 200 (3 replies)

Hi,

I'm trying to use the proxy cache to store regular pages (200) from my web server so that when the web server goes into maintenance mode and starts returning 503, nginx can still serve the good page out of the cache. It works great for a few minutes, but at some point (5 to 10 minutes in) nginx overwrites the good 200 page in the cache with the bad 503 page and then starts handing out the 503. Looking at my config, I don't understand how a 503 could ever get written to the cache, but it is. And the 200 page was brand new (written 10 minutes before), so it shouldn't be the "inactive" time on the proxy_cache_path setting causing nginx to delete the good file. Can anyone tell me what I'm missing? Here are the relevant pieces of my config:

proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:500m max_size=3000m inactive=120h;
proxy_temp_path /var/www/cache/tmp;
proxy_cache_key "$scheme$host$request_uri";

map $http_cookie $GotSessionCookie {
default "";
"~(?P<sessionid>\bSESS[^;=]+=[^;=]+)" $sessionid;
}

server {
listen 80;
server_name _;

proxy_cache my-cache;

location / {
proxy_pass http://production;
proxy_cache_valid 200 301 302 30m;
proxy_cache_valid 404 1m;
}

# don't cache pages with php's session cookie
proxy_no_cache $cookie_$GotSessionCookie;

# bypass the cache if we get a X-NoCache header
proxy_cache_bypass $http_nocache $cookie_$GotSessionCookie;

proxy_cache_use_stale http_500 http_503 error timeout invalid_header updating;
}

I can't imagine how a 503 would ever get cached given those proxy_cache_valid lines but maybe I don't understand something. Thanks for any ideas!

-Rick

Errors using HttpUseridModule (4 replies)

Hi guys,

I'm using the HttpUseridModule for storing session ids of our users.

We're receiving a lot of errors lately concerning the format of the
userid cookie.

Basically there are two types of errors:

[error] 1581#0: *20523638 client sent invalid userid cookie
"sid="Cvwk2lLYLvhh3gYtDscPAg=="; $Path="/"" while reading response
header from upstream, client: xx.xx.xx.xx, server: , request: "GET
/xxxx/xxxx HTTP/1.1", upstream: "http://xxxxxxxxxxxxxx", host:
"xxxxxxxxxxxxxxxxx"

and

[error] 1582#0: *17018740 client sent too short userid cookie
"sid=Cvwkcept: */*", client: xx.xx.xx.xx, server: xxxxxxxxx, request:
"GET /xxxxxx HTTP/1.0", host: "xxxxxxx", referrer: "http://xxxxxxxxxxx"

And I'm using this configuration for userid

userid on;
userid_name sid;
userid_expires 31d;
userid_path /;


Can you help me?

Thank you in advance,
Gabriel Arrais



Nginx configuration needed to dynamically rewrite a subdirectory to a subdomain (1 reply)

How do I configure the nginx daemon so that blog.xxx.com becomes xxx.com/blog?

Can anyone help me?