Channel: Nginx Forum - Nginx Mailing List - English

In-flight HTTP requests fail during hot configuration reload (SIGHUP) (1 reply)

We recently migrated from HAProxy to Nginx because it supports true zero-downtime configuration reloads. However, our monitoring systems occasionally report 502 and 504 errors during deployments. Looking into this, I have been able to replicate the 502 and 504 errors consistently, as follows. I believe this is an error in how Nginx handles in-flight requests, but wanted to ask the community in case I am missing something obvious.

The Nginx setup is as follows:
* Ubuntu 14.04
* Nginx version 1.9.1
* Configuration for an HTTP listener:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

server {
    listen 8080;

    # pass on the real client's IP
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    access_log /var/log/nginx/access.ws-8080.log combined;

    location / {
        proxy_pass http://server-ws-8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

upstream server-ws-8080 {
    least_conn;
    server 172.17.0.51:8080 max_fails=0;
}

1. Telnet to the Nginx server on the HTTP port it is listening on.

2. Send an HTTP/1.1 request to the upstream server (172.17.0.51):
GET /health HTTP/1.1
Host: localhost
Connection: Keep-Alive

This request succeeds and the response is valid.

3. Start a new HTTP/1.1 request but don't finish it, i.e. send only the following line using telnet:
GET /health HTTP/1.1

4. While that request is effectively in-flight (it is not finished and Nginx is waiting for it to be completed), reconfigure Nginx with a SIGHUP signal. The only change in the config loaded by the SIGHUP is that the upstream server is different, i.e. we intentionally want all new requests to go to the new upstream server.

5. Terminate the old upstream server 172.17.0.51

6. Complete the in-flight HTTP/1.1 request started in point 3 above with:
Host: localhost
Connection: Keep-Alive

7. Nginx will consistently respond with a 502 if the old upstream server rejects the request, or a 504 if there is no response on that IP and port.

I believe this behaviour is incorrect: once Nginx receives the complete request, it should direct it to the currently available upstream server. However, it seems that Nginx instead decides which upstream server to use before the request is complete, and so directs the request to a server that no longer exists.
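For what it's worth, my understanding is that after a SIGHUP the old worker processes keep serving the connections they already hold using the *old* configuration, so terminating the old backend immediately after the reload races with those workers. One drain pattern that may avoid the window is sketched below; note the new backend address 172.17.0.52 is made up for illustration:

```
# Step 1: reload with the old backend marked "down" so it takes no NEW
# connections but remains reachable for workers still using it.
upstream server-ws-8080 {
    least_conn;
    server 172.17.0.51:8080 down;          # old backend, draining
    server 172.17.0.52:8080 max_fails=0;   # new backend (hypothetical address)
}
```

Step 2: only after the old workers have exited (they show up in `ps` as "worker process is shutting down" until their connections finish) terminate 172.17.0.51 and reload again without it.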

Any advice appreciated.

BTW, I tried to raise an issue on http://trac.nginx.com/, but it seems the authentication system is completely broken. I tried logging in with Google, MyOpenID, WordPress and Yahoo, and none of those OpenID providers work any more.

Thanks,
Matt

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

SOLVED: Re: [OT] Cant write across filesystem mounts? (no replies)

$
0
0
> When running PHP script through Nginx it writes OK to files
> on the same disk mount where the PHP file is located but
> not to the other parts of the system that are on another mount.
> (well I don't know if it's a matter of "same mount" or not, but
> that is how it is behaving)
>
> Example, /tmp is on another mount than the web root.
>
> <?php
> ini_set('display_errors', 'On');
> file_put_contents('/tmp/test', 'hello world');
> system('touch /tmp/test-touch');
> file_put_contents('/webroot/tmp/test', 'hello world');
> system('touch /webroot/tmp/test-touch');
> ?><html><body>hello world</body></html>
>
> I run this script from CLI (sudo as ANY user including the php
> user) and it always works fine (writes files in both places). If I
> access it from a browser the write/touch commands to /tmp
> fail silently.
>
> No AVC from selinux, no PHP or Nginx errors or warnings.
> /tmp permissions are usual 777. Can someone help me in
> right direction?

The problem was the use of PrivateTmp in systemd for php-fpm.

Writes to /tmp (and apparently /var/tmp) were being redirected by
systemd into a private, per-service tmp directory rather than the global /tmp,
which is why they seemed to vanish.

But if I create a directory writable by php-fpm with another name,
it works.
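For anyone hitting the same thing, a systemd drop-in along these lines is one way to turn the behaviour off (the unit name and drop-in path are illustrative and vary by distro):

```
# /etc/systemd/system/php-fpm.service.d/override.conf  (illustrative path)
[Service]
PrivateTmp=false
```

followed by `systemctl daemon-reload` and a restart of php-fpm. Keeping PrivateTmp on and writing to a dedicated directory, as described above, is arguably the safer fix.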

Thanks for the comments.


Occasionally 500 responses (no replies)

We were seeing occasional 500 responses from nginx on our production servers, with nothing in the error log correlated to the events. After turning on the debug log and taking a TCP dump, we identified that the occasional 500 responses are caused by end users resetting the connection (especially end users on IE). The error in the debug log looks like '2015/04/20 23:37:58 [info] 17423#0: *1188530641 writev() failed (104: Connection reset by peer)'.

Shouldn't this be a 499 response instead of a 500? When we see 500 errors, we usually assume something is wrong with the server, but this is actually the end user resetting the connection. Is there any plan to fix that in the future?

Thanks

Nginx LibreSSL and BoringSSL alternative to OpenSSL ? (3 replies)

Currently on CentOS 6/7, I compile my Nginx 1.9.x versions from source with static OpenSSL 1.0.2a patched for chacha20_poly1305, but I am thinking about switching to LibreSSL or BoringSSL (for equal preference group cipher support).

My question: is anyone else using Nginx with LibreSSL or BoringSSL on CentOS/RedHat? Any issues that needed working around, or any features lost? e.g. BoringSSL and OCSP stapling support, etc.?

Recommended steps for compilation with Nginx ?
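For context, the kind of build I have seen described for LibreSSL uses nginx's --with-cc-opt/--with-ld-opt against a separately installed LibreSSL, rather than --with-openssl (which expects an OpenSSL-style source tree). Version numbers and paths below are illustrative only:

```
# Build LibreSSL portable into a private prefix (version illustrative)
cd libressl-2.2.1
./configure --prefix=/opt/libressl && make && make install

# Then point the nginx build at that prefix
cd ../nginx-1.9.2
./configure --with-http_ssl_module \
    --with-cc-opt="-I/opt/libressl/include" \
    --with-ld-opt="-L/opt/libressl/lib"
make && make install
```

I'd check `nginx -V` afterwards and verify the binary is actually linked against LibreSSL (e.g. with `ldd` or by the version string in the error log).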

thanks

George

Find out if config file is loaded/used (1 reply)

Hi,

we configured load balancing across around 10 machines in 2 clusters, using symbolic links to various configuration files. When we need to do maintenance on one of the clusters, we point the symbolic link at a different file and reload nginx. We are building some automation around this and want to make sure that a specific configuration is actually in use. At the moment we just check the path of the symbolic link, but that doesn't necessarily mean the configuration is live.

Is there a way to query nginx which configuration (file) is loaded?
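One workaround I have seen for this is to embed a version marker in each configuration variant and query it over HTTP, so the running workers themselves tell you which config they loaded. The marker text and URI below are made up:

```
# In the cluster-A variant of the config:
location = /__config_version {
    return 200 "cluster-A 2015-06-15\n";
}
```

Your automation can then `curl http://backend:8080/__config_version` after the reload and compare the marker, instead of trusting the symlink on disk.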

Cheers,
Oliver

SSL session caching (2 replies)

In my current setup I have nginx behind a load balancing router (OSPF)
where each connection to the same address has about 16% chance of hitting
the same server as the last time.

In a setup like that, does SSL session caching make any difference? I was
thinking it through this morning and I'm betting that the browser would
toss the old session ID unless it happened to be routed to the same
backend, because in the other cases the backend servers would respond they
don't know the session. Is that correct?
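Not an authoritative answer, but your reasoning sounds right for session IDs, and one commonly suggested mitigation for exactly this topology is TLS session tickets with the same ticket key deployed on every backend, so resumption works regardless of which server the router picks. A sketch, with illustrative paths (the key is a random 48-byte file you generate and distribute yourself):

```
ssl_session_tickets on;
# The same key file must be present on every backend for cross-server
# resumption to work; rotate it periodically.
ssl_session_ticket_key /etc/nginx/ssl/ticket.key;

# The server-side cache still helps the ~16% of hits that do land on
# the same backend:
ssl_session_cache shared:SSL:10m;
```

Ticket support depends on the client, so the cache and tickets are usually enabled together.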

Any reason stub_status would return 0.00 for last CPU? (1 reply)

The server is running really hot, so I am investigating. stub_status displays request counts, but the CPU data is 0.00 across the board:

http://pastebin.com/U0pLCBQ8

I'm running 1.8.0 with php-fpm and fastcgi caching

HUP signal to nginx doesn't work Ubuntu14 (2 replies)

Hello,
I am seeing an issue when sending a HUP signal to nginx (for config reload) on Ubuntu 14. It just kills the master process and doesn't start new worker processes. The same works just fine on CentOS 6.6 64-bit.

$ ps -eaf | grep nginx
zimbra 10860 1 0 16:05 ? 00:00:00 nginx: master process /opt/zimbra/nginx/sbin/nginx -c /opt/zimbra/conf/nginx.conf
zimbra 10861 10860 0 16:05 ? 00:00:00 nginx: worker process
zimbra 10862 10860 0 16:05 ? 00:00:00 nginx: worker process
zimbra 10863 10860 0 16:05 ? 00:00:00 nginx: worker process
zimbra 10864 10860 0 16:05 ? 00:00:00 nginx: worker process
zimbra 18638 25945 0 16:22 pts/0 00:00:00 grep nginx
zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx
$
$ kill -HUP 10860
$
$ ps -eaf | grep nginx
zimbra 10861 1 0 16:05 ? 00:00:00 nginx: worker process <------ same old worker processes & master process is killed
zimbra 10862 1 0 16:05 ? 00:00:00 nginx: worker process
zimbra 10863 1 0 16:05 ? 00:00:00 nginx: worker process
zimbra 10864 1 0 16:05 ? 00:00:00 nginx: worker process
zimbra 18666 18641 0 16:22 pts/1 00:00:00 tail -f log/nginx.log
zimbra 18986 25945 0 16:23 pts/0 00:00:00 grep nginx
zimbra 19994 1 0 Jun02 ? 00:01:51 /usr/bin/perl -w /opt/zimbra/libexec/zmstat-nginx
$

From nginx.log, I can see the SIGHUP is received:
2015/06/03 16:23:14 [notice] 10860#0: signal 1 (SIGHUP) received, reconfiguring
2015/06/03 16:23:14 [debug] 10860#0: wake up, sigio 0
2015/06/03 16:23:14 [notice] 10860#0: reconfiguring
2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 00000000021EAA50:16384 @16
2015/06/03 16:23:14 [debug] 10860#0: posix_memalign: 000000000223D570:32768 @16

Any ideas on why this doesn't work on Ubuntu only?

Thanks
-Kunal

Thanks (1 reply)

Thanks all devs who put Nginx together and made this awesome piece of work!

With it and freedns.afraid.org I was able to host my site from my home server.

--
Thiago Farina


Accept-Encoding: gzip and the Vary header (5 replies)

I have used gzip_static for some years without any issue that I am aware of
with the default gzip_vary off.

My reasoning is that the HTTP spec says in

http://tools.ietf.org/html/rfc2616#page-145

that "the Vary field value advises the user agent about the criteria that
were used to select the representation", and my understanding is that
compressed content is not a representation per se. The representation would
be the result of undoing what Content-Encoding says.

So, given the same .html endpoint you could for example serve content in a
language chosen according to Accept-Language. That's a representation that
depends on headers in my understanding. If you serve the same .css over and
over again no matter what, the representation does not vary. The compressed
thing that is transferred is not the representation itself, so no Vary
needed.

Do you guys agree with that reading of the spec?

Then, you read posts about buggy proxy servers. Have any of you found a
real (modern) case in which the lack of "Vary: Accept-Encoding" resulted in
compressed content being delivered to a client that didn't support it? Or
are those proxies mythical creatures as of today?
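For what it's worth, if you decide to be defensive about such proxies anyway, the cost is a single directive; a minimal sketch:

```
gzip_static on;
# Adds "Vary: Accept-Encoding" to responses that are subject to
# gzip / gzip_static compression:
gzip_vary on;
```

Whether the header is semantically required is exactly the spec question above; this only covers the practical "buggy intermediary" case.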

Thanks!

Xavier

reverse proxy SMTP - How distinguish MUA and MTA (no replies)

Hi

Still building a nginx reverse proxy for my mail servers. Thanks to the community, I now have a secure connection between nginx and my backend mail server.

POP and IMAP are working well, from a MUA to my server.

I'm wondering how nginx can manage SMTP connections, as SMTP is used by both MUAs and MTAs.

A MUA must authenticate before sending mail, and my auth_http backend is able to authenticate users. More precisely, the authentication backend returns a server (auth-server / auth-port) depending on the domain of the destination email address.

Now I guess I have to accept incoming emails without authentication if the client is an MTA, but I can't find an obvious way to distinguish a MUA from an MTA and have my auth backend behave accordingly.

How can I achieve that?
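One common pattern (not specific to nginx) is to separate the two roles by port rather than trying to guess the client type: MTAs deliver on port 25, MUAs submit on port 587 with mandatory authentication. A sketch of how that might look in the mail block; the auth_http URL is made up:

```
mail {
    auth_http http://127.0.0.1:9000/auth;

    server {
        listen 25;              # MTA-to-MTA delivery
        protocol smtp;
        smtp_auth none;         # accept mail without authentication
    }

    server {
        listen 587;             # submission from MUAs
        protocol smtp;
        smtp_auth login plain;  # require authentication
    }
}
```

The auth backend can also inspect the Auth-Method header nginx sends it ("none" for unauthenticated SMTP) to route the two cases differently.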

Hello, nginx doesn’t work as I expected, please help (4 replies)

Hello,

My site cookkoo.com had a display issue and now it is dead. I am not a programmer; I run cookkoo.com by learning from the internet.
My host, DreamHost, suggested I follow http://wiki.dreamhost.com/Nginx#WordPress, and it did not work. After that they recommended https://rtcamp.com/wordpress-nginx/tutorials/, and it did not work either. So I know the problem is that I am doing something wrong.

- I need to configure my site to run with nginx + FastCGI (my site has FastCGI installed)
- My configuration files are in home/user/nginx/cookkoo.com/ and wordpress.conf (please see attached file)
- I have attached error.log (please see attached file)

Please help me too.

Thank you
Sermsak H

TIME OUT (1 reply)

Hi!

Could nginx be the cause of (google chrome message):

"
This webpage is not available

ERR_CONNECTION_TIMED_OUT
"

My IP address is dynamic and changed this afternoon, and since then all
I'm getting is this timeout. I have stopped and restarted nginx
many times now, but the problem persists.

--
Thiago Farina


listen backlog for stream servers (no replies)

Hi,

The listen backlog in Nginx defaults to NGX_LISTEN_BACKLOG=511 on Linux and other platforms (except FreeBSD and MacOS), not to the system somaxconn (why does it differ between OSes?).

It's not a problem to increase it using the "backlog" option of the "listen" directive for HTTP servers (http://nginx.org/en/docs/http/ngx_http_core_module.html#listen),
but there is no such option for stream servers (http://nginx.org/en/docs/stream/ngx_stream_core_module.html#listen).

Is there any proper way to increase the listen queue length for a stream server (without patching the source code), or globally? This seems like something worth fixing.
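Until a backlog option exists for the stream "listen" directive, the only workaround I can see short of a real patch is changing the compiled-in default; a one-line local edit against 1.9.x sources (treat this as a hack, not a fix):

```
/* src/os/unix/ngx_linux_config.h -- local workaround, not an upstream change */
#define NGX_LISTEN_BACKLOG  4096    /* compiled-in default was 511 */
```

Note that raising net.core.somaxconn alone does not help here, since nginx passes its own backlog value to listen() explicitly; the kernel setting only caps it.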

--
Best Regards,
Dmitry Krikov


Writing first test Nginx module (1 reply)

Hello everyone,

Recently, for some of our needs with Nginx, I have been working on a new module. I have started with a basic test module. My intention is to call a function before one of the servers in the upstream section is selected. For this, I set the NGX_STREAM_MODULE type. When running nginx with the new option "stream_test" in the upstream configuration, I get the error below:
ERROR: nginx: [emerg] "stream_test" directive is not allowed here in xxxxxx

I would appreciate it if somebody could assist me with this. Thanks.

Following is the source code of test module
#######################################Test Module###############################
#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_stream.h>
#include <ngx_stream_upstream.h>


static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);

static ngx_command_t ngx_stream_upstream_test_commands[] = {

    { ngx_string("stream_test"),
      NGX_STREAM_SRV_CONF|NGX_CONF_NOARGS,
      ngx_http_test,
      0,
      0,
      NULL },

    ngx_null_command
};

static ngx_stream_module_t ngx_http_test_ctx = {
    NULL, NULL, NULL, NULL
};

ngx_module_t ngx_http_test_module = {
    NGX_MODULE_V1,
    &ngx_http_test_ctx,                /* module context */
    ngx_stream_upstream_test_commands, /* module directives */
    NGX_STREAM_MODULE,                 /* module type */
    NULL,                              /* init master */
    NULL,                              /* init module */
    NULL,                              /* init process */
    NULL,                              /* init thread */
    NULL,                              /* exit thread */
    NULL,                              /* exit process */
    NULL,                              /* exit master */
    NGX_MODULE_V1_PADDING
};

static char *ngx_http_test(ngx_conf_t *cf, ngx_command_t *cmd, void *conf)
{
    //ngx_stream_upstream_srv_conf_t  *uscf;
    //uscf = ngx_stream_conf_get_module_srv_conf(cf, ngx_stream_upstream_module);

    ngx_conf_log_error(NGX_LOG_ERR, cf, 0, "Test Function was called!");

    /*if (uscf->peer.init_upstream) {
        ngx_conf_log_error(NGX_LOG_WARN, cf, 0,
                           "Nima Test Func: load balancing method redefined");
    }*/

    return NGX_CONF_OK;
}
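A guess at the "not allowed here" error, offered with the caveat that I have not compiled this: NGX_STREAM_SRV_CONF only permits the directive inside a stream server{} block, not inside upstream{}. If the directive is meant to appear in an upstream block, the command type would typically need the upstream context flag as well, something like:

```
/* Hypothetical fix: also allow "stream_test" inside stream upstream{} blocks */
{ ngx_string("stream_test"),
  NGX_STREAM_MAIN_CONF|NGX_STREAM_SRV_CONF|NGX_STREAM_UPS_CONF|NGX_CONF_NOARGS,
  ngx_http_test,
  0,
  0,
  NULL },
```

Check the flags used by ngx_stream_upstream_module's own "server" directive in your source tree to confirm the exact constant names for your version.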

Right use of 'if' (1 reply)

Hi.

I am trying to block some attacks with map and if.

The requests look like:

#############
/?id=../../../../../../etc/passwd%00&page=../../../../../../etc/passwd%00&file=../../../../../../etc/passwd%00&inc=../../../../../../etc/passwd%00&load=../../../../../../etc/passwd%00&path=../../../../../../etc/passwd%00

/index.php?id=../../../../../../etc/passwd%00&page=../../../../../../etc/passwd%00&file=../../../../../../etc/passwd%00&inc=../../../../../../etc/passwd%00&load=../../../../../../etc/passwd%00&path=../../../../../../etc/passwd%00

/index.php?culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez

/index.php?test=../../../../../../../../../../boot.ini
#############

My solution:

#################
# http request line: "GET
/index.php?culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez
HTTP/1.1"
# http uri: "/index.php"
# http args:
"culture=../../../../../../../../../../windows/win.ini&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All%3Cscript%3Ealert(12345)%3C/script%3Ez"
# http exten: "php"

map $args $block {
    default 0;
    "~(boot|win)\.ini" 1;
    "~etc/passwd" 1;
}

location = /index.php {
    if ($block) {
        # "include" is not allowed here ;-/
        # include /home/nginx/server/conf/global_setting_for_log_to_fail2ban_for_blocking.conf;
        access_log logs/fail2ban.log combined;
        return 403;
    }
}
#########################

Is this the most efficient way for nginx?

BR Aleks


handling subdirectories location (1 reply)

Hi,

I have the following in my nginx configuration:

server {
    listen 8080;
    server_name myservername.com;
    root /data/www/myservername.com;

    location / {
        index index.php index.html index.htm;
        try_files $uri $uri/ /index.html;
    }

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    # Pass the PHP scripts to the FastCGI server listening on 127.0.0.1:9000.
    location ~ \.php$ {
        try_files $uri =404;   # no space: "= 404" is not valid here
        # fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

But how do I configure 'location' so /subdir/index.php is processed
instead of /index.php?

Basically I'm trying to put third-party apps in their own /subdirs/ to
test them, but when they navigate to index.php I'm thrown back to the
root /index.html, for example.
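A common approach for this is a per-app location whose try_files fallback stays inside the subdirectory instead of falling through to the site root. The directory name /app1/ below is illustrative:

```
location /app1/ {
    # Fall back to the app's own front controller, not the site root
    try_files $uri $uri/ /app1/index.php?$args;
}
```

The existing `location ~ \.php$` block still handles the actual PHP execution; this only changes where non-existent paths inside /app1/ are routed.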

Thanks in advance,

--
Thiago Farina


Nginx LUA (no replies)

Can anyone please help me with a Lua configuration that I can embed into nginx.conf to log the following separately in the access log:

user_agent_os
user_agent_browser
user_agent_version

At present all these fields are embedded in $http_user_agent, and I am writing a parser to split them at the receiving end. I am looking for a way to send them as separate fields from Nginx itself.
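Depending on how precise the split needs to be, you may not need Lua at all: plain map blocks on $http_user_agent can feed a custom log_format. A rough sketch; the regexes are deliberately simplistic (real user-agent parsing is much messier), so treat them as placeholders:

```
map $http_user_agent $ua_os {
    default          "other";
    "~Windows"       "windows";
    "~Macintosh"     "macos";
    "~Linux"         "linux";
}

map $http_user_agent $ua_browser {
    default          "other";
    "~Firefox"       "firefox";
    "~Chrome"        "chrome";
    "~MSIE|Trident"  "ie";
}

log_format ua_split '$remote_addr [$time_local] "$request" '
                    '$status os=$ua_os browser=$ua_browser';

access_log /var/log/nginx/access.log ua_split;
```

If you need full OS/browser/version fidelity, an OpenResty/lua-nginx-module handler with a UA-parsing Lua library is the heavier but more accurate route.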

problem : nginx with magento (2 replies)

Hi, I am using nginx with Magento, which uses FastCGI.
Whenever I type the URL http://example.com/index.php, index.php starts
downloading.
Can anyone help me?
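A .php file downloading instead of executing usually means no location block is passing PHP requests to FastCGI, so nginx serves the file as static content. A minimal sketch, assuming php-fpm listens on 127.0.0.1:9000 (adjust the address or socket to your setup):

```
location ~ \.php$ {
    fastcgi_pass  127.0.0.1:9000;   # or unix:/var/run/php5-fpm.sock
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include       fastcgi_params;
}
```

After adding it, reload nginx and make sure no earlier location (e.g. a catch-all static handler) matches .php first.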

fastcgi_pass / error page /error code in HTTP rsp (5 replies)

Hi, I would like nginx to map a FastCGI error response to a static error page, and include the HTTP error code in its response header; i.e.
1. have nginx return the proper error code in the header to the client;
2. have nginx return the proper error page based on the fastcgi_pass server's response error code.
For example, if the FastCGI server returns '400 Bad Request', I would like nginx to return status code 400 along with the bad-request static HTML error page.

Is #2 feasible when fastcgi_pass is used? I was not able to do it unless I used error code 302, a redirect. In other words, the only way I got nginx to return a specific error page was to have the FastCGI server respond with a redirect:
"Status: 302 Found\r\n"
"Location: /<path>/badRequest.html\r\n"
The problem with this method was that the error code (400, for example) did not appear in the HTTP response for the error page, so requirement #1 was not met.

How can I have nginx/fastcgi_pass return an error page with the HTTP error code (400, for example) appearing in the HTTP header? My FastCGI server's response does include 'Status'.
I tried not using the 302 redirect method, but my attempts failed; I had the following inside the server block or inside the location/fastcgi_pass block of nginx.conf:

error_page 400 = /bad_request.html;

location = /bad_request.html {
    try_files /<path>/bad_request.html 50x.html;
}

I also tried the 'internal' directive, though I was not sure of its usage, as the path of the HTML error page was not specified.
Any help on how to get nginx to return the error code in the HTTP header response and an error page when fastcgi is used would be greatly appreciated, thank you!
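In case it is useful, what may be missing here is fastcgi_intercept_errors, which lets error_page act on FastCGI responses with a status of 300 or higher; without it, nginx relays the backend's response as-is. A sketch with illustrative paths, which should keep the 400 status on the error-page response because error_page is used without "=":

```
location /app {
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
    # Hand 3xx/4xx/5xx backend responses over to error_page processing:
    fastcgi_intercept_errors on;
}

# No "=code" here, so the original 400 status is preserved in the response:
error_page 400 /bad_request.html;

location = /bad_request.html {
    root /usr/share/nginx/html;   # illustrative path
    internal;                     # page cannot be requested directly
}
```

Using `error_page 400 = /bad_request.html;` (with "=") would instead replace the status with the one returned for the error page, which is why requirement #1 was likely failing.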