Channel: Nginx Forum - Nginx Mailing List - English

Serving files from a slow NFS storage (4 replies)

Hi all,

In our production environment, we have several nginx servers running on Ubuntu that serve files from a very large (several PBs) NFS mounted storage.
Usually the storage responds pretty fast, but occasionally it can be very slow. Since we are using aio, when a file read operation runs slow, it's not that bad - the specific HTTP request will just take a longer time to complete.
However, we have seen cases in which the file open itself can take a long time to complete. Since the open is synchronous, when the open is slow all active requests on the same nginx worker process are delayed.
One way to mitigate the problem may be to increase the number of nginx workers to some number well above the number of CPU cores. This will make each worker handle fewer requests, so fewer requests will be delayed by a slow open, but this solution sounds far from ideal.
Another possibility (that requires some development) may be to create a thread that will perform the task of opening the file. The main thread will wait on the completion of the open-thread asynchronously, and will be available to handle other requests until the open completes. The child thread can either be created per request (many requests probably won't even need to open a file, thanks to the caching of open file handles), or alternatively some fancier thread pooling mechanism could be developed.
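For reference, the thread-pool approach sketched above did eventually land in mainline nginx: the `thread_pool` directive and `aio threads` appeared in 1.7.11, although they offload read()/sendfile() rather than the open() call itself. A minimal configuration sketch (the pool name and location are hypothetical):

```nginx
# Worker-level pool of threads for blocking file operations
thread_pool nfs_pool threads=32 max_queue=65536;

http {
    server {
        location /files/ {
            # Offload file reads to the thread pool so a slow NFS read
            # does not block the worker's event loop
            aio threads=nfs_pool;
        }
    }
}
```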

I'd love to hear any thoughts / ideas on this subject

Thanks,

Eran

proper directive to pass requests (no replies)

Hi All
I'm building my configuration slowly. Thanks for all the help so far. My
current obstacle is this:

As it is now, external users will access an internal IIS web server by
using http://my.domain.com. The firewall points to the web server and the
web server automatically redirects to https://my.domain.com.

I'm trying to fit nginx between the firewall and the web server and figure
out how to configure nginx to respond to requests for http://my.domain.com
and proxy that to the web server. What do I use with the proxy_pass
directive to get this to work?

Would "proxy_pass http://my.domain.com;" work?
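For illustration, a minimal reverse-proxy sketch along these lines (the upstream address 192.0.2.10 is a hypothetical stand-in for the internal IIS server):

```nginx
server {
    listen 80;
    server_name my.domain.com;

    location / {
        # Forward to the internal IIS server by its internal address
        # (hypothetical), preserving the Host header and client address
        proxy_pass http://192.0.2.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Note that `proxy_pass http://my.domain.com;` would resolve the public name, which points back at the firewall, so it may loop; an internal address or a distinct internal hostname is the safer target.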

Regards
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

http module handler, chain buffer and output_filter (1 reply)

Hey, I wonder why the server freezes when I make a request to it and do not define the "NO_PROBLEM" macro in the code.
In my ngx_html_chain_buffers_init I use ngx_pcalloc because I thought the server was freezing because of some memory alignment issue (the buffer array was originally static). I am just too tired at this hour to change it back (I'll change it tomorrow).

I am on Debian Jessie, with nginx 1.6.2 (source from the Debian repo, if I remember well).

PS: I am sorry if I missed some forum rules; it is my first time here. Just tell me what I did wrong if I did something wrong.
Thank You !
here is my code:

//#define NO_PROBLEM

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>
#include <stdio.h>
/* BEGIN html strings */
enum ngx_html_text_e {
    NGX_HTML_ALL,
    NGX_HTML_NB_PARTS
};

static ngx_str_t ngx_html_strings[NGX_HTML_NB_PARTS] = {
    ngx_string(
        "<!DOCTYPE html>\n"
        "<html>\n"
        "\t<head>\n"
        "\t</head>\n"
        "\t<body>\n"
        "\t</body>\n"
        "</html>"
    )
};
/* END html strings */
static ngx_buf_t   *ngx_html_buffers;
static ngx_chain_t  ngx_html_chain_buffers[NGX_HTML_NB_PARTS];
static ngx_pool_t  *ngx_html_buffers_pool;

static char *
ngx_html_chain_buffers_init(ngx_log_t *log){
    ngx_int_t i;

    ngx_html_buffers_pool = ngx_create_pool(NGX_HTML_NB_PARTS * sizeof(ngx_buf_t) + sizeof(ngx_pool_t), log);
    //TODO TMP ALLOC
    ngx_html_buffers = ngx_pcalloc(ngx_html_buffers_pool, NGX_HTML_NB_PARTS * sizeof(ngx_buf_t));
    if (ngx_html_buffers == NULL) {
        return NGX_CONF_ERROR;
    }
    for (i = 0; i < NGX_HTML_NB_PARTS; i++) {
        ngx_html_buffers[i].pos = ngx_html_strings[i].data;
        ngx_html_buffers[i].last = ngx_html_strings[i].data +
                                   ngx_html_strings[i].len;
        ngx_html_buffers[i].file_pos = 0;
        ngx_html_buffers[i].file_last = 0;
        ngx_html_buffers[i].start = NULL;
        ngx_html_buffers[i].end = NULL;
        ngx_html_buffers[i].tag = NULL;
        ngx_html_buffers[i].file = NULL;
        ngx_html_buffers[i].shadow = NULL;
        ngx_html_buffers[i].temporary = 0;
        ngx_html_buffers[i].memory = 1;
        ngx_html_buffers[i].mmap = 0;
        ngx_html_buffers[i].recycled = 0;
        ngx_html_buffers[i].in_file = 0;
        ngx_html_buffers[i].flush = 0;
        ngx_html_buffers[i].sync = 0;
        ngx_html_buffers[i].last_buf = 0;
        ngx_html_buffers[i].last_in_chain = 0;
        ngx_html_buffers[i].last_shadow = 0;
        ngx_html_buffers[i].temp_file = 0;
        ngx_html_buffers[i].num = 0;

        ngx_html_chain_buffers[i].buf = &ngx_html_buffers[i];
    }
    ngx_html_buffers[i].last_buf = 1;
    ngx_html_buffers[i].last_in_chain = 1;
    return NGX_CONF_OK;
}

static char * ngx_http_diceroll_quickcrab_com(ngx_conf_t *cf, ngx_command_t *cmd, void *conf);

static ngx_command_t ngx_http_diceroll_quickcrab_com_commands[] = {
    {
        ngx_string("diceroll_quickcrab_com"),
        NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS,
        ngx_http_diceroll_quickcrab_com,
        0,
        0,
        NULL
    },
    ngx_null_command
};

/*
* The module context has hooks , here we have a hook for creating
* location configuration
*/
static ngx_http_module_t ngx_http_diceroll_quickcrab_com_module_ctx = {
    NULL, /* preconfiguration */
    NULL, /* postconfiguration */
    NULL, /* create main configuration */
    NULL, /* init main configuration */
    NULL, /* create server configuration */
    NULL, /* merge server configuration */
    NULL, /* create location configuration */
    NULL  /* merge location configuration */
};
/*
* The module which binds the context and commands
*/
ngx_module_t ngx_http_diceroll_quickcrab_com_module = {
    NGX_MODULE_V1,
    &ngx_http_diceroll_quickcrab_com_module_ctx, /* module context */
    ngx_http_diceroll_quickcrab_com_commands,    /* module directives */
    NGX_HTTP_MODULE,                             /* module type */
    NULL,                                        /* init master */
    NULL,                                        /* init module */
    NULL,                                        /* init process */
    NULL,                                        /* init thread */
    NULL,                                        /* exit thread */
    NULL,                                        /* exit process */
    NULL,                                        /* exit master */
    NGX_MODULE_V1_PADDING
};
/*
* Main handler function of the module.
*/
static ngx_int_t
ngx_http_diceroll_quickcrab_com_handler(ngx_http_request_t *r){
    ngx_int_t    rc;
    size_t       content_length_n;
    ngx_chain_t *out;

    content_length_n = 0;
    out = &ngx_html_chain_buffers[NGX_HTML_ALL];
    out->next = NULL;
#ifdef NO_PROBLEM
    out->buf = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));
    out->buf->pos = ngx_html_strings[NGX_HTML_ALL].data;
    out->buf->last = ngx_html_strings[NGX_HTML_ALL].data +
                     ngx_html_strings[NGX_HTML_ALL].len;
    out->buf->memory = 1;
    out->buf->last_buf = 1;
#endif
    content_length_n += ngx_html_strings[NGX_HTML_ALL].len;

    /* we respond to 'GET' and 'HEAD' requests only */
    if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) {
        return NGX_HTTP_NOT_ALLOWED;
    }
    /* discard request body, since we don't need it here */
    rc = ngx_http_discard_request_body(r);
    if (rc != NGX_OK) {
        return rc;
    }
    /* set the 'Content-type' header */
    r->headers_out.content_type_len = sizeof("text/html") - 1;
    r->headers_out.content_type.data = (u_char *) "text/html";
    /* send the header only, if the request type is http 'HEAD' */
    if (r->method == NGX_HTTP_HEAD) {
        r->headers_out.status = NGX_HTTP_OK;
        r->headers_out.content_length_n = content_length_n;
        return ngx_http_send_header(r);
    }

    /* set the status line */
    r->headers_out.status = NGX_HTTP_OK;
    r->headers_out.content_length_n = content_length_n;
    /* send the headers of your response */
    rc = ngx_http_send_header(r);
    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }
    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0,
                   "------------------------------------------------");
    /* send the buffer chain of your response */
    rc = ngx_http_output_filter(r, out);
    return rc;
}
/*
* Function for the directive diceroll_quickcrab_com , it validates its value
* and copies it to a static variable to be printed later
*/
static char *
ngx_http_diceroll_quickcrab_com(ngx_conf_t *cf, ngx_command_t *cmd, void *conf){
    char                      *rc;
    ngx_http_core_loc_conf_t  *clcf;
    static unsigned            already_done = 0;

    if (!already_done) {
        rc = ngx_html_chain_buffers_init(cf->log);
        if (rc != NGX_CONF_OK) return rc;
        already_done = 1;
    }

    clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module);
    clcf->handler = ngx_http_diceroll_quickcrab_com_handler;

    return NGX_CONF_OK;
}



Here is my nginx.conf:

worker_processes 1;
error_log logs/debug.log debug;
events {
    worker_connections 4;
}
http {
    server_names_hash_max_size 4;
    server_names_hash_bucket_size 4;
    server {
        listen 127.0.0.1:80 default_server;
        server_name _;
        return 444;
    }
    server {
        listen 127.0.0.1:80;
        server_name localhost 127.0.0.1;
        root /var/www/diceroll.quickcrab.com;
        location / {
            diceroll_quickcrab_com;
        }
    }
}

Thank you

Setting the SSL protocol used on proxy_pass? (no replies)

I am trying to set up a reverse proxy which handles SSL. This is my first
time, so I may be doing something stupid.

On the NGINX which is acting as a proxy I get this:

SSL_do_handshake() failed (SSL: error:140770FC:SSL
routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to
upstream,

On the NGINX which is upstream I am configured to only accept TLS, because
of recent SSL security problems.

ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

I would guess that the problem here is that NGINX is opening the proxy
connection using the wrong SSL protocol. Is there a way to control which
protocol it uses for the proxy connection?
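For what it's worth, nginx does expose a directive for exactly this: `proxy_ssl_protocols` (available since 1.5.6) selects the protocols used for connections to a proxied HTTPS server. A sketch (the upstream name is hypothetical):

```nginx
location / {
    proxy_pass https://upstream.example.com;   # hypothetical upstream
    # Restrict the protocols nginx offers when handshaking with the upstream
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
}
```

It may also be worth double-checking that proxy_pass actually uses https:// and targets the TLS port, since "SSL23_GET_SERVER_HELLO:unknown protocol" is often the symptom of a TLS handshake against a plain-text endpoint.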

Thanks for any help,

Edward
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

How to write nginx, NGINX or Nginx ? (5 replies)

Hello. I'm writing some documentation for a project that uses NGINX. I'm wondering what's the correct way to write nginx.

a) NGINX - Always all uppercase
b) nginx - Always all lowercase. Even at the beginning of a sentence
c) Nginx - Always capitalized

Is there an official way?

Thanks

--
Simone

SSL (no replies)

I have two IIS servers with SSL certificates, and a load balancer with
NGINX. I need to know whether I must install an SSL certificate on the NGINX
machine too.


Luis Vargas Catalán
SEGURIDAD Y REDES

mifuturofinanciero.com
Navarra 3720, Las Condes, SCL
Tel: *+56 2 22282884*

Error: This server's certificate chain is incomplete. (5 replies)

Hi All
I managed to get the nginx reverse proxy up and forwarding to my https web
server.
I think I have missed something though, as a user just let me know that when
he tried to access the site he got a message that the certificate is
invalid.

I just did a test with ssllabs and noticed that it shows this error: "This
server's certificate chain is incomplete. "

Any ideas on what I have missed? Thanks for the assistance.
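This error usually means the server certificate is being sent without its intermediate CA certificate(s). A common fix, sketched below with stand-in file contents (the real PEM files come from your CA; all names here are hypothetical), is to concatenate the server certificate followed by the intermediates and point `ssl_certificate` at the bundle:

```shell
# Stand-in files so the example is self-contained; in practice these are
# the PEM files issued by the CA.
printf 'SERVER-CERT\n' > example.com.crt
printf 'INTERMEDIATE-CERT\n' > intermediate.crt

# Order matters: server certificate first, then intermediate(s).
cat example.com.crt intermediate.crt > example.com.bundle.crt

# nginx would then use:
#   ssl_certificate /etc/nginx/ssl/example.com.bundle.crt;
cat example.com.bundle.crt
```

After reloading nginx, re-running the ssllabs test should show the chain as complete.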

Regards.

Redirect to domain TTFB very slow (6 replies)

Hi

On my Nginx server I use a domain, "domain.com", and I have all its files here:

/home/nginxs/domains/mydomain.com/public


There I have a folder named "gadgets" with some files in it, and I use a redirect to another domain for this folder.

So if a user types seconddomain.com, it goes to the "gadgets" folder under the first domain's folder and gets all the results.

The problem is that it is very slow (the first domain loads super fast!), and when checking I found that the time to first byte is very slow: 6 seconds :(

No idea how I can fix this :(

And I can't move that folder to the newly created account that I redirect to, as the files inside "gadgets" interact with other files there from the main account.

second domain config:

server {
    listen 80;
    server_name mydomain.com;
    return 301 $scheme://www.mydomain.com$request_uri;
}
server {
    listen 80;
    server_name blog.mydomain.com;
    root /home/nginx/domains/firstdomain/public/blog;
    index index.php;
    access_log /var/log/nginx/blog.gogadget.gr_access.log;
    error_log /var/log/nginx/blog.gogadget.gr_error.log;
    location / {
        try_files $uri $uri/ /index.html /index.php?$args;
    }
}
server {
    listen 80;
    server_name www.mydomain.com dev.mydomain.com;
    root /home/nginx/domains/firstdomain.com/public;
    index index.php;
    access_log /var/log/nginx/mydomain.com_access.log;
    error_log /var/log/nginx/mydomain.com_error.log;
    location /go {
        return 301 http://www.mydomain.com/;
    }
    location / {
        try_files $uri $uri/ /index.html /index.php?$args;
    }
    location /blog/ {
        deny all;
    }
    error_page 500 502 504 /500.html;
    location ~* ^.+\.(?:css|cur|js|jpg|jpeg|gif|ico|png|html|xml|zip|rar|mp4|3gp|flv|webm|f4v|ogm)$ {
        access_log off;
        expires 30d;
        tcp_nodelay off;
        open_file_cache max=3000 inactive=120s;
        open_file_cache_valid 45s;
        open_file_cache_min_uses 2;
        open_file_cache_errors off;
    }
    location /api2/ {
        rewrite ^/api2/(.*)$ /api/public/index.php?route=$1 last;
    }
    location ~* /(uploads|public)/ {
        access_log off;
        expires 30d;
    }
    location ~ /\.ht {
        deny all;
    }
    include /usr/local/nginx/conf/staticfiles.conf;
    include /usr/local/nginx/conf/php.conf;
    include /usr/local/nginx/conf/drop.conf;
    include /usr/local/nginx/conf/block.conf;
    #include /usr/local/nginx/conf/errorpage.conf;
}

Any ideas?

I am using ZendOpcache and Memcached...

Thanks

Happy new 2015 ! (no replies)

And may your nginx keep performing no matter which OS it's running on !

Also, from the support staff at the forums and all contributors: have a good one!

Timeout for whole request body (no replies)

Nginx provides client_body_timeout, which applies only to the period between two successive read operations, but in one of our use cases we want to set a timeout for the whole request body. To do that, we changed the source to add a new timer. We would like to know whether this approach is correct. Please correct me if there is any issue in the following code.
Thanks

diff -bur src/core/ngx_connection.c src/core/ngx_connection.c
--- src/core/ngx_connection.c 2014-12-19 15:33:48.000000000 +0530
+++ src/core/ngx_connection.c 2015-01-02 00:18:19.000000000 +0530
@@ -884,6 +884,10 @@
ngx_del_timer(c->write);
}

+ if (c->read_full->timer_set) {
+ ngx_del_timer(c->read_full);
+ }
+
if (ngx_del_conn) {
ngx_del_conn(c, NGX_CLOSE_EVENT);

diff -bur src/core/ngx_connection.h src/core/ngx_connection.h
--- src/core/ngx_connection.h 2014-12-19 15:33:48.000000000 +0530
+++ src/core/ngx_connection.h 2015-01-02 00:42:41.000000000 +0530
@@ -114,6 +114,7 @@
void *data;
ngx_event_t *read;
ngx_event_t *write;
+ ngx_event_t *read_full;

ngx_socket_t fd;

diff -bur src/http/ngx_http_request_body.c src/http/ngx_http_request_body.c
--- src/http/ngx_http_request_body.c 2014-12-19 15:33:48.000000000 +0530
+++ src/http/ngx_http_request_body.c 2015-01-02 00:53:37.000000000 +0530
@@ -27,6 +27,7 @@
static ngx_int_t ngx_http_request_body_save_filter(ngx_http_request_t *r,
ngx_chain_t *in);

+static void ngx_http_full_body_timer_handler(ngx_event_t *wev);

ngx_int_t
ngx_http_read_client_request_body(ngx_http_request_t *r,
@@ -355,6 +356,15 @@
clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module);
ngx_add_timer(c->read, clcf->client_body_timeout);

+ if (c->read_full == NULL) {
+ c->read_full=ngx_pcalloc(c->pool, sizeof(ngx_event_t));
+ c->read_full->handler = ngx_http_full_body_timer_handler;
+ c->read_full->data = r;
+ c->read_full->log = r->connection->log;
+ ngx_add_timer(c->read_full, 10000);
+ }
+
+
if (ngx_handle_read_event(c->read, 0) != NGX_OK) {
return NGX_HTTP_INTERNAL_SERVER_ERROR;
}
@@ -1081,3 +1091,13 @@

return NGX_OK;
}
+
+static void ngx_http_full_body_timer_handler(ngx_event_t *wev)
+{
+ if (wev->timedout) {
+ ngx_http_request_t *r;
+ r = wev->data;
+ //ngx_close_connection(r->connection);
+ ngx_http_finalize_request(r, NGX_HTTP_REQUEST_TIME_OUT);
+ }
+}
diff -bur src/http/ngx_http_request.c src/http/ngx_http_request.c
--- src/http/ngx_http_request.c 2014-12-19 15:33:48.000000000 +0530
+++ src/http/ngx_http_request.c 2015-01-02 00:24:32.000000000 +0530
@@ -2263,6 +2263,10 @@
if (c->write->timer_set) {
ngx_del_timer(c->write);
}
+
+ if (c->read_full->timer_set) {
+ ngx_del_timer(c->read_full);
+ }
}

c->read->handler = ngx_http_request_handler;
@@ -2376,6 +2380,10 @@
ngx_del_timer(c->write);
}

+ if (c->read_full->timer_set) {
+ ngx_del_timer(c->read_full);
+ }
+
if (c->read->eof) {
ngx_http_close_request(r, 0);
return;

proxy_pass ignoring gai.conf/RFC3484 (no replies)

Hi everyone (and a happy new year!),

I'm trying to set up NginX as a reverse proxy to an internal machine
which has both private IPv4 and ULA IPv6 addresses, both resolvable from
the same name ``internal_machine`` to A and AAAA entries in our local
DNS servers. Outbound connections are still using IPv4, but I want to
phase out our private IPv4 ones in favour of ULA IPv6, thus I'm using
``/etc/gai.conf`` to leverage the mechanism described in [RFC 3484][] to
configure ``getaddrinfo()`` responses. This is my configuration:

precedence ::1/128 50 # loopback IPv6 first
precedence fdf4:7759:a7d2::/48 47 # then our ULA IPv6 range
precedence ::ffff:0:0/96 45 # then IPv4 (private and public)
precedence ::/0 40 # then IPv6 ...
precedence 2002::/16 30
precedence ::/96 20

[RFC 3484]: http://tools.ietf.org/html/rfc3484

This configuration seems to be correct, i.e. running ``getent ahosts
internal_machine`` puts ULA IPv6 addresses before private IPv4. If I
exchange the priorities of ULA IPv6 and IPv4, the command puts IPv4
addresses first. So far so good.

BUT if I configure NginX with ``proxy_pass http://internal_machine;``,
it always insists in using the IPv4 address first, regardless of what
``gai.conf`` says. The only way I have to force IPv6 first is
hardwiring it in the URL (which is ugly) or including the resolution in
``/etc/hosts`` (which disperses configuration).

Is this behaviour expected? Maybe I missed some configuration aspect?
I'm currently using:

# nginx -V # from Debian Wheezy backports
nginx version: nginx/1.6.2
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fstack-protector \
--param=ssp-buffer-size=4 -Wformat -Werror=format-security \
-D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro \
--prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf \
--http-log-path=/var/log/nginx/access.log \
--error-log-path=/var/log/nginx/error.log \
--lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid \
--http-client-body-temp-path=/var/lib/nginx/body \
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi \
--http-proxy-temp-path=/var/lib/nginx/proxy \
--http-scgi-temp-path=/var/lib/nginx/scgi \
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug \
--with-pcre-jit --with-ipv6 --with-http_ssl_module \
--with-http_stub_status_module --with-http_realip_module \
--with-http_auth_request_module --with-http_addition_module \
--with-http_dav_module --with-http_geoip_module \
--with-http_gzip_static_module --with-http_image_filter_module \
--with-http_spdy_module --with-http_sub_module \
--with-http_xslt_module --with-mail --with-mail_ssl_module \
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-auth-pam \
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-dav-ext-module \
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-echo \
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/nginx-upstream-fair \
--add-module=/tmp/buildd/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module
# uname -a
Linux frontend01 2.6.32-4-pve #1 SMP Mon May 9 12:59:57 CEST 2011 x86_64 GNU/Linux

I found [an nginx-devel thread][1] revolving around a similar issue, but
the proposed solutions overlooked ``/etc/gai.conf``.

[1]: http://www.mail-archive.com/nginx-devel%40nginx.org/msg01893.html
"proxy_pass behavior"

Thank you very much for your help!

--
Ivan Vilata i Balaguer


limit_conn module: exclude list also with Maxim Dounin's recommended code (no replies)

Hi

I am using this code to limit requests and exclude some IPs:

http {
    limit_req_zone $limit zone=delta:8m rate=60r/s;

    geo $limited {
        default 1;
        192.168.45.56/32 0;
        199.27.128.0/21 0;
        173.245.48.0/20 0;
    }

    map $limited $limit {
        1 $binary_remote_addr;
        0 "";
    }


And this in the domain config:

server {
    limit_req zone=delta burst=90 nodelay;

Now I have two questions:

1) Does nginx really know how to exclude IPs in CIDR format (e.g. 199.27.128.0/21), or must I list them individually, e.g. 199.27.128.5?

2) Now I want to use limit_conn_zone with the above recommendation from Maxim Dounin...

like this:

http {
    limit_conn_zone $binary_remote_addr zone=alpha:8m;
    limit_req_zone $limit zone=delta:8m rate=60r/s;

    geo $limited {
        default 1;
        192.168.45.56/32 0;
        199.27.128.0/21 0;
        173.245.48.0/20 0;
    }

    map $limited $limit {
        1 $binary_remote_addr;
        0 "";
    }


And this in the domain config:

server {
    limit_conn alpha 20;
    limit_req zone=delta burst=90 nodelay;

But how can I use the above exclude list with the limit_conn module as well?
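One possibility, sketched here on the assumption that the same exclusion behaviour is wanted for both limits: limit_conn_zone, like limit_req_zone, does not account requests whose key value is empty, so the same $limit map can be reused as the key:

```nginx
http {
    # Reuse the $limit map: excluded IPs map to "" and an empty key
    # is not accounted, so they bypass both limits
    limit_conn_zone $limit zone=alpha:8m;
    limit_req_zone  $limit zone=delta:8m rate=60r/s;
}
```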

Thanks

Location served by all virtual servers (no replies)

Hi, I have a configuration issue with my nginx. Currently both URLs return the same page when I open
http://domain1.com/SharedFiles and http://domain2.com/SharedFiles.

The "SharedFiles" location is defined in only one virtual server (domain2), yet it is accessible from both domains. How come?
I'd like only domain2.com to serve the SharedFiles location.

What's wrong? Thank you!


Here are the two config files (domain1 and domain2) I have in sites-available:

file domain1:
server {
    listen 80; ## listen for ipv4; this line is default and implied
    root /home/pi/webapps/domain1/public_html;
    index index.html index.htm;
    server_name *.domain1.com;
}

file domain2:
server {
    listen 80;
    server_name *.domain2.com;

    access_log /home/pi/webapps/domain2/logs/nginx-access.log;
    error_log /home/pi/webapps/domain2/logs/nginx-error.log;

    location /SharedFiles {
        root /media/Seagate/Video;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        autoindex on;
    }
}

resolver directive doesn't fallback to the system DNS resolver (no replies)

Hello,
I am looking at how to use nginx's resolver directive (http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) to address an issue I am facing. I have a host for which there is already an entry in the system DNS resolver (verified using nslookup/dig), but when I specify the same host in the proxy_pass directive inside a location block, I get the following error in nginx.log:

2015/01/05 14:24:13 [error] 22560#0: *5 no resolver defined to resolve ...

Seems like nginx does not fall back to the system DNS resolver when the 'resolver' directive is not used. Isn't this incorrect behaviour?
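As far as I know this is expected: nginx resolves a static proxy_pass hostname once at configuration load via the system resolver, but when the target involves variables it must resolve at run time through its own built-in resolver, which has to be configured explicitly. A minimal sketch (the DNS server address and backend name are hypothetical):

```nginx
location / {
    # Required whenever nginx must resolve names at run time
    resolver 127.0.0.1 valid=30s;               # hypothetical local DNS server

    set $backend "internal-host.example.com";   # hypothetical backend name
    proxy_pass http://$backend;
}
```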

Thanks
-Kunal


Upstream Keepalive connection close (no replies)

I have Nginx server configured with couple of backend servers with keepalive connections enabled.

I am trying to understand what will be the Nginx's behaviour in case the connection is closed by an upstream server legitimately when Nginx is trying to send a new request exactly at the same time. In this race condition, does Nginx re-try the request internally or does it return an error code?

In case Nginx needs to be forced to retry, should I be using proxy_next_upstream? My understanding is that this setting will make the request be retried on the next server in the upstream block. On the same note, how do I force the retry on the failed server first, to avoid cache misses?

Thanks,
Gopala

Conversion Scripts (no replies)

Hi,
Does anyone have any scripts to convert an F5 config to an Nginx (reverse proxy) config?

Thanks..

Bug re: openssl-1.0.1 (4 replies)

Hi All
I'm trying to use nginx to also proxy to OWA (Outlook Web App). I am getting the error
*peer closed connection in SSL handshake while SSL handshaking to upstream*

I have read that this is due to a bug and that the solution is to downgrade
to openssl 1.0

I don't want to downgrade because I would want users to be able to connect
using TLS-1.1 and 1.2 and my understanding is that support for these
protocols was introduced in openssl-1.0.1

So my question is: Is this a bug in nginx or in openssl? If nginx, has it
been fixed yet or will it be soon?

Nginx restart/reload not working (no replies)

I have compiled nginx from source and I think there is something wrong with my init script. I changed the error log level from debug to crit, but the error log still showed [debug] entries. I had to killall nginx and then run service nginx start to start nginx again.

#!/bin/sh
#
# nginx - this script starts and stops the nginx daemon
#
# chkconfig: - 85 15
# description: Nginx is an HTTP(S) server, HTTP(S) reverse \
# proxy and IMAP/POP3 proxy server
# processname: nginx
# config: /etc/nginx/nginx.conf
# config: /etc/sysconfig/nginx
# pidfile: /var/run/nginx.pid

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ "$NETWORKING" = "no" ] && exit 0

nginx="/usr/sbin/nginx"
prog=$(basename $nginx)

NGINX_CONF_FILE="/etc/nginx/nginx.conf"

[ -f /etc/sysconfig/nginx ] && . /etc/sysconfig/nginx

lockfile=/var/lock/subsys/nginx

make_dirs() {
    # make required directories
    user=`$nginx -V 2>&1 | grep "configure arguments:" | sed 's/[^*]*--user=\([^ ]*\).*/\1/g' -`
    if [ -z "`grep $user /etc/passwd`" ]; then
        useradd -M -s /bin/nologin $user
    fi
    options=`$nginx -V 2>&1 | grep 'configure arguments:'`
    for opt in $options; do
        if [ `echo $opt | grep '.*-temp-path'` ]; then
            value=`echo $opt | cut -d "=" -f 2`
            if [ ! -d "$value" ]; then
                # echo "creating" $value
                mkdir -p $value && chown -R $user $value
            fi
        fi
    done
}

start() {
    [ -x $nginx ] || exit 5
    [ -f $NGINX_CONF_FILE ] || exit 6
    make_dirs
    echo -n $"Starting $prog: "
    daemon $nginx -c $NGINX_CONF_FILE
    retval=$?
    echo
    [ $retval -eq 0 ] && touch $lockfile
    return $retval
}

stop() {
    echo -n $"Stopping $prog: "
    killproc $prog -QUIT
    retval=$?
    echo
    [ $retval -eq 0 ] && rm -f $lockfile
    return $retval
}

restart() {
    configtest || return $?
    stop
    sleep 1
    start
}

reload() {
    configtest || return $?
    echo -n $"Reloading $prog: "
    killproc $nginx -HUP
    RETVAL=$?
    echo
}

force_reload() {
    restart
}

configtest() {
    $nginx -t -c $NGINX_CONF_FILE
}

rh_status() {
    status $prog
}

rh_status_q() {
    rh_status >/dev/null 2>&1
}

case "$1" in
    start)
        rh_status_q && exit 0
        $1
        ;;
    stop)
        rh_status_q || exit 0
        $1
        ;;
    restart|configtest)
        $1
        ;;
    reload)
        rh_status_q || exit 7
        $1
        ;;
    force-reload)
        force_reload
        ;;
    status)
        rh_status
        ;;
    condrestart|try-restart)
        rh_status_q || exit 0
        ;;
    *)
        echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload|configtest}"
        exit 2
esac





nginx version: nginx/1.7.9
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_spdy_module --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_random_index_module --with-http_gzip_static_module --with-http_stub_status_module --with-debug

HTTPS Load Test (3 replies)

Hi Folks,
I am trying to get some performance numbers for nginx by sending HTTP and HTTPS requests. My aim is to compare CPU usage and connections/sec between HTTP and HTTPS requests.

In the process, I need to verify certain certificates/keys needed for SSL. Are there any tools which can help generate the load under the following conditions:

1. Keepalive/persistent HTTP client support.
2. Options to verify the certificates/keys/CA chain certs.

Thanks,
Jagannath

nginx call external api (no replies)

Hi @all,
I need some help with the following situation: we use nginx as a reverse proxy for Microsoft Exchange OWA / ActiveSync.

All is working so far, but since yesterday we have a new firewall (Palo Alto) which supports "User-ID", meaning that the remote IP is connected to the domain\username. That means that all non-Microsoft devices (Apple, Linux) can also use user-based policies in the firewall.

Now the problem is that the username which is accessing Exchange is bound to the proxy IP and not to the client IP.

There exists a Palo Alto API which supports manual mapping. My idea was to use the variables $remote_addr and $remote_user to get this running, but I have no idea how to call the API.

An example looks like this:
https://<Firewall-IPaddress>/api/?type=user-id&key=<Key Value>&action=set&vsys=vsys1&cmd=<uid-message><version>1.0</version><type>update</type><payload><login><entry name="pan\sam1" ip="192.168.141.82"/></login></payload></uid-message>

"pan\sam1" has to be replaced by $remote_user and the ip by $remote_addr, right?

But which is the right place in the config to start the API call? My config looks similar to this: forum.nginx.org/read.php?11,252590,252590

Thanks a lot in advance,
Uwe