Channel: Nginx Forum - Nginx Mailing List - English

remote_addr not set using x-real-ip (1 reply)

Hi All,

I would just like to check what mistake I made in implementing the realip
module. I'm using nginx 1.6.2 with the realip module enabled:

nginx -V
nginx version: nginx/1.6.2
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fstack-protector
--param=ssp-buffer-size=4 -Wformat -Wformat-security
-Werror=format-security -D_FORTIFY_SOURCE=2'
--with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro'
--prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf
--http-log-path=/var/log/nginx/access.log
--error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock
--pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit
--with-ipv6 --with-http_ssl_module --with-http_stub_status_module
--with-http_realip_module


I have the following entries in nginx.conf:

real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;
real_ip_recursive on;

and I added the following to format my logs:

log_format custom_logs '"$geoip_country_code" - "$http_x_forwarded_for" - "$remote_addr"';

which gives me these results:

"-" - "172.16.8.39, 102.103.104.105" - "172.16.8.39" -
"-" - "172.16.23.72, 203.204.205.206" - "172.16.23.72"
"-" - "172.16.163.36, 13.14.15.16" - "172.16.163.36"

The first column does not match any country code in the GeoIP database,
since the request is detected as coming from the private IP (this
country's ISP seems to have a proxy that sends the private IP).

If the realip module is working, I should be seeing the source IP in
$remote_addr in the logs. Is that correct? Please advise if anyone has
encountered the same issue. Thank you in advance.
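
A minimal sketch of what probably needs to change (the 172.16.0.0/12 range is an assumption for this environment): with real_ip_recursive on, nginx walks X-Forwarded-For from right to left and stops at the first address not matched by set_real_ip_from. Trusting 0.0.0.0/0 means every address matches, so nginx ends up at the leftmost (private) entry.

```nginx
# Sketch: trust only your own proxy tier, not the whole Internet.
# The range below is an assumed placeholder for the internal proxies
# that actually connect to this nginx.
real_ip_header    X-Forwarded-For;
real_ip_recursive on;
set_real_ip_from  172.16.0.0/12;
```

With this, "172.16.8.39, 102.103.104.105" should resolve $remote_addr to 102.103.104.105, the first address from the right that falls outside the trusted range.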

Regards,
Ron
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

HttpLuaModule - SPDY seems fully supported now? (no replies)

Hi,

in the HttpLuaModule docs it is written that SPDY mode is not fully supported yet: http://wiki.nginx.org/HttpLuaModule#SPDY_Mode_Not_Fully_Supported
Specifically, that "ngx.location.capture()" does not work yet.

However, I ran some code that uses ngx.location.capture() with SPDY, and everything worked (both SPDY and the Lua code).

So is it possible that SPDY mode now works fully under ngx_lua, and only the documentation has not been updated?

nginx version: openresty/1.7.4.1

Bug or feature (1 reply)

Dear Reader.

I have set up only mod_proxy

http://nginx.org/en/docs/http/ngx_http_proxy_module.html

.....
proxy_pass $my_upstream;
....


no mod_upstream.

http://nginx.org/en/docs/http/ngx_http_upstream_module.html

My log format looks like this:

############
log_format upstream_log '$remote_addr [$time_local] '
    '"$request" $status $body_bytes_sent '
    'up_resp_leng $upstream_response_length up_stat $upstream_status '
    'up_resp_time $upstream_response_time request_time $request_time';
############

Is this expected behavior ;-)?

Cheers
Aleks


Nginx Supports SLES 11? (3 replies)

Hi,

According to http://nginx.org/en/linux_packages.html, nginx only supports
SLES 12. Can nginx also run on SLES 11?

Thanks,
Mei Ken


Proxy cache of X-Accel-Redirect, how? (no replies)

Hi!
I tried to cache the X-Accel-Redirect responses from the Phusion Passenger application server by using a second proxy layer, without success (I followed the hint at http://forum.nginx.org/read.php?2,241734,241948#msg-241948).

Configuration:
1) Application server (Phusion Passenger)
adds X-Accel-Redirect header to response
sends to
2) NGINX server
>> tries to cache <<
proxy_ignore_headers X-Accel-Redirect;
proxy_pass_header X-Accel-Redirect;
passenger_pass_header X-Accel-Redirect;

sends to
3) NGINX server
delivers file

But caching of the request on server (2) does not work.
Any idea?
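
For comparison, a minimal sketch of what the caching layer (2) would need (zone name, path, and upstream name are placeholders; whether this matches the Passenger setup is an assumption). Note nginx only caches when the response is cacheable, so either the application must send caching headers or proxy_cache_valid must be set explicitly:

```nginx
proxy_cache_path /var/cache/nginx/accel keys_zone=accel:10m;

server {
    listen 8081;                                     # layer (2)
    location / {
        proxy_pass           http://passenger_app;   # layer (1), placeholder
        proxy_cache          accel;
        proxy_cache_valid    200 10m;                # cache even without headers
        proxy_ignore_headers X-Accel-Redirect;       # don't act on the header here
        proxy_pass_header    X-Accel-Redirect;       # still forward it to layer (3)
    }
}
```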

default_server directive not respected (2 replies)

I have multiple files each with a config for a different vhost.
In one of these config files (included from the main nginx config file) I set
the default_server directive:

server {

listen 80;
listen 443 ssl default_server spdy;
server_name 188.166.X.XXX;
root /var/www/default;
index index.php index.html;
...
}

.... but it's not respected. If I point the A record of a domain that I didn't
add in any nginx server block at this server, the first server block in
alphabetical order is picked up (instead of the default_server).
Why?
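
The likely explanation, sketched below: default_server is a property of each listen socket, and in the posted config it is set only on port 443. A plain-HTTP request on port 80 therefore falls through to the first server block nginx loads (alphabetical include order). Setting it on both listeners should fix this:

```nginx
server {
    listen 80 default_server;
    listen 443 ssl default_server spdy;
    server_name _;                    # catch-all, matches no real name
    root  /var/www/default;
    index index.php index.html;
}
```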

Nginx with php configuration how to block all requests/urls other than two? (5 replies)

So I use nginx with PHP, and I have the following two URLs that I want to allow access to on the subdomain.

The full URL would be sub1.domain.com/index.php?option=com_hwdmediashare&task=addmedia.upload&base64encryptedstring

if ( $args ~ 'option=com_hwdmediashare&task=addmedia.upload([a-zA-Z0-9-_=&])' ) {
}

And

sub1.domain.com/media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf

But I can't figure out how to block all other traffic/requests on the subdomain apart from those two URLs. Can anyone help me understand nginx location blocks so I can block access to all links apart from those two?
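
A minimal sketch of one way to do this (the docroot and PHP socket path are placeholders, and the query-string regex is adapted from the post): deny everything by default and open exact locations for the two allowed URLs.

```nginx
server {
    server_name sub1.domain.com;
    root /var/www/sub1;                          # placeholder docroot

    # 1) the exact SWF asset, served as a static file
    location = /media/com_hwdmediashare/assets/swf/Swiff.Uploader.swf { }

    # 2) index.php, but only with the expected query arguments
    location = /index.php {
        if ($args !~ "option=com_hwdmediashare&task=addmedia\.upload") {
            return 403;
        }
        include       fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass  unix:/var/run/php-fpm.sock;  # placeholder backend
    }

    # everything else on the subdomain is blocked
    location / {
        return 403;
    }
}
```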

rbtree in ngx_http_upstream_fair_module.c (no replies)

hi..

Just wanted to ensure my understanding of rbtree usage in Grzegorz Nosek's upstream fair load balancer is correct. I believe the rbtree is necessary because when nginx.conf is reloaded workers may continue to reference upstream server metadata from earlier versions aka generations of the nginx.conf file. The rbtree stores the metadata until none of the workers reference it. The extra complexity is needed because this load balancer tracks server load across requests and nginx.conf reloads. Does this seem accurate? If so, is this currently considered a recommended way to handle this situation?

thanks

Google QUIC support in nginx (1 reply)

Any plans to support Google QUIC[1] in nginx?

[1] http://en.wikipedia.org/wiki/QUIC

Limit incoming bandwith with nginx !! (1 reply)

Hi,

is there a way we can limit incoming bandwidth (from remote servers to the
Linux box) using nginx? Nginx is forwarding user requests to a different URL
and downloading videos locally, due to which the server's incoming port is
choking at 1Gbps for a large number of concurrent users. If we could lower
incoming bandwidth to 500Mbps it would surely help us.
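
nginx cannot shape aggregate inbound bandwidth, but since 1.7.7 the proxy_limit_rate directive caps how fast nginx reads each response from a proxied server. A sketch (upstream name, location, and rate are placeholders); note the limit is per connection, so total inbound is roughly rate × concurrent upstream connections, and true aggregate shaping belongs in tc/qdisc on the host:

```nginx
location /videos/ {
    proxy_pass       http://remote_video_origin;  # placeholder upstream
    proxy_limit_rate 524288;   # ~512 KB/s per upstream connection
}
```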

Regards.
shahzaib

Will this work, is it the best way? (no replies)

Hi,

Slightly complicated setup with 2 nginx servers.

server1 has a public IPv4 address and uses proxy_pass to reach server2 over
IPv6; server2 only has a public IPv6 address, and it in turn has various
upstreams for each subdomain.

IPv6-capable browsers connect directly to server2; those with only IPv4
will connect via server1.

I'm currently considering something like the below config.


server1 - proxy all subdomain requests to upstream ipv6 server:

http {
    server {
        server_name *.example.com;
        location / {
            proxy_pass http://[fe80::1337];
        }
    }
}

server2:

http {
    server {
        server_name ~^(?<subdomain>\w+)\.example\.com$;
        location / {
            proxy_pass http://$subdomain;
        }
    }

    upstream subdomain1 {
        server 127.0.0.1:1234;
    }
}

The theory here is that each subdomain name would match an upstream of the
same name, meaning that adding another upstream would only need an
upstream{} block configured and would then work automatically.

I realise there's dns stuff etc but that's out of scope for this list
and I can deal with that.

Does this seem sound? It's not going to see major usage but hopefully
this will reduce work when adding new upstreams.

If you've a better way to achieve this please let me know.

Steve.


Why does fastcgi_keep_conn default to off? (1 reply)

Why does fastcgi_keep_conn default to off?
Keeping the connection open seems to be the faster option.
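
For context, fastcgi_keep_conn on only pays off together with a keepalive pool in the upstream block; without one, nginx still opens a fresh connection per request, which may be part of why the default stays off (it changes behavior only when the upstream is configured for it). A sketch with a placeholder socket path:

```nginx
upstream php_backend {
    server unix:/var/run/php-fpm.sock;   # placeholder
    keepalive 8;                         # idle connections kept per worker
}

server {
    location ~ \.php$ {
        fastcgi_pass      php_backend;
        fastcgi_keep_conn on;            # reuse the FastCGI connection
        include           fastcgi_params;
        fastcgi_param     SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```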

Intermittent SSL Handshake Errors (2 replies)

Hi,

We are using round-robin DNS to distribute requests to three servers
all running identically configured nginx. Connections then go upstream
to HAProxy and then to our Rails app.

About two weeks ago, users began to experience intermittent SSL
handshake errors. Users reported that these appeared as
"ssl_error_no_cypher_overlap" in the browser. Most of our reports have
come from Firefox users, although we have seen reports from Safari and
stock Android browser users as well. In our nginx error logs, we began
to see consistent errors across all three servers. They started at
around the same time and no recent modifications were made to hardware
or software:

....
2015/01/13 12:22:59 [crit] 11871#0: *140260577 SSL_do_handshake()
failed (SSL: error:1408A0D7:SSL
routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/13 12:23:09 [crit] 11874#0: *140266246 SSL_do_handshake()
failed (SSL: error:1408A0D7:SSL
routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/13 12:23:54 [crit] 11862#0: *140293705 SSL_do_handshake()
failed (SSL: error:1408A0D7:SSL
routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/13 12:23:54 [crit] 11862#0: *140293708 SSL_do_handshake()
failed (SSL: error:1408A0D7:SSL
routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/13 12:25:18 [crit] 11870#0: *140342155 SSL_do_handshake()
failed (SSL: error:1408A0D7:SSL
routines:SSL3_GET_CLIENT_HELLO:required cipher missing) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
....

Suspecting that this may be related to our SSL configuration in nginx
and a recent update to a major browser, I decided to get us up to
date. Previously we were on CentOS5 and could only use an older
version of OpenSSL with the latest security patches. This meant we
could only support TLSv1.0 and a few of the secure recommended
ciphers. After upgrading to CentOS6 and implementing Mozilla's
recommended configurations for TLSv1.0, TLSv1.1, and TLSv1.2 support,
I am confident that we are following best practices for SSL browser
compatibility and security. Unfortunately this did not fix the issue.
Users began to report a new error in their browser:
"ssl_error_inappropriate_fallback_alert", and this is currently
reflected in our nginx error logs across all three servers:

....

2015/01/31 03:24:33 [crit] 30658#0: *57298755 SSL_do_handshake()
failed (SSL: error:140A1175:SSL
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:35 [crit] 30661#0: *57299105 SSL_do_handshake()
failed (SSL: error:140A1175:SSL
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30657#0: *57300774 SSL_do_handshake()
failed (SSL: error:140A1175:SSL
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30657#0: *57300783 SSL_do_handshake()
failed (SSL: error:140A1175:SSL
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
2015/01/31 03:24:41 [crit] 30661#0: *57300785 SSL_do_handshake()
failed (SSL: error:140A1175:SSL
routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
handshaking, client: *.*.*.*, server: 0.0.0.0:443
....

Thinking that I had ruled out a faulty SSL stack or nginx
configuration, I focused on monitoring the network connections on
these servers. ESTABLISHED connections are currently at 13k and
TIME_WAIT is at 94k on one server, if that gives any indication to the
type of connections we are dealing with. The other two have very
similar stats. This is typical for peak hours of traffic. I tried
tuning kernel params: lowering tcp_fin_timeout, increasing
tcp_max_syn_backlog, increasing the range of ip_local_port_range,
turning on tcp_tw_reuse, and other popular tuning practices. Nothing
has helped so far and more users continue to contact us about issues
using our site.

I've exhausted my ideas and I'm not quite sure what's gone wrong. I
would be extremely appreciative of any guidance list members could
provide. Below are more technical details about our installation and
configuration of nginx.
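
One diagnostic worth adding while investigating (a sketch, not part of the poster's configuration): log the negotiated protocol and cipher per request with the standard $ssl_protocol and $ssl_cipher variables, so successful handshakes can be correlated with the browser types that report failures.

```nginx
log_format ssl_debug '$remote_addr [$time_local] "$request" $status '
                     '"$http_user_agent" proto=$ssl_protocol cipher=$ssl_cipher';

server {
    listen 443 ssl;
    server_name example.com;                          # placeholder
    ssl_certificate     /etc/nginx/ssl/example.pem;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/example.key;   # placeholder
    access_log /var/log/nginx/ssl_debug.log ssl_debug;
}
```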

nginx -V output:

nginx version: nginx/1.6.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC)
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx --with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module
--with-http_dav_module --with-http_flv_module --with-http_mp4_module
--with-http_gunzip_module --with-http_gzip_static_module
--with-http_random_index_module --with-http_secure_link_module
--with-http_stub_status_module --with-http_auth_request_module
--with-mail --with-mail_ssl_module --with-file-aio --with-ipv6
--with-http_spdy_module --with-cc-opt='-O2 -g -pipe
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m64 -mtune=generic'

nginx config files:

--- /etc/nginx/nginx.conf ---
user nginx;
worker_processes 12;

error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
worker_connections 50000;
}

http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format with_cookie '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$cookie_FL"';

access_log /var/log/nginx/access.log;

sendfile on;
tcp_nopush on;
tcp_nodelay on;

keepalive_timeout 65;

gzip on;
gzip_http_version 1.0;
gzip_comp_level 2;
gzip_proxied any;
gzip_types text/plain text/html text/css application/x-javascript
text/xml application/xml application/xml+rss text/javascript
application/json;
gzip_vary on;

server_names_hash_bucket_size 64;


set_real_ip_from *.*.*.*;
real_ip_header X-Forwarded-For;

include /etc/nginx/upstreams.conf;
include /etc/nginx/sites-enabled/*;
}

--- /etc/nginx/sites-enabled/fl-ssl.conf ---

server {
root /var/www/fl/current/public;

listen 443;
ssl on;
ssl_certificate /etc/nginx/ssl/wildcard.fl.pem;
ssl_certificate_key /etc/nginx/ssl/wildcard.fl.key;
ssl_session_timeout 5m;
ssl_session_cache shared:SSL:50m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_prefer_server_ciphers on;

server_name **********.com;

access_log /var/log/nginx/fl.ssl.access.log with_cookie;
client_max_body_size 400M;
index index.html index.htm;

if (-f $document_root/system/maintenance.html) {
return 503;
}

# Google Analytics
if ($request_filename ~* ga.js$) {
rewrite .* http://www.google-analytics.com/ga.js permanent;
break;
}

if ($request_filename ~* /adgear.js/current/adgear_standard.js) {
rewrite .* http://**********.com/adgear/adgear_standard.js permanent;
break;
}

if ($request_filename ~* /adgear.js/current/adgear.js) {
rewrite .* http://**********.com/adgear/adgear_standard.js permanent;
break;
}

if ($request_filename ~* __utm.gif$) {
rewrite .* http://www.google-analytics.com/__utm.gif permanent;
break;
}

if ($host ~* "www") {
rewrite ^(.*)$ http://*********.com$1 permanent;
break;
}

location / {
location ~* \.(eot|ttf|woff)$ {
add_header Access-Control-Allow-Origin *;
}

if ($request_uri ~* ".(ico|css|js|gif|jpe?g|png)\?[0-9]+$") {
expires max;
break;
}

# needed to forward user's IP address to rails
proxy_set_header X-Real-IP $remote_addr;

# needed for HTTPS
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_set_header X-FORWARDED_PROTO https;
proxy_redirect off;
proxy_max_temp_file_size 0;

if ($request_uri ~* /polling) {
proxy_pass http://ssl_polling_upstream;
break;
}

if ($request_uri = /upload) {
proxy_pass http://rest_stop_upstream;
break;
}

if ($request_uri = /crossdomain.xml) {
proxy_pass http://rest_stop_upstream;
break;
}

if (-f $request_filename/index.html) {
rewrite (.*) $1/index.html break;
}

# Rails 3 is for old testing stuff... We don't need this anymore
#if ($http_cookie ~ "rails3=true") {
# set $request_type '3';
#}

if ($request_uri ~* /polling) {
set $request_type '${request_type}P';
}


if ($request_type = '3P') {
proxy_pass http://rails3_upstream;
break;
}


if ($request_type = 'P') {
proxy_pass http://ssl_polling_upstream;
break;
}

if (!-f $request_filename) {
set $request_type '${request_type}D';
}

if ($request_type = 'D') {
proxy_pass http://ssl_fl_upstream;
break;
}

if ($request_type = '3D') {
proxy_pass http://rails3_upstream;
break;
}
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
}

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

buffering / uploading large files (3 replies)

Hi, how can I tell nginx not to buffer clients' requests? I need this capability to upload files larger than nginx's maximum buffering size. I got an nginx "unknown directive" error when I tried the fastcgi_request_buffering directive. Is the directive supported and am I missing a module in my nginx build? I am running nginx 1.7.9. Thank you!
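
For what it's worth, unbuffered request bodies (fastcgi_request_buffering / proxy_request_buffering) were only added in nginx 1.7.11, so on 1.7.9 the directive is genuinely unknown regardless of which modules are built in. A sketch of the relevant block once on 1.7.11+ (the socket path is a placeholder):

```nginx
location ~ \.php$ {
    fastcgi_request_buffering off;   # stream the body as it arrives (1.7.11+)
    client_max_body_size      0;     # don't reject large uploads outright
    include                   fastcgi_params;
    fastcgi_pass              unix:/var/run/php-fpm.sock;  # placeholder
}
```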

SSL3_CTX_CTRL:called a function you should not call (1 reply)

nginx 1.6.2 + libressl 2.1.3

>tail -f [...]/port-443/*.log

==> stderr.log <==
2015/02/01 01:35:34 [alert] 15134#0: worker process 15139 exited on signal 11
2015/02/01 01:35:34 [alert] 15134#0: shared memory zone "SSL" was locked by 15139
2015/02/01 01:35:42 [alert] 15134#0: worker process 15138 exited on signal 11
2015/02/01 01:35:42 [alert] 15134#0: shared memory zone "SSL" was locked by 15138
2015/02/01 01:35:49 [alert] 15134#0: worker process 15140 exited on signal 11
2015/02/01 01:35:49 [alert] 15134#0: shared memory zone "SSL" was locked by 15140
2015/02/01 01:36:20 [alert] 15134#0: worker process 15584 exited on signal 11
2015/02/01 01:36:20 [alert] 15134#0: shared memory zone "SSL" was locked by 15584
2015/02/01 01:36:27 [alert] 15134#0: worker process 15586 exited on signal 11
2015/02/01 01:36:27 [alert] 15134#0: shared memory zone "SSL" was locked by 15586
2015/02/01 01:36:34 [alert] 15134#0: worker process 15585 exited on signal 11
2015/02/01 01:36:34 [alert] 15134#0: shared memory zone "SSL" was locked by 15585

>tail -f [...]/vhost_123/port-443/*.log

==> stderr.log <==
2015/02/01 01:36:13 [alert] 15584#0: *54 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
2015/02/01 01:36:20 [alert] 15586#0: *55 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443
2015/02/01 01:36:27 [alert] 15585#0: *56 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443

patch to src/event/ngx_event_openssl.c (nginx 1.6.2) (no replies)

nginx-1.6.2

>make

[...]
src/event/ngx_event_openssl.c:2520:9: error: implicit declaration of function 'RAND_pseudo_bytes' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
RAND_pseudo_bytes(iv, 16);
^
1 error generated.


patch:

perl -i.bak -0p -e 's|(^#include <ngx_event.h>).*(typedef struct)|$1\n#include <openssl\/rand\.h>\n\nint RAND_bytes\(unsigned char \*buf, int num\);\nint RAND_pseudo_bytes\(unsigned char \*buf, int num\);\n\n$2|ms' ./src/event/ngx_event_openssl.c;

EC_GOST_2012_Test (warning) (no replies)

nginx-1.6.2

>make

[...]
ec/ec_curve.c:2918:2: warning: unused variable '_EC_GOST_2012_Test' [-Wunused-const-variable]
_EC_GOST_2012_Test = {
^
1 warning generated.


Perhaps its defining block is best moved to the 1.7 branch.

Slow downloads over SSL (1 reply)

Hi,

I'm trying to find answers to a problem that I'm currently experiencing on all my servers. Downloads offered over HTTPS are at least 4 times slower than those delivered over HTTP. All these servers are running nginx/1.6.2. Here is my nginx.conf in case someone has experienced something similar and could give me a hint. By the way, when I say 4x slower I'm being optimistic... I can download 4-5MB/s over HTTP, while HTTPS downloads are 600-700KB/s at the fastest I've seen.

user www-data;
worker_processes 2;
pid /run/nginx.pid;
worker_rlimit_nofile 4096;

events {
worker_connections 1024;
multi_accept on;
use epoll;
}

http {

# SSL Configuration
###################
ssl_buffer_size 8k;
ssl_session_cache shared:SSL_CACHE:20m;
ssl_session_timeout 4h;
ssl_session_tickets on;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM;
ssl_prefer_server_ciphers on;


# Custom Settings
#################

open_file_cache max=10000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
charset UTF-8;

client_body_buffer_size 128K;
client_header_buffer_size 1k;
client_max_body_size 25m;
large_client_header_buffers 4 8k;

fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;
fastcgi_read_timeout 120s;

client_body_timeout 20;
client_header_timeout 20;
keepalive_timeout 25;
send_timeout 20;
reset_timedout_connection on;


# Basic Settings
################

sendfile on;
tcp_nopush on;
tcp_nodelay on;
types_hash_max_size 2048;
server_tokens off;

server_names_hash_bucket_size 64;
server_name_in_redirect off;

include /etc/nginx/mime.types;
default_type application/octet-stream;


# Logging Settings
##################

access_log off;
error_log /var/log/nginx/error.log;


# Gzip Settings
###############

gzip on;
#gzip_disable "msie6";
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 5;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript;


# Virtual Host Configs
######################

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
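
One knob in the posted config worth testing (a suggestion, not a diagnosis): ssl_buffer_size is set to 8k, below the 16k default; smaller TLS records favor time-to-first-byte over bulk throughput, so for large downloads a larger buffer may help. It is also worth confirming that the negotiated cipher has hardware support (AES-NI) on these servers.

```nginx
# Inside the http block, for bulk-download workloads:
ssl_buffer_size 16k;   # default; larger records = less per-record overhead
```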

Weird nginx SSL connection issues (no replies)

OK, I have an incredibly weird nginx connection issue.

I have a cluster of boxes that are responsible for terminating SSL requests
and passing them to a local haproxy instance for further routing. I have
corosync/pacemaker setup to manage the IP addresses and failover instances
if there’s an issue.

This server has been running fine for a long time, but we recently had to
reboot because of the GHOST stuff. Before we did that, we did an apt-get
upgrade to get to the latest Debian Wheezy packages, including a new nginx
(1.6.2), openssl, kernel, and just about everything else.

After that happened, we started seeing connection issues to the nginx that
does SSL termination. When it was happening, about 50% of our requests
were timing out (iOS/Android clients). I was testing manually using curl
when it was happening, and we were seeing huge fluctuations in the time it
takes to connect. I saw a lot of connections just timing out completely, in
combination with connections take 1s, 3s, 15s, 30s, etc…

When this issue was happening to nginx, haproxy on the same box was
unaffected, tested by curling every second from a box close to it, logging
the results and verifying results. So, it seemed to just be SSL with nginx.

Now that our peak load is down, it’s not as big an issue, but we are still
seeing connection issues when I curl, just more like 1-3s typically, just
not as many. Since we’ve had some time to experiment, I’ve gathered more
information that makes no sense to me.

Almost all the traffic was setup to go to the address managed by corosync.
When I set up my curl tests to run every second, I see the timeouts. So, I
tried something. I bound the main ip address of the NIC to nginx, reloaded,
and redid the same test, but pointed the curl to go to the main ip address.
As soon as I did that, my curl tests never saw a single issue and the
connect phase never takes more than 2ms and no timeouts.

So, I started thinking it was the corosync IP, so I sent all our traffic to
go to the main nic ip address that just tested fine, and once the normal
traffic levels switched over to main nic, I started seeing curl timeouts
now that it had traffic. So, I then started curling the IP from corosync
that used to be primary, and now IT has no connection issues.

So, I have connection issues to nginx but only on the IP address that takes
the traffic. nginx on a different IP on the same NIC is fine. haproxy on
the same NIC is fine.

What the heck? Struggling to think of anything I could tweak. This doesn’t
make sense, but I have triple checked my info, and it’s legit.
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

How to Handle "data sent" or "connection closed" event in Module ? (no replies)

Hi.
I'm developing an nginx module where I need to run a handler once all data has been sent to the client, or when the client closes the connection.
For example I have
ngx_http_finalize_request(r, ngx_http_output_filter(r, out_chain));

where out_chain contains over 700KB of data.
I can't find where to add a function that handles the event that all 700KB have been sent to the client, or that the client closed the connection.

As I understand it, nginx does not send all 700KB at once; sending takes several event-loop iterations.

So is there a function or event for handling a "data sent" event?

Thanks.