
another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue (no replies)

Hello All,

I have another "bind() to 0.0.0.0:80 failed (98: Address already in use)" issue.

I am working on a minimal system that includes nginx only. System startup time and readiness time are important. While testing, I noticed that sometimes the system boots within 500 ms and sometimes it takes around 3 seconds. On further probing I found that nginx takes a varying amount of time to start, which costs an extra 2.5 seconds. The error in those cases is "bind() to 0.0.0.0:80 failed (98: Address already in use)".

A few of my observations:
1. No other process is using that port; there is no other web server or application running on the system.
2. The issue is not limited to nginx restarts, where nginx might not have shut down correctly and might itself still hold the port. nginx even fails during system start, in the cases that cause the longer boot time.
3. I use customized kernels, but the kernel shouldn't be the culprit either, because it sometimes works on those kernels too. Failures are more frequent on the customized kernel than on the stock kernel: roughly 70% versus 30%, but the system both works and fails on each.
4. Start/stop scripts always exit with success status "0".
5. I tested nginx in a restart loop, with a 1-second sleep before and after each start and stop. Failure is random.
6. Worse, nginx is actually running even though the error says bind failed: I can connect to it, access the default web page, and it is listed as listening in netstat.

Output of netstat -ntl is at: http://pastebin.com/26b6KNAZ

Error Log is at: http://pastebin.com/w0y8aa9p

This is a customized system, a Debian derivative, that I am working on. System-wise, everything is consistent: I use the same kernel and the same system image with the same parameters, and it sometimes works and sometimes fails.

nginx -t gives:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
So the configuration shouldn't be a problem.

The configuration file is the default and is available at:
http://pastebin.com/iRFfW3UE

Process listing after nginx startup: http://pastebin.com/0vB19rLq
Process listing after nginx stop: http://pastebin.com/iQafxjiF

Any pointer to debug the issue would be very helpful.

Regards,
sum-it

nginx X-backend upstream hostname (no replies)

Hi,

I have the following configuration:
nginx 1.10.3 installed on Ubuntu 16.04

one upstream


upstream backend {
    server app01.local.net:81;
    server app02.local.net:81;
    server app03.local.net:81;
}


one vhost that does proxy_pass http://backend;

I also have an old nginx setup that was done some time ago; on every request it adds a header X-Backend: app01, depending on which backend the request was sent to.
I tried to reproduce that setup without success; I checked all the files but did not find any configuration that sets X-Backend on the nginx side.
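
For reference, a minimal sketch of one way such a header can be produced on the nginx side, using the built-in $upstream_addr variable (the location is illustrative; $upstream_addr holds the chosen backend's IP:port rather than its hostname):

location / {
    proxy_pass http://backend;
    # expose which upstream served this request (value is IP:port)
    add_header X-Backend $upstream_addr always;
}

If the old setup emitted short names like app01, it may instead have used a map from $upstream_addr to names, a third-party module such as headers-more, or the backends themselves may be setting the header.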

Can someone help me with this issue?

Thanks.

Vuko



Websocket, set Sec-Websocket-Protocol (no replies)

Hello, guys!
How can I set Sec-WebSocket-Protocol in the config?
I've tried proxy_set_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp"; and add_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp";.
In the response, I'm not getting the 'Sec-WebSocket-Protocol' header.
What can be wrong?
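
For reference, a minimal sketch of a WebSocket proxy location following the standard Upgrade pattern (path and upstream name are placeholders). The Sec-WebSocket-Protocol response header is chosen by the backend during the handshake, so it is set here on the proxied request instead; also note that add_header only applies to a fixed set of response codes (200, 204, 301, 302, ...) that does not include 101 Switching Protocols, which may explain why it had no effect:

location /ws/ {
    proxy_pass http://backend;
    proxy_http_version 1.1;
    # required for the WebSocket handshake to reach the backend
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    # offer the subprotocols; the backend picks one and echoes it
    # back in its 101 handshake response
    proxy_set_header Sec-WebSocket-Protocol "v10.stomp, v11.stomp";
}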

P.S. Nginx 1.6.2

Best regards, Arthur.

Nginx proxy_pass HTTPS/SSL/HTTP2 keepalive (1 reply)

So the nginx documentation says this: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

For HTTP, the proxy_http_version directive should be set to “1.1” and the “Connection” header field should be cleared:

upstream http_backend {
    server 127.0.0.1:8080;

    keepalive 16;
}

server {
    ...

    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        ...
    }
}


But does it also apply for HTTPS/HTTP2, since proxy_http_version gets set to 1.1?

Example :

upstream https_backend {
    server 127.0.0.1:443;

    keepalive 16;
}

server {
    listen 443 ssl http2;

    location /https/ {
        proxy_pass https://https_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

One NGINX server to 2 backend servers (no replies)

Hello everybody,

I have installed a dedicated NGINX server based on Ubuntu Core.

I want to define this behavior:

NGINX Server (Public IP) >>> listening on port 80 >>> redirect to a LAN Server on port 80 (http://mylocalserver/virtualhost1)
NGINX Server (Public IP) >>> listening on port 81 >>> redirect to a LAN Server on port 80 (http://mylocalserver/virtualhost2)

My local virtualhost on the backend server is reachable (i.e. http://mylocalserver/virtualhost1),
but my second virtualhost is not reachable (i.e. http://mylocalserver/virtualhost2).

It is as if the network port were closed, yet my firewall is accepting the flow.

Here is my configuration, in case you have an idea why the second virtualhost is not reachable:

##NGINX.CONF##

user www-data;
worker_processes 2;
events {
    worker_connections 19000;
}
worker_rlimit_nofile 40000;
http {

    client_body_timeout 5s;
    client_header_timeout 5s;
    keepalive_timeout 75s;
    send_timeout 15s;
    gzip on;
    gzip_disable "msie6";
    gzip_http_version 1.1;
    gzip_comp_level 5;
    gzip_min_length 256;
    gzip_proxied any;
    gzip_vary on;
    gzip_types
        application/atom+xml
        application/javascript
        application/json
        application/rss+xml
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml
        font/opentype
        image/svg+xml
        image/x-icon
        text/css
        text/plain
        text/x-component;

    client_max_body_size 100k;
    client_body_buffer_size 128k;
    client_body_in_single_buffer on;
    client_body_temp_path /var/nginx/client_body_temp;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 4k;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}

#PROXY.CONF#
proxy_redirect default;    # note: "proxy_redirect on;" is not valid syntax
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_hide_header X-Powered-By;
proxy_intercept_errors on;
proxy_buffering on;

proxy_cache_key "$scheme://$host$request_uri";
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache:10m inactive=7d max_size=700m;

#VIRTUALHOST 1
server {
    listen 80;
    server_name virtualhost1;
}

#VIRTUALHOST 2
server {
    listen 81;
    server_name virtualhost2;
}
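
As posted, neither server block contains a location or a proxy_pass, so nginx answers ports 80 and 81 with its defaults and forwards nothing. A minimal sketch of what the intended proxying might look like (assuming the backend vhosts are reachable as http://mylocalserver/virtualhost1 and http://mylocalserver/virtualhost2, and that the proxy settings above live in /etc/nginx/proxy.conf):

#VIRTUALHOST 1
server {
    listen 80;
    location / {
        include /etc/nginx/proxy.conf;
        proxy_pass http://mylocalserver/virtualhost1/;
    }
}

#VIRTUALHOST 2
server {
    listen 81;
    location / {
        include /etc/nginx/proxy.conf;
        proxy_pass http://mylocalserver/virtualhost2/;
    }
}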


Could you please help me with this issue?

Thanks so much,

How to cache image urls with query strings? (no replies)

We've recently started delivering image URLs with query strings for cropping, like:

http://images-camping.info/CampsiteImages/116914_Large.jpg?width=453&height=302&mode=crop

We've also successfully been using the NGINX cache for our images *before* adding the query strings.

Unfortunately, with the query strings added, caching no longer works and all requests to the above URL are passed to the upstream server. You can see this by inspecting the HTTP response headers for the above URL: X-Cache-Status is always MISS.

Can anybody point me to the information needed to get caching of resources with query strings working?
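
For reference, a sketch of a cache key that explicitly includes the query string (zone and upstream names are placeholders). Note that the default proxy_cache_key, $scheme$proxy_host$request_uri, already contains the query string, so if the key was never overridden, the constant MISSes may instead come from upstream response headers (Cache-Control, Set-Cookie) that suppress caching:

location /CampsiteImages/ {
    proxy_cache images;
    # cache each crop variant separately
    proxy_cache_key $scheme$proxy_host$uri$is_args$args;
    proxy_pass http://image_backend;
}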

(no subject) (no replies)

Hi Team,

I would like to know how I can configure an nginx LB with SSL termination. In addition, I would like to configure the LB with multiple httpd's behind a single IP. Can you guide me on how to do this with proxy_pass?

Note: I have a single OHS server with two different httpd.conf files listening on two different ports. I need to configure the LB with SSL termination to proxy to those same servers.


A step-by-step guideline would help - thanks.
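
For reference, a minimal sketch of SSL termination in front of the two httpd listeners (IPs, ports, names, and certificate paths are placeholders):

upstream ohs_backend {
    server 192.0.2.10:7777;   # httpd instance 1
    server 192.0.2.10:7778;   # httpd instance 2
}

server {
    listen 443 ssl;
    server_name lb.example.com;

    ssl_certificate     /etc/nginx/ssl/lb.crt;
    ssl_certificate_key /etc/nginx/ssl/lb.key;

    location / {
        # TLS ends here; plain HTTP goes to the OHS listeners
        proxy_pass http://ohs_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}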

Nginx with a ICAP-like front-end (no replies)

Hi, I want to add an ICAP-like front-end validation server V in front of nginx.
The user scenario is like this:

The client usually accesses the real app server R via nginx. With a validation
server V in place, the client request will first be passed to V; V will do
certain validation, and upon success the request will be forwarded to R, and R
will respond directly to the client. Upon failure, the request will be denied.

Is there an easy nginx config which can achieve this? Thanks,
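
For reference, a minimal sketch using the stock ngx_http_auth_request_module, which matches this flow as long as V only needs the request line and headers (the auth subrequest carries no body; names are placeholders). A 2xx from V lets the request through to R, a 401 or 403 denies it:

location / {
    auth_request /_validate;       # ask V first
    proxy_pass http://real_app;    # on success, forward to R
}

location = /_validate {
    internal;
    proxy_pass http://validation_server;
    proxy_pass_request_body off;           # V sees headers only
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}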

- Alder

Using proxy_cache_background_update (2 replies)

Hi all,

I tested the new proxy_cache_background_update feature, which serves stale content while fetching an update in the background.

I ran into the following issue:
- PHP application running on www.example.com
- Root document lives on /index.php

As soon as the cache has expired:
- A client requests http://www.example.com/
- Nginx returns the stale response
- In the background, Nginx fetches http://www.mybackend.com/index.html (index.html instead of index.php or just /)
- The backend server returns a 404 (which is normal)
- The root document remains in the stale state, as Nginx is unable to fetch it properly

As a workaround I included "rewrite ^/index.html$ / break;" to rewrite the /index.html call to a simple / for the backend server.
This works, but is not ideal.

Is there a better way to tell Nginx to just fetch "/"?
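
For reference, the workaround in context; the zone and upstream names are placeholders:

location / {
    proxy_cache my_zone;
    proxy_cache_use_stale updating;
    proxy_cache_background_update on;
    # the background subrequest asks for /index.html, so map it
    # back to / before it reaches the backend
    rewrite ^/index\.html$ / break;
    proxy_pass http://backend;
}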

Thanks,

Jean-Paul Hemelaar

Nginx limit_conn and limit_req for static .js (javascript) .css (stylesheets) images (no replies)

So in the documentation, and from what I see online, everyone is limiting requests to prevent flooding of dynamic pages, video streams, etc.

But when you visit an HTML page, the page loads a lot of different elements: .css, .js, .png, .ico, .jpg files.

To prevent those elements also being flooded by bots or malicious traffic, I was going to do the following.

# In http block
limit_conn_zone $binary_remote_addr zone=addr1:100m;
limit_req_zone $binary_remote_addr zone=two2:100m rate=100r/s;  # style sheets, javascript, etc.
# end http block

# In server location block
location ~* \.(ico|png|jpg|jpeg|gif|swf|css|js)$ {
    limit_conn addr1 10;          # limit open connections from the same IP
    limit_req zone=two2 burst=5;  # limit max number of requests from the same IP
    expires max;
}
# end server location block


Because on my sites I know that a single HTML page will never request more than 100 of those static elements in total, I set the limit_req rate to "rate=100r/s;", i.e. 100 requests per second.

Does anyone have recommended limits for these element types, in case my value is too high or too low? I set it according to roughly how many media files I know can be requested each time an HTML page is rendered.
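
One detail worth checking in the sketch above: limit_req rejects (or delays) anything beyond the burst, not the rate, so with burst=5 a page that fires ~100 asset requests near-simultaneously will see 503s even though rate=100r/s allows them on average. A variant sized for that fan-out (values illustrative):

location ~* \.(ico|png|jpg|jpeg|gif|swf|css|js)$ {
    limit_conn addr1 10;
    # absorb a full page load's worth of parallel asset requests
    # without delaying them, then throttle sustained abuse
    limit_req zone=two2 burst=100 nodelay;
    expires max;
}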

nginx stopping abruptly at a fixed time (2:00 am) repeatedly on CentOS 7.2 (no replies)

Hi ,

Please note that we are using nginx 1.10.2, and on one of our web servers (CentOS 7.2) we observe the errors below and a sudden stop of the nginx service, repeatedly at a fixed time, i.e. 2:00 am. Here are the error lines for reference:

2017/02/26 02:00:01 [alert] 57550#57550: *131331605 open socket #97 left in connection 453
2017/02/26 02:00:01 [alert] 57550#57550: *131334225 open socket #126 left in connection 510
2017/02/26 02:00:01 [alert] 57550#57550: *131334479 open socket #160 left in connection 532
2017/02/26 02:00:01 [alert] 57550#57550: *131334797 open socket #121 left in connection 542
2017/02/26 02:00:01 [alert] 57550#57550: *131334478 open socket #159 left in connection 552
2017/02/26 02:00:01 [alert] 57550#57550: *131334802 open socket #194 left in connection 633
2017/02/26 02:00:01 [alert] 57570#57570: aborting
2017/02/26 02:00:01 [alert] 57553#57553: aborting
2017/02/26 02:00:01 [alert] 57539#57539: aborting
2017/02/26 02:00:01 [alert] 57550#57550: aborting

Also find below the nginx conf for reference:

worker_processes auto;
events {
    worker_connections 4096;
    use epoll;
    multi_accept on;
}
worker_rlimit_nofile 100001;

http {
    include mime.types;
    default_type video/mp4;
    proxy_buffering on;
    proxy_buffer_size 4096k;
    proxy_buffers 5 4096k;
    sendfile on;
    keepalive_timeout 30;
    tcp_nodelay on;
    tcp_nopush on;
    reset_timedout_connection on;
    gzip off;
    server_tokens off;
    log_format access '$remote_addr $http_x_forwarded_for $host [$time_local] '
                      '$upstream_cache_status '
                      '"$request" $status $body_bytes_sent '
                      '"$http_referer" "$http_user_agent" $request_time';

Note that we have similar servers running the exact same nginx config, but those servers do not produce any such errors. We are also not running any script or cron job at that time.
Kindly help us resolve this issue, and let me know in case any other details are required from my end.

set_real_ip_from, real_ip_header directives in ngx_http_realip_module (no replies)

Hello,
I tried to rate-limit by IPv4 address with the ngx_http_limit_req module and the ngx_http_realip_module; Akamai sends True-Client-IP headers.

According to the ngx_http_realip_module documentation
(http://nginx.org/en/docs/http/ngx_http_realip_module.html), we can write the
set_real_ip_from and real_ip_header directives in the http, server, and
location contexts.

But in the above case (the ngx_http_limit_req key is defined in the http
context), the ngx_http_realip_module directives must be defined before the key
(i.e. the IPv4 address replaced by ngx_http_realip_module) and the
limit_req_zone directive that follows in the http context.

I think it would be better if the documentation explained that the
ngx_http_realip_module directives have to be configured before the
ngx_http_limit_req configuration.
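
A sketch of the ordering described above, in the http context (the CIDR range and zone parameters are illustrative):

# realip first, so $binary_remote_addr holds the address taken from
# True-Client-IP rather than the Akamai edge's address
set_real_ip_from 203.0.113.0/24;   # Akamai edge range (placeholder)
real_ip_header True-Client-IP;

# the limit_req key is then built from the replaced address
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;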

Our environment is Amazon Linux on AWS EC2, and the nginx version is 1.10.1.

If you already plan to improve the documentation, please let me know and I will
check it out.

Thanks.

Nginx using variables / split_clients is very slow (2 replies)

Hi,

I want to use nginx as a reverse proxy for an A/B testing scenario.
nginx should route to two versions of one backend service. The two versions provide the service via different URL paths.
Example:
* x% of all requests to https://edgeservice/myservice should be routed to https://1.2.3.4/myservice,
* the other (100-x)% should be routed to https://4.3.2.1/myservice/withAnotherPath.
For that I wanted to use the split_clients directive, which perfectly matches our requirements.

We have a general nginx configuration that reaches a high throughput (at least 2,000 requests/sec) - unless we specify the routing target via nginx variables.

So, when the routing target is hard-coded in the proxy_pass directive (variant 0), or when the upstream directive is used (variant 1), nginx routes very fast (at least 2,000 req/sec).
Once we use the split_clients directive to specify the routing target (variant 2), or set a variable statically (variant 3), nginx is very slow and reaches only 20-50 requests/sec. All other config parameters are the same across variants.

We did some research (nginx config reference, Google, this forum...) to find a solution for this problem.
Since we have not found any approach, I wanted to ask the mailing list if you have any ideas.
Is there a solution to increase performance when using split_clients, so that we can reach at least 1,000 requests/sec?
Or have we already reached the maximum performance for this scenario?

It would be great if we could use split_clients, since it gives us great flexibility in defining routing rules and lets us route to backend services with different URL paths.

Kind Regards
Lars


nginx 1.10.3 running on Ubuntu Trusty
nginx.conf:
...
http {
    ...

    # variant 1
    upstream backend1 {
        ip_hash;
        server 1.2.3.4;
    }

    # variant 2
    split_clients $remote_addr $backend2 {
        50% https://1.2.3.4/myservice/;
        50% https://4.3.2.1/myservice/withAnotherPath;
    }

    server {
        listen 443 ssl backlog=163840;

        # variant 3
        set $backend3 https://1.2.3.4/myservice;

        location /myservice {
            # V0) this is fast
            proxy_pass https://1.2.3.4/myservice;

            # V1) this is fast
            proxy_pass https://backend1;

            # V2) this is slow
            proxy_pass $backend2;

            # V3) this is slow
            proxy_pass $backend3;
        }
    }
}
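
One pattern that may be worth testing (an assumption, not a verified fix): when proxy_pass receives a variable containing a full URL, nginx must evaluate it per request, whereas a variable whose value matches the name of a predefined upstream group is looked up among the server groups. A sketch in which split_clients picks an upstream name and a map supplies the per-variant path (names are illustrative):

# variant 2b: split on upstream names instead of URLs
split_clients $remote_addr $ab_backend {
    50% backend_a;
    50% backend_b;
}

map $ab_backend $ab_uri {
    backend_a /myservice;
    backend_b /myservice/withAnotherPath;
}

upstream backend_a { server 1.2.3.4:443; }
upstream backend_b { server 4.3.2.1:443; }

server {
    listen 443 ssl backlog=163840;

    location /myservice {
        # the host part matches an upstream group, so no runtime DNS
        # lookup; note that a URI built from variables replaces the
        # original request URI as is
        proxy_pass https://$ab_backend$ab_uri;
    }
}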

IPv6 upstream problem (2 replies)

Hello!

Currently I have a problem with an upstream over IPv6. For example, I have
an origin with the subdomain dual-stack-ipv4-ipv6.xtremenitro.org.

dual-stack-ipv4-ipv6.xtremenitro.org IN A 192.168.1.1
dual-stack-ipv4-ipv6.xtremenitro.org IN AAAA 2001:xx:xx::1;

My configuration is like this:
$ nginx -V
nginx version: nginx/1.11.10
built by gcc 4.8.5 20150623 (Red Hat 4.8.5-4) (GCC)
built with LibreSSL 2.4.5
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log
--pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx
--group=nginx --with-http_ssl_module --with-openssl=libressl-2.4.5
--with-http_realip_module --with-http_addition_module
--with-http_sub_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_random_index_module
--with-http_stub_status_module --with-http_auth_request_module
--with-http_image_filter_module=dynamic
--with-http_geoip_module=dynamic --with-http_perl_module=dynamic
--with-http_xslt_module=dynamic --add-dynamic-module=ngx_cache_purge
--add-dynamic-module=nginx-module-vts
--add-dynamic-module=headers-more-nginx-module
--add-dynamic-module=ngx_small_light --add-dynamic-module=ngx_brotli
--add-dynamic-module=nginx_upstream_check_module --with-threads
--with-stream=dynamic --with-stream_ssl_module
--with-http_slice_module --with-mail=dynamic --with-mail_ssl_module
--with-file-aio --with-ipv6 --with-http_v2_module --with-cc-opt='-g
-Ofast -march=native -ffast-math -fstack-protector-strong -Wformat
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'


...
resolver 103.52.3.72 ipv6=off;

upstream cf {
    server dual-stack-ipv4-ipv6.xtremenitro.org;
}

.... snip ...

location ~ \.(jpe?g|gif|png|JPE?G|GIF|PNG)$ {
    proxy_pass http://cf;
    proxy_cache_background_update on;
    proxy_cache_use_stale error timeout updating http_500
                          http_502 http_503 http_504;
    proxy_cache_valid 200 302 301 60m;
    proxy_cache images;
    proxy_cache_valid any 3s;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 60s;
    proxy_cache_min_uses 1;
    proxy_ignore_headers Cache-Control Expires;
    proxy_hide_header X-Cache;
    proxy_hide_header Via;
    proxy_hide_header ETag;
}

In the error log, I see that all the errors came from the IPv6 upstream.

2017/02/28 22:13:15 [error] 24079#24079: *429979 upstream timed out
(110: Connection timed out) while connecting to upstream, client:
114.120.233.8, server: dual-stack-ipv4-ipv6.xtremenitro.org, request:
"GET /2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg
HTTP/2.0", subrequest:
"/2015-09/thumbnail_360/wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg",
upstream:
"http://[2600:9000:2031:4000:6:24ba:3100:93a1]:80/2015-09/thumbnail_360/
wd/d7d63419f8ac6b6981cec72c8a6644ea.jpg?of=webp&q=50",
host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: "[REMOVED]"
2017/02/28 22:13:20 [error] 24080#24080: *432226 upstream timed out
(110: Connection timed out) while connecting to upstream, client:
124.153.33.23, server: dual-stack-ipv4-ipv6.xtremenitro.org, request:
"GET /2016-02/thumbnail_360/wd/df4f88d6a5d62427c11e746e187ba527.jpg
HTTP/1.1", subrequest:
"/2016-02/thumbnail_360/wd/df4f88d6a5d62427c11e746e187ba527.jpg",
upstream:
"http://[2600:9000:2031:7e00:6:24ba:3100:93a1]:80/2016-02/thumbnail_360/
wd/df4f88d6a5d62427c11e746e187ba527.jpg?of=webp&q=50",
host: "dual-stack-ipv4-ipv6.xtremenitro.org", referrer: "[REMOVED]"

Any hints, clues, or help would be very much appreciated.
Thanks in advance

Issue with nginx removing the "Connection" header from the HTTP response? (no replies)

Hi nginx folks,

In our system, for some special requests, the upstream server returns a response whose headers include "Connection: Close". According to the HTTP protocol, "Connection" is a hop-by-hop header, so nginx removes it, and the client can't carry out its business logic correctly.

How to handle this scenario?
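
For reference, a sketch of one way to surface the upstream's intent without violating hop-by-hop semantics: copy the value into a custom end-to-end header via the built-in $upstream_http_connection variable (the header name and upstream are placeholders; the client would check this header instead of Connection):

location / {
    proxy_pass http://upstream_app;
    # $upstream_http_<name> exposes any upstream response header; this
    # carries "Close" when the backend sent "Connection: Close"
    add_header X-Upstream-Connection $upstream_http_connection always;
}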

Thanks
Liu Peng

NGINX - Reverse Proxy With Authentication at 2 Layers (no replies)

** Problem Background **
I have an application, say app-A, which runs on a private network unreachable from the public network. A new requirement is to deliver the webpages of app-A to external users over the public network.

As a solution to expose app-A, I want to use NGINX as a reverse proxy with two layers of authentication, as explained below. Kindly advise whether I am moving in the right direction in implementing the secure entry using NGINX.

Links to reference images are at the end of this email.

** Authentication Level 1: NGINX Auth Service ** NGINX acts as a reverse proxy and API gateway for external users to access the application on the internal network. Once NGINX authenticates a request, it forwards it to app-A.

** Authentication Level 2: app-A Performs Authentication ** After receiving a request from nginx, app-A performs its own authentication, ignoring that the request arrived pre-authenticated from NGINX. app-A does this because it is to be kept unaware of the new NGINX reverse proxy and must continue to work as is.

** Problem Situation **
The NGINX auth service authenticates the request and sets a session-id in the response so that it can identify the next request coming from the same client. app-A also authenticates the request and puts its own session-id in the response. The problem is that one session-id will get overridden by the other.

Questions/options under consideration:

1. (Image-Ref-1) Is there any way I can configure NGINX to keep both session-ids separate in the request, so that the auth service and app-A can each recognize their own session information for an authenticated client? (See the sketch after this list.)

2. (Image-Ref-2) If both session infos cannot be kept, can NGINX store the session-ids of both app-A and the auth service in its memory and send only the auth service's session-id back to the client? When a request comes back with the auth service's session-id, NGINX would correlate it with app-A's session and forward app-A's session-id to app-A. This way the request would get authenticated at both layers.

3. Which of the above two solutions is feasible?

4. Is it a good approach to have two layers of authentication when NGINX's API gateway is used? If not, what configuration is required in app-A to skip authentication for requests coming from NGINX? The application environment is Java Spring.
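
For question 1, a minimal sketch (placeholder names): nginx passes Set-Cookie headers from app-A through unchanged, and auth_request_set can propagate the auth service's cookie, so the two session-ids can coexist in the client as long as the two systems use different cookie names:

location = /_auth {
    internal;
    proxy_pass http://auth_service;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location / {
    # level 1: auth service decides (2xx = allow, 401/403 = deny)
    auth_request /_auth;
    # propagate the auth service's session cookie alongside app-A's
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    add_header Set-Cookie $auth_cookie;
    # level 2: app-A still performs its own authentication
    proxy_pass http://app_a;
}

Option 2, by contrast, would require nginx itself to keep a session mapping, which stock nginx does not do; it would need scripting (e.g. Lua or njs) or the mapping would have to live in the auth layer.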

** Links to Images **
Image-Ref-1 : http://i64.tinypic.com/27zbthj.gif
Image-Ref-2 : http://i63.tinypic.com/35a2lbp.png

Large latency increase when using a resolver for proxy_pass (3 replies)

Hi,
I want to resolve the upstream hostnames used for proxy_pass in line with the TTL of the DNS record, but when I add this configuration the response from nginx is MUCH slower.
I've run tcpdump, and the upstream DNS record is resolved and re-resolved as the TTL dictates.
But requests are slower even when the TTL has not expired and nginx is not attempting to resolve the record.

The difference is quite large: 0.3s vs 1s.

Measured like this:
curl -w "%{response_code} %{time_total}\n" -s -o /dev/null 'http://my_nginx_host/api/something_something'


=== Slow Configuration ===

resolver 8.8.8.8 8.8.4.4 ipv6=off;

server {
    listen 80;
    set $upstream_host https://my.upstream-host.com;
    location ~ /api/ {
        rewrite /api/(.*) /$1 break;
        proxy_pass $upstream_host;
    }
}

And tcpdump:
12:56:16.697158 IP 10.180.32.6.19343 > 10.76.8.92.80: Flags [P.], seq 2024041718:2024042038, ack 2611558079, win 4136, options [nop,nop,TS val 829926189 ecr 2513956729], length 320: HTTP: GET /api/something_something HTTP/1.1
12:56:16.697175 IP 10.76.8.92.80 > 10.180.32.6.19343: Flags [.], ack 320, win 229, options [nop,nop,TS val 2513956752 ecr 829926189], length 0
12:56:16.697287 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [S], seq 239123061, win 28400, options [mss 1420,sackOK,TS val 2513956752 ecr 0,nop,wscale 7], length 0
12:56:16.841393 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [S.], seq 2824997386, ack 239123062, win 28960, options [mss 1460,sackOK,TS val 1717524764 ecr 2513956752,nop,wscale 5], length 0
12:56:16.841423 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 1, win 222, options [nop,nop,TS val 2513956896 ecr 1717524764], length 0
12:56:16.841585 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 1:290, ack 1, win 222, options [nop,nop,TS val 2513956896 ecr 1717524764], length 289
12:56:16.985287 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [.], ack 290, win 939, options [nop,nop,TS val 1717524908 ecr 2513956896], length 0
12:56:16.985428 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 1:4097, ack 290, win 939, options [nop,nop,TS val 1717524908 ecr 2513956896], length 4096
12:56:16.985442 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 4097, win 286, options [nop,nop,TS val 2513957040 ecr 1717524908], length 0
12:56:16.986672 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4097:4736, ack 290, win 939, options [nop,nop,TS val 1717524909 ecr 2513956896], length 639
12:56:16.986679 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 4736, win 308, options [nop,nop,TS val 2513957041 ecr 1717524909], length 0
12:56:16.987530 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 290:416, ack 4736, win 308, options [nop,nop,TS val 2513957042 ecr 1717524909], length 126
12:56:17.131482 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4736:4962, ack 416, win 939, options [nop,nop,TS val 1717525054 ecr 2513957042], length 226
12:56:17.131714 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [P.], seq 416:779, ack 4962, win 330, options [nop,nop,TS val 2513957186 ecr 1717525054], length 363
12:56:17.315103 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [.], ack 779, win 972, options [nop,nop,TS val 1717525238 ecr 2513957186], length 0
12:56:17.649290 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 4962:5766, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 804
12:56:17.649310 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [P.], seq 5766:5797, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 31
12:56:17.649366 IP 23.210.228.34.443 > 10.76.8.92.48696: Flags [F.], seq 5797, ack 779, win 972, options [nop,nop,TS val 1717525572 ecr 2513957186], length 0
12:56:17.649377 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [.], ack 5797, win 352, options [nop,nop,TS val 2513957704 ecr 1717525572], length 0
12:56:17.649566 IP 10.76.8.92.48696 > 23.210.228.34.443: Flags [F.], seq 779, ack 5798, win 352, options [nop,nop,TS val 2513957704 ecr 1717525572], length 0
12:56:17.649599 IP 10.76.8.92.80 > 10.180.32.6.19343: Flags [P.], seq 1:789, ack 320, win 229, options [nop,nop,TS val 2513957704 ecr 829926189], length 788: HTTP: HTTP/1.1 200 OK

=== Fast Configuration ===

server {
    listen 80;
    location ~ /api/ {
        rewrite /api/(.*) /$1 break;
        proxy_pass https://my.upstream-host.com;
    }
}

And tcpdump:
12:49:13.058495 IP 10.180.32.5.15708 > 10.76.5.82.80: Flags [P.], seq 4214721185:4214721506, ack 57908483, win 4136, options [nop,nop,TS val 829505219 ecr 2514027940], length 321: HTTP: GET /api/something_something HTTP/1.1
12:49:13.058510 IP 10.76.5.82.80 > 10.180.32.5.15708: Flags [.], ack 321, win 229, options [nop,nop,TS val 2514027965 ecr 829505219], length 0
12:49:13.058696 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [S], seq 722986122, win 28400, options [mss 1420,sackOK,TS val 2514027965 ecr 0,nop,wscale 7], length 0
12:49:13.064657 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [S.], seq 3299173152, ack 722986123, win 28960, options [mss 1460,sackOK,TS val 1717551633 ecr 2514027965,nop,wscale 5], length 0
12:49:13.064677 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 1, win 222, options [nop,nop,TS val 2514027971 ecr 1717551633], length 0
12:49:13.064808 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 1:482, ack 1, win 222, options [nop,nop,TS val 2514027971 ecr 1717551633], length 481
12:49:13.070245 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [.], ack 482, win 939, options [nop,nop,TS val 1717551639 ecr 2514027971], length 0
12:49:13.070423 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 1:138, ack 482, win 939, options [nop,nop,TS val 1717551639 ecr 2514027971], length 137
12:49:13.070432 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 0
12:49:13.070619 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 482:533, ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 51
12:49:13.070663 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [P.], seq 533:896, ack 138, win 231, options [nop,nop,TS val 2514027977 ecr 1717551639], length 363
12:49:13.076080 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [.], ack 896, win 972, options [nop,nop,TS val 1717551644 ecr 2514027977], length 0
12:49:13.287759 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 138:942, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 2514027977], length 804
12:49:13.287832 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [P.], seq 942:973, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 2514027977], length 31
12:49:13.287843 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [.], ack 973, win 243, options [nop,nop,TS val 2514028194 ecr 1717551856], length 0
12:49:13.287845 IP 23.66.26.5.443 > 10.76.5.82.32816: Flags [F.], seq 973, ack 896, win 972, options [nop,nop,TS val 1717551856 ecr 2514027977], length 0
12:49:13.287972 IP 10.76.5.82.32816 > 23.66.26.5.443: Flags [F.], seq 896, ack 974, win 243, options [nop,nop,TS val 2514028194 ecr 1717551856], length 0
12:49:13.288004 IP 10.76.5.82.80 > 10.180.32.5.15708: Flags [P.], seq 1:789, ack 321, win 229, options [nop,nop,TS val 2514028194 ecr 829505219], length 788: HTTP: HTTP/1.1 200 OK

I know the upstreams in the two tcpdump examples have different IP addresses, but the upstream is Akamai with a TTL of 19s, so they are constantly changing. Still, the response time is absolutely consistent at 0.3s vs 1s.

What am I missing here? Is it because the upstream host uses HTTPS? I am using version 1.11.10.
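
One thing the dumps above already show: in the slow trace the upstream SYN-ACK arrives ~144 ms after the SYN (16.697287 to 16.841393), and every later round trip pays the same, while in the fast trace the handshake completes in ~6 ms. So the extra time looks like distance to the chosen Akamai edge rather than resolution overhead: 8.8.8.8/8.8.4.4 may hand back an edge far from this host, while the system resolver consulted at config time returns a nearby one. A sketch of the same setup pointed at a local/VPC resolver instead (the address is a placeholder):

resolver 10.76.0.2 ipv6=off;   # nearby caching resolver (placeholder)

server {
    listen 80;
    set $upstream_host https://my.upstream-host.com;
    location ~ /api/ {
        rewrite /api/(.*) /$1 break;
        proxy_pass $upstream_host;
    }
}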

Nginx support - transparent Forward proxy for internet traffic and reverse proxy for local Web server traffic (no replies)

Hi All,
We are planning to integrate nginx into our router. We wanted to confirm
whether nginx supports the functionality below:

1) Reverse proxy + transparent forward proxy - to route traffic for specific
domains to a local web server and all other traffic to the public internet.

2) WebSocket support over reverse proxy - to support WebSocket communication
between a client and the local web server with nginx in reverse proxy mode.


--
Regards,
John

Support for TLS extended master secret extension (1 reply)

Hello,

Does nginx support the TLS extended master secret extension (RFC 7627:
https://tools.ietf.org/html/rfc7627)?

If so, what directive is available to enable or disable it?

Thanks,
Santosh