Channel: Nginx Forum - Nginx Mailing List - English

SSL handshake attack mitigation (no replies)

Greetings!

I run a bunch of sites on nginx-plus-r19 (OpenSSL 1.0.2k-fips) and was recently hit by a nasty DDoS SSL handshake attack.

I noticed nginx worker processes suddenly eating all available CPU and the "Handshakes failed" counter in the nginx plus dashboard suddenly climbing out of proportion to the successful handshakes.

If I understand correctly, the limit_req directive would not be effective in mitigating this type of attack since the SSL handshake occurs earlier in the request chain.

I ended up setting the error_log level to "info" and feeding the failed handshake client IPs to fail2ban.
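For reference, the fail2ban hookup can be sketched roughly as follows (the filter name, jail settings, and log path are assumptions; the failregex is based on the error line quoted below):

```
# /etc/fail2ban/filter.d/nginx-ssl-handshake.conf (hypothetical name)
[Definition]
failregex = SSL_do_handshake\(\) failed .* while SSL handshaking, client: <HOST>,

# /etc/fail2ban/jail.d/nginx-ssl-handshake.conf
[nginx-ssl-handshake]
enabled  = true
filter   = nginx-ssl-handshake
logpath  = /var/log/nginx/error.log
maxretry = 20
findtime = 60
bantime  = 3600
```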

My first question is regarding the particular error log messages produced during the attack - see example below:

[info] 8050#8050: *146 SSL_do_handshake() failed (SSL: error:14094416:SSL routines:ssl3_read_bytes:sslv3 alert certificate unknown:SSL alert number 46) while SSL handshaking, client: XXX.XXX.XXX.XXX, server: 0.0.0.0:443

The "certificate unknown" seems to suggest that nginx is trying to verify the certificate of the client, yet "ssl_verify_client" should be off by default, so why does nginx care about that certificate?

My second question - is there a better way of mitigating this type of attack? (Preferably without putting an expensive firewall in front of nginx)

I would also like to put in a feature request to have a limit_req equivalent for SSL handshakes.

Thanks!

Custom Sticky Module development (no replies)

Hi guys

We have a use case in which we plan to use nginx as our load balancer with a session-persistence requirement. We are using it in the context of Kubernetes; nothing special here.

Our specific need is that each user will have one non-shared pod, which means that once an upstream server is assigned to one session, it should not be assigned to another user.

To simplify the architecture, we have a Redis cache where the list of available servers is kept. Each time a user is directed to a server, the server notifies Redis that it is no longer available for assignment but still available for traffic (that's why we can't use health probes: we want traffic to keep being routed to the server, but only for one user).

We are thinking about developing a fork of the sticky module (https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/src/master/) and making it read the list of available pods from Redis instead of the in-memory list of all upstream servers.

Questions :

1. Does this seem feasible?
2. Is it better to overwrite the sticky module and edit its code (it seems to be a built-in module)?
3. Or is it better to develop and load a custom module (.so)? If so, how do we ensure it is loaded instead of the built-in module?

Thanks a lot for your help and thoughts.









_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Disable only Hostname verification of proxied HTTPS server certificate (no replies)

Is there any way where we can configure nginx to only verify the root of the proxied HTTPS server (upstream server) certificate and to skip the host name (or domain name) verification?

As I understand, proxy_ssl_verify directive can be used to completely enable/disable the verification of proxied HTTPS server certificate but not selectively. Is there any directive to only disable the host name verification?
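For context, a sketch of the verification directives that do exist today; verification is all-or-nothing, and I am not aware of a documented directive that skips only the name check (the upstream host and paths here are made up):

```nginx
location / {
    proxy_pass https://upstream.example.com;

    # Verifies the whole chain against the trusted roots AND checks the
    # certificate name against proxy_ssl_name (defaults to $proxy_host).
    proxy_ssl_verify              on;
    proxy_ssl_trusted_certificate /etc/nginx/certs/root-ca.pem;
    proxy_ssl_name                $proxy_host;
}
```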

NginX Sudden "Weird server reply" HACKED ? (2 replies)

Hi,

We just received an alert for one of our nginx-based servers, which has started serving files with any extension (e.g. .html, .php) as downloads over HTTP instead of processing them. Over HTTPS files are processed fine, but over HTTP even a .html file is downloaded by the browser.

We have a forced redirect from HTTP to HTTPS set up, which has also stopped
working. If we send a curl request over HTTP, this is the reply we get:

[root@cw025 /usr/local/etc/nginx/vhosts]# curl -I
http://cw025.domain.com/test.html
curl: (8) Weird server reply

Can anyone help with what's going on?

Regards.

Extremely slow file (~5MB) upload via POST (no replies)

Hi everyone

I'm new here, and I've searched whether this problem has appeared before but couldn't find anything useful.

[DESCRIPTION] I have an upstream backend service behind nginx (1.16.1, OpenSSL 1.1.1) which allows people to upload files from their browser. The files are simply stored on disk; nothing else is done with them.

[SYSTEM CONFIG]
. Linux 4.15.0 Ubuntu 18.04 LTS SMP x86_64
. RAM: 32GB
. CPU: 8-Cores Intel(R) Xeon(R) CPU E3-1270 v6 @ 3.80GHz
. DISK: 1TB SSD
. NETWORK CARD: 10Gbps
. System is never under load. We usually upload 10 files per hour at max.

[DATA CONFIG]
. File size is between 5MB to 20MB

[NGINX CONFIG]

We are running Nginx 1.16.1 with TLSv1.3 support (built on openssl 1.1.1).

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

$ cat /etc/nginx/nginx.conf
worker_processes auto;
worker_rlimit_nofile 100000;
pid /run/nginx.pid;

error_log off; #/var/log/nginx/error.log info;

events {
worker_connections 655350;
multi_accept on;
use epoll;
}

http {
include mime.types;
default_type application/octet-stream;

server_tokens off;

keepalive_timeout 3600;

access_log off; #/var/log/nginx/access.log;
sendfile on;
tcp_nopush on;
tcp_nodelay on;

types_hash_max_size 2048;

open_file_cache max=10000 inactive=10m;
open_file_cache_valid 1h;
open_file_cache_min_uses 1;
open_file_cache_errors on;
include /etc/nginx/conf.d/*.conf;

}

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

$ cat /etc/nginx/conf.d/uploader.conf
server {
listen 443 ssl;

server_name BACKEND_HOST_NAME;

ssl_certificate /etc/nginx/certs/bundle.pem;
ssl_certificate_key /etc/nginx/certs/key.pem;
ssl_dhparam /etc/nginx/certs/dh.pem; ## 2048-bit

ssl_protocols TLSv1.2 TLSv1.3;

ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;

client_max_body_size 30m;

location / {
proxy_pass http://127.0.0.1:7777;
}
}

-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8-8

[PROBLEM] A 5MB file takes almost 30 seconds to upload via Nginx.
When uploading it directly to the upstream backend, it takes ~400 millisec at max.

Running strace, we've got this:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
47.96 0.041738 11 3887 1489 read
21.73 0.018909 13 1509 epoll_wait
17.95 0.015622 22 708 writev
10.62 0.009241 13 712 write
0.47 0.000407 19 21 21 connect
0.39 0.000338 17 20 close
0.21 0.000180 8 22 epoll_ctl
0.20 0.000173 8 21 socket
0.13 0.000110 110 1 accept4
0.11 0.000095 5 21 getsockopt
0.10 0.000091 4 21 recvfrom
0.07 0.000060 3 21 ioctl
0.04 0.000037 12 3 brk
0.03 0.000023 8 3 setsockopt
------ ----------- ----------- --------- --------- ----------------
100.00 0.087024 6970 1510 total

A lot of errors in "read" calls: 1489 errors. They all correspond to (thanks again to strace):

22807 read(3, "\26\3\1\2\0\1\0\1\374\3\3\304\353\3\333\314\0\36\223\244z\246\322n\375\205\360\322\35\237_\240"..., 16709) = 517
22807 read(3, 0x559de2a23f03, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\24\3\3\0\1\1\26\3\3\2\0\1\0\1\374\3\3\304\353\3\333\314\0\36\223\244z\246\322n\375\205"..., 16709) = 523
22807 read(3, 0x559de2a23f03, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0E\271m'\306\262\26X\36J\25lC/\202_7\241\32\342XN \357\303%\264\0"..., 16709) = 74
22807 read(3, "\27\3\3\0\245\240\204\304KJ\260\207\301\232\3147\217\357I$\243\266p+*\343L\335\6v\276\323"..., 16709) = 478
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\32`\324\324\237\v\266n\300x\24\277\357z\374)\365\260F\235\24\346#A%\300\376", 16709) = 31
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\177\310*W\352\265\230\357\325\177\302\275\357=\246`\246^\372\214T\206\264b\352;\273z"..., 16709) = 814
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0Y\330\276PNY\220\245\254E\0066\2016\355\334\237Yo\2510\253\320+\26z\342\275"..., 16709) = 644
22807 read(3, 0x559de2a229e3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0Z \237j\230\f\331\222\246\325\1\272Y]\252\255%\31\257L\25\10\226\267 \253\353\367"..., 16709) = 285
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\212\216j6\256\370\367\310\366Hjs\275r\276>\217\216\374\377a\375\363\4\2yr\23"..., 16709) = 176
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3\0\227K2\345P\200Ls\234\10\230f\362\221\273\270V\2371X\261|\245\315\240B\177\224"..., 16709) = 1717
22807 read(3, 0x559de2a1e9a3, 16709) = -1 EAGAIN (Resource temporarily unavailable)
22807 read(3, "\27\3\3>\232\344\316\245i\375hM\362\376\frr\340\21umx&\3311\373}\35\4\3069`"..., 16709) = 4380
22807 read(3, 0x559de2a1fabf, 11651) = -1 EAGAIN (Resource temporarily unavailable)


We tried tuning our nginx config, but the result is always the same:
22807 read(3, 0x559de2a1fabf, 11651) = -1 EAGAIN (Resource temporarily unavailable)
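In case it is useful to anyone reading along, two buffering-related knobs that are sometimes suggested for slow proxied uploads; this is a hedged sketch, not a confirmed fix for this trace:

```nginx
server {
    # Keep more of the request body in memory before spilling to a temp file.
    client_body_buffer_size 1m;

    location / {
        # Stream the body to the upstream as it arrives instead of buffering
        # the whole upload first (available since nginx 1.7.11).
        proxy_request_buffering off;
        proxy_pass http://127.0.0.1:7777;
    }
}
```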


Help appreciated

/F.


hi all, [nginx] "accept_mutex on" causes 1s delay (no replies)

hi all:
I use nginx-1.16.0; nginx is running on an x86 embedded device. The embedded device has 4 CPUs, CPU type: "Intel(R) Atom(TM) CPU D525 @ 1.80GHz".
When I use "accept_mutex on", nginx takes 1 second to serve a static file.





events {
use epoll;
accept_mutex on;
worker_connections 10240;
}

The debug log is:
###########################################################################
2019/11/08 17:08:10 [debug] 2552#2552: *1 post access phase: 12
2019/11/08 17:08:10 [debug] 2552#2552: *1 generic phase: 13
2019/11/08 17:08:10 [debug] 2552#2552: *1 generic phase: 14
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script copy: "http://"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script var: "pcdnapkwsdl2.com.cn"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script copy: "/"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http script var: "appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http init upstream, client timer: 0
2019/11/08 17:08:10 [debug] 2552#2552: *1 epoll add event: fd:15 op:3 ev:80002005
2019/11/08 17:08:10 [debug] 2552#2552: *1 http cache key: "http://pcdnapkwsdl2.vivo.com.cn"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http cache key: "/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch"
2019/11/08 17:08:10 [debug] 2552#2552: *1 add cleanup: 00000000026C46C0
2019/11/08 17:08:10 [debug] 2552#2552: shmtx lock
2019/11/08 17:08:10 [debug] 2552#2552: shmtx unlock
2019/11/08 17:08:10 [debug] 2552#2552: *1 http file cache exists: 0 e:1
2019/11/08 17:08:10 [debug] 2552#2552: *1 cache file: "/tmp/storage/youyu/ikcdndata/wangsu2/wan1/p2p_proxy/cache/c/df/468f0ede6aa8ba9073f9a989b8377dfc"
2019/11/08 17:08:10 [debug] 2552#2552: *1 add cleanup: 00000000026C4740
2019/11/08 17:08:10 [debug] 2552#2552: *1 http file cache fd: 16
2019/11/08 17:08:10 [debug] 2552#2552: *1 malloc: 00000000026C48D0:4096
2019/11/08 17:08:10 [debug] 2552#2552: *1 thread read: 16, 00000000026C48D0, 4096, 0
2019/11/08 17:08:10 [debug] 2552#2552: task #0 added to thread pool "default"
2019/11/08 17:08:10 [debug] 2552#2552: *1 http upstream cache: -2
2019/11/08 17:08:10 [debug] 2552#2552: *1 http finalize request: -4, "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811
2019/11/08 17:08:10 [debug] 2552#2552: *1 http request count:2 blk:1
2019/11/08 17:08:10 [debug] 2552#2552: worker cycle
2019/11/08 17:08:10 [debug] 2552#2552: accept mutex locked
2019/11/08 17:08:10 [debug] 2552#2552: epoll timer: -1
2019/11/08 17:08:11 [debug] 2552#2552: epoll: fd:15 ev:0004 d:00007F026302A3F0
2019/11/08 17:08:11 [debug] 2552#2552: *1 post event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: timer delta: 533
2019/11/08 17:08:11 [debug] 2552#2552: posted event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: *1 delete posted event 00007F0262E48190
2019/11/08 17:08:11 [debug] 2552#2552: *1 http run request: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?"
2019/11/08 17:08:11 [debug] 2552#2552: worker cycle
2019/11/08 17:08:11 [debug] 2552#2552: accept mutex locked
2019/11/08 17:08:11 [debug] 2552#2552: epoll timer: -1
2019/11/08 17:08:11 [debug] 2552#2564: run task #0 in thread pool "default"
2019/11/08 17:08:11 [debug] 2552#2564: thread read handler
2019/11/08 17:08:11 [debug] 2554#2554: timer delta: 809
2019/11/08 17:08:11 [debug] 2554#2554: worker cycle
2019/11/08 17:08:11 [debug] 2554#2554: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2554#2554: epoll timer: 500
2019/11/08 17:08:11 [debug] 2553#2553: timer delta: 809
2019/11/08 17:08:11 [debug] 2553#2553: worker cycle
2019/11/08 17:08:11 [debug] 2553#2553: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2553#2553: epoll timer: 500
2019/11/08 17:08:11 [debug] 2555#2555: timer delta: 811
2019/11/08 17:08:11 [debug] 2555#2555: worker cycle
2019/11/08 17:08:11 [debug] 2555#2555: accept mutex lock failed: 0
2019/11/08 17:08:11 [debug] 2555#2555: epoll timer: 500
2019/11/08 17:08:11 [debug] 2552#2564: pread: 4096 (err: 0) of 4096 @0
2019/11/08 17:08:11 [debug] 2552#2564: complete task #0 in thread pool "default"
2019/11/08 17:08:11 [debug] 2552#2552: epoll: fd:11 ev:0001 d:000000000086FF20
2019/11/08 17:08:11 [debug] 2552#2552: post event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: timer delta: 343
2019/11/08 17:08:11 [debug] 2552#2552: posted event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: delete posted event 000000000086FEC0
2019/11/08 17:08:11 [debug] 2552#2552: thread pool handler
2019/11/08 17:08:11 [debug] 2552#2552: run completion handler for task #0
2019/11/08 17:08:11 [debug] 2552#2552: *1 http file cache thread: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?"
2019/11/08 17:08:11 [debug] 2552#2552: *1 thread read: 16, 00000000026C48D0, 4096, 0
2019/11/08 17:08:11 [debug] 2552#2552: *1 http upstream cache: 0
2019/11/08 17:08:11 [debug] 2552#2552: *1 posix_memalign: 00000000026C58E0:4096 @16
2019/11/08 17:08:11 [debug] 2552#2552: *1 http proxy status 200 "200 OK"
###########################################################################

If I don't use "accept_mutex on", I get the http response quickly. The
debug log is:
###########################################################################
2019/11/08 17:14:34 [debug] 23726#23726: *1 malloc: 000000000226E8D0:4096
2019/11/08 17:14:34 [debug] 23726#23726: *1 thread read: 17, 000000000226E8D0, 4096, 0
2019/11/08 17:14:34 [debug] 23726#23726: task #0 added to thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http upstream cache: -2
2019/11/08 17:14:34 [debug] 23726#23795: run task #0 in thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http finalize request: -4, "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_8
2019/11/08 17:14:34 [debug] 23726#23795: thread read handler
2019/11/08 17:14:34 [debug] 23726#23726: *1 http request count:2 blk:1
2019/11/08 17:14:34 [debug] 23726#23726: timer delta: 2
2019/11/08 17:14:34 [debug] 23726#23795: pread: 4096 (err: 0) of 4096 @0
2019/11/08 17:14:34 [debug] 23726#23726: worker cycle
2019/11/08 17:14:34 [debug] 23726#23795: complete task #0 in thread pool "default"
2019/11/08 17:14:34 [debug] 23726#23726: epoll timer: -1
2019/11/08 17:14:34 [debug] 23726#23726: epoll: fd:16 ev:0004 d:00007F3D265033F0
2019/11/08 17:14:34 [debug] 23726#23726: *1 http run request: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811.patch?
2019/11/08 17:14:34 [debug] 23726#23726: timer delta: 0
2019/11/08 17:14:34 [debug] 23726#23726: worker cycle
2019/11/08 17:14:34 [debug] 23726#23726: epoll timer: -1
2019/11/08 17:14:34 [debug] 23726#23726: epoll: fd:13 ev:0001 d:000000000086FF20
2019/11/08 17:14:34 [debug] 23726#23726: thread pool handler
2019/11/08 17:14:34 [debug] 23726#23726: run completion handler for task #0
2019/11/08 17:14:34 [debug] 23726#23726: *1 http file cache thread: "/pcdnapkwsdl2.com.cn/pcdnapkwsdltest.com.cn/appstore/developer/soft/20191008/201910081449521157660_v2_820_811 ..
2019/11/08 17:14:34 [debug] 23726#23726: *1 thread read: 17, 000000000226E8D0, 4096, 0
2019/11/08 17:14:34 [debug] 23726#23726: shmtx lock
2019/11/08 17:14:34 [debug] 23726#23726: shmtx unlock
2019/11/08 17:14:34 [debug] 23726#23726: *1 http upstream cache: 0
2019/11/08 17:14:34 [debug] 23726#23726: *1 posix_memalign: 000000000226F8E0:4096 @16
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy status 200 "200 OK"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Date: Fri, 08 Nov 2019 07:50:58 GMT"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Content-Type: application/octet-stream"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Content-Length: 40492454"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Connection: close"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Server: AliyunOSS"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "x-oss-request-id: 5D9C3B95591574F5686E7B00"
2019/11/08 17:14:34 [debug] 23726#23726: *1 http proxy header: "Accept-Ranges: bytes"
###########################################################################

How should this problem be analyzed? I am a newcomer to nginx.
Thank you.

limit_rate_after does not work inside location block (2 replies)

Hi,

I have a location block

location ~ /get_file$ {
limit_rate_after 500m;
limit_rate 1m;
...
...
}

limit_rate_after does not work when put inside the location block; if I move it just above the location line, i.e. into the server block, it works.

Any idea how to make it work inside a location or if block?
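One possible workaround (an untested sketch): since nginx 1.17.0 both limit_rate and limit_rate_after accept variables, so the values can be selected per URI with a map at http level instead:

```nginx
# 0 disables the corresponding limit.
map $uri $rate       { ~/get_file$ 1m;   default 0; }
map $uri $rate_after { ~/get_file$ 500m; default 0; }

server {
    limit_rate       $rate;
    limit_rate_after $rate_after;
}
```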

Per IP bandwidth limit (5 replies)

Hello, what is the correct way to limit download/upload speed per client IP,
while at the same time ignoring how many connections it opens and the request
rate it produces?

I just need to limit bandwidth, for example to 100 Mbit/s per IP, no matter
whether it opens 1 connection or 100 simultaneous connections.
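As far as I know, nginx alone cannot enforce a single aggregate bandwidth cap across all of an IP's parallel connections; the usual approximation caps the connection count per IP and rate-limits each connection. A sketch (zone name, location, and numbers are assumptions):

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=peraddr:10m;

    server {
        location /downloads/ {
            limit_conn peraddr 4;  # at most 4 concurrent connections per IP
            limit_rate 3m;         # 4 x 3 MB/s = 12 MB/s, roughly 100 Mbit/s worst case
        }
    }
}
```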

--
*Best Regards *

*Kostiantyn Velychkovsky *

Help with nginx second request (no replies)

Hi all,
I would really appreciate it if you could help me with nginx.

The situation is:
Nginx (v1.14.2) proxies the request to the application server. When such a request uses the POST method and the application server returns error code 500, the response is passed to the client.
But nginx then makes a second request. Is there any way to disable this second request?
Many thanks in advance.

Here is a part of nginx file:

# Upstream pointing at the server block that returns response code 500
upstream test5 {
       server 127.0.0.1:90;
       keepalive 20;
}

#virtualhost that issues response code 500;
server {
   listen 90 default_server;
   server_name localhost;
   root /home/jetty/www;

       location @intercept_disabled {
               proxy_intercept_errors off;
               proxy_pass http://test5;
       }
   location /test500 {
       return 500;
       error_page 500 /500.html;
   }
}

# Redirection to virtualhost that issues response code 500;
       location /test500 {
           proxy_pass http://test5;
       }

================================
Request and responses:
================================
curl -i -X POST localhost/test500

HTTP/1.1 500 Internal Server Error
Server: nginx
Date: Wed, 13 Nov 2019 14:21:25 GMT
Content-Type: text/html
Content-Length: 13
Connection: close
ETag: "5dcbd0b0-d"
ERROR 500 page.


ngrep -qiltW byline -s 1000 -c 1024 -d lo '' port 90


T 2019/11/13 14:25:08.751892 127.0.0.1:44416 -> 127.0.0.1:90 [AP] #4
POST /test500 HTTP/1.1.
X-Forwarded-For: 127.0.0.1.
Host: localhost.
X-Forwarded-Proto: http.
User-Agent: curl/7.58.0.
Accept: */*.

T 2019/11/13 14:25:08.752038 127.0.0.1:90 -> 127.0.0.1:44416 [AFP] #6
HTTP/1.1 500 Internal Server Error.
Server: nginx.
Date: Wed, 13 Nov 2019 14:25:08 GMT.
Content-Type: text/html.
Content-Length: 13.
Connection: close.
ETag: "5dcbd0b0-d".
ERROR 500 page.

T 2019/11/13 14:25:08.752139 127.0.0.1:44418 -> 127.0.0.1:90 [AP] #12
POST /test500 HTTP/1.1.
X-Forwarded-For: 127.0.0.1.
Host: localhost.
X-Forwarded-Proto: http.
User-Agent: curl/7.58.0.
Accept: */*.

T 2019/11/13 14:25:08.752221 127.0.0.1:90 -> 127.0.0.1:44418 [AFP] #14
HTTP/1.1 500 Internal Server Error.
Server: nginx.
Date: Wed, 13 Nov 2019 14:25:08 GMT.
Content-Type: text/html.
Content-Length: 13.
Connection: close.
ETag: "5dcbd0b0-d".
ERROR 500 page.

So, in the capture we see that nginx makes a second request.
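One thing worth checking (a guess, not a confirmed diagnosis): nginx may be retrying the request on a new upstream connection via its proxy_next_upstream logic. Disabling retries for this location would look like:

```nginx
location /test500 {
    proxy_pass http://test5;
    # Never pass the request to another upstream attempt on error/timeout.
    proxy_next_upstream off;
}
```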

Nginx 405 not allowed issue (no replies)

Hi,

I want to allow the nginx server to accept POST requests for static content.

I have three possible solutions that I could apply as patches; take a look below.

1.) error_page 405 =200 $uri

This basically tells nginx to change the response code from 405 to 200.

2.) location / {
rewrite ^.*$ /ampsec-tv-widget.html last;
}
# To allow POST on static pages
error_page 405 = $uri;

First we send a POST request to api.json (our static file); nginx then redirects the 405 error to the original URI, our rewrite rule matches the request, and api.json is returned with a 200 status code.


3.) Create a proxy for static content, converting POST requests to GET.

But before I add any of these patches: is there any other way to make nginx accept POST requests for static content?
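For completeness, approach 1 in a full (hypothetical) server block might look like this; the root path is made up:

```nginx
server {
    listen 80;
    root /var/www/static;

    # nginx answers POST to a static file with 405; rewrite that
    # into a 200 serving the file at $uri.
    error_page 405 =200 $uri;
}
```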

Mail Proxy with Multiple Mail Domains (2 replies)

Hello,
 
I would like to set up an Nginx mail proxy which handles IMAP and SMTP for two different mail domains and two different backend servers (one server for each domain).

Let's say we have the two mail domains:
- mail.foo.com
- mail.bar.com
 
Then we can set up a minimalistic mail block like:
 
mail {
server_name mail.foo.com; <-- ############ Can I simply add 'mail.bar.com' here? ############

auth_http localhost/nginxauth.php;

server {
listen 25;
protocol smtp;
}

server {
listen 143;
protocol imap;
}
}

And a minimalistic nginxauth.php script like:

<?php

/*
Variables we have here:
$_SERVER["HTTP_AUTH_USER"]
$_SERVER["HTTP_AUTH_PASS"]
$_SERVER["HTTP_AUTH_PROTOCOL"]
*/

if ($_SERVER["HTTP_AUTH_PROTOCOL"]=="imap")
{
$backend_port=143;
}

if ($_SERVER["HTTP_AUTH_PROTOCOL"]=="smtp")
{
$backend_port=25;
}

$backend_ip["mailhost_foo"] ="192.168.1.10";
$backend_ip["mailhost_bar"] ="192.168.1.20";

$selection <-- ############ How to make this selection? ############
Do we have information about the requested mail domain here?
If yes, in which $_SERVER item?

header("Auth-Status: OK");
header("Auth-Server: $backend_ip[$selection]");
header("Auth-Port: $backend_port");
?>


But how to solve the questions marked with "###" above?
I tried to find something in the Nginx documentation, but without success.
Any ideas?

Thanks a lot in advance.


Nginx Container crash no logs (no replies)

Hey all,

We have a new setup running large amounts of data through a containerized
nginx, which is crashing without error; at the moment a forced reboot is
needed daily to recover. We are getting nothing of any use from the nginx
logs or the docker logs. Any suggestions for debugging this?

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9bc827d6ccd7 nginx:stable

Container is running on ubuntu 18.04.3 LTS OS.
The last thing we see in the logs is

18:15:36 [alert] 7#7: ignore long locked inactive cache entry
9f78089258be73e98f58abed986ddb8b, count:1
2019/11/13 18:25:36 [alert] 7#7: ignore long locked inactive cache entry
9f78089258be73e98f58abed986ddb8b, count:1
2019/11/13 18:35:36 [alert] 7#7: ignore long locked inactive cache entry
9f78089258be73e98f58abed986ddb8b, count:1

We've tried a lot of things, such as downgrading the nginx version (we were
using latest), but this just prolonged the time to crash by about a day.
Any suggestions welcome.

Nginx conf is below

worker_rlimit_nofile 30000;
events {}
http {
log_format compression '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" "$gzip_ratio"';
error_log /etc/nginx/error_log.log warn;
client_max_body_size 20m;
server_names_hash_bucket_size 512;
proxy_headers_hash_bucket_size 128;
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=content_cache:10m
max_size=10g use_temp_path=off;
upstream hub_node {
server hub-node:3000;
keepalive 16;
}
upstream hub_cms {
server hub-be:80;
keepalive 16;
}
upstream hub_analytics {
server hub-matomo:80;
keepalive 16;
}

server {
listen 443 default_server;
server_name _;
return 418;
}

server {
listen 443 ssl http2;
server_name digital-hub.bwi.dpn.gov.uk;
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
location /sites/default/files/ {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_cache_valid 200 302 10m;
proxy_cache content_cache;
proxy_pass http://hub_cms/sites/default/files/;

}
location / {
access_log /var/log/nginx/access.log compression buffer=32k;
proxy_pass http://hub_node/;
}
}

server {
listen 443 ssl http2;
server_name analytics.digital-hub.bwi.dpn.gov.uk;
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://hub_analytics/;
}
}

server {
listen 443 ssl http2;
server_name content.digital-hub.bwi.dpn.gov.uk;
add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
proxy_cache_valid 200 302 10m;
proxy_cache content_cache;
proxy_pass http://hub_cms/;
}
}

ssl_certificate /etc/letsencrypt/live/localhost/san.digital-hub.crt;
ssl_certificate_key /etc/letsencrypt/live/localhost/san.digital-hub.rsa;
ssl_session_timeout 1d;
ssl_session_cache shared:MozSSL:10m; # about 40000 sessions
ssl_session_tickets off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers
ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-
SHA256:DHE-RSA-AES256-GCM-SHA384;
ssl_prefer_server_ciphers off;
# HSTS (ngx_http_headers_module is required) (63072000 seconds)
add_header Strict-Transport-Security "max-age=63072000" always;

Unit 1.13.0 release (no replies)

Hi,

I'm glad to announce a new release of NGINX Unit.

This release expands Unit's functionality as a generic web server by
introducing basic HTTP reverse proxying.

See the details in our documentation:

- https://unit.nginx.org/configuration/#proxying

Compared to mature proxy servers and load balancers, Unit's proxy features
are limited for now, but we will keep advancing them.
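A minimal proxying sketch based on the documentation linked above (the listener port and upstream address are made up):

```json
{
    "listeners": {
        "*:8300": {
            "pass": "routes"
        }
    },

    "routes": [
        {
            "action": {
                "proxy": "http://127.0.0.1:8400"
            }
        }
    ]
}
```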

Also, this release improves the user experience for Python and Ruby modules and
remediates compatibility issues with existing applications in these languages.

Our long-term goal is to turn Unit into the ultimate high-performance building
block that will be helpful and easy to use with web services of any kind. To
accomplish this, Unit's future releases will focus on the following aspects:

- security, isolation, and DoS protection
- ability to run various types of dynamic applications
- connectivity with load balancing and fault tolerance
- efficient serving of static media assets
- statistics and monitoring


Changes with Unit 1.13.0 14 Nov 2019

*) Feature: basic support for HTTP reverse proxying.

*) Feature: compatibility with Python 3.8.

*) Bugfix: memory leak in Python application processes when the close
handler was used.

*) Bugfix: threads in Python applications might not work correctly.

*) Bugfix: Ruby on Rails applications might not work on Ruby 2.6.

*) Bugfix: backtraces for uncaught exceptions in Python 3 might be
logged with significant delays.

*) Bugfix: explicitly setting a namespaces isolation option to false
might have enabled it.


Please feel free to share your experiences and ideas on GitHub:

- https://github.com/nginx/unit/issues

Or via Unit mailing list:

- https://mailman.nginx.org/mailman/listinfo/unit

wbr, Valentin V. Bartenev


Reply to a thread (no replies)


Mail Proxy: SSL to Backend (no replies)


Expert needed to Tuning For Best Performance (no replies)

Is there anyone on the list who considers themselves an Nginx tuning expert and is interested in running some HTTP load testing to establish a baseline
on my server, then tuning the config for best performance?

1000 Mbps up/down connection
Xeon E-2146G (6 core -12 thread) 3.5-4.5 GHz
Supermicro X11SCZ-F
32GB RAM
Samsung SSD 860 2x512 GB SSD

I’ve been reading Denis Denisov (denji)

NGINX Tuning For Best Performance
https://github.com/denji/nginx-tuning

But that guide specifically says it is for testing, not production.

I’m preparing my server for Production environment.

If any interest, please email me, happy to compensate for efforts.

Thanks


502 Bad Gateway - nginx/1.14.0 (Ubuntu) (1 reply)

Recently I moved my Dell Server from one location to another, with a completely different Router. Its Main OS/VM Manager is Proxmox VE 5.3. I have an Nginx VM that reverse proxies several other VMs.

After configuring my new Router, I got several of my VMs to connect to the Internet (setting the same Private IPs to the same MAC Addresses, and opening the same ports for the same Private IPs). However, with my Discourse VM, I am receiving 502 Bad Gateway - nginx/1.14.0 (Ubuntu) when trying to access Discourse in a Browser.

In the past, it was usually through the Discourse software that I could fix the problem ... but not so this time. Please read here for more details: https://meta.discourse.org/t/502-bad-gateway-nginx-1-14-0-ubuntu-unable-to-find-image-locally-error-response-from-daemon/133392

I've basically hit a point where I believe the issue is with the configuration of the Nginx VM. Recently I tried renewing the SSL Certificates (using Let's Encrypt) for all my sites on Nginx, hoping it would fix the problem ... but it didn't. However, I checked the Nginx Error Log and found a message being reported over and over again:

> root@ngx:/etc/nginx/sites-available# less /var/log/nginx/error.log
> 2019/11/17 06:01:24 [error] 23646#23646: *37 connect() failed (111: Connection refused) while connecting to upstream, client: 1.2.3.4, server: discourse.domainame.com, request: "GET / HTTP/2.0", upstream: "http://5.6.7.8:8080/", host: "discourse.domainame.com"

* 1.2.3.4 = My Old Public IP Address
* 5.6.7.8 = My New Public IP Address

It looks to me like Nginx is trying to connect this site using my Old Public IP Address. That is definitely incorrect.
My /etc/nginx/sites-available CONF file for the Discourse Site can be found in THIS link: https://pastebin.com/fiiyATeP

* 192.168.0.101 = Nginx VM
* 192.168.0.104 = Discourse VM

------------------------------------------------------------------------------------------------------------------------------------------------------------

So my question is: How can I tell Nginx to connect discourse.domainname.com to my New Public IP Address 5.6.7.8?
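One likely fix, sketched below under the assumption that the Discourse server block's proxy_pass currently points at the public IP: proxy to the Discourse VM's private address (192.168.0.104, on the port 8080 seen in the logged upstream) so the traffic stays on the LAN and never depends on the public IP at all. SSL directives are omitted for brevity; the hostname and port are taken from the post:

```nginx
# Server block for discourse.domainame.com on the Nginx VM (192.168.0.101)
server {
    listen 443 ssl;
    server_name discourse.domainame.com;

    location / {
        # Proxy to the Discourse VM's private address, not the public IP
        proxy_pass http://192.168.0.104:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Proxying to a public IP from inside the LAN requires the router to support hairpin NAT, which many consumer routers refuse; using the private upstream address sidesteps that entirely.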

nginx-1.17.6 (no replies)

Changes with nginx 1.17.6 19 Nov 2019

*) Feature: the $proxy_protocol_server_addr and
$proxy_protocol_server_port variables.

*) Feature: the "limit_conn_dry_run" directive.

*) Feature: the $limit_req_status and $limit_conn_status variables.
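For illustration, a minimal sketch of how the new "limit_conn_dry_run" directive and the status variables might be combined (zone names, rates, and paths are placeholders, not from the changelog):

```nginx
http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    limit_req_zone  $binary_remote_addr zone=perreq:10m rate=10r/s;

    # Log the outcome of each limiting decision via the new variables
    log_format limits '$remote_addr "$request" '
                      'conn=$limit_conn_status req=$limit_req_status';

    server {
        listen 80;

        location /download/ {
            limit_conn perip 10;
            limit_conn_dry_run on;           # count and log, but never reject
            limit_req zone=perreq burst=20;
            access_log /var/log/nginx/limits.log limits;
        }
    }
}
```

In dry-run mode the connection limit is evaluated and $limit_conn_status reports values such as REJECTED_DRY_RUN, which lets you tune zone sizes and limits from the logs before enforcing them.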


--
Maxim Dounin
http://nginx.org/