Channel: Nginx Forum - Nginx Mailing List - English

SSL_ERROR_BAD_CERT_DOMAIN with multiple domains (no replies)

I have two domains:

(1) myvery.owndomain.com
(2) domain.synology.me

(1) is under my control (I own the domain) and I manage the certs (Let's Encrypt).
If I visit "https://myvery.owndomain.com" I'm greeted by the "Welcome to Nginx!" landing page. (I use nginx as a reverse proxy only.)

(2) is a DDNS that Synology manages and it also has certs by LE (managed by Synology).

I have a Mac Mini running the "main" Nginx server and a bunch of other services. (1) points to these services on the Mini. The IP of the Mini is 192.168.13.10.

(2) points to a NAS that has its own Nginx to handle, among other things, the LE certs. This machine runs on IP 192.168.11.10.
Without any settings in the "main" nginx, I can't use (2) because in my router (EdgeRouter X) both :80 and :443 point to the Mini (192.168.13.10).

So I need to add two new server blocks in my config so that:
If I visit "http://domain.synology.me" (port 80) that redirects me to "http://domain.synology.me:5000"
and
If I visit "https://domain.synology.me" (port 443) that redirects me to "https://domain.synology.me:5001"

I've managed to get part of the way. But I'm getting SSL errors like for instance: "SSL_read() failed (SSL: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate:SSL alert number 42) while waiting for request, client: 192.168.13.1, server: 0.0.0.0:443"

What am I doing wrong?

Here's my current config: https://gist.github.com/BeyondEvil/e246d1725438989815272ac96fd1a767

Thanks!

Nginx/Unit sendfile failed for upstream (no replies)

Hello

I have nginx (1.15.9) + unit(1.9) + php(7.2.15).

When uploading files larger than 8 MB, nginx was failing with "413 Payload Too Large", so in the http {} section of nginx.conf I added "client_max_body_size 0;" - it worked for nginx, but now Unit is rejecting uploads.

Everything works fine for uploads smaller than ~8 MB. When I go above that value, the browser fails with 502 Bad Gateway, Unit logs nothing, and nginx/error.log has the following line:

2019/06/23 08:38:14 [error] 20682#20682: *3 sendfile() failed (32: Broken pipe) while sending request to upstream, client: 127.0.0.1, server: wp1.wrf, request: "POST /uploader.php HTTP/1.1", upstream: "http://127.0.0.1:8301/uploader.php", host: "wp1.wrf", referrer: "http://wp1.wrf/zamowienie,dodaj-zdjecia.html"

PHP limits: post_max_size=40M, upload_max_filesize=20M, verified with phpinfo() - but uploader.php is never executed in that case (it logs every request to a separate file; no log files are created, not even empty ones).

In the Unit docs (http://unit.nginx.org/configuration/#settings) there is something called "max_body_size", however it is impossible to adjust that value. I've made PUT / POST requests to /config/max_body_size, /config/settings/http/max_body_size and many other variations, all without success, e.g.:

curl -X PUT -d '"30M"' --unix-socket /var/run/control.unit.sock http://localhost/config/max_body_size
{
"error": "Invalid configuration.",
"detail": "Unknown parameter \"max_body_size\"."
}
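
For reference, the settings docs place max_body_size inside the settings/http object, and it takes an integer number of bytes rather than a string like "30M" - so, assuming a Unit version that actually supports the option, the request would presumably look like:

curl -X PUT -d '{"http": {"max_body_size": 31457280}}' \
     --unix-socket /var/run/control.unit.sock \
     http://localhost/config/settings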

Any help would be appreciated.
Cheers,
Bartek

Accepting Multiple TLS Client Certificates (no replies)

Hi,

As per our understanding, one can provide a file with multiple certificates
as "ssl_client_certificate". Nginx would then accept any one of the
certificates. However, when we actually provided multiple certificates we
found that only the first one in the list was accepted.

In our test case we provided a chain of two certificates, a root cert and
the client certs signed by this CA. We tried concatenating the files both
like this: "user1 user2 ca" and like this: "user1 ca user2 ca". In all cases
just the first certificate was accepted.

Are we misunderstanding the expected behaviour of nginx, or is this a bug,
or are we maybe doing something wrong?
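
For comparison, the nginx docs describe ssl_client_certificate as a file of trusted CA certificates used to verify client certs - i.e. the file would normally contain only the CAs, not the leaf certificates themselves. A minimal sketch (file names hypothetical):

# concatenate the CA certs that sign acceptable client certificates
cat ca1.pem ca2.pem > /etc/nginx/client_cas.pem

# nginx then accepts any client cert that chains to one of those CAs
ssl_verify_client on;
ssl_client_certificate /etc/nginx/client_cas.pem;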

I will mention that we are using nginx in the nginx-ingress Kubernetes
package. We have tested with a version which uses nginx 1.15.10.

Thank you!
Johannes Gehrs

nginx-1.17.1 (no replies)

Changes with nginx 1.17.1 25 Jun 2019

*) Feature: the "limit_req_dry_run" directive.

*) Feature: when using the "hash" directive inside the "upstream" block
an empty hash key now triggers round-robin balancing.
Thanks to Niklas Keller.

*) Bugfix: a segmentation fault might occur in a worker process if
caching was used along with the "image_filter" directive, and errors
with code 415 were redirected with the "error_page" directive; the
bug had appeared in 1.11.10.

*) Bugfix: a segmentation fault might occur in a worker process if
embedded perl was used; the bug had appeared in 1.7.3.
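
For context, a minimal sketch of how the new dry-run directive is wired up; the zone name here is hypothetical:

limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        limit_req zone=perip burst=20;
        limit_req_dry_run on;   # evaluate and log the limit, but do not reject
    }
}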


--
Maxim Dounin
http://nginx.org/

Re: [nginx-announce] nginx-1.17.1 (no replies)

Hello Nginx users,

Now available: Nginx 1.17.1 for Windows
https://kevinworthington.com/nginxwin1171 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington

On Tue, Jun 25, 2019 at 8:34 AM Maxim Dounin <mdounin@mdounin.ru> wrote:

> Changes with nginx 1.17.1 25 Jun 2019
>
> *) Feature: the "limit_req_dry_run" directive.
>
> *) Feature: when using the "hash" directive inside the "upstream" block
> an empty hash key now triggers round-robin balancing.
> Thanks to Niklas Keller.
>
> *) Bugfix: a segmentation fault might occur in a worker process if
> caching was used along with the "image_filter" directive, and errors
> with code 415 were redirected with the "error_page" directive; the
> bug had appeared in 1.11.10.
>
> *) Bugfix: a segmentation fault might occur in a worker process if
> embedded perl was used; the bug had appeared in 1.7.3.
>
>
> --
> Maxim Dounin
> http://nginx.org/

njs-0.3.3 (no replies)

Hello,

I’m glad to announce a new release of NGINX JavaScript module (njs).

This release mostly focuses on stability issues in njs core after regular
fuzzing tests were introduced.

Notable new features:
- Added ES5 property getter/setter runtime support:
: > var o = {a:2};
: undefined
: > Object.defineProperty(o, 'b', {get:function(){return 2*this.a}}); o.b
: 4

- Added global "process" variable:
: > process.pid
: <current process pid>
: > process.env.HOME
: <current process HOME env variable>

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel


Changes with njs 0.3.3 25 Jun 2019

nginx modules:

*) Improvement: getting of special response headers in headersOut.

*) Improvement: working with unknown methods in r.subrequest().

*) Improvement: added support for null as a second argument
of r.subrequest().

*) Bugfix: fixed processing empty output chain in stream body filter.

Core:

*) Feature: added runtime support for property getter/setter.
Thanks to 洪志道 (Hong Zhi Dao) and Artem S. Povalyukhin.

*) Feature: added "process" global object.

*) Feature: most built-in properties and methods are now writable.

*) Feature: added generic implementation of Array.prototype.fill().

*) Bugfix: fixed integer-overflow in String.prototype.concat().

*) Bugfix: fixed setting of object properties.

*) Bugfix: fixed Array.prototype.toString().

*) Bugfix: fixed Date.prototype.toJSON().

*) Bugfix: fixed overwriting "constructor" property of built-in
prototypes.

*) Bugfix: fixed processing of invalid surrogate pairs in strings.

*) Bugfix: fixed processing of invalid surrogate pairs in JSON
strings.

*) Bugfix: fixed heap-buffer-overflow in toUpperCase() and
toLowerCase().

*) Bugfix: fixed escaping lone closing square brackets in RegExp()
constructor.

*) Bugfix: fixed String.prototype.toBytes() for ASCII strings.

*) Bugfix: fixed handling zero byte characters inside RegExp
pattern strings.

*) Bugfix: fixed truth value of JSON numbers in JSON.parse().

*) Bugfix: fixed use-of-uninitialized-value in
njs_string_replace_join().

*) Bugfix: fixed parseInt('-0').
Thanks to Artem S. Povalyukhin.

Log request at request time, not after response (no replies)

Hi All,

I am trying to understand if it's possible to extend nginx functionality to support what I am looking for.

Problem:

We are trying to look for poison pill in-flight requests that would affect backend cluster stability. We currently cannot do much for the first request, but the idea is to block subsequent requests from the same user.

Ideally we would have solved it at the app layer, but seems like there's a lot of work involved with that solution. So we are trying to solve it at the nginx level.

But it looks like nginx can only do logging after a response is received from the backend server. Is it possible to modify this behavior to log at request time, without waiting for a response? IMHO this is a reasonable thing to do.

I am open to alternative approaches to solve this problem. I would also like to know if there are any available modules/plugins that could be easily tweaked to emit the request log.
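
One untested direction (an assumption on my part, not a verified fix) is the stock mirror module, which fires a copy of each request as a subrequest at arrival time, independently of the main upstream's response:

location / {
    mirror /request_log;          # issued when the request arrives
    proxy_pass http://backend;    # hypothetical upstream name
}

location = /request_log {
    internal;
    log_subrequest on;            # subrequests are not logged by default
    access_log /var/log/nginx/requests_at_arrival.log;
    proxy_pass http://127.0.0.1:9999;  # tiny sink that answers immediately
}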

If this was intended by design for nginx, it would be nice to understand what the reasoning behind such a decision was.

Thank you very much for your time.
Appreciate all the help I can get.

Regards,
Vinayak Ponangi

Macos Mojave php-fpm restart unexpectedly (no replies)

Hello,

I have just updated to macOS Mojave and PHP 7.3 using nginx. All installations happened through brew (I also downgraded to PHP 7.2), but php-fpm restarts unexpectedly when accessing ONLY the /wp-admin route of my WordPress website. All frontend WordPress pages work fine, and info.php is working as well.

In php.ini we have added xdebug using pecl. All other settings are the default ones:

zend_extension="xdebug.so"
[XDebug]
xdebug.remote_enable=1
xdebug.remote_autostart=1
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=127.0.0.1
xdebug.remote_port=9000
extension="redis.so"


The php-fpm log, using log_level=debug, shows "exited on signal 11" when accessing /wp-admin, and the php-fpm service restarts:

[25-Jun-2019 22:26:01.104274] DEBUG: pid 47, fpm_pctl_perform_idle_server_maintenance(), line 378: [pool www] currently 1 active children, 1 spare children, 2 running children. Spawning rate 1
[25-Jun-2019 22:26:01.839904] DEBUG: pid 47, fpm_got_signal(), line 75: received SIGCHLD
[25-Jun-2019 22:26:01.839968] WARNING: pid 47, fpm_children_bury(), line 256: [pool www] child 980 exited on signal 11 (SIGSEGV) after 21.933941 seconds from start
[25-Jun-2019 22:26:01.841578] NOTICE: pid 47, fpm_children_make(), line 425: [pool www] child 1037 started
[25-Jun-2019 22:26:01.845627] DEBUG: pid 47, fpm_event_loop(), line 418: event module triggered 1 events
[25-Jun-2019 22:26:02.176554] DEBUG: pid 47, fpm_pctl_perform_idle_server_maintenance(), line 378: [pool www] currently 0 active children, 2 spare children, 2 running children. Spawning rate 1

Nginx 1.17.0 doesn't change the content-type header (no replies)

Hello,

I have the following config in the http {} block:

include mime.types;
default_type application/octet-stream;


I also have this in the location block:

types {
    application/vnd.apple.mpegurl m3u8;
    video/mp2t ts;
}
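
Worth noting how nginx applies these mappings: types and default_type determine the Content-Type only for responses nginx itself produces (e.g. static files); for proxied responses the upstream's Content-Type header is passed through unchanged. A minimal sketch, with a hypothetical root, of a location where the override does take effect:

location /hls/ {
    root /srv/media;   # hypothetical path; files served by nginx itself
    types {
        application/vnd.apple.mpegurl m3u8;
        video/mp2t ts;
    }
}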


But when I send a request, I get these headers:

Request URL:
https://example.com/hls/5d134afe91b970.80939375/1024_576_1500_5d134afe91b970.80939375_00169.ts

Accept-Ranges: bytes
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token
Access-Control-Allow-Methods: OPTIONS, GET
Access-Control-Allow-Origin: *
Cache-Control: max-age=31536000
Connection: keep-alive
Content-Length: 259440
Content-Type: application/octet-stream
Date: Sat, 29 Jun 2019 22:43:57 GMT
ETag: "d1a1739b4444da72c0e25251e4669b45"
Last-Modified: Wed, 26 Jun 2019 18:08:17 GMT
Server: nginx/1.17.0

Request URL:
https://example.com/hls/5d134afe91b970.80939375/playlist.m3u8

Accept-Ranges: bytes
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Origin,X-Auth-Token,Authorization,Accept,Client-Security-Token
Access-Control-Allow-Methods: OPTIONS, GET
Access-Control-Allow-Origin: *
Cache-Control: max-age=31536000
Connection: keep-alive
Content-Length: 601
Content-Type: application/octet-stream
Date: Sat, 29 Jun 2019 22:37:57 GMT
ETag: "7ba4b759c57dbffbca650ce6a290f524"
Last-Modified: Wed, 26 Jun 2019 10:57:04 GMT
Server: nginx/1.17.0


For some reason, Nginx doesn't change the Content-Type


Thanks
Andrew


set_real_ip_from behavior (1 reply)

Hello,

I'm having some issues with getting X-Forwarded-For set consistently for
upstream proxy requests. The server runs Nginx/OpenResty in front of
Apache, and has domains hosted behind Cloudflare as well as direct. The
ones behind Cloudflare show the correct X-Forwarded-For header being set,
using (snippet):

http {
    set_real_ip_from 167.114.56.190/32;
    [..]
    set_real_ip_from 167.114.56.191/32;
    real_ip_header X-Forwarded-For;

    server {
        location ~ .* {
            [..]
            proxy_set_header X-Forwarded-For $http_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

However, when I receive a direct request (which does not include
X-Forwarded-For), $http_x_forwarded_for, $proxy_add_x_forwarded_for, and
$http_x_real_ip are empty, and I'm unable to set the header to $remote_addr
(which shows the correct IP). If I try adding this in the server {} block:

if ($http_x_forwarded_for = '') {
    set $http_x_forwarded_for $remote_addr;
}

I get:

nginx: [emerg] the duplicate "http_x_forwarded_for" variable in
/usr/local/openresty/nginx/conf/nginx.conf:131
nginx: configuration file /usr/local/openresty/nginx/conf/nginx.conf test
failed

The above works to set $http_x_real_ip, but then I end up with direct
connections passing Apache the client IP through X-Real-IP, and proxied
connections (from Cloudflare) setting X-Forwarded-For.

The log format I'm using to verify both $http_x_forwarded_for and
$http_x_real_ip is:

log_format json_combined escape=json
    '{'
    '"id":"$zid",'
    '"upstream_cache_status":"$upstream_cache_status",'
    '"remote_addr":"$remote_addr",'
    '"remote_user":"$remote_user",'
    '"stime":"$msec",'
    '"timestamp":"$time_local",'
    '"host":"$host",'
    '"server_addr":"$server_addr",'
    '"server_port":"$proxy_port",'
    '"request":"$request",'
    '"status":"$status",'
    '"body_bytes_sent":"$body_bytes_sent",'
    '"http_referer":"$http_referer",'
    '"http_user_agent":"$http_user_agent",'
    '"http_x_forwarded_for":"$http_x_forwarded_for",'
    '"http_x_real_ip":"$http_x_real_ip",'
    '"request_type":"$request_type",'
    '"upstream_addr":"$upstream_addr",'
    '"upstream_status":"$upstream_status",'
    '"upstream_connect_time":"$upstream_connect_time",'
    '"upstream_header_time":"$upstream_header_time",'
    '"upstream_response_time":"$upstream_response_time",'
    '"country":"$country_code",'
    '"request_time":"$request_time"'
    '}';

How can I consistently pass the backend service an X-Forwarded-For header,
with the client IP, regardless of it being a direct request or proxied
through Cloudflare/some other CDN?
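
For what it's worth, the documented behavior of $proxy_add_x_forwarded_for already seems to cover both cases: it is the incoming X-Forwarded-For with $remote_addr appended, and just $remote_addr when the header is absent. A minimal sketch:

# one header line intended to work for both direct and proxied requests
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;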

Thanks!

request authorization with grpc (failure status code) (no replies)

I have an nginx configuration that passes gRPC API requests to other services, used in conjunction with an authorization endpoint (via auth_request).

This works great when authorization is successful (my HTTP1 authorization endpoint returns HTTP 2xx status codes).

When authorization fails (it returns 401), the gRPC connection initiated by the client receives a gRPC Cancelled(1) status code, rather than what would be ideal for the client - an Unauthorized (16) status code. The status message appears to be populated by nginx indicating the 401 failure.

Is there a way to control the status code returned to the gRPC channel during failed auth?

I tried and failed at doing this with the below configuration. Any non-200 code returned by the auth failure handling results in the same cancelled status code, even after trying to set the status code manually. If I override the return with a 200-series code, it treats authorization as successful (which is also bad).

server {
    location /some_grpc_api {
        grpc_pass grpc://internal_service:50051;
        grpc_set_header x-grpc-user $auth_resp_x_grpc_user;
    }

    # send all requests to the `/validate` endpoint for authorization
    auth_request /validate;
    auth_request_set $auth_resp_x_grpc_user $upstream_http_x_grpc_user;

    location = /validate {
        proxy_pass http://auth:5000;

        # the auth service acts only on the request headers
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # attempt to customize grpc error code
        proxy_intercept_errors on;
        error_page 401 /grpc_auth_fail_page;
    }

    # attempt to customize grpc error code
    location = /grpc_auth_fail_page {
        internal;
        grpc_set_header grpc-status 16;
        grpc_set_header grpc-message "Unauthorized";
        return 401;
    }
}
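
As a point of comparison (a sketch adapted from the error-handling pattern in nginx's gRPC examples, not a verified fix): grpc_set_header modifies the request sent upstream, not the response to the client, so the failure location would instead need to emit the gRPC status itself via response headers on an empty reply, e.g.:

location = /grpc_auth_fail_page {
    internal;
    default_type application/grpc;
    add_header grpc-status 16;              # UNAUTHENTICATED
    add_header grpc-message "Unauthorized";
    add_header content-length 0;
    return 204;
}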

Re: effect of bcrypt hash $cost on HTTP Basic authentication's login performance? (1 reply)

> (And no, it does not look like an appropriate question for the
> nginx-devel@ list. Consider using nginx@ instead.)

k.


On 7/2/19 5:23 PM, Maxim Dounin wrote:
> On Sat, Jun 29, 2019 at 09:48:01AM -0700, PGNet Dev wrote:
>
>> When generating hashed data for "HTTP Basic" login auth
>> protection, using bcrypt as the hash algorithm, one can vary the
>> resultant hash strength by varying specify bcrypt's $cost, e.g.
>
> [...]
>
>> For site login usage, does *client* login time vary at all with
>> the hash $cost?
>>
>> Other than the initial, one-time hash generation, is there any
>> login-performance reason NOT to use the highest hash $cost?
>
> With Basic HTTP authentication, hashing happens on every user
> request. That is, with high costs you are likely make your site
> completely unusable.

Noted.

*ARE* there authentication mechanisms available that do NOT hash on
every request? Perhaps via some mode of secure caching?

AND, that still maintain a high algorithmic cost to deter breach
attempts, or at least maximize the attacker's effort?
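
For background on the cost knob itself, a minimal shell sketch (htpasswd from apache2-utils; user and password are placeholders): -B selects bcrypt and -C sets its cost/work factor, and since Basic auth re-verifies the hash on every request, per-request latency grows exponentially with that cost:

htpasswd -nbB -C 12 alice 's3cret'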

how to force/send TLS Certificate Request for all client connections, in client-side ssl-verification? (no replies)

I've setup my nginx server with self-signed SSL server-side certs, using my own/local CA.

Without client-side verifications, i.e. just an unverified-TLS connection, all's good.

If I enable client-side SSL cert verification with,

ssl_certificate "ssl/example.com.server.crt.pem";
ssl_certificate_key "ssl/example.com.server.key.pem";
ssl_verify_client on;
ssl_client_certificate "ssl_cert_dir/CA_intermediate.crt.pem";
ssl_verify_depth 2;

, a connecting Android app fails on connect, receiving FROM the nginx server:

HTTP RESPONSE:
Response{protocol=http/1.1, code=400, message=Bad Request, url=https://proxy.example.com/dav/myuser%40example.com/3d75dc22-8afc-1946-5b3f-4d84e9b28432/}
<html>
<head><title>400 No required SSL certificate was sent</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>No required SSL certificate was sent</center>
<hr><center>nginx</center>
</body>
</html>

I've been unsuccessful so far using tshark/ssldump to decrypt the SSL handshake; I suspect (?) it's because my certs are EC-signed. Still working on that ...

In 'debug' level nginx logs, I see

2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'35:5'
2019/06/30 21:58:14 [debug] 41777#41777: *7 s:0 in:'2F:/'
2019/06/30 21:58:14 [debug] 41777#41777: *7 http uri: "/dav/myuser@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http args: ""
2019/06/30 21:58:14 [debug] 41777#41777: *7 http exten: ""
2019/06/30 21:58:14 [debug] 41777#41777: *7 posix_memalign: 0000558C35B3C840:4096 @16
2019/06/30 21:58:14 [debug] 41777#41777: *7 http process request header line
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Depth: 0"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Type: application/xml; charset=utf-8"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Content-Length: 241"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Host: proxy.example.com"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Connection: Keep-Alive"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Encoding: gzip"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Accept-Language: en-US, en;q=0.7, *;q=0.5"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header: "Authorization: Basic 1cC5...WUVi"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http header done
2019/06/30 21:58:14 [info] 41777#41777: *7 client sent no required SSL certificate while reading client request headers, client: 10.0.1.235, server: proxy.example.com, request: "PROPFIND /dav/myuser%40example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/ HTTP/1.1", host: "proxy.example.com"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http finalize request: 496, "/dav/myuser@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?" a:1, c:1
2019/06/30 21:58:14 [debug] 41777#41777: *7 event timer del: 15: 91237404
2019/06/30 21:58:14 [debug] 41777#41777: *7 http special response: 496, "/dav/myuser@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/?"
2019/06/30 21:58:14 [debug] 41777#41777: *7 http set discard body
2019/06/30 21:58:14 [debug] 41777#41777: *7 headers more header filter, uri "/dav/myuser@example.com/7a59f94d-6be5-18ef-4248-b8a2867fe445/"
2019/06/30 21:58:14 [debug] 41777#41777: *7 charset: "" > "utf-8"
2019/06/30 21:58:14 [debug] 41777#41777: *7 HTTP/1.1 400 Bad Request
Date: Mon, 01 Jul 2019 04:58:14 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 230
Connection: close
Secure: Groupware Server
X-Content-Type-Options: nosniff

In comms with the app vendor, I was asked

Does your proxy send TLS Certificate Request

https://tools.ietf.org/html/rfc5246#section-7.4.4?

... the TLS stack which is used ... won't send certificates preemptively, but only when they're requested. In my tests, client certificates are working as expected, but ONLY if the server explicitly requests them.


I don't recognize the preemptive request above.

DOES nginx send such a TLS Certificate Request by default? Is there a required, additional config to force that request?
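
One way to observe this from the outside, with a plain OpenSSL client rather than tshark: when a server sends a TLS CertificateRequest, openssl s_client prints the advertised CA list under "Acceptable client certificate CA names":

# hostname taken from the logs above
openssl s_client -connect proxy.example.com:443 -servername proxy.example.com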

TLS 1.3 support in nginx-1.17.1 binary for Ubuntu 18.04 "bionic" provided by nginx.org (no replies)

I've installed the nginx package provided by nginx.org
(https://nginx.org/en/linux_packages.html#Ubuntu), specifically the binary
provided by
https://nginx.org/packages/mainline/ubuntu/pool/nginx/n/nginx/nginx_1.17.1-1~bionic_amd64.deb,
and it doesn't have TLS 1.3 support.

According to
https://mailman.nginx.org/pipermail/nginx/2019-January/057402.html this
would be because it was built on an Ubuntu 18.04 "bionic" that was not
fully updated. Ubuntu 18.04 "bionic" switched from OpenSSL 1.1.0 to OpenSSL
1.1.1 recently, and I hoped the newer releases would be compiled against
OpenSSL 1.1.1 and support TLS 1.3.

When I build that package myself (using apt-get source nginx ; cd
nginx-1.17.1/ ; debuild -i -us -uc -b) on a fully updated Ubuntu 18.04
"bionic", it does support TLS 1.3.

I ask that the build environment be set up such that the next release will
support TLS 1.3, or better yet, that the 1.16.0 and 1.17.1 packages for
Ubuntu 18.04 "bionic" are updated to include TLS 1.3 support. Unless such
packages won't work on a non-updated Ubuntu 18.04 system? (Why?)

Or does anyone know of a workaround that does not involve building the
packages myself?

Nginx request processing is slow when logging disabled (no replies)

Hello All,

I have an Nginx reverse proxy connected to uWSGI backend. I have configured
nginx to log to a centralized remote syslog service as:

error_log syslog:server=example.com:514,tag=nginx_error debug;

The problem here is that, when I remove the above line from my nginx.conf,
the request processing time becomes very high and it leads to client
timeouts (returns HTTP 460).

When I enable logging in my nginx.conf, I do not get HTTP 460 at all, but
there is extra overhead introduced which increases the CPU utilization.
What I suspect is that nginx sends the HTTP requests to my uWSGI backend a
little more slowly, and my uWSGI backend is able to handle them gracefully
and write the response back to nginx successfully. The average response
time of the backend also spikes to 5x when logging is enabled.

Once I disable logging, the CPU Utilization decreases while the requests
are flooded to uWSGI backend and the backend takes time to return the
response within the defined client timeout period. If the request takes
time to process and if the client ( Android/iOS app) hasn't received the
response, It aborts the connection either when the timeout is reached or if
the user cancels the request.

I'd like to know whether I have to add a proxy buffer to my nginx to queue
up requests and send them to my backend instead of flooding it. Any other
solutions are appreciated.

I've also attached the log files below. any help is appreciated. Thanks in
advance.

Om

Multiple master processes per reloading (1 reply)

$
0
0
Hi,

Per my understanding, reloading only replaces the old workers with new ones. However, during testing (constantly reloading), I found that the output of "ps -ef" shows multiple masters and shutting-down workers which fade away very quickly, so I guess the master process may undergo the same replacement.
Could some experts help confirm this?

What's strange is that in a production env, those masters and shutting-down workers stay there forever and never go away. In this env, there is a script that reloads nginx every 5 minutes.
Any idea what's going on here?


Thanks and Regards,
Allen

2019 NGINX User Survey: Give us feedback and be part of our future (no replies)

Hello-

Reaching out because it’s that time of year for the annual NGINX User
Survey. We're always eager to hear about your experiences to help us
evolve, improve and shape our product roadmap.

Please take ten minutes to share your thoughts:
https://nkadmin.typeform.com/to/nSuOmW?source=email

Best,
Kelsey

--
Kelsey Dannels
San Francisco
https://nginx.com/
https://www.linkedin.com/company/2962671 https://twitter.com/nginx
https://www.facebook.com/nginxinc

deny vs limit_req (no replies)

Hi,

I have a few `deny` rules set in global scope; sometimes I add spammers
there to block annoying attacks.

I also have a couple of `limit_req` rules in global scope, and one in a
local scope that is more restrictive, placed inside a `location` directive.

Last time an attack happened, the limit_req was kicking in for this
location, but after I put the IP address in the `deny` rules, the deny
didn't do anything.

So my question is: is this a matter of precedence? Would the limit_req
inside a location suppress any global deny rules?


Thanks

Regards,

Webert Lima
*Belo Horizonte - Brasil*
*IRC NICK - WebertRLZ*

How to configure Nginx LB IP-Transparency for custom UDP application (no replies)

Hi all,


I am using NGINX 1.13.5 as a load balancer for one of my custom
applications, which listens on UDP ports 2231, 67 and 68.

I am trying for Load Balancing with IP-Transparency.



When I use the proxy_protocol method, the packets received from a remote
client are modified and sent to the upstream by the NGINX LB - I am not
sure why/how the packet is modified, and the remote client IP is NOT used
as the source IP. (Presumably the extra bytes are the PROXY protocol header
that "proxy_protocol on;" prepends to each datagram, which the upstream
must be prepared to parse.)



When I use proxy_bind, the packet is forwarded to the configured upstream,
but the source IP is not updated with the remote client IP.



Basically, in both methods, the remote client address is not used as the
source IP. I hope I just missed some minor part. Can someone help resolve
this issue?



The following are the detailed configuration for your reference.



Method 1: proxy_protocol

Configuration:

user root;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 10.43.18.107:2231 udp;
        proxy_protocol on;
        proxy_pass 10.43.18.172:2231;
    }
    server {
        listen 10.43.18.107:67 udp;
        proxy_protocol on;
        proxy_pass 10.43.18.172:67;
    }
    server {
        listen 10.43.18.107:68 udp;
        proxy_protocol on;
        proxy_pass 10.43.18.172:68;
    }
}

TCPDUMP O/P:

From LB:

10:05:07.284259 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43

10:05:07.284555 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91



From upstream [custom application]:

10:05:07.284442 IP 10.43.18.107.51775 > 10.43.18.172.2231: UDP, length 91



Method 2: proxy_bind

Configuration:



user root;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

stream {
    server {
        listen 10.43.18.107:2231 udp;
        proxy_bind $remote_addr:2231 transparent;
        proxy_pass 10.43.18.172:2231;
    }
    server {
        listen 10.43.18.107:67 udp;
        proxy_bind $remote_addr:67 transparent;
        proxy_pass 10.43.18.172:67;
    }
    server {
        listen 10.43.18.107:68 udp;
        proxy_bind $remote_addr:68 transparent;
        proxy_pass 10.43.18.172:68;
    }
}



Also, I added the below rules:



ip rule add fwmark 1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 2231 -j MARK --set-xmark 0x1/0xffffffff
iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 67 -j MARK --set-xmark 0x1/0xffffffff
iptables -t mangle -A PREROUTING -p udp -s 10.43.18.0/24 --sport 68 -j MARK --set-xmark 0x1/0xffffffff



However, the packet is still sent from the NGINX LB with its own IP, not
with the remote client IP address.



TCPDUMP O/P from LB:



11:49:51.999829 IP 10.43.18.116.2231 > 10.43.18.107.2231: UDP, length 43

11:49:52.000161 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43



TCPDUMP O/P from Upstream:



11:49:52.001155 IP 10.43.18.107.2231 > 10.43.18.172.2231: UDP, length 43



Note: I have followed the below link.



https://www.nginx.com/blog/ip-transparency-direct-server-return-nginx-plus-transparent-proxy/

How to properly log a bug (no replies)

Hi,

I have been working with NGINX for about a year now. I have some 40 instances of NGINX running and I am running into a core dump with 2 new ones.

I have a repeatable process that generates my .conf and .map files: PowerShell scripts that run, read from a database, and generate the .conf and .map files.

There is something about the .map files for these two that is causing a core dump / segmentation fault. So I built an NGINX box from code with debugging on and I have the error log files with all the debugging, but it means nothing to me.

I just don't know exactly where or to whom to report this issue.

Thanks,

-Bernie

Bernard Quick | Polaris Industries | Staff DevOps Architect
9955 59th Ave N | Plymouth, MN 55442 | p:763.417.2204 | c:612.963.7742 | e:Bernard.Quick@polaris.com



