Channel: Nginx Forum - Nginx Mailing List - English

njs-0.3.7 (no replies)

Hello,

I'm glad to announce a new release of NGINX JavaScript module (njs).

This release proceeds to extend the coverage of ECMAScript
specifications.

Notable new features:

- Object.assign() method:
  > var obj = { a: 1, b: 2 }
  undefined
  > var copy = Object.assign({}, obj)
  undefined
  > console.log(copy)
  {a:1,b:2}

You can learn more about njs:

- Overview and introduction: http://nginx.org/en/docs/njs/
- Presentation: https://youtu.be/Jc_L6UffFOs

Feel free to try it and give us feedback on:

- Github: https://github.com/nginx/njs/issues
- Mailing list: http://mailman.nginx.org/mailman/listinfo/nginx-devel


Changes with njs 0.3.7 19 Nov 2019

nginx modules:

*) Improvement: refactored iteration over external objects.

Core:

*) Feature: added Object.assign().

*) Feature: added Array.prototype.copyWithin().

*) Feature: added support for labels in console.time().

*) Change: removed console.help() from CLI.

*) Improvement: moved constructors and top-level objects to
global object.

*) Improvement: arguments validation for configure script.

*) Improvement: refactored JSON methods.

*) Bugfix: fixed heap-buffer-overflow in njs_array_reverse_iterator()
function. The following functions were affected:
Array.prototype.lastIndexOf(), Array.prototype.reduceRight().

*) Bugfix: fixed [[Prototype]] slot of NativeErrors.

*) Bugfix: fixed NativeError.prototype.message properties.

*) Bugfix: added conversion of "this" value to object in
Array.prototype functions.

*) Bugfix: fixed iterator for Array.prototype.find() and
Array.prototype.findIndex() functions.

*) Bugfix: fixed Array.prototype.includes() and
Array.prototype.join() with "undefined" argument.

*) Bugfix: fixed "constructor" property of "Hash" and "Hmac"
objects.

*) Bugfix: fixed "__proto__" property of getters and setters.

*) Bugfix: fixed "Date" object string formatting.

*) Bugfix: fixed handling of NaN and -0 arguments in Math.min()
and Math.max().

*) Bugfix: fixed Math.round() according to the specification.

*) Bugfix: reimplemented "bound" functions according to
the specification.
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

How to avoid sending incomplete request data to backend if 499 error (1 reply)

Hello...

A few days ago I ran into this problem... let me explain with some log lines:

X.X.X.X - - [16/Nov/2019:04:36:17 +0100] "POST /api/budgets/new HTTP/2.0" 200 2239 "----" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148 Safari/605.1" Exec: "2.190" Conn: "10" Upstream Time: "2.185" Upstream Status: "200"

X.X.X.X - - [16/Nov/2019:04:36:55 +0100] "POST /api/budgets/new HTTP/2.0" 499 0 ""----"" "Mozilla/5.0 (iPhone; CPU iPhone OS 13_2 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) GSA/86.0.276299193 Mobile/15E148 Safari/605.1" Exec: "0.147" Conn: "1" Upstream Time: "0.142" Upstream Status: "-"

In the first line there is nothing of interest; the POST request completed fine.

In the second request there was a client disconnection and the POST request was incomplete, as indicated by the logged 499 status.

The problem was:
- the incomplete POST data was somehow sent from nginx to the backend FastCGI server.
- the backend code processed the incomplete request data and generated a corrupt entry in a database... but that's another story.

I need NGINX not to behave like this. If the request data is incomplete and the connection timed out, producing a 499, I want NGINX to discard that request entirely instead of sending the partial data to the FastCGI backend.

I guess there would be two ways:
- the nginx core buffers the client request and discards it completely if it does not finish correctly (499).
- the nginx fastcgi module buffers the client request and discards it completely if it does not finish correctly (499).

But I do not know how to configure it like this.

Even though "fastcgi_request_buffering on" is supposed to be the default, in this case an incomplete request was still sent to the backend, causing code to execute with corrupt data.

Is there a way to discard incomplete requests when a client disconnect happens, before passing them to the backends?
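For reference, request buffering is controlled per location; below is a minimal sketch (the backend address, paths and limits are placeholders, not taken from the setup above) that at least makes the buffering behaviour explicit:

```nginx
location /api/ {
    # Buffer the entire client body before contacting the backend.
    # "on" is the default; stating it explicitly documents the intent.
    fastcgi_request_buffering on;

    # Refuse oversized bodies so they are never buffered at all.
    client_max_body_size 10m;

    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/app/index.php;  # placeholder
    fastcgi_pass 127.0.0.1:9000;                           # placeholder
}
```

Whether buffering alone is enough to keep a partially received body from reaching the backend after a 499 is exactly the open question here, so treat this as a starting point rather than a fix.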

Thanks to all!

--
Gino

Re: [nginx-announce] nginx-1.17.6 (no replies)

Hello Nginx users,

Now available: Nginx 1.17.6 for Windows https://kevinworthington.com/nginxwin1176 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington


On Tue, Nov 19, 2019 at 9:33 AM Maxim Dounin <mdounin@mdounin.ru> wrote:

> Changes with nginx 1.17.6 19 Nov
> 2019
>
> *) Feature: the $proxy_protocol_server_addr and
> $proxy_protocol_server_port variables.
>
> *) Feature: the "limit_conn_dry_run" directive.
>
> *) Feature: the $limit_req_status and $limit_conn_status variables.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>

feature request: warn when domain name resolves to several addresses (1 reply)

I noticed that in ngx_http_proxy_module

proxy_pass http://localhost:8000/uri/;
"If a domain name resolves to several addresses, all of them will be
used in a round-robin fashion. In addition, an address can be
specified as a server group."

However, this can be confusing for end users who innocently put in the
domain name "localhost" and then find that round-robin across IPv6 and
IPv4 is occurring, ref:
https://stackoverflow.com/a/58924751/32453
https://stackoverflow.com/a/52550758/32453

Suggestion/feature request: if a domain name resolves to several
addresses, log a warning in the error.log file, or at least note it in
the output of -T. Then there won't be unexpected round-robins, with
"supposedly single" servers being considered unavailable due to
timeouts, surprising people like myself.

Thank you for your attention, and for nginx, it's rocking fast! :)

-Roger Pack-

NGINX configuration with two backends (without load balancing) and NGINX - MYSQL TLS encryption (no replies)

Hi all,

Question 1
Is it possible to have NGINX reverse proxy to multiple MySQL servers listening on the same port, using different names, as you can with HTTP? We don't want to perform any load balancing across them; we just want to be able to redirect to MySQL instances based on a logical name, the same as with HTTP.
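For plain TCP streams there is no Host header to route on, so a common workaround is to expose one local listen port per logical backend. A minimal sketch (all upstream names and addresses below are made up for illustration):

```nginx
stream {
    upstream mysql_reporting { server 192.0.2.10:3306; }
    upstream mysql_billing   { server 192.0.2.11:3306; }

    # One local port per logical MySQL instance.
    server { listen 3307; proxy_pass mysql_reporting; }
    server { listen 3308; proxy_pass mysql_billing; }
}
```

Clients then pick the instance by connecting to the matching port rather than by name.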

Question 2
When I try to implement TLS encryption between NGINX and the MySQL database server, I get the following error on my MySQL client: ERROR 2013 (HY000): Lost connection to MySQL server at 'reading initial communication packet', system error

I have the following configuration: an Ubuntu server with the MySQL client // NGINX (with the configuration below) // the MySQL database (with SSL activated):
stream {
    upstream mysql1 {
        server 172.31.39.168:3306;
    }

    server {
        listen 3306;
        proxy_pass mysql1;
        proxy_ssl on;

        proxy_ssl_certificate /etc/ssl/client-cert.pem;
        proxy_ssl_certificate_key /etc/ssl/client-key.pem;
        #proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        #proxy_ssl_ciphers HIGH:!aNULL:!MD5;
        proxy_ssl_trusted_certificate /etc/ssl/ca-cert.pem;

        proxy_ssl_verify on;
        proxy_ssl_verify_depth 2;
        proxy_ssl_session_reuse on;
    }
}

If I comment out the proxy_ssl* parameters on NGINX, the connection works between the "Ubuntu server (with the MySQL client)" and the "MySQL database (with SSL activated)" through NGINX.

Thanks all



two identical keycloak servers + nginx as reverse proxy (no replies)

Hello,

Can somebody enlighten me please?

i have two identical keycloak servers running in HA mode via DNS
discovery: keycloak1.my.domain & keycloak2.my.domain

the dns discovery record is: keycloak.my.domain

this part is working no questions.


now i am trying to add nginx to the picture:

upstream signin {
    server 172.19.24.13:8080;
    server 172.19.24.16:8080;
}

server {
    listen 443;
    ignore_invalid_headers off;
    ssl on;
    ssl_certificate /etc/ssl/my.domain.crt;
    ssl_certificate_key /etc/ssl/my.domain.key;

    server_name signin.my.domain;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    location / {
        proxy_pass          http://signin;
        proxy_redirect      off;
        proxy_set_header    Host               $host;
        proxy_set_header    X-Real-IP          $remote_addr;
        proxy_set_header    X-Forwarded-For    $proxy_add_x_forwarded_for;
        proxy_set_header    X-Forwarded-Host   $host;
        proxy_set_header    X-Forwarded-Server $host;
        proxy_set_header    X-Forwarded-Port   $server_port;
        proxy_set_header    X-Forwarded-Proto  $scheme;
    }
}

every request to https://signin.my.domain results in a 500 error, and in
the logs i see:

rewrite or internal redirection cycle while internally redirecting to
"////////////"

i know the keycloak part works; i can go to keycloak.my.domain in my
browser with no problem.



upstream_response_length and upstream_addr can't work (no replies)

hi all:
When I use the slice module, $upstream_response_length and
$upstream_addr don't work as expected.
nginx.conf:
#########################################################################
include mime.types;
default_type application/octet-stream;

log_format main
'$status^$scheme^$request^$body_bytes_sent^$request_time^$upstream_cache_status^$remote_addr^$http_referer^$http_user_agent^$content_type^$http_range^$cookie_name^$upstream_addr^$upstream_response_time^$upstream_bytes_received^$upstream_response_length^[$time_local]';


access_log logs/access.log main;
rewrite_log on;

sendfile on;
aio threads;

keepalive_timeout 65;

if ($uri ~ ^/([a-zA-Z0-9\.]+)/([a-zA-Z0-9\.]+)/(.*)) {
    set $cdn $1;
    set $new_host $2;
    set $new_uri $3;
}

location / {
    slice 1m;
    proxy_cache_lock on;
    proxy_cache my_cache;
    proxy_cache_key $uri$is_args$args$slice_range;
    proxy_set_header Range $slice_range;
    proxy_cache_valid 200 206 24h;
    proxy_pass http://$cdn/$new_uri;
}
#########################################################################
I initiate a range HTTP request, for example:
#########################################################################
curl -o result -H 'Range: bytes=2001-4932000' "http://127.0.0.1:64002/A.com/B.com/appstore/developer/soft/20191008/201910081449521157660.patch"
#########################################################################
$upstream_response_length and $upstream_bytes_received show just 1 MB, not
4.9 MB. With tcpdump I found that nginx makes 5 HTTP requests to A.com,
and that nginx implements slice via subrequests.

Why is this? How can I fix it?

Thank you

Are modules built with --with-compat compatible across minor versions of NGINX? (no replies)

If I build a dynamic module against, say, nginx 1.12.2 with `--with-compat`, will it work with, say, nginx 1.12.1 (assuming --with-compat all around)?


I assume not, because I found the following in ngx_module.c, separate from the signature check; nginx_version has the minor version in it.


if (module->version != nginx_version) {
    ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                       "module \"%V\" version %ui instead of %ui",
                       file, module->version, (ngx_uint_t) nginx_version);
    return NGX_ERROR;
}

Add new Directive to existing nginx module (no replies)

Is it possible to add a new directive to an existing nginx module?
While trying to add one, I get the error "directive is duplicate in <nginx.conf>".

Offload TCP traffic to another process (2 replies)

Dear experts,

We are evaluating nginx as a platform for the product of our new startup company.

Our use-case requires a TCP proxy that will terminate TLS, which nginx handles very well. However, we need to be able to send all TCP traffic to another process for offline processing.

Initially we thought we could write an NGX_STREAM_MODULE (call it tcp_mirror) that would be able to read both the downstream bytes (client <--> nginx) and the upstream bytes (proxy <--> server) and send them to another process. But after looking at a few module examples and trying out a few things, we understood that we can only use a single content handler for each stream configuration.

For example, we were hoping the following mock configuration would work for us, but realized we can't have both proxy_pass and tcp_mirror under server because there can be only one content handler:
stream {
    server {
        listen 12346;
        proxy_pass backend.example.com:12346;
        tcp_mirror processor.acme.com:6666;
    }
}

The above led us to the conclusion that in order to implement our use case we would have to write a new proxy_pass module; more specifically, we would have to rewrite ngx_stream_proxy_module.c. The idea is that we would manage two upstreams: the server and the processor. The configuration would look something like this:
stream {
    server {
        listen 12346;
        proxy_pass_mirror backend.example.com:12346 processor.acme.com:6666;
    }
}

Before we begin implementation of this design, we wanted to consult with the experts here and understand whether anyone has a better idea on how to implement our use-case on top of nginx.

Thanks in advance,
Yoav Cohen.

ssl_client_fingerprint and sha256 (no replies)

Hi everyone,

this is my first post on this mailing list, so bear with me :-)

Sorry if my question is silly, but I haven't found any way to use a
SHA-256 fingerprint for client certificate validation in Nginx. SHA-1
fingerprints work fine, but we are slowly moving toward SHA-256 as the
default hashing function. The ngx_http_ssl_module documentation
explicitly mentions only SHA-1 [1].

I have seen in Trac that there is an issue open about this [2].
Perhaps there is a good reason for not having it currently; I'd be glad
to hear from you all. We are using SSL client auth for the WAPT project
[3], which automates Windows workstation software installs and updates.

Cheers,

Denis

[1] http://nginx.org/en/docs/http/ngx_http_ssl_module.html
[2] https://trac.nginx.org/nginx/ticket/1302
[3] https://doc.wapt.fr

--
Denis Cardon
Tranquil IT
12 avenue Jules Verne (Bat. A)
44230 Saint Sébastien sur Loire (FRANCE)
tel : +33 (0) 240 975 755
http://www.tranquil.it

Tranquil IT recrute! https://www.tranquil.it/nous-rejoindre/
Samba install wiki for Frenchies : https://dev.tranquil.it
WAPT, software deployment made easy : https://wapt.fr

proxy_pass in post_action location does not send any http request (no replies)

Hi

I am trying to configure NGINX to send another HTTP request after successful completion of the original proxied request, in order to count statistics etc.
I am using post_action with proxy_pass as follows:

location / {
    proxy_http_version 1.1;
    proxy_set_header HOST $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-Port 1061;
    proxy_set_header X-Forwarded-Host $host:1061;
    proxy_set_header X-Forwarded-Server $host:1061;
    proxy_read_timeout 86400;
    proxy_max_temp_file_size 0;
    proxy_set_header Connection "";

    set $backend rack-storage-radosgw:8802;

    proxy_pass http://$backend;

    post_action /send_event;
}

location /send_event {
    proxy_method POST;
    set $s3_proxy_request s3-proxy-manager:8946/internal/api/v3/s3proxies/actions/raise_event;
    proxy_pass http://$s3_proxy_request;
}


The request in location / is sent to rack-storage-radosgw.service.strato and completes successfully, but the request to s3-proxy-manager is not sent at all. I used tcpdump to capture traffic on port 8946 and no traffic arrived. I also checked that the send_event location is entered by the code, and it is (using rewrite_by_lua_block).
What am I doing wrong?
Thanks

'Lost' the default config location (no replies)

Noob here, so please bear with me. I have a reverse proxy working, so if I browse to https://mysite.com/footyscore the page launches. However, if I browse to http:// or https://mysite.com, the default "Welcome to nginx" page loads. I want to change this (change the root) so that my Roundcube webmail will launch instead.

Serving a subdirectory (no replies)

Hi!

I'm a little bit lost now, since the various configurations I've tried
just don't work. None of them.

I have:

server {
    listen 443 default_server ssl;
    listen [::]:443 default_server ssl;

    server_name nc409.my.domain;
    root /var/www;
    index index.sh index.html;

    location /chrony {
        try_files $uri $uri/ $uri/ index.sh;
    }

    location ~ "index\.sh"$ {
        gzip off;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        include /etc/nginx/fastcgi_params;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SCRIPT_FILENAME $request_filename;
    }
}

With https://nc409.my.domain/ it gives an empty page (this is intended).
With https://nc409.my.domain/chrony it gives 404-not found.
With https://nc409.my.domain/chrony/ it gives 404-not found.
With https://nc409.my.domain/chrony/index.sh it gives 403 Forbidden.

Looking at the debug logs, all seems OK: nginx feeds
/var/www/chrony/index.sh to fcgiwrap. /var/www/chrony/index.sh is
executable by all; group is root, owner is root. Only the owner may
write the file; all others may execute it.

My first question: why doesn't nginx:
https://nc409.my.domain/chrony -> https://nc409.my.domain/chrony/index.sh
https://nc409.my.domain/chrony/ -> https://nc409.my.domain/chrony/index.sh
https://nc409.my.domain/chrony/index.sh ->
https://nc409.my.domain/chrony/index.sh
https://nc409.my.domain/chrony/index.css ->
https://nc409.my.domain/chrony/index.css

feeding index.sh to fcgiwrap and delivering index.css directly? And how
do I have to set this thing up to do exactly that?
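One thing that stands out: the last parameter of try_files is a URI used for an internal redirect, so a bare "index.sh" (no leading slash) is unlikely to do what is intended, and the duplicated "$uri/" is redundant. A sketch of the location with an absolute fallback URI (an assumption about the intent, not a verified fix):

```nginx
location /chrony {
    # Fall back to the script with an absolute URI so the internal
    # redirect can land in the regex location that handles index.sh.
    try_files $uri $uri/ /chrony/index.sh;
}
```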


--
Thomas

Various errors while configuring nginx for certificate-based client auth (2 replies)

Hi guys,

I'm using nginx in the form of a container (docker.io/nginx), version 1.17.3-alpine.
I'm trying to set up my nginx to do TLS auth and then forward packets to another host on the network.
As part of this I also have to support some probes that continuously monitor a secondary location on the same server and port.

This is my configuration

```
server {
listen 443 ssl;
server_name mydomain.com;

ssl_certificate /etc/nginx/certs/tls.crt;
ssl_certificate_key /etc/nginx/certs/tls.key;

ssl_client_certificate /etc/nginx/ca_certs/ca.crt;
ssl_verify_client optional;
ssl_verify_depth 2;

location = /healthz {
return 200 'the app is alive!';
}

location = / {
if ($ssl_client_verify != SUCCESS) {
return 403;
}

proxy_pass http://other-host:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header SSL_Client $ssl_client_s_dn;
proxy_set_header SSL_Client_Verify $ssl_client_verify;
}
}
```

First of all, as soon as I load the configuration I get the following error:

```
2019/12/05 10:22:35 [emerg] 1#1: invalid condition "!=" in /etc/nginx/conf.d/mydomain.conf:36
nginx: [emerg] invalid condition "!=" in /etc/nginx/conf.d/mydomain.conf:36
```

I find this if directive in every tutorial out there. I'm really not sure what's wrong here...

Also, even if I remove the if clause (just to see whether it would otherwise work), I get another error:

```
2019/12/05 11:10:20 [emerg] 1#1: invalid number of arguments in "proxy_set_header" directive in /etc/nginx/conf.d/mydomain.conf:41
nginx: [emerg] invalid number of arguments in "proxy_set_header" directive in /etc/nginx/conf.d/mydomain.conf:41
```

Even after removing the entire `location = /` block (to see if at least the container starts and /healtz returns 200), I still get the following error:

```
2019/12/05 11:43:30 [error] 8#8: *90 open() "/etc/nginx/html/healtz" failed (2: No such file or directory), client: 172.16.0.158, server: , request: "GET /healtz HTTP/1.1", host: "mydomain.com"
172.16.0.158 - - [05/Dec/2019:11:43:30 +0000] "GET /healtz HTTP/1.1" 404 153 "-" "Wget" "-"
172.16.56.6 - - [05/Dec/2019:11:43:40 +0000] "GET / HTTP/1.1" 404 153 "-" "-" "-"
```

Shouldn't the return directive (as written) simply return a 200 and the message, even if no page is present?

Sorry for posting three different issues in the same thread... I just thought it made sense to post them together.

Thank you,

-Luca

Nginx map - use variable multiple times or use multiple variables (1 reply)

Hello,

We use the Nginx map module to send traffic to different upstreams based on an HTTP header:

map $http_flow $flow_upstream {
    default "http://flow-dev";
    prod    "http://flow-prod";
    test    "http://flow-test";
    dev     "http://flow-dev";
}

location / {
    proxy_read_timeout 5s;
    proxy_pass $flow_upstream;
}

Now we want to define different timeouts for the different flows:

map $http_flow $read_timeout {
    default 15s;
    prod    5s;
    test    10s;
    dev     15s;
}

location / {
    proxy_read_timeout $read_timeout;
    proxy_pass $flow_upstream;
}

But the Nginx config test shows an error here:
nginx -t
nginx: [emerg] "proxy_send_timeout" directive invalid value in /etc/nginx/conf.d/flow.conf:19
nginx: configuration file /etc/nginx/nginx.conf test failed

Can we use map in such a way?

Or maybe something like this:
map $http_flow $flow_upstream $read_timeout {
    default "http://flow-dev"  "15s";
    prod    "http://flow-prod" "5s";
    test    "http://flow-test" "10s";
    dev     "http://flow-dev"  "15s";
}

Thank you!

RD Gateway thru Reverse Proxy (1 reply)

I have multiple internal servers that need to use port 443 due to requirements of the applications and vendors. One is a Windows 2016 Essentials server, the other a custom web app on Linux that requires communication to the cloud on 443. I have set up a reverse proxy and it's excellent. The only issue I'm having is with the Essentials server: I log in to the web console, and when I click to launch an RD Gateway session it comes up and I can authenticate, but when it goes to launch the actual session it fails.

Error I get is:

2019/12/10 14:27:48 [error] 27899#27899: *291 upstream prematurely closed connection while reading response header from upstream, client: <IP I'm at>, server: <essentials URL>, request: "RDG_OUT_DATA /remoteDesktopGateway/ HTTP/1.1", upstream: "https://<internal_ip>:443/remoteDesktopGateway/", host: "<essentials_URL>"

Below are my custom config settings:

######-------------- BEGIN of the script --------------
server {
listen 80;
server_name <essentials_URL>;
# redirect http to https
return 301 https://$server_name$request_uri;
client_max_body_size 0;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

location / {
proxy_pass http://<essentials_internal_ip>;
}
}

server {
listen 80;
server_name <smartwebsite_url>;
# redirect http to https
return 301 https://$server_name$request_uri;
client_max_body_size 0;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;

location / {
proxy_pass http://<smartwebsite_internal_ip>;
}
}

server {
listen 443 ssl;
listen [::]:443 ssl;
server_name <essentials_URL>;
ssl_certificate /config/user-data/ssl_chain_essentials.pem;
ssl_certificate_key /config/user-data/ssl_chain_key_essentials.pem;
access_log /var/log/nginx/<essentials-URL>.access.log;
error_log /var/log/nginx/<essentials-URL>.error.log;
ssl_session_timeout 1d;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
#dh param
ssl_dhparam /config/user-data/dhparam.pem;
# Enable HTTP Strict-Transport-Security
# If you have a subdomain of your site,
# be careful to use the 'includeSubdomains' options
add_header Strict-Transport-Security "max-age=63072000;
includeSubdomains; preload";
# XSS Protection for Nginx web server
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options nosniff;
ssl_session_cache shared:SSL:10m;
add_header X-Robots-Tag none;
client_max_body_size 0;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
location / {
proxy_pass https://<essentials_internal_ip>;
}
}

server {
listen 443 ssl;
server_name <smartwebsite_url>;
ssl_certificate /config/user-data/ssl_chain_smartweb.pem;
ssl_certificate_key /config/user-data/ssl_chain_key_smartweb.pem;
access_log /var/log/nginx/<smartwebsite-URL>.access.log;
error_log /var/log/nginx/<smartwebsite-URL>.error.log;
ssl_session_timeout 1d;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:ECDHE-RSA-AES128-GCM-SHA256:AES256+EECDH:DHE-RSA-AES128-GCM-SHA256:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
#dh param
ssl_dhparam /config/user-data/dhparam.pem;
# Enable HTTP Strict-Transport-Security
# If you have a subdomain of your site,
# be careful to use the 'includeSubdomains' options
add_header Strict-Transport-Security "max-age=63072000;
includeSubdomains; preload";
# XSS Protection for Nginx web server
add_header X-Frame-Options DENY;
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options nosniff;
add_header X-Robots-Tag none;
client_max_body_size 0;
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forward-For $proxy_add_x_forwarded_for;
location / {
proxy_pass https://<smartwebsite_internal_ip>:8123;
}
}
#######-----------------end of script----------------------------


Thoughts?

Thanks.

JR

Getting started (no replies)

I edited the config file as follows:

http {
    server {
        location / {
            root /www;
        }
    }
    ....
}

I then reloaded the config file with sudo nginx -s reload. I created a
test file at /www/index.html, but when I tried to load the page in my
browser I still got the default "Welcome to nginx!" page.

What am I doing wrong?

I also tried editing the page at /usr/share/nginx/html/index.html. None
of my edits seem to have any effect; when I tried to load the page in my
browser I still got the default "Welcome to nginx!" page.

What's going on?
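For comparison, here is a complete minimal server block; everything in it (port, server_name, and the comments) is an assumption for illustration, not taken from the setup above:

```nginx
http {
    server {
        listen 80 default_server;  # make sure no other default server wins
        server_name _;
        root /www;                 # must be readable by the nginx worker user
        index index.html;
    }
}
```

Packaged installs often ship a default server in /etc/nginx/conf.d/default.conf that can shadow a server block added elsewhere, so checking which files nginx actually loads (nginx -T) is a reasonable first step.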

thanks,
James

NGINX only for forwarding to LAN (no replies)

Hi@all,

first of all, a "hello" to everyone. I am new here :-)

I want to set up NGINX on my firewall/router (IPFire), but only as a reverse proxy. There are no websites running on the IPFire itself.

The IPFire has a fixed IP on the WAN interface and can be reached from the Internet.

[IPFire]
WAN: 10.20.30.40
LAN: 192.168.xx.254

Behind the firewall, i.e. in the LAN, there are several web servers, each running on the same port (80) but on different physical servers:

[Server1]
LAN: 192.168.xx.5

[Server2]
LAN: 192.168.xx.6

Furthermore, I have an external, official domain (mydomain.de). On the external root server I have created two subdomains which I redirect to the IPFire's fixed IP:

gw.mydomain.com -> http://10.20.30.40
cloud.mydomain.com -> http://10.20.30.40

NGINX on the IPFire should now forward all requests directed to gw.mydomain.de to the server 192.168.xx.5 (and back),

and requests addressed to cloud.mydomain.com to 192.168.xx.6.

As far as I know, the Host header has to be rewritten so that the remote client thinks it is communicating with xxxx.mydomain.de and not with 192.168.xx.y.

I tried for hours yesterday to get this working with examples from the internet, but nothing worked.

Does this work at all? Can anyone help me?
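What is described here is standard name-based virtual hosting, which nginx does support; a minimal sketch using the names and addresses from the question (everything else is assumed):

```nginx
http {
    server {
        listen 80;
        server_name gw.mydomain.com;

        location / {
            # Server1; "xx" is the placeholder octet from the question
            proxy_pass http://192.168.xx.5;
            proxy_set_header Host $host;        # backend sees the public name
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

    server {
        listen 80;
        server_name cloud.mydomain.com;

        location / {
            proxy_pass http://192.168.xx.6;     # Server2
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```

nginx picks the server block by the Host header sent by the client, so both names can share the single WAN address.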

best regards
pixel24

Dumping request metadata to local port (no replies)

Hi,

I am relatively new to NGINX module development, so I apologize if the question seems trivial.

I would like to dump data/metadata about certain requests that my nginx server receives for a certain location/config. Can I write a module that creates a regular C UDP socket, plus a handler that runs as one of the content-phase handlers and dumps the data I want to a localhost port using that socket?

Is this safe to do in the first place? Is there any quicker or easier way to do what I want?