Channel: Nginx Forum - Nginx Mailing List - English

How to cache static files under root /var/www/html/images (5 replies)

Hi,

I have Nginx running as a webserver (not as a proxy). I need to cache static
files that are under /var/www/html/images in memory. What's the simplest
way to do this?

Thank you
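For what it's worth, a commonly suggested approach (a sketch, not from this thread): nginx has no built-in in-memory content cache for static files; it relies on the OS page cache for file contents, and what it can cache itself are open file descriptors and metadata, via open_file_cache:

```
http {
    open_file_cache          max=10000 inactive=60s;  # cache fds/metadata for hot files
    open_file_cache_valid    120s;
    open_file_cache_min_uses 2;
    open_file_cache_errors   on;

    server {
        location /images/ {
            root     /var/www/html;
            sendfile on;    # let the kernel serve straight from the page cache
            expires  30d;   # and let clients cache too
        }
    }
}
```

With enough free RAM, repeated reads of the same images are served from the page cache without touching disk.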
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Including Multiple Server Blocks via wildcard (1 reply)

In my main nginx.conf file I include several files, one server block per file. If I use a wildcard include, the HTTPS servers break but the HTTP server is fine. Example:

include /servers/*;

this would include 3 server blocks

1 http
2 https

If I include each file specifically, all servers work fine (including both HTTPS server blocks). Any idea why this would be? The server starts up fine, but I just can't connect to the HTTPS endpoints (timeout).

Example:

include /servers/server1;
include /servers/server2;
include /servers/server3;

Each server file contains a whole server block. Example:

server {
listen 443 ssl backlog=2048;
server_name server1.domain.com;

ssl_certificate /test.crt;
ssl_certificate_key /test.key;
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 5m;
ssl_protocols TLSv1.2;

location / {
root /html;
}

include /locations/*_conf;

status_zone https_server1;

}


Thanks,
Eric
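One thing worth ruling out first (a guess, not a confirmed diagnosis): a bare `*` glob also picks up editor backup files and anything else lying in the directory, which can silently change which blocks are loaded. Restricting the pattern makes the include deterministic:

```
# Match only the intended files; check with `nginx -T` which files were actually read.
include /servers/server*;
```

`nginx -T` dumps the fully merged configuration, so you can diff the wildcard and the explicit variants directly.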




Nginx as a proxy between ICA client and citrix presentation server (1 reply)

Hi,

Can Nginx be used as a proxy server between an ICA client and a Citrix Presentation Server?

http://de.slideshare.net/fdwl/citrix-internals-ica-connectivity
https://aspsupport.krz.de/ica-client/Connecting%20to%20MetaFrame%20Presentation%20Server%20through%20Proxy%20Servers.pdf

I've searched for "ICA protocol" and nginx but have found nothing so far...

Heiko

Re: input required on proxy_next_upstream (no replies)

Hello!

On Wed, Feb 15, 2017 at 01:27:53PM +0530, Kaustubh Deorukhkar wrote:

> We are using nginx as reverse proxy and have a set of upstream servers
> configured
> with upstream next enabled for few error conditions to try next upstream
> server.
> For some reason this is not working. Can someone suggest if am missing
> something?

[...]

This question looks irrelevant to the nginx-devel@ mailing list.
Please keep user-level questions in the nginx@ mailing list.
Thank you.

--
Maxim Dounin
http://nginx.org/

input required on proxy_next_upstream (no replies)

Hi,

We are using nginx as a reverse proxy and have a set of upstream servers
configured, with proxy_next_upstream enabled for a few error conditions so
that the next upstream server is tried.
For some reason this is not working. Can someone suggest what I am missing?

http {
....
upstream myservice {
server localhost:8081;
server localhost:8082;
}

server {
...
location / {
proxy_pass http://myservice;
proxy_next_upstream error timeout invalid_header http_502 http_503
http_504;
}
}
}

So what I want is: if any upstream server returns the above errors, nginx
should try the next upstream instance; instead it just reports the error to
clients.

Note that in my case one of the upstream servers responds early to some
PUT requests with a 503, before the entire request body has been read by the
upstream. I understand that nginx closes the current upstream connection on
which it received the early response, but I expect it to try the next
upstream server, as configured, for the same request before responding with
an error to the client.

Am I missing some nginx trick here?
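A few retry-related knobs worth checking (a sketch of directives that influence next-upstream behaviour, not a confirmed fix for this case):

```
location / {
    proxy_pass http://myservice;
    proxy_next_upstream error timeout invalid_header http_502 http_503 http_504;
    proxy_next_upstream_tries 2;   # bound the number of attempts
    proxy_request_buffering on;    # nginx can only replay a request whose body it still holds
    # Since 1.9.13, non-idempotent methods (POST, LOCK, PATCH) are not retried
    # unless 'non_idempotent' is added to the proxy_next_upstream list.
}
```

The early-503-during-PUT case is the suspicious part: if the request body was being streamed rather than buffered, nginx has nothing left to replay to the next server.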

reverse proxy with TLS-PSK (no replies)

Hey,

For our application we want embedded devices to access backend websocket/http services through nginx with TLS/SSL.
The embedded devices are very resource-constrained and would benefit from using TLS-PSK.

My question is: does nginx support reverse proxying with TLS-PSK used to secure incoming connections?
From what I understand, nginx uses OpenSSL, and OpenSSL supports TLS-PSK.
If nginx supports this, could you give me a hint on how to configure it?

Thank you in advance.

Bruno De Maesschalck

Client certificate fails with "unsupported certificate purpose" from iPad, works in desktop browsers (no replies)

We have client certificates set up and working for desktop browsers, but when using the same certificates that work on the desktop browser from an iPad, we get a "400: The SSL certificate error" in the browser, and the following in the log:

"18205#18205: *11 client SSL certificate verify error: (26:unsupported certificate purpose) while reading client request headers, client"


"openssl x509 -purpose" for the cert used to create the pkcs12 file is:

Certificate purposes:
SSL client : Yes
SSL client CA : No
SSL server : Yes
SSL server CA : No
Netscape SSL server : Yes
Netscape SSL server CA : No
S/MIME signing : Yes
S/MIME signing CA : No
S/MIME encryption : Yes
S/MIME encryption CA : No
CRL signing : Yes
CRL signing CA : No
Any Purpose : Yes
Any Purpose CA : Yes
OCSP helper : Yes
OCSP helper CA : No
Time Stamp signing : No
Time Stamp signing CA : No

This appears to be the correct purpose, and it does work in regular browsers. We have a CA and an intermediate CA that signs the client certs, and then the client cert itself.


The command used to create the pkcs file is:

openssl pkcs12 -export -out file.pk12 -inkey file.key -in file.crt -certfile ca.comb -nodes -passout pass:mypassword

Where ca.comb is the file specified in the ssl_client_certificate directive, which contains the public certificates for the CA, and the intermediary CA.

Since this works fine on desktop browsers, I'm not sure what to check. How can I figure out what is going wrong?
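Since iOS tends to evaluate extendedKeyUsage more strictly than desktop browsers, it may help to compare the EKU-driven purposes of every certificate in the chain, not just the leaf. A self-contained diagnostic sketch (hypothetical file names; requires OpenSSL 1.1.1+ for -addext), showing what a clientAuth-restricted cert reports:

```shell
# Generate a throwaway cert whose EKU is restricted to clientAuth,
# then inspect its purposes the same way as in the post above.
set -e
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout "$dir/client.key" -out "$dir/client.crt" \
    -days 1 -subj "/CN=test-client" \
    -addext "extendedKeyUsage = clientAuth" 2>/dev/null
openssl x509 -in "$dir/client.crt" -noout -purpose | grep "SSL client :"
rm -rf "$dir"
```

Running the same `-purpose` check against the intermediate CA cert (and looking at its X509v3 extensions with `openssl x509 -noout -text`) may reveal a constraint the desktop browsers silently ignore.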

potential null dereference (no replies)

Hi,

In file /src/http/ngx_http_upstream.c, function
ngx_http_upstream_finalize_request


// If u->pipe == NULL, ngx_http_file_cache_free(r->cache, u->pipe->temp_file); will dereference a null pointer. Is that right?

// Regards
// Alex

if (u->store && u->pipe && u->pipe->temp_file
&& u->pipe->temp_file->file.fd != NGX_INVALID_FILE)
{
if (ngx_delete_file(u->pipe->temp_file->file.name.data)
== NGX_FILE_ERROR)
{
ngx_log_error(NGX_LOG_CRIT, r->connection->log, ngx_errno,
ngx_delete_file_n " \"%s\" failed",
u->pipe->temp_file->file.name.data);
}
}

#if (NGX_HTTP_CACHE)

if (r->cache) {

......

ngx_http_file_cache_free(r->cache, u->pipe->temp_file);
}
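As a minimal illustration of the guard in question (simplified, hypothetical stand-in types, not the real ngx_* structures), the pattern that would avoid the dereference looks like this:

```c
#include <stddef.h>

/* Hypothetical, simplified stand-ins for the nginx structures. */
typedef struct { int fd; } temp_file_t;
typedef struct { temp_file_t *temp_file; } pipe_t;
typedef struct { pipe_t *pipe; } upstream_t;

/* Fetch the temp file only through the same NULL check that the
 * delete branch a few lines earlier already performs. */
static temp_file_t *guarded_temp_file(const upstream_t *u)
{
    return (u != NULL && u->pipe != NULL) ? u->pipe->temp_file : NULL;
}
```

Whether the real code path can actually reach that call with u->pipe == NULL is a separate question that the nginx developers would have to confirm.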




alexc@sbrella.com

swappiness value to be set for high load nginx server (no replies)

Hi,

We are using nginx/1.10.2 as a web server on CentOS and Red Hat Linux 7.2.
Our swap is getting fully utilized even though we have free memory; vm.swappiness is kept at the default, i.e. 60.
Below are the memory details for reference:

# free -g
              total        used        free      shared  buff/cache   available
Mem:           1511          32           3         361        1475        1091
Swap:           199         199           0

Please suggest a method by which we can avoid the swap-full scenario, and let us know what vm.swappiness value is appropriate for a high-load nginx web server. Also let us know of any other sysctl parameters that improve nginx performance.
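A commonly suggested starting point (tune and measure; not a guaranteed fix, and the file name below is hypothetical):

```
# /etc/sysctl.d/99-nginx.conf
vm.swappiness = 10          # prefer reclaiming page cache over swapping out anonymous pages
vm.vfs_cache_pressure = 50  # keep dentry/inode caches around a bit longer
```

Apply with `sysctl --system` (or `sysctl -w vm.swappiness=10` for a one-off test) and watch `free`/`vmstat` under load before settling on values.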

Question about proxy_cache_key (no replies)


Hello!

I've compiled the latest nginx, 1.11.10, with ngx_cache_purge; my
configuration looks like:

proxy_cache_key "$uri$is_args$args";
proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2
keys_zone=networksninja_cache:60m inactive=60m use_temp_path=off
max_size=8192m;

And location syntax is :

location / {
proxy_pass http://10.8.0.10:80;
proxy_cache networksninja_cache;
proxy_cache_purge PURGE;
proxy_cache_use_stale error timeout updating http_500
http_503 http_504;
proxy_cache_valid 200 302 5m;
proxy_cache_valid any 3s;
proxy_cache_lock on;
proxy_cache_revalidate on;
proxy_cache_min_uses 1;
proxy_cache_bypass $arg_clear;
proxy_no_cache $arg_clear;
proxy_ignore_headers Cache-Control Expires Set-Cookie;
rewrite (?<capt>.*) $capt?$args&view=$mobile break;
}

My question: the URI
https://subdomain.domain.tld/artikel/berita/a-pen-by-hair was accessed
from a browser (e.g. Safari) and the cache status was HIT, but when I then
tried to invoke the `PURGE` command from the shell using cURL, it failed
and returned "cache key not found".

But if the URI is accessed from the shell using cURL and the PURGE command
is invoked from the shell too, the cache key is found, though the response
looks only like this:

<html>
<head><title>Successful purge</title></head>
<body bgcolor="white">
<center><h1>Successful purge</h1>
<br>Key : /artikel/berita/a-pen-by-hair&view=desktop?
<br>Path:
/var/cache/nginx/proxy_cache/a/5e/7527d90390275ac034d4a3d5b2e485ea
</center>
<hr><center>nginx/1.11.10</center>
</body>
</html>

Note the trailing "&view=desktop?" in the key. Are there any hints for
matching the proxy_cache_key, so that I can purge the cache from anywhere
and at any time?
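Judging from the purge response above, the stored key is built from the rewritten request, including the appended "&view=..." arguments, so a PURGE has to reproduce exactly those arguments. A sketch (hypothetical, using the hostname from the post):

```
# The proxy cached the rewritten URI, so purge with the same query string, e.g.:
#   curl -X PURGE "https://subdomain.domain.tld/artikel/berita/a-pen-by-hair?&view=desktop"
#
# Alternatively, keep the key independent of the rewrite so it is easy to reproduce:
proxy_cache_key "$scheme$host$request_uri";
```

$request_uri is the original, pre-rewrite URI, which makes the key predictable from the outside.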

Proxying and static files (no replies)

Hi

I have a number of apps running behind nginx and I want to configure nginx so that it serves static content (JS and CSS files) from the public directory of each app directly, rather than proxying those requests. For example:

myserver.com/app1 dynamic requests proxied to hypnotoad (perl server) listening on http://*:5000, css/js files served directly by Nginx (example path app1/public/js/script.js)
myserver.com/app2 dynamic requests proxied to hypnotoad (perl server) listening on http://*:5001, css/js files served directly by Nginx (example path app2/public/js/script.js)
myserver.com/appN dynamic requests proxied to hypnotoad (perl server) listening on http://*:5..N, css/js files served directly by Nginx (example path app3/public/js/script.js)

My conf file:

server {
listen 80 default_server;
listen [::]:80 default_server;

# Root for stuff like default index.html
root /var/www/html;

location /app1 {
proxy_pass http://127.0.0.1:5000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
}

location /app2 {
proxy_pass http://127.0.0.1:5001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Host $host:$server_port;
}

location /appN {...}
}

I've tried something like the following but can't get it to work for each app:
location ~* /(images|css|js|files)/ {
root /home/username/app1/public/;
}

If I request app1/js/script.js for example it goes to /home/username/app1/public/app1/js/script.js rather than /home/username/app1/public/js/script.js

How can I get this to work for each app?
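One way to sketch this for all apps at once (assuming the /home/username/&lt;app&gt;/public layout from the post; the app\d+ pattern is an assumption, adjust it to the real app names): a regex location with named captures plus alias, which strips the app prefix from the filesystem path:

```
location ~ ^/(?<app>app\d+)/(?<asset>(?:images|css|js|files)/.+)$ {
    alias /home/username/$app/public/$asset;
}
```

With a request for /app1/js/script.js this resolves to /home/username/app1/public/js/script.js, because alias replaces the whole matched part instead of appending the URI to root.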

SSL Passthrough (3 replies)

Hi all,

I have the following setup:

PRIVATE SERVER <--> NGINX <--> PUBLIC SERVER

I need the NGINX server to work as both reverse and forward proxy with SSL passthrough. I have found online the following configuration for achieving this (note that for the forward proxy, I send packets always to the same destination, the public server, hardcoded in proxy_pass):

stream {
upstream backend {
server <private server IP address>:8080;
}
# Reverse proxy
server {
listen 9090;
proxy_pass backend;
}

# Forward proxy
server{
listen 9092;
proxy_pass <public server IP address>:8080;
}
}

I have not tried the reverse proxy capability yet as the forward proxy is already giving me problems. In particular, when the private server tries to connect to the public server the TLS session fails.

On the public server it says:
http: TLS handshake error from <PUBLIC_SERVER_IP>:49848: tls: oversized record received with length 20037

while on the private server it says:
Post https://<PUBLIC_SERVER_IP>:8080/subscribe: malformed HTTP response "\x15\x03\x01\x00\x02\x02\x16"


This is what I see with tshark:

PRIVATE_SRV -> NGINX TCP 74 48044->9092 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=0 WS=128

NGINX -> PRIVATE_SRV TCP 74 9092->48044 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=1209793579 WS=128

PRIVATE_SRV -> NGINX TCP 66 48044->9092 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=1209793579 TSecr=1209793579

NGINX -> PUBLIC_SRV TCP 74 49848->8080 [SYN] Seq=0 Win=29200 Len=0 MSS=1460 SACK_PERM=1 TSval=1209793579 TSecr=0 WS=128

PRIVATE_SRV -> NGINX HTTP 161 CONNECT <PUBLIC_SRV_IP>:8080 HTTP/1.1

NGINX -> PRIVATE_SRV TCP 66 9092->48044 [ACK] Seq=1 Ack=96 Win=29056 Len=0 TSval=1209793580 TSecr=1209793580

PUBLIC_SRV -> NGINX TCP 74 8080->49848 [SYN, ACK] Seq=0 Ack=1 Win=28960 Len=0 MSS=1460 SACK_PERM=1 TSval=854036623 TSecr=1209793579 WS=128

NGINX -> PUBLIC_SRV TCP 66 49848->8080 [ACK] Seq=1 Ack=1 Win=29312 Len=0 TSval=1209793580 TSecr=854036623

NGINX -> PUBLIC_SRV HTTP 161 CONNECT <PUBLIC_SRV_IP>:8080 HTTP/1.1

PUBLIC_SRV -> NGINX TCP 66 8080->49848 [ACK] Seq=1 Ack=96 Win=29056 Len=0 TSval=854036623 TSecr=1209793580

PUBLIC_SRV -> NGINX HTTP 73 Continuation

NGINX -> PUBLIC_SRV TCP 66 49848->8080 [ACK] Seq=96 Ack=8 Win=29312 Len=0 TSval=1209793581 TSecr=854036623

NGINX -> PRIVATE_SRV HTTP 73 Continuation

PRIVATE_SRV -> NGINX TCP 66 48044->9092 [ACK] Seq=96 Ack=8 Win=29312 Len=0 TSval=1209793581 TSecr=1209793581

PUBLIC_SRV -> NGINX TCP 66 8080->49848 [FIN, ACK] Seq=8 Ack=96 Win=29056 Len=0 TSval=854036624 TSecr=1209793580

NGINX -> PUBLIC_SRV TCP 66 49848->8080 [FIN, ACK] Seq=96 Ack=9 Win=29312 Len=0 TSval=1209793581 TSecr=854036624

NGINX -> PRIVATE_SRV TCP 66 9092->48044 [FIN, ACK] Seq=8 Ack=96 Win=29056 Len=0 TSval=1209793581 TSecr=1209793581

PRIVATE_SRV -> NGINX TCP 66 48044->9092 [FIN, ACK] Seq=96 Ack=9 Win=29312 Len=0 TSval=1209793581 TSecr=1209793581

NGINX -> PRIVATE_SRV TCP 66 9092->48044 [ACK] Seq=9 Ack=97 Win=29056 Len=0 TSval=1209793581 TSecr=1209793581

PUBLIC_SRV -> NGINX TCP 66 8080->49848 [ACK] Seq=9 Ack=97 Win=29056 Len=0 TSval=854036624 TSecr=1209793581


Do you have any suggestions on how to debug this? Does the fact that I am using an HTTPS POST matter? Does it matter to NGINX that I am not using the default port 443 for SSL?

Thanks a lot for all the help you may give me.

try_files does not have any effect on existing files (2 replies)

usually you would have something like this in your config:

location / {
try_files $uri $uri/ /index.php;
}

which works pretty well (1.11.10). However, it seems that if you request a physical file it is served anyway and the try_files gets ignored, so the following works just as well:

location / {
try_files /foobar /index.php;
}

This means I cannot, for example, override an existing physical file location with a config like this:

location / {
try_files /$host$uri /index.php;
}

If $uri exists under the root/alias, it is served directly without triggering that try_files directive.

Am I doing something wrong, or is this expected behaviour?

Cache only static files in sub/subfolder but not sub (no replies)

Hi, like so many others I have a subfolder with media files, but I'd like to do simple file caching of only one of the subfolders, /media//thumbs/embedded, with the path inside domain.tld, and serve the files as media.domain.tld.

So what I have done is add this to my config, and it works fine when I comment out the second location directive: files are stored in the cache and served as expected.
Now the troubleshooting: as noted above, this only works when I comment out the second location, which is NOT supposed to be cached at all. I have of course tried switching which location comes first, even though I recall that the first matching rule wins.

Can anyone tell me why this isn't working the way I'd like?

(For those who are curious: the location /thumbs/embedded holds about 4,200,000 files, and the rest are logically stored in the same folder. The other folders are divided into sub1/sub2/sub3/sub4/sub5; this one isn't.)

location /thumbs/embedded {
add_header X-Served-By "IDENT1";
add_header Cache-Control public;
add_header Pragma 'public';
add_header X-Cache-Status $upstream_cache_status;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header HOST $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
error_page 404 = /image404.php;
proxy_pass http://127.0.0.1:9001;
}

##Match what's not in above location directive
location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico|xml)$ {
#access_log on;
#log_not_found on;
aio on;
sendfile on;
expires max;
add_header Cache-Control public;
add_header Pragma 'public';
add_header X-Served-By "IDENT2";
#add_header X-Frame-Options SAMEORIGIN;
error_page 404 = /image404.php;
}
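A likely explanation, based on nginx's documented location-matching rules rather than anything specific to this setup: regular-expression locations are checked after prefix locations and win over an ordinary prefix match, regardless of the order in the file. Declaring the prefix with ^~ tells nginx to skip the regex check for those URIs:

```
location ^~ /thumbs/embedded {
    # ... the proxy/cache directives from the first block above ...
}
```

With ^~ in place, requests under /thumbs/embedded never reach the jpg/png regex location.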

massive deleted open files in proxy cache (no replies)

Hi!

We are using Ubuntu 16.04 with nginx version 1.10.0-0ubuntu0.16.04.4.

nginx.conf:

user nginx;
worker_processes auto;
worker_rlimit_nofile 20480; # ulimit open files per worker process

events {
# Performance
worker_connections 2048; # mind the open-file limits
multi_accept on;
use epoll;
}

http {
open_file_cache max=10000 inactive=1d;
open_file_cache_valid 1d;
open_file_cache_min_uses 1;
open_file_cache_errors off;

proxy_cache_path /var/cache/nginx/proxy_cache
levels=1:2 keys_zone=html_cache:30m max_size=8192m inactive=4h
use_temp_path=off;
proxy_cache_path /var/cache/nginx/wordpress_cache
levels=1:2 keys_zone=wordpress_cache:1m max_size=256m inactive=24h
use_temp_path=off;

proxy_buffer_size 128k;
proxy_buffers 4 256k;
proxy_busy_buffers_size 256k;
}


# df -h |grep cache
tmpfs 300M 74M 227M 25% /var/cache/nginx/wordpress_cache
tmpfs 9.0G 4.1G 5.0G 45% /var/cache/nginx/proxy_cache

# df -i |grep cache
tmpfs 20632978 5457 20627521 1% /var/cache/nginx/wordpress_cache
tmpfs 20632978 74613 20558365 1% /var/cache/nginx/proxy_cache

# grep cache /etc/fstab
tmpfs /var/cache/nginx/proxy_cache/ tmpfs
rw,uid=109,gid=117,size=9G,mode=0755 0 0
tmpfs /var/cache/nginx/wordpress_cache/ tmpfs
rw,uid=109,gid=117,size=300M,mode=0755 0 0


# free -m
              total        used        free      shared  buff/cache   available
Mem:         161195      112884        1321        4626       46988       42519
Swap:          3903         211        3692


Problem:
========
We see a massive number of open file handles held by the nginx user, located
inside proxy_cache_path, with status "deleted":

# lsof -E -M -T > lsof.`date +"%Y%d%m-%H%M%S"`.out

nginx 3613 nginx 48r REG 0,42 148664 29697
/var/cache/nginx/proxy_cache/temp/5/23/04ca8002edd2daa3c538ada5b202d6eb
(deleted)
nginx 3613 nginx 50r REG 0,42 161618 19416
/var/cache/nginx/proxy_cache/temp/1/40/d8f0a3563d18af4fbf43242e19283b15
(deleted)

# grep nginx lsof.20172002-085328.out |wc -l
69003

# grep nginx lsof.20172002-085328.out |grep deleted |wc -l
36312

# grep nginx lsof.20172002-085328.out |grep deleted |grep
"/var/cache/nginx/proxy_cache/temp" |wc -l
32004

The most of the 36k deleted files are located inside the temp cache folder.

My question is why we have so many deleted files inside the cache. Why
is nginx not freeing these files?
Is there a problem with the proxy_cache_path option use_temp_path=off ?

I am worried that the cache file system will fill up with deleted
files, and that we will hit open-file limits.

Or do we have a nginx misconfiguration somewhere?

Additionally, we were often visited by the OOM killer (despite 20G of
free memory). If I restart nginx before we reach 80k open nginx files,
the OOM killer does not visit us!

Does anybody have similar findings regarding deleted open files?

br,
Marco

how can I use external URI with the auth_request module (no replies)

Hello!

I'm trying to use nginx's ngx_http_auth_request_module in such way:

server {

location / {
auth_request http://external.url;
proxy_pass http://protected.resource;
}
}
It doesn't work, the error is:

2017/02/21 02:45:36 [error] 17917#0: *17 open() "/usr/local/htmlhttp://external.url" failed (2: No such file or directory), ...
Or in this way with named location:

server {

location / {
auth_request @auth;
proxy_pass http://protected.resource;
}

location @auth {
proxy_pass http://external.url;
}
}
In this case the error is almost the same:

2017/02/22 03:13:25 [error] 25476#0: *34 open() "/usr/local/html@auth" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET / HTTP/1.1", subrequest: "@auth", host: "127.0.0.1"
I know there is a way like this:

server {

location / {
auth_request /_auth_check;
proxy_pass http://protected.resource;
}

location /_auth_check {
internal;
proxy_pass http://external.url;
}
}
But in this case http://protected.resource cannot use the /_auth_check path itself.

Is there a way to use an external URI as a parameter for the auth_request directive without overlapping the http://protected.resource routing?

If not, why?
It seems a little strange that nginx looks for the auth_request URI among static files (/usr/local/html).
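auth_request indeed only accepts an internal URI, so the usual workaround is the third variant, with a deliberately unlikely path marked internal. A sketch (the path name is hypothetical; the header tweaks are the ones commonly used for auth subrequests):

```
location = /_auth_check_2f9c {
    internal;                              # unreachable from outside
    proxy_pass http://external.url/;
    proxy_pass_request_body off;           # the auth decision usually needs headers only
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

Since the location is exact-match and internal, the odd suffix makes a collision with a real backend path unlikely in practice.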



Image Maps (1 reply)

Hi All,

I have searched the archives in hopes of answering this myself, but no luck.
My HTML was recently migrated from Apache to nginx. It worked fine on
Apache.

The html uses image maps, such as:
html v1 style: <br><a href=index.map><img src=index.jpg ISMAP></a>
or newer css style: <img src=index.jpg usemap="#mymap">

Neither seems to work with my nginx-1.10.1 on Fedora (really Amazon Linux).
(I believe this is an entirely different subject from the nginx map
module.)

The image map looks something like this:
<map name="mymap">
rect /cgi-bin/picview.cgi?london01s.jpg 0,0 99,99
rect /cgi-bin/picview.cgi?london02s.jpg 100,0 199,99
rect /cgi-bin/picview.cgi?london03s.jpg 200,0 299,99
rect /cgi-bin/picview.cgi?london04s.jpg 300,0 399,99
rect /cgi-bin/picview.cgi?london05s.jpg 400,0 499,99
</map>

Any tips appreciated.
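As far as I know, the old server-side style (ISMAP plus a .map file) relied on an imagemap handler that Apache shipped and nginx does not have, so those links cannot resolve under nginx. Client-side maps need no server support at all; a sketch of the quoted map rewritten in client-side form (coordinates kept from the post, first three entries shown):

```
<img src="index.jpg" usemap="#mymap" alt="thumbnail index">
<map name="mymap">
  <area shape="rect" coords="0,0,99,99"    href="/cgi-bin/picview.cgi?london01s.jpg" alt="london01">
  <area shape="rect" coords="100,0,199,99" href="/cgi-bin/picview.cgi?london02s.jpg" alt="london02">
  <area shape="rect" coords="200,0,299,99" href="/cgi-bin/picview.cgi?london03s.jpg" alt="london03">
</map>
```

The usemap="#mymap" variant should then work in any browser regardless of the web server, as long as the /cgi-bin/ URLs themselves are handled (e.g. via fastcgi).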

nginx as reverse proxy to several backends (no replies)

Hi all,


I am trying to set up a reverse proxy with nginx so that, based on the
server_name, requests go to the correct backend.

I have been looking into examples, but no luck getting it actually working.

So this is what I want to do:

when a user types xxxx.yyy.be as normal http, it redirects to https and then
forwards to backend number 1;

but when a user types zzzz.yyy.be, also as normal http, it redirects to
https and forwards to the correct backend (here that would be
backend number 2).

So in sites-enabled I put several files, which are being loaded, but
nothing is working.

I would like to see a working example, as I cannot find a
complete example to work with.

Please advise.


So here is my nginx.conf file

user www;
worker_processes auto;
pid /var/run/nginx.pid;

events {
worker_connections 768;
multi_accept on;
}

http {

##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
more_set_headers "Server: Your_New_Server_Name";
server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;

include /opt/local/etc/nginx/mime.types;
default_type application/octet-stream;

##
# SSL Settings
##
#ssl on;
ssl_protocols TLSv1.2;
ssl_ciphers
EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!MD5:!3DES:!CAMELLIA:!AES128;
ssl_prefer_server_ciphers on;
ssl_certificate /opt/local/etc/nginx/certs/fullchain.pem;
ssl_certificate_key /opt/local/etc/nginx/certs/key.pem;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
ssl_stapling on;
ssl_stapling_verify on;
## Enable HSTS
add_header Strict-Transport-Security max-age=63072000;

# Do not allow this site to be displayed in iframes
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options "SAMEORIGIN" always;
# Do not permit Content-Type sniffing.
add_header X-Content-Type-Options nosniff;
##
# Logging Settings
##
rewrite_log on;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;

##
# Gzip Settings
##

gzip on;
gzip_disable "msie6";

#gzip_vary on;
#gzip_proxied any;
#gzip_comp_level 6;
#gzip_buffers 16 8k;
#gzip_http_version 1.1;
#gzip_types text/plain text/css application/json
application/javascript text/xml application/xml application/xml+rss
text/javascript;

##
# Virtual Host Configs
##

include /opt/local/etc/nginx/sites-enabled/*;
}

And then in sites-enabled there are the following files:

owncloud and mattermost

Here is their content:

owncloud:

upstream owncloud {
server 192.168.1.51:80;
}




server {
listen 80;
server_name xxxx.yyy.be;
return 301 https://$server_name$request_uri;
#rewrite ^/.*$ https://$host$request_uri? permanent;
more_set_headers "Server: None of Your Business";
server_tokens off;
}
server {
listen 443 ssl http2;
server_name xxxx.yyy.be;
more_set_headers "Server: None of Your Business";
server_tokens off;

location / {
client_max_body_size 0;
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
proxy_cache owncloud_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 2;
proxy_cache_use_stale timeout;
proxy_cache_lock on;
proxy_pass http://192.168.1.51;
}
# Lets Encrypt Override
location '/.well-known/acme-challenge' {
root /var/www/proxy;
auth_basic off;
}

}

and mattermost:

server {
listen 80;
server_name zzzz.yyy.be;

location / {
return 301 https://$server_name$request_uri;

}
}
server {
listen 443;
server_name zzzz.yyy.be;

location / {
client_max_body_size 0;
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Frame-Options SAMEORIGIN;
proxy_buffers 256 16k;
proxy_buffer_size 16k;
proxy_read_timeout 600s;
proxy_cache mattermost_cache;
proxy_cache_revalidate on;
proxy_cache_min_uses 2;
proxy_cache_use_stale timeout;
proxy_cache_lock on;
proxy_pass http://192.168.1.95:8065;
}

}


This is working (more or less), but if I start moving the SSL bits into
the owncloud or mattermost file, it simply is not working any more:

each time I type http://zzzz.yyy.be I get "400 Bad Request:
The plain HTTP request was sent to HTTPS port".



Thanks

Filip Francis
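One likely cause, judging only from the config shown (a guess, not a confirmed diagnosis): the mattermost block says `listen 443;` without the `ssl` flag. While the shared SSL directives live in http {} this may appear to work, but once they move into the individual files, every 443 server block needs both the flag and its own certificate directives. A sketch:

```
server {
    listen 443 ssl http2;   # 'ssl' was missing in the mattermost block
    server_name zzzz.yyy.be;

    # per-vhost certificate, so the SSL bits can leave http {}
    ssl_certificate     /opt/local/etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /opt/local/etc/nginx/certs/key.pem;

    # ... the existing location / block ...
}
```

After moving the directives, `nginx -t` plus a test connection to each vhost should confirm both ports speak the protocol you expect.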



RE: Nginx multiple upstream with different protocols (no replies)

If you are doing SSL on the frontend (server directive), why would you want to proxy between SSL and non-SSL on the upstreams? Can they not be the same? I don't understand what you are trying to solve.

From: nginx [mailto:nginx-bounces@nginx.org] On Behalf Of Kilian Ries
Sent: Wednesday, February 22, 2017 9:55 AM
To: nginx@nginx.org
Subject: Nginx multiple upstream with different protocols


Hi,



i'm trying to setup two Nginx upstreams (one with HTTP and one with HTTPS) and the proxy_pass module should decide which of the upstreams is serving "valid" content.



The config should look like this:



upstream proxy_backend {

server xxx.xx.188.53;

server xxx.xx.188.53:443;

}



server {

listen 443 ssl;

...

location / {

proxy_pass http://proxy_backend;

#proxy_pass https://proxy_backend;

}

}





The Problem is that i don't know if the upstream is serving the content via http or https. Is there any possibility to tell nginx to change the protocol from the proxy_pass directive? Because if i set proxy_pass to https, i get an error (502 / 400) if the upstream website is running on http and vice versa.



So i'm searching for a way to let Nginx decide if he should proxy_pass via http or https. Can anybody help me with that configuration?



Thanks

Greets

Kilian


Nginx multiple upstream with different protocols (2 replies)

Hi,


I'm trying to set up two Nginx upstreams (one with HTTP and one with HTTPS), and the proxy_pass module should decide which of the upstreams is serving "valid" content.


The config should look like this:


upstream proxy_backend {

server xxx.xx.188.53;

server xxx.xx.188.53:443;

}


server {

listen 443 ssl;

...

location / {

proxy_pass http://proxy_backend;

#proxy_pass https://proxy_backend;

}

}



The problem is that I don't know whether the upstream is serving the content via HTTP or HTTPS. Is there any possibility of telling nginx to change the protocol in the proxy_pass directive? Because if I set proxy_pass to https, I get an error (502 / 400) if the upstream website is running on HTTP, and vice versa.


So I'm searching for a way to let Nginx decide whether it should proxy_pass via HTTP or HTTPS. Can anybody help me with that configuration?


Thanks

Greets

Kilian
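nginx will not probe protocols within a single upstream block, but a fallback can be sketched with error_page jumping to a named location when the HTTP attempt fails (hypothetical sketch; proxy_intercept_errors is needed so errors the upstream itself returns also trigger the jump):

```
upstream proxy_backend_http  { server xxx.xx.188.53:80;  }
upstream proxy_backend_https { server xxx.xx.188.53:443; }

server {
    listen 443 ssl;

    location / {
        proxy_pass http://proxy_backend_http;
        proxy_intercept_errors on;
        error_page 502 504 = @try_https;
    }

    location @try_https {
        proxy_pass https://proxy_backend_https;
    }
}
```

The cleaner fix, as the reply above suggests, is to make both upstreams speak the same protocol if at all possible.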