Channel: Nginx Forum - Nginx Mailing List - English

Handling upstream response 401 (2 replies)

I have a problem that I thought I knew how to solve, but I must just be having a mind-blank moment.

If the upstream server returns a 401 response, I want to make sure Nginx serves that response. Right now it is serving the stale cached version. What happened is that the upstream page was public but was then made secure, so it now sends back a 401 challenge that prompts browser login. Nginx is behaving properly in serving stale content, but I want to change how it works just for 401. We do serve stale for 404 because we don't see a need to fetch a fresh response every time for content that doesn't exist.

An alternative is to force the upstream app to return 501 instead of 401, but my understanding is that there are technical issues at stake that force me to try to resolve this in Nginx.

Any help would be appreciated. I just feel like it's an obvious fix and I'm forgetting how.
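
For reference, the cache tuning in question is shaped roughly like this (a minimal sketch, not my actual config; zone and upstream names are made up). As far as I can tell from the docs, proxy_cache_use_stale has no http_401 parameter, so presumably the stale 200 is being served simply because it is still fresh in the cache:

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    location / {
        proxy_cache app_cache;
        # serve stale only in these cases; note there is no http_401 option
        proxy_cache_use_stale error timeout updating http_404 http_500 http_502 http_503 http_504;
        proxy_cache_valid 200 301 302 10m;
        proxy_cache_valid 404 1m;
        proxy_pass http://upstream_app;   # hypothetical upstream
    }
}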

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


support http and https on the same port (1 reply)

Stream servers can now do ssl and non-ssl on the same port:
https://www.nginx.com/blog/running-non-ssl-protocols-over-ssl-port-nginx-1-15-2/

Can this be added to http virtual hosts as well?
If ssl is enabled on a listening port and the client doesn't send a ClientHello, can nginx fall back to plain http? Maybe introduce a new directive, "fallback_http on;"?

Thanks!
Frank

Feature request (no replies)

Hi

Not sure where to put this.

But I would like the ability to require a client cert anywhere on the URI tree, so:

www.abc.com.au/ you can access without a cert, but
www.abc.com.au/private/ you need a cert
www.abc.com.au/public/ no cert needed
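
Today the closest I can get seems to be optional verification at the server level plus a per-location check (an untested sketch, with a made-up CA path):

server {
    listen 443 ssl;
    server_name www.abc.com.au;

    ssl_client_certificate /etc/nginx/client-ca.pem;  # CA that signs client certs
    ssl_verify_client optional;                       # request, but do not require

    location /private/ {
        # reject unless a valid client cert was presented
        if ($ssl_client_verify != SUCCESS) {
            return 403;
        }
        root /var/www/html;
    }

    location /public/ {
        root /var/www/html;   # no cert needed
    }
}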


A

nginx -> httpd -> mod_jk -> tomcat (no replies)

Hi everybody,

I recently began using nginx as a proxy (the same tests were made with haproxy).

My need is to proxy Tomcat for failover and load balancing: I have to serve lots of users on a production app.


Since 100% Tomcat AJP 1.3 compatibility is achievable only with Apache httpd and mod_jk, I successfully serve my app with Apache on plain http port 80 (cookie path already patched). So I decided to run a localhost Apache httpd that proxies Tomcat over AJP. It works perfectly.

Now I need to proxy httpd with nginx, adding SSL with letsencrypt. I successfully configured the proxy and everything works except uploads: if I send a file to my app, only small uploads work.

I'd like to investigate the headers; maybe I need to transform some string, but I'm a complete newbie from this point of view.

Do you have some tips on how to investigate the problem?
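
For what it's worth, the symptom (only small uploads succeed) makes me wonder about request-size limits; the directives I have been looking at are roughly these, with guessed values:

server {
    # default is 1m; larger uploads are rejected with 413 Request Entity Too Large
    client_max_body_size 100m;

    location / {
        proxy_pass http://127.0.0.1:8081;   # hypothetical local httpd port
        # buffer the whole request body before handing it to httpd
        proxy_request_buffering on;
    }
}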


Thanks,




Giacomo Arru

Nginx url decoding URI problem with proxy_pass (no replies)

Greetings Nginx mailing list!

I'm using nginx as an image proxy and am using proxy_pass to fetch the
image. Unfortunately if that image URL has a space in it the proxy_pass
fails. It works fine with any other image.

example successful URL:

/image_preview/https://somedomain.com/image.jpg

example failed URL:

/image_preview/https://somedomain.com/My%20Images/image.jpg

^^
Nginx is URL decoding the url in the path and putting the space back in.

Here's my nginx.conf

# redirect thumbnail url to real world
location ~ ^/image_preview/(.*?)$ {
    resolver ${HOST};

    set $fallback_image_url ${FALLBACK_IMAGE_URL};
    set $image_url $1;

    if ($args) {
        set $image_url $1?$args;
    }

    proxy_intercept_errors on;
    error_page 301 302 303 305 306 307 308 $fallback_image_url;
    error_page 400 401 402 403 404 405 406 408 409 410 411 412
               413 414 415 416 417 $fallback_image_url;
    error_page 500 501 502 503 504 505 506 507 508 509 510 511 520
               522 598 599 $fallback_image_url;

    proxy_connect_timeout 2s;
    proxy_read_timeout 4s;
    proxy_pass_request_headers off;
    proxy_buffering off;
    proxy_redirect off;

    proxy_pass $image_url;
    proxy_set_header If-None-Match $http_if_none_match;
    proxy_set_header If-Modified-Since $http_if_modified_since;
}

I've scoured the docs, stackoverflow, and the internet in general but don't
see how to address this problem. As I see it I have two options:

1) Find a way to make nginx not URL-decode the param URL (doesn't seem
possible)
2) The original $request_uri contains the URL-encoded URL. Find a way to
create a rewrite rule to strip off the prefix and proxy_pass to the
resulting URL. I haven't found a way to do something like that, as it
appears rewrite rules only operate on the URI in context, and that URI
appears to be decoded.
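
On option 2, one pattern I have seen suggested (untested here) pulls the still-encoded URL out of $request_uri with an if, since $request_uri is never decoded:

location /image_preview/ {
    # regex captures from $request_uri keep the original percent-encoding,
    # unlike captures from the location pattern, which are decoded
    if ($request_uri ~ "^/image_preview/(.+)$") {
        set $encoded_image_url $1;
    }

    resolver ${HOST};
    proxy_pass $encoded_image_url;   # includes the query string, still encoded
}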

I've found an entire chapter in "Nginx Troubleshooting" on creating a proxy
for external links. But that example also appears to fall victim to this
same problem.

Any help/pointers would be appreciated as I am pretty well stuck at this
point on an approach that might work.

Thanks,
-Michael

secure/hide "api.anothersite.com" from public and only allow "mysite.com" to access it via 127.0.0.1:50010 internally (no replies)

I would like to hide a backend API REST server from public view and have it accessed from frontend web server locally/internally. Is this possible? Below are my setup and configs:

angular/nodejs frontend app, say it is "mysite.com" running on server at 127.0.0.1:51910

nodejs backend app, say it is "api.anothersite.com" running on server at 127.0.0.1:50010

nginx(open source) listens for the server_name/domain and does a proxy_pass to the host/port listed above

I currently can communicate back and forth with GET and POST requests and JSON responses.

So far everything is great.

However, besides just using CORS, I would now like to secure/hide "api.anothersite.com" from the public and just allow "mysite.com" to access 127.0.0.1:50010 internally instead of going through "api.anothersite.com".

Can this be done via nginx?
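
One direction that seems plausible is to drop the public api vhost entirely and let the frontend vhost proxy an internal path to the backend port — a sketch, with the /api/ prefix made up:

# inside the mysite.com server block
location /api/ {
    # only reachable through mysite.com; port 50010 itself is never exposed
    proxy_pass http://127.0.0.1:50010/;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
}

For reference, my current config: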

server {
    server_name api.anothersite.com;

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/anothersite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/anothersite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        #allow xx.xx.xx.xx;
        #allow 127.0.0.1;
        #deny all;
        proxy_pass http://127.0.0.1:50010;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    server_name mysite.com www.mysite.com;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://localhost:51910;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        # proxy_set_header Host $host;
        proxy_set_header Host mysite.com;
        proxy_cache_bypass $http_upgrade;
        proxy_pass_request_headers on;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    listen 443 ssl;
    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}

server {
    if ($host = www.mysite.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = mysite.com) {
        return 301 https://$host$request_uri;
    }

    listen 80;
    server_name mysite.com www.mysite.com;
    return 404;
}

proxy_pass to dyndns address (no replies)

Hello,

inside a location I have a proxy_pass to a hostname with a dynamic IP, for example:

location ^~ /example/ {
    proxy_pass https://host1.dyndns.example.com;
}

getent hosts resolves the right IP, but through nginx I get a 504.

When I reload nginx it works, until the IP changes. The DNS server for this name is on the same host, and the TTL is only 300s.

I have found the resolver directive, but I'm not sure it is the right one given the small TTL. Is there a way to get this working?
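
The combination I was planning to test (sketch; the resolver address assumes the local DNS server mentioned above):

location ^~ /example/ {
    resolver 127.0.0.1 valid=300s;   # re-resolve, matching the 300s TTL
    set $backend "https://host1.dyndns.example.com";
    proxy_pass $backend;             # a variable forces resolution at request time
}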

Best Regards

Large CRL file crashing nginx on reload (1 reply)

Hi,

We are trying to use nginx to support the DoD PKI infrastructure, which
includes many DoD and contractor CRLs. The combined CRL file is over 350MB
in size, which seems to crash nginx during a reload (at least on Red Hat
6). Our cert/key/CRL setup is valid and working, and when only including a
subset of the CRL files we have, reloads work fine.

When we concatenate all the CRLs we need to support, the config reload
request causes worker threads to become defunct and messages in the error
log indicate the following:

2018/07/26 16:05:25 [alert] 30624#30624: fork() failed while spawning
"worker process" (12: Cannot allocate memory)

2018/07/26 16:05:25 [alert] 30624#30624: sendmsg() failed (9: Bad file
descriptor)

2018/07/26 16:08:42 [alert] 30624#30624: worker process 1611 exited on
signal 9

Is there any way we can get nginx to support such a large volume of CRLs?

Modify url at nginx (1 reply)

Hello All,

We have a use case.

Our web application is deployed in Tomcat 7. In front, nginx is configured as a reverse proxy; all requests pass through nginx and are forwarded to Tomcat 7. Nginx serves static files directly, and dynamic requests (JSON) are forwarded to Tomcat 7. At the backend, we have a MySQL db to store the application settings.


What we want is this: when a client types https://test1.apphost.com, nginx sees the url as test1.apphost.com. Before proxying the request to Tomcat 7, it should modify the url to https://test.apphost.com, so Tomcat 7 sees the client url as test.apphost.com. Once the request is processed, the response is given back to nginx, and nginx returns it to the original url https://test1.apphost.com.


This is needed because our application database uses a domain-name-to-DB-name mapping, and currently only one domain name mapping entry is allowed. We want to allow multiple urls to log in to our application from the client side. That means we keep the modified url (domain name) test.apphost.com in the database settings; when a client types https://test1.apphost.com, nginx should modify it to test.apphost.com, which matches the database mapping settings and thus allows a successful login.

We have following nginx config settings put in place.

server {
    listen 80;
    rewrite ^(.*) https://$host$1 permanent;
    error_page 500 502 503 504 /50x.html;
}


server {
    listen 443 ssl default_server;

    location /server {
        proxy_pass http://127.0.0.1:8080/server;
        proxy_connect_timeout 6000;
        proxy_send_timeout 6000;
        proxy_read_timeout 6000;
        proxy_request_buffering off;
        send_timeout 6000;
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        proxy_temp_path /var/nginx/proxy_temp;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Server $host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_cache sd6;

        add_header X-Proxy-Cache $upstream_cache_status;
        proxy_cache_bypass $http_cache_control;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Content-Type-Options nosniff;
        add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
        add_header Referrer-Policy "no-referrer";
    }

    ssl on;
    ssl_certificate /etc/nginx/ssl/example.com.bundle.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA HIGH !RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS";
    ssl_dhparam /etc/nginx/ssl/dhparam.pem;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 24h;

    keepalive_timeout 300;
    access_log /var/log/nginx/ssl-access.log;
    error_log /var/log/nginx/ssl-error.log;
}

It would be of great help if someone could advise us on how to modify the url for the use case explained above.
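
One thing we have considered is pinning the Host header before proxying, along these lines (untested sketch; the X-Original-Host header name is made up):

location /server {
    proxy_pass http://127.0.0.1:8080/server;
    # present the canonical name to Tomcat regardless of what the client typed
    proxy_set_header Host test.apphost.com;
    # keep the name the client actually used, in case the app needs it
    proxy_set_header X-Original-Host $host;
}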
Thank you.

How to create a module to process a data stream during transfer (no replies)

Hello all,

I am looking for a way to do two things in particular. The first is to be able to direct HTTP POSTs to a program's stdin with arguments, take its stdout, and put that back into the stream being uploaded; the second is to apply this to a flex/bison program or module. This is to handle 300GB files without saving them to disk, while still getting the important information out of them. Having the general stdin/stdout part is also because I think this should be generalized for uploads and downloads, so things like streaming video become less of a problem.

Apache does not have its modules set up in such a way as to feasibly do this without a whole new major revision. Looking at nginx, it looks closer, but I'm still not expert enough to know without help.

nginx reuseport duplicate listen options ? (1 reply)

I know that nginx reuseport is only usable per ip:port pair so I am confused about this error.

I have 3 nginx vhosts

vhost #1

server {
listen 443 ssl http2 default_server backlog=2048 reuseport;
}

vhost #2

server {
listen 80 default_server backlog=2048 reuseport fastopen=256;
}

vhost #3

server {
listen 443 ssl http2;
}

This configuration works, and I see socket sharding in use on an 8-CPU-thread CentOS 7.5 64-bit server:

ss -lnt | egrep -e ':80 |:443 '
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:443 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*
LISTEN 0 2048 *:80 *:*

but if I have the 3 nginx vhosts with reuseport used on vhost #3 instead of vhost #2, I get the error

'nginx: [emerg] duplicate listen options for 0.0.0.0:443 in'



vhost #1

server {
listen 443 ssl http2 default_server backlog=2048;
}

vhost #2

server {
listen 80 default_server backlog=2048 reuseport fastopen=256;
}

vhost #3

server {
listen 443 ssl http2 reuseport;
}

nginx 1.15.3 and 1.15.2, with GCC 7.3.1/8.2 and OpenSSL 1.1.0h/1.1.1-pre8, all result in the same error 'nginx: [emerg] duplicate listen options for 0.0.0.0:443 in'. Why?

nginx -V
nginx version: nginx/1.15.3 (260718-233400)
built by gcc 8.2.0 (GCC)
built with OpenSSL 1.1.1-pre8 (beta) 20 Jun 2018
TLS SNI support enabled
configure arguments: --with-ld-opt='-L/usr/local/lib -ljemalloc -Wl,-z,relro -Wl,-rpath,/usr/local/lib' --with-cc-opt='-I/usr/local/include -m64 -march=native -DTCP_FASTOPEN=23 -g -O3 -Wno-error=strict-aliasing -fstack-protector-strong -flto -fuse-ld=gold --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wimplicit-fallthrough=0 -fcode-hoisting -Wno-cast-function-type -Wp,-D_FORTIFY_SOURCE=2 -Wno-deprecated-declarations' --sbin-path=/usr/local/sbin/nginx --conf-path=/usr/local/nginx/conf/nginx.conf --build=260718-233400 --with-compat --with-http_stub_status_module --with-http_secure_link_module --add-dynamic-module=../nginx-module-vts --with-libatomic --with-http_gzip_static_module --add-dynamic-module=../ngx_brotli --with-http_sub_module --with-http_addition_module --with-http_image_filter_module=dynamic --with-http_geoip_module --with-stream_geoip_module --with-stream_realip_module --with-stream_ssl_preread_module --with-threads --with-stream=dynamic --with-stream_ssl_module --with-http_realip_module --add-dynamic-module=../ngx-fancyindex-0.4.2 --add-module=../ngx_cache_purge-2.4.2 --add-module=../ngx_devel_kit-0.3.0 --add-dynamic-module=../set-misc-nginx-module-0.32 --add-dynamic-module=../echo-nginx-module-0.61 --add-module=../redis2-nginx-module-0.15 --add-module=../ngx_http_redis-0.3.7 --add-module=../memc-nginx-module-0.18 --add-module=../srcache-nginx-module-0.31 --add-dynamic-module=../headers-more-nginx-module-0.33 --with-pcre=../pcre-8.42 --with-pcre-jit --with-zlib=../zlib-cloudflare-1.3.0 --with-http_ssl_module --with-http_v2_module --with-openssl=../openssl-1.1.1-pre8 --with-openssl-opt='enable-ec_nistp_64_gcc_128 enable-tls1_3'

posix_memalign error (no replies)

I am repeatedly seeing errors like

######################
2018/07/31 03:46:33 [emerg] 2854560#2854560: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 03:54:09 [emerg] 2890190#2890190: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:08:36 [emerg] 2939230#2939230: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:24:48 [emerg] 2992650#2992650: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:09 [emerg] 3053092#3053092: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:17 [emerg] 3053335#3053335: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:42:28 [emerg] 3053937#3053937: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
2018/07/31 04:47:54 [emerg] 3070638#3070638: posix_memalign(16, 16384)
failed (12: Cannot allocate memory)
####################

on a few servers.

The servers have enough memory free and swap usage is 0, yet somehow the
kernel denies the posix_memalign with ENOMEM (this is what I think is
happening!).

The numbers requested are always (16, 16384), which makes me suspicious.

I have no setting in nginx.conf that references 16k.

Is there any chance of finding out what requests this, and why it is not
fulfilled?


--
Anoop P Alias

Getting NGINX to view an alias (no replies)

Hi,

I'm new at NGINX and I'm having difficulty setting it up to read an alias.
I'm setting up adminer on NGINX to use an alias to see a file outside of
its main directory. The file is called latest.php in /usr/share/adminer. I
created a symlink to link adminer.php to latest.php. I'm trying to access
adminer through /admin/adminer.php but it returns a 404 when I try to access
the file.

my config file:

server {
    listen 80;
    listen [::]:80;
    include /etc/nginx-rc/conf.d/[site]/main.conf;
    location /admin/ {
        alias /usr/share/adminer/;
    }
}
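
Since adminer.php is PHP, I assume I also need a PHP handler that resolves the aliased path; the variant I have been testing looks like this ($request_filename resolves against the alias; the php-fpm socket path is a guess):

location /admin/ {
    alias /usr/share/adminer/;

    location ~ \.php$ {
        include fastcgi_params;
        # $request_filename honours alias, unlike $document_root$fastcgi_script_name
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your php-fpm socket
    }
}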

Thanks

2018 NGINX User Survey: Help Us Shape the Future (no replies)

Hello-

My name is Kelsey and I recently joined the NGINX team. I'm reaching out
because it's that time of year for the annual NGINX User Survey. We're
always eager to hear about your experiences to help us evolve, improve and
shape our product roadmap.

Please take ten minutes to share your thoughts:
https://nkadmin.typeform.com/to/e1A4mJ?source=email

Best,
Kelsey


--
Join us at NGINX Conf 2018, Oct 8-11, Atlanta, GA: https://www.nginx.com/nginxconf/2018/

Kelsey Dannels
Marketing Communication Specialist
Mobile: 650 773 1046
San Francisco
https://nginx.com/
https://www.linkedin.com/company/2962671
https://twitter.com/nginx
https://www.facebook.com/nginxinc

Nginx mail proxy LDAP iRedMail (no replies)

Hi there,

I'm trying to configure a little mail infrastructure but I'm having a problem with it. I have exactly three servers. One is the MX (frontend); it runs nginx with this configuration:

user nginx;
worker_processes 2;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;
load_module /usr/lib64/nginx/modules/ngx_http_perl_module.so;
load_module /usr/lib64/nginx/modules/ngx_mail_module.so;


events {
    worker_connections 1024;
    multi_accept on;
}

http {
    perl_modules perl/lib;
    perl_require mailauth.pm;

    server {
        location /auth {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            perl mailauth::handler;
        }
    }
}

mail {
    auth_http 127.0.0.1:80/auth;

    pop3_capabilities "TOP" "USER";
    imap_capabilities "IMAP4rev1" "UIDPLUS";

    server {
        listen 110;
        protocol pop3;
        proxy on;
    }

    server {
        listen 143;
        protocol imap;
        proxy on;
    }

    server {
        listen 25;
        protocol smtp;
        proxy on;
    }
}

And I tried to write the auth script in Perl; it looks like:

package mailauth;

use strict;
use warnings;
use nginx;
use Net::LDAP;

my $mail_server1 = "10.12.1.109";
my $mail_server2 = "10.12.1.109";

our $mail_server_ip = {};
our $protocol_ports = {};
$mail_server_ip->{'mailhost01'} = "10.12.1.109";   # was "$mail_server-ip", a typo
$mail_server_ip->{'mailhost02'} = "192.168.1.33";
$protocol_ports->{'pop3'} = 110;
$protocol_ports->{'imap'} = 143;

my $ldapconnect = Net::LDAP->new( "10.12.1.109",
                                  version => 3,
                                  port    => 389 ) or die $@;


my $bind = $ldapconnect->bind( "cn=vmail,dc=poczta,dc=com",
                               password => "PPkRSNeYtIDm7QXAq7Dr" );
if ( $bind->code ) {
    LDAPerror( "Bind: ", $bind );
}


sub handler {

    my $r = shift;

    our $mail_server;
    # read the username from the Auth-User header (was a broken "->execute" call)
    my $auth_user = $r->header_in("Auth-User");
    if ($auth_user =~ m/^[abcdefghijklmp]/) {
        $mail_server = $mail_server1;
    } else {
        $mail_server = $mail_server2;
    }


    my $search = $ldapconnect->search(
        base   => "o=domains,dc=poczta,dc=com",
        filter => '(&(mail=' . $r->header_in("Auth-User") . '))'
    );


    my $goto = $search->entry(0)->get_value('mail');
    $r->header_out( "Auth-Status", "OK" );
    $r->header_out( "Auth-Server", $mail_server );
    $r->header_out( "Auth-Port", $protocol_ports->{$r->header_in("Auth-Protocol")} );
    $r->send_http_header("text/html");


    return OK;
}
1;

# NB: unbinding here would run at module load time and drop the LDAP
# connection before any request is handled, so it is commented out:
# $ldapconnect->unbind;

__END__


The two backend servers are installed with LDAP from the iRedMail package. I want the two backend servers to each hold half of the users, so I added logic to the script like:

our $mail_server;
my $auth_user = $r->header_in("Auth-User");
if ($auth_user =~ m/^[abcdefghijklmp]/) {
    $mail_server = $mail_server1;
} else {
    $mail_server = $mail_server2;
}

Check with curl:
curl -i -H 'Auth-User: postmaster@com' -H 'Auth-Pass: supersecret' -H 'Auth-Protocol: imap' 10.12.1.128:80/auth

and I've got:

HTTP/1.0 200 OK
Server: nginx/1.12.2
Date: Wed, 01 Aug 2018 08:40:49 GMT
Content-Type: text/html
Auth-Status: OK
Auth-Server:
Auth-Port: 143


telnet 10.12.1.128 143
Trying 10.12.1.128...
Connected to 10.12.1.128.
Escape character is '^]'.
* OK IMAP4 ready
LOGIN postmaster@com supersecret
LOGIN BAD invalid command
Connection closed by foreign host.

sub_filter not working on JSON file (3 replies)

I’m trying to figure out why my sub_filter is not working on a JSON file. I do have application/json and text/json listed in sub_filter_types, but the simple string replacement is not happening. This makes me think that, for whatever reason, Nginx is not seeing this file as JSON. Is there a way to output what MIME type the file from the upstream server has, so I can make sure I have the right filter type?

Or is there something else I should be doing to make sub_filter work on a JSON file?
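
A sketch of the kind of debugging I have in mind (upstream name and the X-Upstream-Content-Type header are made up):

location / {
    proxy_pass http://upstream_app;          # hypothetical upstream
    # sub_filter cannot rewrite compressed bodies, so ask for identity encoding
    proxy_set_header Accept-Encoding "";

    sub_filter_types application/json text/json;
    sub_filter 'oldstring' 'newstring';
    sub_filter_once off;

    # expose the MIME type nginx received from upstream
    add_header X-Upstream-Content-Type $upstream_http_content_type always;
}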

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


nginx, php7.0-fpm and laravel, not able to set it up when the url has a prefix which the server doesn't have (no replies)

Hello

I am trying to set up a laravel installation in docker, with php-fpm and the
nginx server in a separate container. The issue is that laravel is installed
in a path like /home/apps/foo and the url I need is abcd.com/v11/. I
thought this was fairly simple, but I am not able to set it up. Here is the
location part of my nginx config:

location /v11/ {
    try_files $uri $uri/ /index.php?$query_string;
    location ~ \.php$ {
        fastcgi_index index.php;
        fastcgi_pass php_wbv1.0:5000;
        fastcgi_split_path_info ^(.+\.php)(/.*)$;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param QUERY_STRING $query_string;
        include fastcgi_params;
    }
}

I tried the rewrite directive, and tried giving an alias inside the location
block; neither worked. The thing is, if I remove the /v11/ from both the
location and the URL, it works without any issues.

What is the right way to do this?
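
The direction I am experimenting with now is an alias onto laravel's public directory plus a prefixed fallback (untested sketch; the install path is from above, everything else is a guess):

location /v11/ {
    alias /home/apps/foo/public/;
    # fall back to the front controller, keeping the /v11/ prefix
    try_files $uri $uri/ /v11/index.php?$query_string;

    location ~ \.php$ {
        fastcgi_pass php_wbv1.0:5000;
        # pin the script: with alias, $document_root$fastcgi_script_name is wrong
        fastcgi_param SCRIPT_FILENAME /home/apps/foo/public/index.php;
        include fastcgi_params;
    }
}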


--

Sincerely,
Plato P

keepalive not work with grpc (no replies)

Hi, everyone! I have a problem: keepalive does not work with grpc.
My conf looks like this:

worker_processes 1;

events {
    worker_connections 102400;
}

http {
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;
    keepalive_requests 1000;

    upstream grpc_backend {
        server localhost:2080;
        keepalive 2;
    }
    ...
    server {
        listen 1090 http2;
        server_name localhost;
        ...

        location / {
            grpc_pass grpc://grpc_backend;
        }
    }
}
I have a C++ grpc client and a C++ grpc server that work well together without nginx.
It seems like nginx does not support keepalive with grpc_pass at all: so many connections are opened from nginx to the grpc server that all my ports get consumed. I remember reading that nginx does support this, so how can I make it work? Where is my mistake?

limit_rate based on User-Agent; how to exempt /robots.txt ? (no replies)

Hi all, I’ve recently deployed a rate-limiting configuration aimed at protecting myself from spiders.

nginx version: nginx/1.15.1 (RPM from nginx.org)

I did this based on the excellent Nginx blog post at https://www.nginx.com/blog/rate-limiting-nginx/ and have consulted the documentation for limit_req and limit_req_zone.

I understand that you can have multiple zones in play, and that the most restrictive of all matches will apply for any matching request. I want to go the other way though: I want to exempt /robots.txt from the rate limiting that targets spiders.

To put this in context, here is the gist of the relevant config, which aims to implement a caching (and rate-limiting) layer in front of a much more complex request routing layer (httpd).

http {
    map $http_user_agent $user_agent_rate_key {
        default "";
        "~our-crawler" "wanted-robot";
        "~*(bot/|crawler|robot|spider)" "robot";
        "~ScienceBrowser/Nutch" "robot";
        "~Arachni/" "robot";
    }

    limit_req_zone $user_agent_rate_key zone=per_spider_class:1m rate=100r/m;
    limit_req_status 429;

    server {
        limit_req zone=per_spider_class;

        location / {
            proxy_pass http://routing_layer_http/;
        }
    }
}



Option 1: (working, but has issues)

Should I instead put the limit_req inside the "location / {}" stanza, and have a separate "location /robots.txt {}" (or some generalised form using a map) with no limit_req inside that stanza?

That would mean that any other configuration inside the location stanzas would get duplicated, which is a manageability concern. I just want to override the limit_req.

server {
    location /robots.txt {
        proxy_pass http://routing_layer_http/;
    }

    location / {
        limit_req zone=per_spider_class;
        proxy_pass http://routing_layer_http/;
    }
}

I've tested this, and it works.


Option 2: (working, but has issues)

Should I create a "location /robots.txt {}" stanza that has a limit_req with a high burst, say burst=500? It's not a whitelist, but perhaps something still useful?

But I still end up with replicated location stanzas... I don't think I like this approach.

server {
    limit_req zone=per_spider_class;

    location /robots.txt {
        limit_req zone=per_spider_class burst=500;
        proxy_pass https://routing_layer_https/;
    }

    location / {
        proxy_pass https://routing_layer_https/;
    }
}


Option 3: (does not work)

Some other way... perhaps I need to create some map that takes the path and produces a $path_exempt variable, and then somehow use that with the $user_agent_rate_key, returning "" when $path_exempt, or $user_agent_rate_key otherwise.

map $http_user_agent $user_agent_rate_key {
    default "";
    "~otago-crawler" "wanted-robot";
    "~*(bot/|crawler|robot|spider)" "robot";
    "~ScienceBrowser/Nutch" "robot";
    "~Arachni/" "robot";
}

map $uri $rate_for_spider_exempting {
    default $user_agent_rate_key;
    "/robots.txt" "";
}

#limit_req_zone $user_agent_rate_key zone=per_spider_class:1m rate=100r/m;
limit_req_zone $rate_for_spider_exempting zone=per_spider_class:1m rate=100r/m;


However, this does not work: the second map does not appear to return $user_agent_rate_key, and the effect is that non-robots are affected (the load-balancer health probes start getting rate-limited).

I'm guessing my reasoning about how this works is incorrect, or there is a limitation or some sort of implicit ordering issue.
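
A variant I have not yet tried combines both conditions into a single map key, so only one map feeds limit_req_zone (sketch):

map $uri $robots_exempt {
    default        0;
    "/robots.txt"  1;
}

# key is "<exempt-flag>:<user-agent>"; exempt requests fall through to ""
map "$robots_exempt:$http_user_agent" $rate_for_spider_exempting {
    default                               "";
    "~^0:.*our-crawler"                   "wanted-robot";
    "~*^0:.*(bot/|crawler|robot|spider)"  "robot";
}

limit_req_zone $rate_for_spider_exempting zone=per_spider_class:1m rate=100r/m;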


Option 4: (does not work)

http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

I see that there is a variable $limit_rate that can be used, and this would seem to be the cleanest, except in testing it doesn't seem to work (I still get 429 responses as a User-Agent that is a bot). (It may simply be that $limit_rate governs bandwidth throttling and has no effect on limit_req.)

server {
    limit_req zone=per_spider_class;

    location /robots.txt {
        set $limit_rate 0;
    }

    location / {
        proxy_pass http://routing_layer_http/;
    }
}


I'm still fairly new with Nginx, so I want something that decomposes cleanly into an Nginx configuration. I would quite like to have just one place where I specify the map of URLs I wish to exempt (I imagine there could be others, such as ~/.well-known/something, that could pop up).

Thank you very much for your time.

--
Cameron Kerr
Systems Engineer, Information Technology Services
University of Otago


HTTPS over port 443 (10 replies)

I'm trying to enable site-wide ssl over port 443 on a site that runs
on http port 80. In nginx.conf I have `listen 443 ssl;` for the server, but
requests for the server get routed to the first available host on port 80,
another of my sites also in nginx.conf. How can I diagnose this to see
what's going on?
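
For reference, the shape I'm aiming for is roughly this (sketch; server name and cert paths are placeholders):

server {
    listen 80;
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com.key;
}

My understanding is that if no server block on a port matches the request's Host name, nginx picks the default (first) server for that port, which sounds like what I'm seeing.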