Channel: Nginx Forum - Nginx Mailing List - English

N00b - confused ssl (1 reply)

I am reading this doc: https://www.nginx.com/blog/nginx-ssl/ and it shows
how to either terminate (decrypt) SSL, or how to receive unencrypted
traffic (over port 80, for example) and encrypt it before sending it to the
upstream servers.

From the doc:

listen 443 ssl;

*** tells nginx to decrypt the incoming traffic

proxy_pass https://backends;

*** and https tells nginx to encrypt the traffic going to the upstream
servers

So if I put both of these in one server block, so that the incoming traffic is
decrypted and the outgoing traffic is re-encrypted, do I put both the server and
client certs in the same server block?
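
Something like this is what I have in mind (just a sketch of my understanding; the file paths are placeholders, and using proxy_ssl_certificate for the cert presented to the upstream is my guess from the docs):

server {
    listen 443 ssl;

    # cert/key presented to clients (terminates the incoming TLS)
    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        # re-encrypt on the way to the upstream group
        proxy_pass https://backends;

        # client cert/key nginx presents to the upstream servers (assumption)
        proxy_ssl_certificate     /etc/nginx/ssl/client.crt;
        proxy_ssl_certificate_key /etc/nginx/ssl/client.key;
    }
}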

confused.

Joel

UDP reverse proxying for OpenVPN isn't working using Nginx streams (3 replies)

Hi.

I was just wondering whether UDP stream proxying in Nginx is in its infancy, or whether there is something I am doing wrong. I have this simple config:

events { worker_connections 1024; }

worker_processes 1;
error_log /dev/stderr debug;
daemon off;

stream {
    server {
        listen X.X.X.X:1194 udp;
        proxy_pass 127.0.0.1:1195;
    }
}

to make Nginx a reverse proxy for my OpenVPN server listening on UDP port 1195 on localhost. But it just doesn't work. When a client connects, Nginx keeps logging these lines on stderr:

2017/04/26 12:14:43 [notice] 17125#0: using the "epoll" event method
2017/04/26 12:14:43 [notice] 17125#0: nginx/1.11.13
2017/04/26 12:14:43 [notice] 17125#0: built by gcc 4.9.2 (Debian 4.9.2-10)
2017/04/26 12:14:43 [notice] 17125#0: OS: Linux 3.16.0-4-amd64
2017/04/26 12:14:43 [notice] 17125#0: getrlimit(RLIMIT_NOFILE): 1024:4096
2017/04/26 12:14:43 [notice] 17125#0: start worker processes
2017/04/26 12:14:43 [notice] 17125#0: start worker process 17126
2017/04/26 12:14:47 [info] 17126#0: *1 udp client Y.Y.Y.Y:40332 connected to X.X.X.X:1194
2017/04/26 12:14:47 [info] 17126#0: *1 udp proxy 127.0.0.1:55424 connected to 127.0.0.1:1195
2017/04/26 12:14:47 [info] 17126#0: *3 udp client Y.Y.Y.Y:40332 connected to X.X.X.X:1194
2017/04/26 12:14:47 [info] 17126#0: *3 udp proxy 127.0.0.1:48958 connected to 127.0.0.1:1195
2017/04/26 12:14:47 [info] 17126#0: *5 udp client Y.Y.Y.Y:40332 connected to X.X.X.X:1194
2017/04/26 12:14:47 [info] 17126#0: *5 udp proxy 127.0.0.1:56732 connected to 127.0.0.1:1195
2017/04/26 12:14:47 [info] 17126#0: *7 udp client Y.Y.Y.Y:40332 connected to X.X.X.X:1194
2017/04/26 12:14:47 [info] 17126#0: *7 udp proxy 127.0.0.1:60363 connected to 127.0.0.1:1195
2017/04/26 12:14:50 [info] 17126#0: *9 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:50 [info] 17126#0: *9 udp proxy 127.0.0.1:52499 connected to 127.0.0.1:1195
2017/04/26 12:14:50 [info] 17126#0: *11 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:50 [info] 17126#0: *11 udp proxy 127.0.0.1:48850 connected to 127.0.0.1:1195
2017/04/26 12:14:50 [info] 17126#0: *13 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:50 [info] 17126#0: *13 udp proxy 127.0.0.1:60125 connected to 127.0.0.1:1195
2017/04/26 12:14:50 [info] 17126#0: *15 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:50 [info] 17126#0: *15 udp proxy 127.0.0.1:54133 connected to 127.0.0.1:1195
2017/04/26 12:14:52 [info] 17126#0: *17 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:52 [info] 17126#0: *17 udp proxy 127.0.0.1:50184 connected to 127.0.0.1:1195
2017/04/26 12:14:52 [info] 17126#0: *19 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:52 [info] 17126#0: *19 udp proxy 127.0.0.1:48836 connected to 127.0.0.1:1195
2017/04/26 12:14:53 [info] 17126#0: *21 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
2017/04/26 12:14:53 [info] 17126#0: *21 udp proxy 127.0.0.1:42665 connected to 127.0.0.1:1195
2017/04/26 12:14:56 [info] 17126#0: *23 udp client Y.Y.Y.Y:56226 connected to X.X.X.X:1194
.......................
.......................

Whereas the OpenVPN client is stuck on:

Wed Apr 26 12:14:50 2017 OpenVPN 2.3.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [EPOLL] [PKCS11] [MH] [IPv6] built on Nov 12 2015
Wed Apr 26 12:14:50 2017 library versions: OpenSSL 1.0.1t 3 May 2016, LZO 2.08
Wed Apr 26 12:14:50 2017 Control Channel Authentication: tls-auth using INLINE static key file
Wed Apr 26 12:14:50 2017 Outgoing Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Apr 26 12:14:50 2017 Incoming Control Channel Authentication: Using 160 bit message hash 'SHA1' for HMAC authentication
Wed Apr 26 12:14:50 2017 Socket Buffers: R=[212992->212992] S=[212992->212992]
Wed Apr 26 12:14:50 2017 UDPv4 link local: [undef]
Wed Apr 26 12:14:50 2017 UDPv4 link remote: [AF_INET]X.X.X.X:1194
Wed Apr 26 12:14:50 2017 TLS: Initial packet from [AF_INET]X.X.X.X:1194, sid=afcea479 758711e0

Yet these trivial setups work as expected:

pen X.X.X.X:1194 127.0.0.1:1195 -U

OR

nc -u -l -p 1194 -c "nc -u 127.0.0.1 1195"

But I fail to understand why Nginx isn't working. By the way, if everything is switched to TCP in both the nginx and OpenVPN configs, it works. Also, UDP proxying for DNS:

listen X.X.X.X:53 udp;
proxy_pass 8.8.8.8:53;

works. The Nginx version is 1.11.13. I will really appreciate any advice on this.

Thanks & Regards.

Client certificate authentication error (2 replies)

Hello.
I am trying to implement client certificate authentication using nginx on Ubuntu 16.04.
In Firefox I get the error "400 Bad Request: No required SSL certificate was sent".

To help solve the above error, I am sharing everything from my development process and test configuration below.

1. create client certificate file(openssl 1.0.2g)


openssl genrsa -des3 -out ca.key 2048 (pass : 1234)

openssl req -new -key ca.key -out ca.csr -subj /C=KR/ST=Seoul/L=Guro-gu/O=company/CN=www.wemakeusa.com/emailAddress=company@wemakeusa.com

openssl x509 -req -days 1280 -in ca.csr -signkey ca.key -out ca.crt

openssl rsa -in ca.key -out ca_key.pem

--

openssl genrsa -des3 -out server.key 2048 (pass : 12345)

openssl req -new -key server.key -out server.csr -subj /C=KR/ST=Seoul/L=Guro-gu/O=req company/CN=www.wemakeusa.com/emailAddress=manager@wemakeusa.com

openssl x509 -req -in server.csr -out server.crt -signkey server.key -CA ca.crt -CAkey ca.key -CAcreateserial -days 365

openssl rsa -in server.key -out server_key.pem

--

openssl genrsa -des3 -out client.key 2048 (pass : 123456)

openssl req -new -key client.key -out client.csr -subj /C=KR/ST=Seoul/L=Guro-gu/O=Users/CN=www.wemakeusa.com/emailAddress=users@wemakeusa.com

openssl x509 -req -in client.csr -out client.crt -signkey client.key -CA server.crt -CAkey server.key -CAcreateserial -days 365

openssl rsa -in client.key -out client_key.pem


openssl pkcs12 -in client.crt -inkey client.key -export -out client.p12
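
(For comparison, a sketch of the variant that signs the client CSR directly with the CA key, in case that is what was intended so the client cert chains to the ca.crt given to nginx; -signkey is dropped when -CA is used:)

# sign the client CSR with the CA key so the client cert verifies against ca.crt
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt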



2. Nginx configure(1.10.0)

server {
    listen 443;
    ssl on;
    server_name www.wemakeusa.com;

    error_log /home/ubuntu/nginx-error.log debug;

    ssl_certificate /home/ubuntu/ssl-der/server.crt;
    ssl_certificate_key /home/ubuntu/ssl-der/server_key.pem;
    ssl_client_certificate /home/ubuntu/ssl-der/ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 3;

    location / {
        root /var/www/wemakeusa.com;
        index index.html;
        if ($ssl_client_i_dn != "CN = company") {
            return 403;
        }
        if ($ssl_client_i_dn != "emailAddress=user@wemakeusa.com") {
            return 403;
        }
    }
}


3. SSL testing

https://www.ssllabs.com/ssltest/analyze.html?d=www.wemakeusa.com



4. Download files for exams

http://www.wemakeusa.com/certificate_file.tar



I have registered the p12 certificate and the CA certificate in my Firefox browser, but I still get "400 Bad Request".

I need help with tips for 'multiple user client certificate authentication' and solutions for the error.
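
For the 'multiple user' part, what I have in mind is roughly a map over the client subject DN instead of several if blocks (only a sketch; the DN patterns are made up):

# at http{} level: mark which client subject DNs are allowed
map $ssl_client_s_dn $client_cert_allowed {
    default                               0;
    "~emailAddress=users@wemakeusa\.com"  1;
}

# inside the server block
location / {
    root /var/www/wemakeusa.com;
    index index.html;
    if ($client_cert_allowed = 0) {
        return 403;
    }
}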

Way to deny all requests if any upstream is down (no replies)

Hi All,

Is there any way to deny all requests if any of the upstream servers are down?

Thanks

how add RewriteRule .* index.php in nginx (1 reply)

How do I add the equivalent of "RewriteRule .* index.php" in nginx.conf for a subfolder,

without redirecting for the root? Example:

https://www.bidbarg.com/ (index.php redirect to none)

https://www.bidbarg.com/bimeh/index.php/ (index.php needed)
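
What I think the subfolder needs is something along these lines (only a sketch; it assumes index.php is already handled by an existing PHP location):

location /bimeh/ {
    # anything under /bimeh/ that is not an existing file or directory is handled by its index.php
    try_files $uri $uri/ /bimeh/index.php?$args;
}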

thank you

Passing small request_body to auth_request through a header (no replies)

I have a small body (less than 512 bytes) that I'd like to pass to
auth_request. Since the body is discarded, I've tried passing it
through a header with no luck. Is there any way to do this?

location = /my_auth {
    internal;

    include fastcgi_params;
    fastcgi_pass unix:/tmp/nginx/sock/my_auth.sock;

    proxy_pass_request_body off;
    proxy_set_header Content-Length "";

    proxy_set_header MYBODY $request_body;
    fastcgi_param MYBODY $request_body;
}

location = /login {
    auth_request /my_auth;
    client_max_body_size 512;
}

Why does nginx rewrite sending https to http? (no replies)

Hi,

I'm trying to rewrite a route in the Cloud Foundry static buildpack, but
whenever I rewrite, https goes to http.

So when I add a return from /login to /, the request ends up at http://server/,
even though it started at https://server/login.


location /login {
    return 301 /;

    <% if ENV["FORCE_HTTPS"] %>
    if ($http_x_forwarded_proto != "https") {
        return 301 https://$host$request_uri;
    }
    <% end %>
}
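
A variant I would try next is making the scheme explicit in the return, so nginx never has to guess it (a sketch; whether the buildpack template allows this is my assumption):

location /login {
    # redirect to / while pinning the scheme to https
    return 301 https://$host/;
}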

critique my config file (no replies)

I am using Nginx 1.13 and have removed all the "if"s from the config file. I would now like someone to analyse it, look at the rewrites, etc., if possible. It works fine but seems a bit unorganized, and I'm wondering if there are some duplicate things. I have created a bunch of 444 locations to drop malicious scripts and visitors probing locations which don't exist. For security, I also make the admin area inaccessible and only uncomment it whenever I want to access those areas.

The only issue I'm aware of is that I am using "resolver 8.8.8.8;", which is said to leave me open to man-in-the-middle DNS attacks or spoofing, but I haven't been excited about running BIND with all the extra overhead, so I haven't done so.

Here is the config file:
https://pastebin.com/szFGQ2SD

Health checks and reloads (1 reply)

Hello,

We are using nginx plus and we use application health checks. We want to
move to the 'mandatory' parameter, which requires that servers pass the
health check before they become active.

Currently, we have a system which reloads all configs (rather than a diff
based system which would just only apply the changes via the APIs). It does
this by generating a new set of configs (routing rules, upstreams, etc) and
then calling reload on the parent (which essentially results in creating
new worker processes).

We are wondering what happens when nginx receives a reload. For an upstream
(for simplicity - say 1 host in that upstream) which is present in the old
config and is also present in the new config:
1. Will it block traffic until that host has successfully passed the configured 'N' checks?
2. Will it return 502s, as there are no more active hosts to serve this upstream?
3. Anything else?

Thanks,
Aditya

$upstream_addr returning "-" only on requests with "del" in them (2 replies)

Hi guys,

I have a problem with some of the requests sent to my Nginx load balancer, which reports (in the access_log configured to show $upstream_addr) that $upstream_addr is equal to "-", but only in a weird case where the post contains the word "del".

I'm using Nginx 1.10.0 packaged in Ubuntu 16.04.4, in a development cluster of VMs with Nginx serving as a load balancer in front of a bunch of Drupal 7 sites served by Apache+mod_php (I could use Nginx+PHP-FPM, but that's not the point here).

So it's a web-facing VM with Nginx that passes to another VM with Apache (through proxy_pass). No "effective" load balancing (only one upstream server in the backend block).

I've tried to maintain customizations to a reasonable minimum to avoid introducing too many variables.

Inside Drupal 7 (which I installed under the Apache backend server), I have nodes that I would like to edit.

Now, on several nodes, when I edit a textarea with whatever I like, everything works fine. The request is passed to Nginx, then to Apache, and I can see that in the access logs for both.

However, if the textarea contains the word "del" (I know... weird), then the request gets to Nginx, the $upstream_addr is logged as "-", and no request reaches the upstream server.

How can I debug that?
I've tried putting the error_log to "debug" but it's apparently not an error.
The access_log provides me with this weird case of $upstream_addr = '-', but that's all I get...

Thanks for your help!

xslt question (no replies)

Hi

I am using
https://gist.github.com/wilhelmy/5a59b8eea26974a468c9

for


location /ts/ {
    #autoindex on;
    #autoindex_format html;
    try_files $uri @autoindex;
}

# needs the xslt module
location @autoindex {
    autoindex on;
    autoindex_format xml;
    xslt_stylesheet xslt/dirlist.xslt path='$uri';
}


My problem is, I have a file with a % in it and I need to escape/encode it as a URI.

But when I use the XML functions to encode a URI, I get "function not found".

From more reading, it seems I need XSLT 2, not 1. How can I tell whether I am using 1 or 2?
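
The closest I have got to checking is the libxslt on the box (my understanding is that nginx's xslt filter is built on libxslt, which only implements XSLT 1.0; that xsltproc comes from the same library is my assumption):

# show the libxml/libxslt/libexslt versions installed on the system
xsltproc --version

# show which libxslt shared object the nginx binary is linked against
ldd $(which nginx) | grep -i xslt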

A

How can I set a maximum limit for gzip module? (2 replies)

Hello,

I'm using nginx-1.11.2 for proxy server with gzip-module.

I would like something like a "gzip_max_length" directive in ngx_http_gzip_module,
because some upstream responses exceed the sizes configured in gzip_buffers.
(But there was no error... which is strange to me...)

I can change gzip_buffers to a size large enough for the upstream responses, but there is no hard limit.

Can I set a limit of maximum content-size for gzip module?

Thank you.

Nginx reload process in detail (1 reply)

We have a persistent connection to Nginx on which we are issuing https requests. Now, when we do a reload, the persistent connections (whose requests have already been accepted) fail as soon as the reload is issued. Those connections are being dropped. Is this the expected behavior?

In the Nginx documentation, it is mentioned that the older worker processes would continue to run until they have served the accepted in-flight requests, and only then go down. But the actual behavior seems to be different, as the persistent connections are being dropped as soon as a configuration reload is issued.

To add to the above question: while a reload is in progress, I try to establish a new connection and it is not being established. Can't the new worker processes, which were spawned as a result of the configuration reload, serve the incoming new connections straight away?

SOAP Behind API Gateway (no replies)

Hi, I have an application in an internal network with no public internet access.

I want an NGINX API gateway deployed in the publicly exposed network that authenticates incoming HTTP and REST service traffic and then relays the traffic to upstream servers.

Similarly, I want the NGINX API gateway to authenticate SOAP requests as well. But SOAP has its own WS-Security, and I believe a SOAP request should not carry any session id or token for web authentication. So this means the API gateway will not be able to perform authentication the way it does for
normal HTTP and REST service traffic.

What would be the best solution to secure a SOAP service behind an NGINX API gateway? Or should it not be protected by the NGINX API gateway at all?

Logging requests / responses in multiple files (no replies)

I wanted to see if there was a way to log a request and response in
separate files, so that I end up with something like this:

request_1.log
response_1.log
request_2.log
response_2.log
request_3.log
response_3.log

.......

Is there a way to do this ?

Joel

Dynamic Upstream (no replies)

Purpose: I need to create a local proxy to an image CDN (defeats the purpose of having a CDN, but I can't help it!)

Setup: I have a cdn setup with multiple endpoints say t1.mycdn.com, t2.mycdn.com and t3.mycdn.com. I have a website (foobar.com) which uses images from these cdns. Because of some requirement, I need to have localized path to images (say https://foobar.com/images/sellers/s1/22.jpg). I need to proxy pass it to my 3 cdn endpoints with equal weight. Along the way, I need to change the URL too, https://foobar.com/images/sellers/s1/22.jpg will be fetched from t1.mycdn.com/img/slr/s1/22.jpg OR t2.mycdn.com/img/slr/s1/22.jpg OR t3.mycdn.com/img/slr/s1/22.jpg with equal probability.

I have compiled nginx with the following config
configure arguments: --prefix=/opt/nginx --pid-path=/home/ec2-user/pids/nginx.pid --user=ec2-user --group=ec2-user --without-http_autoindex_module --without-http_geo_module --without-http_memcached_module --without-http_scgi_module --without-http_uwsgi_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_geoip_module --with-http_sub_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_secure_link_module --without-mail_pop3_module --with-http_v2_module --without-mail_imap_module --without-mail_smtp_module --with-http_ssl_module --add-module=/tmp/ngx_devel_kit-0.3.0 --add-module=/tmp/lua-nginx-module-0.10.7 --add-module=/tmp/echo-nginx-module-0.60 --add-module=/tmp/set-misc-nginx-module-0.31 --add-module=/tmp/headers-more-nginx-module-0.32 --with-ld-opt=-Wl,-rpath,/opt/luajit/lib

Current Approach: I have the following block
location ~* ^/images/sellers/[^/]+/[^/]+$ {
    error_log /tmp/images.log debug;
    set $img_host "";
    set $img_url "";
    rewrite_by_lua '
        math.randomseed(os.time())
        ngx.var.img_host = "t"..math.random(1,3)..".mycdn.com"
        ngx.var.img_url = string.gsub(ngx.var.uri, "/images/sellers", "/img/slr/")
    ';
    proxy_set_header HOST $img_host;
    proxy_pass https://$img_host$img_url;
}

When I try to fetch the image, I keep getting the following error

invalid URL prefix in "https://"

I tried changing https to http too; it doesn't help. Can you please let me know where I am going wrong here?
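
One thing I notice is that my block has no resolver, and the docs say that when proxy_pass contains variables a host name has to be resolved at run time. I am not sure it explains the error, but this is what I plan to add (sketch; the resolver address is just an example):

location ~* ^/images/sellers/[^/]+/[^/]+$ {
    resolver 8.8.8.8;   # example resolver; needed when proxy_pass uses variables with a domain name
    ...
    proxy_pass https://$img_host$img_url;
}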

Thanks,
Bhargava

Trailing Slash Redirect Loop Help (6 replies)

Hi,

I am having an issue getting rid of the trailing slashes for directories. I have used the following to get rid of the trailing slash:


#rewrite all URIs without any '.' in them that end with a '/'
#rewrite ^([^.]*)/$ $1 permanent;

&
#rewrite all URIs that end with a '/'
rewrite ^/(.*)/$ /$1 permanent;

They both work, but they do not when it comes to directories. When it is a directory, the page is not displayed because it goes through different redirects. Here is the code that I have used inside the LOCATION definition as well as in the SERVER block, and it just does not work.


if (!-e $request_filename) {
rewrite ^/(.*)/$ /$1 permanent;
}

Or, I have used this one:

if (!-e $request_filename) {
rewrite ^([^.]*)/$ $1 permanent;
}

Note: I have tested the two rules individually, in either the server block or the location block, and I still get the redirection problems with files or directories.


Could any of you please help me and tell me what I am doing wrong? Basically, when I enable the rewrite rule to get rid of the trailing slash, all directories get multiple redirects and end up in a loop.

Here is the nginx configuration.


user www-data;
worker_processes X;
pid /xxxx.pid;

events {
worker_connections xxxx;
# multi_accept on;
}



http {
##
# Basic Settings
##

sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
#below hides the version of nginx
server_tokens off;

server_names_hash_bucket_size 64;
#this sets the url hash map table size for url rewrites
map_hash_bucket_size 128;
map_hash_max_size 2048;
# server_name_in_redirect off;

include /xxxx.types;
default_type application/octet-stream;



##
# Gzip Settings
##

gzip on;

gzip_vary on;
gzip_proxied any;
gzip_comp_level 7;
gzip_buffers 16 8k;
gzip_http_version 1.1;
# Disable for IE < 6 because there are some known problems
gzip_disable "MSIE [1-6].(?!.*SV1)";

gzip_types
text/plain
text/css
text/javascript
application/javascript
application/json
application/x-javascript
application/xml;

##
# Buffer Size
##
proxy_buffering on;
proxy_buffers 4 256k;
proxy_buffer_size 128k;
proxy_busy_buffers_size 256k;




##
## URL REWRITE MAP
## File that contains all url rewrite rules for server
include xxxurlmap.conf;


# SERVER DEFINITIONS




###FORWARD ALL 80 INCOMING REQUEST TO SSL
server {
listen 80 default_server;
listen [::]:80 default_server;
server_name example.com www.example.com;
return 301 https://www.example.com$request_uri;
}

#HTTPS server

#CHANGE DOMAIN NAME from NO-WWW to WWW - CREATE ONE SERVER INSTANCE AND RETURN IT
server {
listen 443 ssl http2;
server_name example.com;
ssl_certificate /xxx.pem;
ssl_certificate_key /xxxx.pem;
return 301 https://www.example.com$request_uri;
}


# Secure Server Configuration HTTPS Server

server {

listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name www.example.com;


ssl_certificate /etc/ssl/fullchain.pem;
ssl_certificate_key /etc/ssl/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

ssl_session_cache shared:SSL:20m;
ssl_session_timeout 60m;



## THIS CONDITION REWRITES ALL URLS TO THE NEW ONES
if ( $redirect_uri ) {
return 301 $redirect_uri;
}


location / {
proxy_pass https://xxxx:xportNumber;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
proxy_redirect off;
# this removes the trailing slash from all content
if (!-e $request_filename) {
rewrite ^/(.*)/$ /$1 permanent;
}


}


}

}

Http proxy module (3 replies)

Hello everyone! I am trying to build nginx with the proxy module from sources:
../configure --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx \
    --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf \
    --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log \
    --http-client-body-temp-path=/var/lib/nginx/tmp/client_body \
    --http-proxy-temp-path=/var/lib/nginx/tmp/proxy \
    --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi \
    --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi \
    --http-scgi-temp-path=/var/lib/nginx/tmp/scgi \
    --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx \
    --user=nginx --group=nginx --with-ipv6 \
    --with-http_ssl_module --with-http_v2_module --with-http_realip_module \
    --with-http_addition_module --with-http_sub_module --with-http_dav_module \
    --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module \
    --with-http_gzip_static_module --with-http_random_index_module \
    --with-http_secure_link_module --with-http_degradation_module \
    --with-http_slice_module --with-http_stub_status_module \
    --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module \
    --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module \
    --with-http_proxy_module --with-debug --with-cc-opt='-O3'


And it outputs: ./configure: error: invalid option "--with-http_proxy_module"

This happens with both version 1.13 and 1.12. What's wrong?
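
For what it's worth, the proxy module is built in by default and only has a --without form, so the --with flag is simply not recognized; a quick way to confirm on any source tree:

# list the proxy-related configure switches; only --without-http_proxy_module exists
./configure --help | grep http_proxy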


Roman Pastushkov
xnucleargeminix@aol.com


hacker proxy attempt (no replies)

A bit OT, but can a guru verify that I rejected all these proxy attempts?
I'm 99.9% sure, but I'd hate to allow some spammer or worse to route
through my server. The only edit I made is where they ran my IP address
through a forum spam checker. (I assume Google indexes pastebin.)

https://pastebin.com/VCg28AZf

Pastebin made me solve a captcha because they thought I was a spammer. ;-)



Serve index.html file if exists try_files + proxy_pass? (no replies)

Hi guys,

I have a small scenario where I have a backend (s3-compatible storage) which by default generates a directory-listing overview of the files stored.
I want to be able to serve an "index.html" file if it exists, and otherwise just proxy_pass as normal.

https://gist.github.com/lucasRolff/c7ea13305e9bff40eb6729246cd7eb39

My nginx config for some reason doesn't work, or maybe it's because I misunderstand how try_files actually works.

So I have URLs such as:

minio.box.com/bucket1/
minio.box.com/bucket43253/


When I request these URLs, I want nginx to check whether index.html exists in the directory (it's an actual file on the filesystem). If it does, serve it; otherwise go to the @minio location.

Any other file within the directory should just go to the @minio location, so if I request unicorn.png it should go to @minio as well.

Is there any decent (non-evil) way of doing this?

I assume I have to define the root directive to make try_files work, but what would I actually have to define, to make nginx use try_files for index.html *within* the specific bucket?
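
What I'm imagining is roughly this (just a sketch; the root path and the backend address are placeholders):

location / {
    root /srv/minio-local;                # placeholder: local tree that may hold per-bucket index.html files
    try_files $uri/index.html @minio;     # for /bucket1/ this checks /srv/minio-local/bucket1/index.html
}

location @minio {
    proxy_pass http://127.0.0.1:9000;     # placeholder for the s3-compatible backend
    proxy_set_header Host $host;
}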

Thanks in advance
