Channel: Nginx Forum - Nginx Mailing List - English

Hash init (no replies)

Hello,

Maybe I am not on the right mailing list; please point me to the correct one if I am in the wrong place.

I just want to understand the " for (size = start; size <= hinit->max_size; size++) " loop in the ngx_hash_init function.
I do not understand what "size", "key" and "test[key]" mean in the first place.

Thank you for your help.
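For anyone else reading along: from a reading of src/core/ngx_hash.c, the loop appears to search for the smallest number of hash buckets that fits every element. A rough, hedged paraphrase (simplified names, not the exact code):

```
for size in start .. max_size:            # size = candidate bucket count
    test[0 .. size-1] = 0                 # test[key] = bytes used so far in bucket "key"
    for each element:
        key = element.key_hash % size     # bucket this element would land in
        test[key] += element_size         # grow that bucket's byte count
        if test[key] > bucket_size:       # bucket overflows: this size is too small,
            continue with next size       # so try size + 1
    # every element fit without overflowing a bucket
    accept this size and stop
```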

How to deny access to a folder, but allow access to every subfolders (wildcard) (2 replies)

Hi,

I need to deny access to /members but allow access to every folder below it.

There may be a lot of folders, maybe a thousand, and each of those folders contains 5 other folders. So I need a wildcard.

Here is what I tried:

location ~ ^/members/([^/]+)/([^/?]+)$ { allow all; } #allow every folders below /members with wildcard
location ~ ^/members/ { deny all; } #deny everything else

But it doesn't work.

What am I missing exactly?
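One hedged guess at the cause: regex locations are checked in order, but `^/members/([^/]+)/([^/?]+)$` only matches URIs exactly two segments deep, so anything deeper (e.g. /members/a/b/file.css) falls through to the deny rule. A sketch of a broader allow pattern, assuming the goal is to allow everything at least one level below /members:

```nginx
# allow anything under /members/<folder>/... (listed first, so it wins)
location ~ ^/members/[^/]+/ { allow all; }
# deny /members and /members/<folder> themselves
location ~ ^/members/ { deny all; }
```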

Thank you,

Carl

A build of nginx with static-linked OpenSSL fails on Mac (no replies)

Hello.

A build of nginx with statically linked OpenSSL seems to fail on Mac.

$ uname -ar
Darwin host 14.0.0 Darwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64 x86_64
$ cd nginx-1.7.9
$ ./configure \
--with-http_ssl_module \
--with-openssl=../openssl-1.0.1k
$ make
.
.
.
Operating system: i686-apple-darwinDarwin Kernel Version 14.0.0: Fri Sep 19 00:26:44 PDT 2014; root:xnu-2782.1.97~2/RELEASE_X86_64
WARNING! If you wish to build 64-bit library, then you have to
invoke './Configure darwin64-x86_64-cc' *manually*.
You have about 5 seconds to press Ctrl-C to abort.
.
.
.
(too many errors)
.
.
.
"_sk_value", referenced from:
_ngx_ssl_session_cache in ngx_event_openssl.o
_ngx_ssl_check_host in ngx_event_openssl.o
_ngx_ssl_stapling in ngx_event_openssl_stapling.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make[1]: *** [objs/nginx] Error 1
$

Though the rough patch below fixes the failure, is there a better solution other than dynamically linking OpenSSL?

diff -r e9effef98874 auto/lib/openssl/make
--- a/auto/lib/openssl/make Fri Dec 26 16:22:59 2014 +0300
+++ b/auto/lib/openssl/make Fri Jan 09 19:24:06 2015 +0900
@@ -56,7 +56,7 @@
$OPENSSL/.openssl/include/openssl/ssl.h: $NGX_MAKEFILE
cd $OPENSSL \\
&& if [ -f Makefile ]; then \$(MAKE) clean; fi \\
- && ./config --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\
+ && ./Configure darwin64-x86_64-cc --prefix=$ngx_prefix no-shared $OPENSSL_OPT \\
&& \$(MAKE) \\
&& \$(MAKE) install LIBDIR=lib

Nginx Configuration saying Not found. Why and How to get rid of it? (1 reply)

Hi,

I am compiling and installing nginx from source and installed all of the following libraries:

sudo yum install gcc \
gcc-c++ \
pcre-devel \
zlib-devel \
make \
unzip \
openssl-devel \
libaio-devel \
glibc \
glibc-devel \
glibc-headers \
libevent \
linux-vdso.so.1 \
libpthread.so.0 \
libcrypt.so.1 \
libstdc++.so.6 \
librt.so.1 \
libm.so.6 \
libpcre.so.0 \
libssl.so.10 \
libcrypto.so.10 \
libdl.so.2 \
libz.so.1 \
libgcc_s.so.1 \
libc.so.6 \
/lib64/ld-linux-x86-64.so.2 \
libfreebl3.so \
libgssapi_krb5.so.2 \
libkrb5.so.3 \
libcom_err.so.2 \
libk5crypto.so.3 \
libkrb5support.so.0 \
libkeyutils.so.1 \
libresolv.so.2 \
libselinux.so.1

$yum groupinstall 'Development Tools'

But when I run the following configure command on RHEL, I get some "not found" results.

$./configure \
--with-debug \
--prefix=/etc/nginx \
--sbin-path=/usr/sbin/nginx \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module \
--with-http_stub_status_module \
--with-http_realip_module \
--with-http_secure_link_module \
--with-pcre \
--with-file-aio \
--with-cc-opt="-DTCP_FASTOPEN=23" \
--with-ld-opt="-L /usr/local/lib" \
--without-http_scgi_module \
--without-http_uwsgi_module \
--without-http_fastcgi_module \
| grep 'not found'

I got the following output:

checking for sys/filio.h ... not found
checking for /dev/poll ... not found
checking for kqueue ... not found
checking for crypt() ... not found
checking for F_READAHEAD ... not found
checking for F_NOCACHE ... not found
checking for directio() ... not found
checking for dlopen() ... not found
checking for SO_SETFIB ... not found
checking for SO_ACCEPTFILTER ... not found
checking for kqueue AIO support ... not found
checking for setproctitle() ... not found
checking for POSIX semaphores ... not found
checking for struct dirent.d_namlen ... not found


I figured out that the following are reported as found by a different check elsewhere in the output:
crypt
dlopen
kqueue
poll
POSIX semaphores

But the others are still not found. Why is this happening? How do I resolve them? And is it OK to have "not found" results during configuration? I am afraid to go further while skipping these "not found" issues.

Thanks in advance

Multiple matching limit_req (1 reply)

I would like to apply rate limiting based on 3 different criteria.

1. CDN should have rate limit of 100 r/s (identified by $http_host)
2. Whitelisted bots should have a rate limit of 15 r/s (identified by
$http_user_agent)
3. All other users should have a rate limit of 5 r/s

The rules should be applied in the above order of preference: if a request
matches two criteria, the earlier rule should apply. How can I ensure
this?

I have tried the following config, but it is always rate limited to 5 r/s,
irrespective of the order of the limit_req entries.

map $http_host $limit_cdn {
    default '';
    "cdn-cname.mydomain.com" $binary_remote_addr;
}
map $http_user_agent $limit_bot {
    default '';
    ~*(google|bing) $binary_remote_addr;
}

limit_req_zone $limit_cdn zone=limit_cdn:1m rate=100r/s;
limit_req_zone $limit_bot zone=limit_bot:1m rate=15r/s;
limit_req_zone $binary_remote_addr zone=limit_all:10m rate=5r/s;

limit_req zone=limit_all burst=12;
limit_req zone=limit_bot burst=50 nodelay;
limit_req zone=limit_cdn burst=200 nodelay;
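A hedged guess at why this happens: nginx applies every limit_req directive whose zone key evaluates to a non-empty string, so the limit_all zone (keyed on $binary_remote_addr) matches every request and its 5 r/s always wins. One sketch of a fix is to chain maps so each key is empty unless its tier applies (the intermediate variable names are my own):

```nginx
map $http_host $limit_cdn {
    default '';
    "cdn-cname.mydomain.com" $binary_remote_addr;
}

# only consult the user agent when the request was not CDN traffic
map $limit_cdn $non_cdn_ua {
    '' $http_user_agent;
    default '';
}
map $non_cdn_ua $limit_bot {
    default '';
    ~*(google|bing) $binary_remote_addr;
}

# key is empty (= not counted in this zone) unless neither tier matched
map "$limit_cdn$limit_bot" $limit_all {
    '' $binary_remote_addr;
    default '';
}

limit_req_zone $limit_cdn zone=limit_cdn:1m rate=100r/s;
limit_req_zone $limit_bot zone=limit_bot:1m rate=15r/s;
limit_req_zone $limit_all zone=limit_all:10m rate=5r/s;
```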
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

download and movie alias (1 reply)

Hello,

I use a static directory for my css files, video files and download
directory. I do not understand why the link www.example.com/download
does not work but www.example.com/download/ works. Does anyone have an idea
what is wrong?

# video files for all websites
location ~ ^/video/(.*)$ {
alias /var/www/static/video/$1;
mp4;
flv;
mp4_buffer_size 4M;
mp4_max_buffer_size 10M;
autoindex on;
}

# download directory for all websites
location ~ ^/downloads/(.*)$ {
alias /var/www/static/downloads/$1;
autoindex on;
}
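A hedged guess at the trailing-slash behavior: the regex `^/downloads/(.*)$` requires a slash after the directory name, so the bare directory URI never matches it, and autoindex in any case only fires for URIs ending in a slash. One common sketch of a fix is an exact-match redirect ahead of the regex location:

```nginx
# send the bare directory URL to its slash form so the
# regex location (and autoindex) can handle it
location = /downloads {
    return 301 /downloads/;
}
```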

Thank you for the help & have a nice day

Silvio


NGINX Access Logs (no replies)

Hi guys,

I'm new to nginx. Can anyone explain what - - - "-" "-" "-" "-" - means in the access logs? I've been getting lots of these in the log file.
I would like to know if this is the cause of nginx showing a spike in traffic on the nginx graph. Example of the log below:

[12/Feb/2014:11:25:28 +0800] "POST /...svc HTTP/1.1" 200 274 1.68 870 0.008 0.002 192.168.10.71:84 - - - "-" "-" "-" "-" -
HTTP/1.1" 200 274 1.68 869 0.026 0.006 10.14.241.70:84 - - - "-" "-" "-" "-" -
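A hedged reading of those dashes: nginx prints "-" for a variable that is empty or unset, so the trailing - - - "-" "-" "-" "-" - is most likely a series of variables in a custom log_format that had no value for these requests; by themselves they do not indicate a traffic spike. A purely hypothetical format that would produce trailing fields of that shape (the real variables depend on your nginx.conf):

```nginx
# hypothetical log_format: each trailing "-" in the sample would be
# one of these variables being empty for the request
log_format upstream_timed '[$time_local] "$request" $status $body_bytes_sent '
                          '$upstream_addr $sent_http_location $http_referer '
                          '"$http_user_agent" "$http_x_forwarded_for"';
```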

Nginx behind a reverse proxy sending 499 (no replies)

We have a Java-based reverse proxy (developed in-house) which talks to nginx, which in turn is a proxy_pass front for a gunicorn server (python/django). The HTTP request flows from the Java reverse proxy (JRPxy) to nginx to gunicorn. All these servers run on the same machine.

Previously JRPxy was sending Connection: keep-alive to nginx to reuse the connections. However we decided to instead send Connection: close header and use a new connection for every request. Since we made this change we see nginx returning 499 status code.

I debugged the JRPxy at my end. Each time we write the request headers & body and then immediately try to read the nginx response, we get 0 (no bytes) or -1 (EOF) as the number of bytes read. When we get 0 we eventually get -1 afterwards (EOF after reading no bytes).

From the perspective of code, we do Socket.shutdownOutput() (http://docs.oracle.com/javase/7/docs/api/java/net/Socket.html#shutdownOutput%28%29) each time we send Connection:close header. In Java's terms it indicates to the remote socket that it is done sending data (http://stackoverflow.com/questions/15206605/purpose-of-socket-shutdownoutput). If I comment this line alone and still sending the Connection:close header, I get valid 200 OK response.

I have captured the netstat output to see the connection state. When we do Socket.shutdownOutput() we see TIME_WAIT on nginx's end, indicating that nginx initiated the socket close.

------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (59.17/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (58.14/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (57.12/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (56.09/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:8888 127.0.0.1:47342 TIME_WAIT - timewait (55.07/0/0)
------------------------------------------------------------

However if I comment the Socket.shutdownOutput() I see the netstat output in reverse way. This time JRPxy is in TIME_WAIT state, indicating it initiated the socket close.

----------------------------------------------------------------------
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (59.59/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (58.57/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (57.54/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (59.87/0/0)
tcp6 0 0 127.0.0.1:47379 127.0.0.1:8888 TIME_WAIT - timewait (56.52/0/0)
tcp6 0 0 :::8888 :::* LISTEN 12156/docker-proxy off (0.00/0/0)
tcp6 0 0 127.0.0.1:47385 127.0.0.1:8888 TIME_WAIT - timewait (58.85/0/0)
------------------------------------------------------------------------

By any chance is Socket.shutdownOutput() indicating to nginx that it is closing the connection and hence nginx is sending 499? If that is true then should nginx treat this as half-close and still send back the data?

My other assumption is that nginx is responding very quickly and closing the socket immediately even before JRPxy gets a chance to read from the socket. This is less likely as there are delays due to gunicorn processing.
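If the half-close is intentional on the JRPxy side, one directive that may be worth an experiment (a guess based on its documented behavior, not a confirmed fix for this setup) tells nginx to keep processing the proxied request even when the client connection looks closed:

```nginx
location / {
    proxy_pass http://gunicorn_upstream;   # placeholder upstream name
    # do not abort the upstream request (and log 499) when the
    # client side of the connection appears to close
    proxy_ignore_client_abort on;
}
```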

verifying that load balancing really works (1 reply)

Hi,

How can I verify that load balancing really works? I have 2 servers with
nginx; one of them functions as a load balancer and directs requests
either to itself or to the other server (round-robin by default). When I
look at the traffic using jnettop it looks like both servers are
loaded (the LB more so), but if I check traffic statistics
with my server provider I see that the LB server shows ca. 233 GB while
the other server shows only 0.012 GB for the same period. What is the right way
to verify it?

Thank you.
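One low-effort check from nginx itself, sketched under the assumption that the balancing uses an upstream{} block: log which backend served each request, then count lines per backend.

```nginx
# $upstream_addr records the backend that actually handled the request
log_format upstreamlog '$remote_addr -> $upstream_addr [$time_local] '
                       '"$request" $status';
access_log /var/log/nginx/upstream.log upstreamlog;
```

Then something like `awk '{print $3}' /var/log/nginx/upstream.log | sort | uniq -c` shows the per-backend request split.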


Adding timer in nginx.c main (1 reply)

Hi,

I am adding a timer in nginx's main loop:

if (counter == -1) {
    ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "counter is null adding imer");
    /* Registering timer */
    ngx_ipc_event.data = &dumb;
    ngx_ipc_event.handler = ngx_ipc_event_handler;
    ngx_ipc_event.log = cycle->log;
    if (!ngx_ipc_event.timer_set) {
        ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Addding timer");
        ngx_add_timer(&ngx_ipc_event, 3000);
    }
} else {
    ngx_log_error(NGX_LOG_EMERG, cycle->log, 0, "Counter is not null %d", counter);
}

static void
ngx_ipc_event_handler(ngx_event_t *ev)
{
    ngx_log_error(NGX_LOG_EMERG, ev->log, 0, "Invoked event handler");
}


My handler is not being triggered at all, although I get the following logs in error.log:

2015/01/12 21:56:48 [emerg] 22399#0: counter is null adding imer
nginx: [emerg] counter is null adding imer
2015/01/12 21:56:48 [emerg] 22399#0: Addding timer
nginx: [emerg] Addding timer

auth_request vs auth_pam_service_name (1 reply)

Hi, I am a newbie at nginx and am looking at its authentication capabilities. It appears that when using auth_request, every client request would still require an invocation of the auth_request FastCGI or proxy_pass server.
Looking at auth_pam, I am not clear on how it works:

1. How does nginx pass the user credentials to the PAM module?

2. Would nginx remember that a user has been authenticated? Perhaps via a cookie that'd be returned by PAM? I looked at the nginx pam source code and didn't see it returning any cookie to nginx ... perhaps PAM does it by storing it on some context that's returned to NGINX?

3. Is the auth_pam directive mandatory? When I used it with
location /
{
    auth_pam "Login Banner";
    auth_required_service_name "nginx";
}
where the PAM nginx file had "auth required pam_unix.so",
a user/password login page popped up. But even after I entered a valid user/pwd and hit <cr>, the same login page would pop up again, prompting for a user/pwd. I got the same behavior even after removing the auth_required_service_name statement.
Can someone explain the behavior I experienced?

4. Is there a way for us to provide our own Login html page to the user? If yes, how do we do it and how would we pass the credentials to NGINX?

5. NGINX chooses the authentication method (local vs LDAP vs RSA etc.) based on the server/uri. For example, /www.example.org users would be authenticated via LDAP: location /example { auth_pam_service_name "authFile" } and the authFile would contain "auth required ldap.so"

Is there a way to configure nginx to base the authentication method on some user configuration outside of nginx?

Thank you for any clarifications!

Problem with wildcard of domain in nginx and in https (6 replies)

I have domain.com and I can redirect to other subdomains, but not to domain.com, over https. My configuration is the following:

server {
    listen 80;
    server_name www.domain.com;
    rewrite ^/(.*) https://www.domain.com/$1 permanent;
}

server {
    listen 80;
    server_name m.domain.com;

    ## redirect http to https ##
    rewrite ^/(.*) https://m.domain.com/$1 permanent;
}

server {
    listen 443 ssl spdy;

    server_name www.domain.com;

    ...
}

server {
    listen 443 ssl spdy;

    server_name domain.com;

    ...
}

server {
    listen 443 ssl spdy;

    server_name www.domain.com;

    ...
}

server {
    listen 443 ssl spdy;

    server_name m.domain.com;

    ...
}

Nginx Stalls between RTMP Streams (no replies)

Hey NGINX folks,

I'm experiencing some trouble using NGINX as an RTMP media server. I wish to present a continuous video as a live stream (with up to 60 seconds of latency). However, due to some hardware constraints, I am unable to stream directly from the device. Instead, I can save out X seconds from the device's buffer as an MP4. My solution has been to save X seconds of video from the device, then stream those X seconds, rinse and repeat. This has been working mostly well, except for stalls (~20 seconds) in the stream between calls.

I have searched far and wide for a solution; however, most of the people experiencing this problem have the full collection of videos before starting the stream and can simply concatenate them.

My running theory is that when a stream finishes, it fires an unpublish event in NGINX followed by a timeout period. This prevents the NGINX server from receiving the next publish until the timeout period has expired. I have tried adjusting nginx.conf values related to timeouts, respawns, restarts, and publish, but to no avail.

Pseudocode:
while true
-> capture X seconds of video to "output.mp4" (this takes less than 300ms)
-> stream the MP4 with FFMPEG (takes ~X seconds using -re)

FFMPEG call:
ffmpeg -re -i "output.mp4" -vcodec libx264 -preset veryfast -maxrate 2000k -bufsize 4000k -g 60 -acodec libmp3lame -b:a 128k -ac 2 -ar 44100 -f flv rtmp:/MYSERVER/live/output

I am using JWPlayer client side to watch the video stream, however I experience similar issues using VLC.

I have been trying to figure this out for a few days and I would appreciate any insight an expert to video streaming and NGINX can give. Thank you!

How to return a cookie to a client when auth_request is used? (1 reply)

Hi,

Question 1:

I would like to have a FastCGI authentication app assign a cookie to a client, where the auth app is called using auth_request. The steps are as follows:

1. Client sends a request
2. NGINX auth_request forwards the request to a FastCGI app to authenticate.
3. The authentication FastCGI app creates a cookie, using "Set-Cookie: name=value". I would like this value to be returned to the client.
4. Assuming the authentication was successful, NGINX then forwards the request to an upstream FastCGI app which sends a response to the client. The HTTP header should contain Set-Cookie: name=value

How do I get NGINX to include the cookie in the header that gets forwarded to the upstream module so the final response to the client contains the cookie? I tried using auth_request_set (see the commented line below):

location / {

    auth_request /auth;
    include fastcgi_params;
    fastcgi_param HTTP_COOKIE $http_cookie;
    #auth_request_set $http_cookie "test"; <======= I tried this just to see how auth_request_set works. NGINX j

    fastcgi_pass 127.0.0.1:9000;
}

# new fastcgi to set the cookie
location /auth {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9010;
}
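For question 1, a commonly suggested pattern (a sketch, not tested against this exact setup) is to capture the auth subrequest's Set-Cookie header with auth_request_set into a fresh variable and re-emit it with add_header. The name $auth_cookie is my own; the built-in $http_cookie cannot be redefined, which is also why question 2 below produces a "duplicate variable" error:

```nginx
location / {
    auth_request /auth;
    # copy the auth subrequest's Set-Cookie response header;
    # $auth_cookie is a made-up name, not a built-in variable
    auth_request_set $auth_cookie $upstream_http_set_cookie;
    add_header Set-Cookie $auth_cookie;

    include fastcgi_params;
    fastcgi_param HTTP_COOKIE $http_cookie;
    fastcgi_pass 127.0.0.1:9000;
}
```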

Question 2. I also tried
auth_request_set $http_cookie "test";
to see how auth_request_set works. NGINX gave me this error at start time

nginx: [emerg] the duplicate "http_cookie" variable in /usr/local/nginx-1.7.9/conf/nginxWat.conf:25

Why did I get such an error?

Question 3. Can someone give me a pointer to a list of the FastCGI environment variables NGINX supports, such as $http_cookie / HTTP_COOKIE?

Thank you!

Restrict URL access with pwd (1 reply)

Hi all,

How can I restrict access to my website with a password, like with a
.htaccess file, please?

Thanks
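For the archives, the usual nginx equivalent of .htaccess password protection is the auth_basic module; a minimal sketch (the password file path is just an example):

```nginx
location / {
    auth_basic "Restricted";
    # file of user:hashed-password lines, e.g. created with
    # `htpasswd -c /etc/nginx/.htpasswd someuser`
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```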

Rsync access to Nginx repositories (no replies)

Hey everyone,

We would like to use the debian repo from nginx.org on about 15,000 servers and so want to set up a mirror at ftp.hosteurope.de ( ftp://ftp.hosteurope.de/mirror/ ), like we do with debian and many other FOSS projects.
Unfortunately there does not seem to be rsync access to any of the repos; is there a way to get access for us?

Please let me know if you need additional information.


Thanks,
Sebastian
--
Sebastian Stabbert
Systemadministrator

Host Europe GmbH is a company of HEG

Telefon: +49 2203 1045-7362

-----------------------------------------------------------------------
Host Europe GmbH - http://www.hosteurope.de
Welserstraße 14 - 51149 Köln - Germany
HRB 28495 Amtsgericht Köln
Geschäftsführer: Tobias Mohr, Patrick Pulvermüller


Getting expiration date of client certificate (no replies)

Hi,
I want to extract the client certificate expiration date in nginx.conf. I have the map command below to extract the CN name of the client certificate. Do you know of any variables/directives nginx supports for extracting the client certificate expiration date?

map $ssl_client_s_dn $ssl_client_s_dn_cn {
    default "";
    ~/CN=(?<CN>[^/]+) $CN;
}
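For later readers: newer nginx releases (1.11.7 and up, if memory serves) expose the client certificate validity directly, so no map is needed for the expiration date:

```nginx
# built-in ssl module variables in newer nginx versions
add_header X-Client-Cert-End  $ssl_client_v_end;     # notAfter date
add_header X-Client-Cert-Days $ssl_client_v_remain;  # days until expiration
```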

Dynamic/Wildcard SSL certificates with SNI ? (1 reply)

Hi,

I'm working on a "Web simulator" designed to serve a large number of
web sites on a private, self-contained network, where I'm also in
control of issuing SSL certificates.

The relevant bits of my nginx.conf look like this:

server {
    listen 80 default_server;
    server_name $http_host;
    root /var/www/vservers/$http_host;
    index index.html index.htm;
}

ssl_certificate_key /var/www/vserver_certs/vserver.key;

server {
    listen 443 default_server;
    ssl on;
    ssl_certificate /var/www/vserver_certs/vserver.cer;
    server_name $http_host;
    root /var/www/vservers/$http_host;
    index index_html index.htm;
}


There is no consistency across the set of vserver host names (and
therefore not much to be gained by using wildcards in the certificate
common or alt name fields).

Right now, I'm trying to cram all of my vserver host names into the
alt_names field of the "vserver.cer" certificate, but I'm bumping up
against the 16k limit of the cert file size, after which browsers
start rejecting it with an error.

I'd like to generate per-vserver certs, and dynamically select the
correct certificate file based on the SNI-negotiated server name,
like so:

server {
    listen 443 default_server;
    ssl on;
    ssl_certificate /var/www/vserver_certs/$ssl_server_name.cer;
    server_name $http_host;
    root /var/www/vservers/$http_host;
    index index_html index.htm;
}

but nginx doesn't seem to currently support this (it wants to open the
certificate file at startup time, and doesn't appear to allow variable
expansion in the cert file name). :(

The alternative would be to add an https server block for each vserver:

server {
    listen 443;
    ssl_certificate /var/www/vserver_certs/vserver1.foo.com.cer;
    server_name vserver1.foo.com;
    root /var/www/vservers/vserver1.foo.com;
    index index_html index.htm;
}

server {
    listen 443;
    ssl_certificate /var/www/vserver_certs/vserver2.bar.org.cer;
    server_name vserver2.bar.org;
    root /var/www/vservers/vserver2.bar.org;
    index index_html index.htm;
}

...
and so on, relying on SNI to match the correct block. But this could
get out of hand really fast, as I expect to be dealing with several
*thousand* vservers.

Am I missing something when attempting to dynamically use
$ssl_server_name to locate the appropriate certificate file?

If that's not currently possible, is this something of interest to the
rest of the community, and would it be worth bringing up on the
development mailing list?
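For later readers: variable support in ssl_certificate/ssl_certificate_key did eventually land in nginx 1.15.9 (at the cost of reading the files on each handshake), making roughly this possible:

```nginx
server {
    listen 443 ssl;
    # requires nginx 1.15.9 or newer; files are read per handshake
    ssl_certificate     /var/www/vserver_certs/$ssl_server_name.cer;
    ssl_certificate_key /var/www/vserver_certs/$ssl_server_name.key;
    root /var/www/vservers/$ssl_server_name;
}
```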

Thanks much for any help, pointers, ideas, etc!

--Gabriel


Ipad autodiscovery (1 reply)

Hi All
I have my OWA reverse proxy working. For some strange reason, iPads
now cannot sync, while Android is fine. I am seeing an error that they are trying
to connect using autodiscovery, and I don't see a way to disable this in
iOS. Is there a way, then, to proxy the autodiscovery attempt in nginx?

Regards

Limiting gzip_static to two directories. (2 replies)

I have gzip enabled in nginx, as well as gzip_static. I am trying to limit gzip_static to just one or two sections. There are pre-compressed files inside the directory media/po_compressor/ along with subdirectories such as:
media/po_compressor/4/js
media/po_compressor/4/css

Here is what I have below in nginx. What is the best way to match the directory and its subdirectories using a location entry?


## Gzip Static module to compress CSS, JS
location /media/po_compressor/ {
gzip_static on;
expires 365d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

## Compression
gzip on;
gzip_buffers 16 8k;
gzip_comp_level 4;
gzip_http_version 1.0;
gzip_min_length 1280;
gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon image/bmp;
gzip_vary on;
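On the location question: a prefix location such as /media/po_compressor/ already matches every URI underneath it, including nested paths like media/po_compressor/4/js, so no wildcard is needed. If regex locations elsewhere in the config might steal those requests, the ^~ modifier makes the prefix match final; a sketch:

```nginx
# ^~ stops later regex locations from overriding this prefix match
location ^~ /media/po_compressor/ {
    gzip_static on;
    expires 365d;
}
```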