Channel: Nginx Forum - Nginx Mailing List - English

nginx conf 502 on reverse proxy location (1 reply)

my http conf is at http://p.ngx.cc/2018

Currently I'm getting a 502 when I hit my_server.com/oauth/.

Any idea what I'm doing wrong?

thanks
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Wildcard SSL and Wildcard hostnames (11 replies)

Hey there, I'm struggling to find the correct answer and unsure if there even is one.

We have a domain, say example.co, and we've purchased a wildcard SSL certificate for it. We want to be able to provide the following with minimal configuration:

https://example.co
https://blah.example.co
https://somerandomsubdomain.example.co

all pointing at the same server so something like

server {
    listen 443;
    server_name example.co *.example.co;

    ssl on;
    ssl_protocols .....;
    ssl_ciphers .....;
    ssl_prefer_server_ciphers on;
    ssl_certificate /data/nginx/ssl/example.co.crt;
    ssl_certificate_key /data/nginx/ssl/example.co.key;
}

This doesn't appear to work as I would expect it to. Would we need to set up a different server for each subdomain explicitly, or could we get away with one config for example.co and another for *.example.co? I've seen examples of using the same SSL key for different virtual servers with different hostnames, but not pointing to the same one.

Anyone else have any joy with a similar config?
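A commonly seen shape for this, sketched below (hedged: protocol and cipher values are elided as in the post, and note that the listen directive takes an ssl flag). One subtlety worth checking first: a wildcard certificate for *.example.co does not automatically cover the bare example.co unless the certificate also carries a Subject Alternative Name for it.

```nginx
# A single server block for the bare domain plus all subdomains.
# Certificate paths are taken from the post above.
server {
    listen 443 ssl;
    server_name example.co *.example.co;

    ssl_certificate     /data/nginx/ssl/example.co.crt;
    ssl_certificate_key /data/nginx/ssl/example.co.key;
}
```

If the bare domain is not among the certificate's SANs, browsers will warn on https://example.co no matter how nginx is configured.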

--

Official packages v1.8.0 do NOT include the GeoIP module (3 replies)

Hello,

We are facing quite some trouble with the official nginx packages:
their nginx -V does not show any sign of the GeoIP module.

Confirmed for:
- Debian package
- CentOS 6 package

As I have not read any deprecation message anywhere, and since its presence
is confirmed in earlier versions, why is it that way? Mistake?

I was unable to report a bug at http://trac.nginx.org as I used to connect to it through my Google account: 'OpenID 2.0 for Google Accounts has gone away'.
---
*B. R.*

Possible limitation of ngx_http_limit_req_module (8 replies)

Hi,

I'm observing an inconsistent behavior of ngx_http_limit_req_module in nginx 1.7.12. The relevant excerpts from my config:

http {
    ...
    # A fixed string used as a key, to make all requests fall into the same zone
    limit_req_zone test_zone zone=test_zone:1m rate=5r/s;
    ...
    server {
        ...
        location /limit {
            root /test;
            limit_req zone=test_zone nodelay;
        }
        ...
    }
}

I use wrk to hammer the server for 5 secs:

$ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test
Running 5s test @ http://127.0.0.1/limit/test
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max    +/- Stdev
    Latency     2.82ms    2.96ms   15.12ms   88.92%
    Req/Sec   469.03    190.97      0.89k    62.05%
  221531 requests in 5.00s, 81.96MB read
  Non-2xx or 3xx responses: 221506
Requests/sec:  44344.69
Transfer/sec:     16.41MB

So, out of 221531 sent requests, 221506 came back with error. This gives (221531 - 221506) = 25 successful requests in 5 secs, so 5r/s, just as expected. So far so good.

Now, what happens if I set rate=5000r/s:

$ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test
Running 5s test @ http://127.0.0.1/limit/test
  100 threads and 100 connections
  Thread Stats   Avg      Stdev     Max    +/- Stdev
    Latency     3.64ms    5.70ms   36.58ms   87.43%
    Req/Sec   443.50    191.55      0.89k    65.04%
  210117 requests in 4.99s, 77.38MB read
  Non-2xx or 3xx responses: 207671
Requests/sec:  42070.61
Transfer/sec:     15.49MB

This time it is (210117 - 207671) = 2446 successful requests in 5 secs, which means 490r/s. Ten times lower than expected.

I gathered some more figures, showing the number of 200 responses for growing value of "rate" parameter.

rate=***r/s in zone cfg -- number of 200 responses
100 -- 87
200 -- 149
500 -- 344
1000 -- 452
10000 -- 468
100000 -- 466

As you can see, the server keeps returning pretty much constant number of 200 responses once the "rate" parameter has surpassed 1000.

I had a glimpse into the module's code, and this part caught my eye: https://github.com/nginx/nginx/blob/nginx-1.7/src/http/modules/ngx_http_limit_req_module.c#L402-L414.
Basically, if consecutive requests hit the server in the same millisecond, the "ms = (ngx_msec_int_t) (now - lr->last)" part evaluates to 0, which sets "excess" to 1000, which is very likely to be greater than the "burst" value, which results in rejecting the request. This would also mean that only the very first request hitting the server in a given millisecond gets handled, which seems to be in line with the wrk test results I've presented above.
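If that is what is happening, a burst allowance large enough to absorb same-millisecond arrivals should restore the configured rate. A hedged sketch (zone name and key taken from the config above; the burst figure is illustrative):

```nginx
# Illustrative: allow a burst so requests arriving within the same
# millisecond are not rejected outright; "nodelay" serves the burst
# immediately instead of pacing it out.
limit_req_zone test_zone zone=test_zone:1m rate=5000r/s;

location /limit {
    root /test;
    limit_req zone=test_zone burst=100 nodelay;
}
```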

Please let me know if this makes sense to you!

Best regards,
Jakub Wroblewski

nginx_upstream_check_module doesn't work with nginx > 1.7.6 (no replies)

Hi,

I'm not sure if this is the right place to report this issue, but perhaps someone has already run across it and has some insights...

Basically, the "nginx_upstream_check_module" (versions 0.1.9 and 0.3.0) doesn't seem to work with nginx 1.7 releases greater than 1.7.6.
Upstreams don't get pinged for status, and the check_status directive results in the following error:

"
http upstream check module can not find any check server, make sure you've added the check servers
"

It looks like the module is not initialized correctly, e.g. it does not receive a list of upstream servers.

I also opened a github issue for the module itself -- https://github.com/yaoweibin/nginx_upstream_check_module/issues/58.

Best regards,
Jakub Wroblewski

24: Too many Open connections (no replies)

Hello,

We have 3 nginx web servers behind an nginx proxy, and we were seeing a "24: Too many open connections" error in the nginx log of one server and found that some users were getting 504 timeout errors. We observed that the problem was on only one of the web servers. We restarted php_cgi and nginx and the problem was solved.

Can someone help me understand what caused this issue? As I mentioned, we have 3 web servers and all three have the same setup and configuration. It can't have been caused by heavy traffic, as it was a non-peak hour and the other servers were not affected either.

nginx-1.2.0-1.el5
OS - CentOS release 5.8

nginx.conf
worker_processes 4;
worker_connections 1000000;

sysctl.conf
fs.file-max = 70000

limits.conf
nginx soft nofile 1000000
nginx hard nofile 1000000
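One thing that stands out in the excerpts above: worker_connections 1000000 per worker cannot be honoured when the whole system is capped at fs.file-max = 70000 file descriptors, so the kernel runs out of descriptors long before nginx's own limit. A hedged sketch of mutually consistent settings (the figures are illustrative, not a recommendation):

```nginx
# nginx.conf: keep the connection budget below what the OS will grant.
worker_processes     4;
worker_rlimit_nofile 65535;     # fd limit nginx requests per worker at startup

events {
    worker_connections 16384;   # must stay below worker_rlimit_nofile
}
```

fs.file-max would then need to be at least worker_processes x worker_rlimit_nofile, plus headroom for php_cgi and the rest of the system.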

Nginx 1.9 from package & RTMP (1 reply)

I just installed nginx 1.9 on my Ubuntu 15.04 machine using the precompiled package ("apt-get install"). nginx is working; now how can I add the RTMP module?
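nginx releases of that era (before 1.9.11) have no dynamic module loading, so the packaged binary cannot gain RTMP after the fact; the usual route is a rebuild from source with --add-module. A rough sketch (version numbers, paths, and the module repository are assumptions):

```shell
# Rebuild nginx with the RTMP module compiled in (illustrative versions/paths)
apt-get install build-essential libpcre3-dev zlib1g-dev libssl-dev
git clone https://github.com/arut/nginx-rtmp-module.git
wget http://nginx.org/download/nginx-1.9.1.tar.gz
tar xzf nginx-1.9.1.tar.gz && cd nginx-1.9.1
./configure --prefix=/usr/local/nginx --add-module=../nginx-rtmp-module
make && make install
```

Run `nginx -V` first and carry over the configure arguments of the packaged build, so paths and compiled-in modules match what the package gave you.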

about ssl support of nginx (no replies)

We use "--with-http_spdy_module --with-http_ssl_module --with-openssl=$(ROOTDIR)/deps/openssl-$(V_OPENSSL) --with-openssl-opt=darwin64-x86_64-cc" to embed SSL support into nginx on macOS, but we find that the "./config" invoked from nginx/auto/lib/openssl/make takes no action, while "./Configure" can be run successfully.

So, is this a problem in nginx or in OpenSSL? In my opinion, the embedded OpenSSL build in nginx doesn't take macOS into account.

[ANN] Windows nginx 1.9.1.1 Lizard (no replies)

11:16 14-5-2015 nginx 1.9.1.1 Lizard

Based on nginx 1.9.1.1 (8-5-2015, with 'stream' tcp load balancer) with;
+ pcre-8.37 (upgraded, regression tested)
+ During re-factoring nginx for Windows we've switched code base which
makes it easier for us to import original nginx code without Windows
issues by using a new native linux <> windows low level API which
natively deals with spinlock, mutex locking, Windows event driven
technology and full thread separation
nginx 1.9 currently has 1 known issue: ajp cache, which has an issue with the 1.7.12 code base caching (without caching, ajp works fine)
https://github.com/yaoweibin/nginx_ajp_module/issues/37
nb. prove05 will have crashes / failed tests due to this issue
+ 1.9 api change fixes across all modules
- rtmp, 1.7.12.1 is the last free version with rtmp, we do have a rtmp
special offer for the 1.9 branch (which without rtmp you could use
to tcp load balance 1.7.12.1 with rtmp)
* 1.7.12 will be kept up to date with critical patches and fixes only,
no new functions will be added or imported. LTS versions are not affected
* Issues with spdy:
http://trac.nginx.org/nginx/ticket/714
http://trac.nginx.org/nginx/ticket/626
http://trac.nginx.org/nginx/ticket/346
disable spdy if you have this issue
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'
* This release is dedicated to my beloved wife Shirley Anne aged 57 who
passed away this May, I shall miss her dearly. After a 40 year relentless
battle with the effects of diabetes a welsh dragon has lost her fight
"Mae hen wlad fy nhadau lle rhuo y Dreigiau"

Builds can be found here:
http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

Location not found when using / (no replies)

Hi,

I'm having a few problems with my routes and I'll appreciate any help
that you could provide.

Here is my nginx configuration:

upstream internal {
    server 10.0.0.13:9001;
    server 10.0.0.13:9002;
    server 10.0.0.13:9003;
    server 10.0.0.13:9004;
    server 10.0.0.15:9001;
    server 10.0.0.15:9002;
    server 10.0.0.15:9003;
    server 10.0.0.15:9004;
    keepalive 1024;
}

server {
    listen 12340;

    location / {
        proxy_pass http://internal;
    }
}


All the processes in the upstream match the route: /process/a/N

Everything is running OK, but in a random fashion routes that worked in the
past, such as /process/a/1 or /process/a/2, return HTTP 404 and the request
never reaches the upstream servers. So I think it is nginx itself answering
with the 404.

Also, in the logs I see:

/usr/local/nginx/html/process/a/1 failed (2: No such file or directory)

which makes no sense given I didn't set a root in nginx.conf.

Thank you in advance.




--
---
Guido Accardo

Best approach for web farm type setup? (no replies)

What is the best approach for having nginx in a web farm type setup where I
want to forward http connections to an proxy upstream if they match one of
a very long/highly dynamic list of host names? All of the host names we are
interested in will resolve to our address space, so could it be as simple
as defining a resolver and having an allow for our CIDR's? Or do I need
something more elaborate like a database of allowed hostnames?

A related question might be: what's the best approach if I wanted to throw
TLS into the mix? Would I need to keep SSL certs for each of my very
long/highly dynamic list of hosts resident? Or is there a way to manage
that more dynamically? Assume that everyone connecting supports SNI.

In both cases I'm just looking for high level/best practices. I can work
out the details but want to make sure I'm going the right direction and
asking the right questions.
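For the HTTP side, one low-tech sketch is a generated allowlist consulted via map (hedged: the map file name and the idea of proxying by $host are assumptions; a variable proxy_pass needs a resolver, and the allowlist file can be regenerated and reloaded as the host list changes):

```nginx
# Match the incoming Host header against a generated allowlist file,
# then proxy by hostname; unknown hosts are dropped with 444.
map $host $host_allowed {
    default 0;
    include /etc/nginx/allowed_hosts.map;   # lines like "blah.example.com 1;"
}

server {
    listen 80 default_server;
    resolver 127.0.0.1;                     # required for proxy_pass with variables

    if ($host_allowed = 0) {
        return 444;                         # close the connection silently
    }

    location / {
        proxy_pass http://$host;            # resolves within your own address space
        proxy_set_header Host $host;
    }
}
```

For the TLS side on nginx of this vintage, each certificate needs its own SNI-selected server block; variable-based ssl_certificate only arrived in much later releases (1.15.9).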

issues about nginx proxy_cache (no replies)

Hi, dear all,

It is a great pleasure to join the nginx mailing list, but I've hit a problem using nginx 1.7.9 as a reverse proxy server. Details follow.

My design requirement is this: I want nginx to download the files to local storage itself by following the 302 response. Unfortunately, nginx passes the 302 redirect link straight to my browser; when my browser receives the response, it downloads the file from the redirected link.
That means the video file download does not go through nginx.


For example:

    my-browser ----------> Server-A (nginx) ----------> Server-B (serves the local file)
                                                        Server-C (has the video file)

    1. browser -> Server-A -> Server-B: request for the video file
    2. Server-B -> Server-A -> browser: 302 with Server-C's address
    3. browser -> Server-C: request for the video file
    4. Server-C -> browser: 200 OK, video file

My problem is that Server-A doesn't cache the video file.
I tried the two cache strategies below, but neither had any effect. How can I fix it?




First I use proxy_store nginx.conf as follows :
-----------------------------------------------------------
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    server {
        listen 8065;
        server_name localhost;
        location / {
            expires 3d;
            proxy_set_header Accept-Encoding '';
            root /home/mpeg/nginx;
            proxy_store on;
            proxy_store_access user:rw group:rw all:rw;
            proxy_temp_path /home/mpeg/nginx;
            if ( !-e $request_filename) {
                proxy_pass http://172.30.25.246:8099;
            }
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
------------------------------------------------------------
Then I used proxy_cache; nginx.conf as follows:
------------------------------------------------------------------------
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    client_body_buffer_size 512k;
    proxy_connect_timeout 10;
    proxy_read_timeout 180;
    proxy_send_timeout 5;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_temp_path /home/mpeg/cache/temp;
    proxy_cache_path /home/mpeg/cache levels=1:2 keys_zone=content:20m inactive=1d max_size=100m;
    server {
        listen 8064;
        server_name localhost;
        location / {
            proxy_cache content;
            proxy_cache_valid 200 302 24h;
            proxy_cache_valid any 1d;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_cache_key $host$uri$is_args$args;
            proxy_pass http://192.168.15.159:7090;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
---------------------------------------------------------------------------
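A technique sometimes used to make nginx chase the upstream's redirect itself and cache the final body, rather than relaying the 302, is sketched below (hedged: addresses and zone names are modelled on the config above; proxy_intercept_errors hands any response with status >= 300 to error_page):

```nginx
location / {
    proxy_cache            content;
    proxy_pass             http://192.168.15.159:7090;
    proxy_intercept_errors on;                     # divert 3xx responses below
    error_page 301 302 307 = @follow_redirect;
}

location @follow_redirect {
    resolver 8.8.8.8;                              # variable proxy_pass needs a resolver
    set $redirect_target $upstream_http_location;  # Location header of the 302
    proxy_cache     content;
    proxy_cache_key $host$uri$is_args$args;        # cache under the original key
    proxy_pass      $redirect_target;
}
```

Whether this fits depends on the redirect target being reachable from Server-A; the redirected fetch then flows through nginx and into the cache.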

Anything will help. Thanks.

Re: Welcome to the "nginx" mailing list (Digest mode) (no replies)

Dear all: when I use nginx 1.7.9 as a reverse proxy server, it transmits the HTTP 302 message to my browser.

SSL Session Cache on Windows (no replies)

hi,

I know that in the past the directive:

ssl_session_cache shared:SSL:1m;

did not work on Windows.

in nginx 1.9.0 I see "*) Feature: shared memory can now be used on
Windows versions with address space layout randomization."

Does that mean that ssl_session_cache can be used on Windows now? If
so, is 1m a good value?

thanks,

Igal Sapir
Lucee Core Developer
Lucee.org http://lucee.org/
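For what it's worth, the nginx documentation says one megabyte of shared session cache holds about 4000 sessions, so a minimal sketch would be:

```nginx
ssl_session_cache   shared:SSL:1m;   # ~4000 sessions per the nginx docs
ssl_session_timeout 10m;             # how long a cached session stays reusable
```

Whether 1m is "good" depends on how many concurrent TLS clients you expect; busy sites often size the zone at 10m or more.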



Question about source code: Any need to call ngx_event_pipe_remove_shadow_links in ngx_event_pipe_read_upstream? (no replies)

Hi, all:

nginx code version: 1.7.9 ( I have checked v1.9.0, no change about this)

The bufs used to invoke ngx_event_pipe_remove_shadow_links in
ngx_event_pipe_read_upstream come from p->preread_bufs or p->free_raw_bufs
or new allocated buf.

Both p->preread_bufs and new allocated buf have no shadow link.

p->free_raw_bufs is inited to NULL, and there are two places which
add free buffers into it: ngx_event_pipe_write_chain_to_temp_file and
ngx_event_pipe_write_to_downstream.

In both places, shadow links are cleared by ngx_event_pipe_add_free_buf or
by ngx_event_pipe_remove_shadow_links.

So, there is no need to call ngx_event_pipe_remove_shadow_links
in ngx_event_pipe_read_upstream at all, for shadow link will always be NULL.

Am I missing something? Or is this just a lack of code review?

upstream redirect instead proxy_pass (no replies)

Hello,

I would like to use nginx as a load balancer (traffic distribution). My config is:


upstream storages {
    least_conn;
    server str1 weight=1 max_fails=1 fail_timeout=10s;
    server str2 weight=1 max_fails=1 fail_timeout=10s;
}

server {
    listen 80;
    server_name verteilen;
    location / {
        proxy_pass http://storages;
        #return 302 $scheme://storages;
    }
}




How can I redirect to a server of the upstream? It works with proxy_pass, but I want to hand the traffic off to the individual servers.
I just need the "storages" variable.


Sven
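If the goal is to answer each request with a 302 pointing at one of the storage hosts instead of proxying, a hedged sketch uses split_clients (hostnames taken from the post; note this loses the least_conn and max_fails behaviour, since no upstream block is involved):

```nginx
# Hash the client address + URI into a bucket and redirect there.
split_clients "$remote_addr$request_uri" $storage {
    50%  str1;
    *    str2;
}

server {
    listen 80;
    server_name verteilen;

    location / {
        return 302 $scheme://$storage$request_uri;
    }
}
```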

example.com is found, but not www.example.com (3 replies)

$
0
0
Well, this looks so simple in the nginx manual. I have cleared the browser cache, so I am running out of simple ideas. The domain is inplanesight.org.
http://www.inplanesight.org will 404
http://inplanesight.org works fine

Here is the server part of the nginx.conf file:
---------------------------------------------------------------
server {
    listen 80;
    server_name inplanesight.org www.inplanesight.org;

    #charset koi8-r;

    #access_log logs/host.access.log main;
    access_log /var/log/nginx/access.log;

    root /usr/local/www/nginx;
    location / {
        try_files $uri $uri/ =404;
    }

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/local/www/nginx-dist;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass http://127.0.0.1;
    #}

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $request_filename;
        include fastcgi_params;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root html;
    #    fastcgi_pass 127.0.0.1:9000;
    #    fastcgi_index index.php;
    #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
    #    include fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny all;
    #}
}

If it matters, I have two active server sections in the nginx.conf file. This is the start of the second section:
-----------------------------------------------
server {
    listen 80;
    server_name lazygranch.xyz www.lazygranch.xyz;
-----------------------------------

nginx and php5-fpm have stopped working (no replies)

I am moving a Drupal 7 application on Ubuntu 14.04 from development to production. I use nginx (1.4.6-1ubuntu3.2) and php5-fpm (5.5.9+dfsg-1ubuntu4.9).

The production machine is a VPS hosted at 1&1 and was running alright up until about 4 hours ago. Nginx had been giving some errors on startup:

2015/05/17 16:21:39 [info] 27859#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:85
2015/05/17 16:21:39 [alert] 27859#0: mmap(MAP_ANON|MAP_SHARED, 1048576) failed (12: Cannot allocate memory)

but nginx and php5-fpm were working alright.

Later, however, after I had uploaded some corrected theme data (basically CSS generated from SASS), nginx went down. Since then I have been unable to make nginx AND php5-fpm start. I can start one or the other. Unfortunately php5-fpm has not generated any error messages that I have been able to find.

Rebooting doesn't solve the problem.

Nginx, however, now generates the following error.

2015/05/17 23:40:40 [alert] 1559#0: mmap(MAP_ANON|MAP_SHARED, 33554432) failed (12: Cannot allocate memory)

I'm pretty sure I have enough memory. The command free -m gives:

                   total       used       free     shared    buffers     cached
Mem:                8192        168       8023        126          0        149
-/+ buffers/cache:               19       8172
Swap:                  0          0          0

I have seen other discussions with similar symptoms which seem to suggest that parameters of the VPS need to be adjusted by the provider. Is this such a case? If so, which parameters do I need to ask to have adjusted?
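This symptom (mmap failing while free shows gigabytes spare) is classic for a container-style VPS where the provider caps virtual memory outside the guest. A hedged sketch of what to check before contacting the provider (the beancounters file only exists on OpenVZ/Virtuozzo hosts):

```shell
# Per-process virtual memory cap; "unlimited" on most unrestricted hosts.
ulimit -v

# On OpenVZ/Virtuozzo VPSes the real cap lives in the beancounters; a nonzero
# failcnt on privvmpages means the host node refused allocations like mmap().
grep -E 'privvmpages|shmpages' /proc/user_beancounters 2>/dev/null || true
```

If privvmpages shows failures, the limit has to be raised by the provider; no amount of tuning inside the guest will help.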

I'm out of my depth on this, so I'd be grateful for any assistance anyone could offer.

Steve

Satisfy any not working as expected (no replies)

Hi,

I'm facing an issue using the "satisfy any" directive. What I'm trying to achieve is quite simple:
- have an auth_request directive protecting the entire website (hence set at the server level in the config file)
- have no such authentication for the local network

I've put the following lines in my nginx config file, under the 'server' directive:

----------------------------
server {

    satisfy any;
    allow 192.168.0.0/24;
    deny all;

    auth_request /path/to/authRequestScript.php;
    [...]
}
----------------------------

Although that works well for the local network (i.e. no authentication required any more), I get a "403 Forbidden" message when I connect from the outside network, where I would expect the usual authentication mechanism to be triggered.

All the examples I found rely on the "location /" directive, but I'd like it to be at the server level.

What am I doing wrong?
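One thing worth checking, sketched below with the values from the post: with satisfy any, outside clients fail the allow/deny test and then fall through to auth_request, and auth_request treats a 403 from the subrequest as a plain denial. For the browser to be prompted, the subrequest must come back 401 with a challenge header (whether that is the issue here is an assumption):

```nginx
server {
    satisfy any;
    allow 192.168.0.0/24;    # LAN passes without authentication
    deny  all;               # everyone else must satisfy auth_request

    auth_request /path/to/authRequestScript.php;

    # Sketch: the auth endpoint should answer 401 (not 403) when credentials
    # are missing, and include a challenge so the browser prompts the user:
    #   WWW-Authenticate: Basic realm="restricted"
}
```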

Thanks for any help,
Arno

TCP Connection details (no replies)

I am using nginx as a reverse proxy and I want to find out TCP connection information on both the east-bound and west-bound connections.
With the following parameters in the access log, I am able to get info about the client TCP connection. Now I want to find the RTT between nginx and the backend web server. Any idea how I can get this?

$tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space
    information about the client TCP connection; available on systems that support the TCP_INFO socket option
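The $tcpinfo_* variables only describe the client-side socket; there is no upstream equivalent. A hedged sketch that logs the client RTT next to upstream timings, using $upstream_connect_time (available since nginx 1.9.1) as a rough stand-in for the TCP handshake RTT to the backend:

```nginx
# $upstream_connect_time approximates the time to establish the backend
# connection; there is no $tcpinfo_* for the upstream socket.
log_format timing '$remote_addr rtt=$tcpinfo_rtt rttvar=$tcpinfo_rttvar '
                  'up_connect=$upstream_connect_time up_resp=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```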