Channel: Nginx Forum - Nginx Mailing List - English

SEO gone mad... (23 replies)

Hi folks,

I have a requirement from a customer that the trailing slash be
removed when accessing the homepage, e.g. example.com/ should be
a 301 to example.com

I've tried a simple rewrite of ^/$ but that just loops.
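For what it's worth, the loop looks inherent to HTTP rather than to the rewrite: a request for the bare hostname is still sent with `/` as the request target, so after the 301 the client simply asks for `/` again. The two URLs are indistinguishable on the wire:

```http
GET / HTTP/1.1
Host: example.com
```

Both http://example.com and http://example.com/ produce exactly this request line, so any server-side redirect from `/` to the bare hostname re-triggers itself.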

Any ideas?

Cheers,

Steve

--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

1.9.5 slower than 1.9.4 or 1.8.0 on static files (7 replies)

I just migrated a new customer to nginx/php-fpm and noticed a considerable delay with requests.

nginx Version 1.9.5 compiled from rpm with http/2

hosting: linode
Virtualizer: KVM


I noticed that the site was slower, with an avg TTFB of ~1.1s.
Only when I got to testing static assets did I discover that the time to first byte for js/css was the same 1.1s.

Switching to nginx 1.8 brings the response time back to a normal ~0.145s.

Did I stumble on a bug?

Is there anything I can give you guys to troubleshoot this and figure out why 1.9.5 is slower?

Bogdan

Anyone know how the least_conn upstream option works in nginx plus? (2 replies)

Hello,

Anyone know how the least_conn upstream option works in nginx plus?

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#least_conn
says
"Specifies that a group should use a load balancing method where a request is passed to the server with the least average response time and least number of active connections, taking into account weights of servers"

How does nginx plus use these three variables to determine which server to use?
Is the "total" weight dynamically determined as weight * 1/response time * active connections, or by some other formula?
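For comparison, the open-source least_conn only promises "least connections, taking weights into account"; the response-time factor in the quoted text belongs to the commercial variant, which I can't speak to. A hypothetical simplification of the weighted-connections rule (the actual source cross-multiplies, `conns_a * weight_b < conns_b * weight_a`, which is equivalent without floats):

```python
# Hypothetical sketch of least_conn selection: pick the server whose
# active connection count is smallest relative to its weight.
def pick_least_conn(servers):
    """servers: list of dicts with "name", "conns", "weight"."""
    return min(servers, key=lambda s: s["conns"] / s["weight"])["name"]

servers = [
    {"name": "a", "conns": 10, "weight": 1},  # 10 connections per unit weight
    {"name": "b", "conns": 15, "weight": 2},  # 7.5 connections per unit weight
]
print(pick_least_conn(servers))  # "b" wins despite more raw connections
```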

Thanks.

[ANN] Windows nginx 1.9.6.1 Kitty (no replies)

11:04 1-10-2015 nginx 1.9.6.1 Kitty

What's small and slender, red and peach, blue eyes, white face with
whiskers, pink bow? She ain't really that nice, she ain't even
pretty, but she gets the job done, so here's Kitty! Remember kids,
as long as it SMELLS like orange juice, you can drink it.
The nginx Kitty release is here!

Based on nginx 1.9.6 (29-9-2015) with;
+ lua-nginx-module v0.9.17 (upgraded 29-9-2015, *_by_lua_block)
+ nginx-module-vts (fix for nginx starttime going haywire)
* prove06 is still being worked on
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Known broken issues: ajp cache
* Scheduled release: no, maintenance release, h2/Lua fixes
* Additional specifications: see 'Feature list'

Builds can be found here:
http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

Nginx realip vs proxypass (1 reply)

Hi All,
we're testing a new nginx implementation to put in front of our web
application, to retrieve the X-Forwarded-For header sent by an external
reverse proxy and configure it as the real IP address of the requests
forwarded to the web app.

We have installed nginx with the realip module, and in the access log we
can see the real IP address sent in the X-Forwarded-For header, but if we
forward the request to the web application via proxy_pass, the address
sent is the IP of the nginx instance.

Is it possible via proxy_pass to present the real IP?
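The client address is not forwarded upstream automatically; a common sketch (the upstream name `backend` and the header names the web app reads are assumptions here) is to pass it along explicitly:

```nginx
location / {
    proxy_pass http://backend;  # hypothetical upstream
    # realip has already replaced $remote_addr with the address from
    # X-Forwarded-For, so hand that on to the web app explicitly
    proxy_set_header X-Real-IP       $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host            $host;
}
```

At the TCP level the connection will still originate from the nginx host; the application has to read the header rather than the socket peer address.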

Thanks,
Marcello
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Custom error pages and access_log inheritance (1 reply)

Hi,

I was wondering if it is possible to have custom error pages *without* the error page locations inheriting the http- or server-level access_log directives. Please consider the following config:

------------------
server {
    listen *:80;
    server_name test.domain;
    root /app/www/;

    location / {}

    location ~ \.html$ {
        access_log /app/log/html.log custom;
    }

    access_log /app/log/test.log custom;
}
------------------

All access logs for requests to "/exists.html" (status code 200) and "/nonexists.html" (status code 404) end up in the html.log file.

However, if I add a custom error page for the 404 error on the server level:
------------------
server {
    # above config

    error_page 404 /x;

    location = /x {
        internal;
        echo "error";
    }
}
------------------
The access logs for "/nonexists.html" are now located in the file "test.log", which makes perfect sense, since the server-level access_log directive is inherited by location /x. The only way I found to fix this is a somewhat lengthy config like this:

------------------
server {
    listen *:80;
    server_name test.domain;
    root /app/www/;

    location / {}

    location ~ \.html$ {
        access_log /app/log/html.log custom;
        error_page 404 /xhtml;
    }

    access_log /app/log/test.log custom;

    error_page 404 /x;

    location = /x {
        internal;
        echo "error";
    }

    location = /xhtml {
        internal;
        echo "error";
        access_log /app/log/html.log custom;
    }
}
------------------

Is there any way to have both 1) a custom error page and 2) all the access logs stay in the initial location's access_log, *without* having to define both the error_page and the corresponding location with a separate log file?

How does nginx weighting work? (1 reply)

upstream myCloud {
    server 10.0.0.1 weight=10;
    server 10.0.0.2 weight=20;
}

For 30 sequential requests, will it work like
A.
10.0.0.1 10.0.0.2 10.0.0.2 -> 10.0.0.1 10.0.0.2 10.0.0.2 -> ... repeated 10 times in total

or will it work like
B.
10.0.0.1 repeated 10 times -> 10.0.0.2 repeated 20 times

Because if it works like A, it would be helpful to use large numbers to fine-tune weighting.
But if it works like B, it would be harmful to use large numbers to fine-tune weighting.
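For reference, the open-source round-robin balancer uses a "smooth" weighted scheme that interleaves picks (behavior A rather than B). A small simulation of the algorithm as I read it from the source (treat as an illustration, not the implementation itself):

```python
# Sketch of nginx's smooth weighted round-robin scheduler.
def smooth_wrr(servers, n):
    """servers: list of (name, weight); returns the sequence of n picks."""
    state = [{"name": name, "weight": w, "current": 0} for name, w in servers]
    total = sum(w for _, w in servers)
    picks = []
    for _ in range(n):
        # each round every server gains its weight...
        for p in state:
            p["current"] += p["weight"]
        # ...the current leader is picked and pays back the total
        best = max(state, key=lambda p: p["current"])
        best["current"] -= total
        picks.append(best["name"])
    return picks

print(smooth_wrr([("10.0.0.1", 10), ("10.0.0.2", 20)], 6))
```

With weights 10 and 20 the sequence settles into a repeating 2-1-2 interleave, so scaling both weights up does not create long runs on one server.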

upstream, next node when node returns busy (no replies)

Any ideas how to implement the following?

Upstream pool of 4 nodes
Node1
Node2
Node3
Node4

Proxy_pass upstream

If a node in the upstream returns a 501, move the request to the next node in the upstream, but don't mark it as down or failed, and keep the client waiting while trying other nodes.

If all nodes return a 501, return a 503 to the client, and still don't mark any node as down or failed.

nb. any node can process 2-500 requests; depending on what it's doing this can be 2, or 40, or 300, or anything between 2 and 500, and it obviously varies per second per node.
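One caveat worth knowing: proxy_next_upstream only matches a fixed list of statuses (http_500, http_502, http_503, http_504, http_403, http_404), and 501 is not among them, so this sketch assumes the nodes can be made to answer with 503 when busy. The node names are hypothetical:

```nginx
upstream pool {
    # max_fails=0 disables failure accounting, so a busy answer
    # never marks a node as down or failed
    server node1:8080 max_fails=0;
    server node2:8080 max_fails=0;
    server node3:8080 max_fails=0;
    server node4:8080 max_fails=0;
}

server {
    listen 80;
    location / {
        proxy_pass http://pool;
        # on a "busy" 503, hold the client and try the next node;
        # after all 4 tries fail, the last 503 is returned to the client
        proxy_next_upstream http_503;
        proxy_next_upstream_tries 4;
    }
}
```

If the nodes really must return 501, intercepting it would need something outside this directive (e.g. logic in the application or an error_page-based retry).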

HTTP/2 not getting enabled (8 replies)

None of the online HTTP/2 checkers are able to detect HTTP/2, although I have it enabled.

Website Link : https://www.onestopmarketing.club

Full Nginx Config : http://pastebin.com/ScGmZNwX

I also restarted nginx and reloaded the configuration, with no change at all.
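For comparison, the minimal pieces HTTP/2 needs on 1.9.5 are the listen parameter plus a TLS stack that offers ALPN; a common culprit when checkers see only HTTP/1.1 is nginx built against OpenSSL older than 1.0.2. A sketch (certificate paths are hypothetical):

```nginx
server {
    listen 443 ssl http2;
    server_name www.onestopmarketing.club;

    # ALPN negotiation requires nginx built against OpenSSL 1.0.2+;
    # without it, browsers silently fall back to HTTP/1.1
    ssl_certificate     /etc/nginx/ssl/example.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/example.key;  # hypothetical path
}
```

`nginx -V` shows the OpenSSL version the binary was built with, which is worth checking before anything else.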

a problem with chunked encoding (3 replies)

I wrote a PHP script, for SOCKS over an HTTP proxy, that generates output in the "chunked encoding" format and adds the header (Transfer-Encoding: chunked).
When I first used Apache, it worked well.
But when I used nginx/php-fpm, I found the response was chunked-encoded twice.

I tried to turn off chunked encoding with "chunked_transfer_encoding off", but then the response headers no longer contained the "Transfer-Encoding: chunked"
that I added in the PHP script.

How can I solve this problem? Sorry about my broken English.

Here is my script:
https://github.com/esxgx/highway/blob/master/tun-crypt.php

Nginx, SOAP and POST redirect (2 replies)

Hello,

I use nginx as a frontend for a SOAP service. This service actually speaks the CWMP protocol, which is based on SOAP. I mean that I have a session consisting of a sequence of HTTP POST requests and responses, where the body contains SOAP.

It's important that I can only use POST requests to communicate with the client. And there is a problem: some clients have broken browsers (most of the clients, actually) and there is no way to fix them quickly. I want to use redirects in this SOAP session (I mean 307 redirects), but, unfortunately, some clients don't support redirects, because they are broken. I know there are ways to make nginx handle redirects instead of the clients: error_page, X-Accel-Redirect, subrequests in nginx, the echo module, and other stuff. But all these methods have significant disadvantages, mostly because they were designed for GET requests, not for POST.

For example, there is a question about doing a redirect in nginx the way I want: http://serverfault.com/questions/423265/how-to-follow-http-redirects-inside-nginx I tried it, but it turns my POST request into a GET request and I lose the body. It may be connected with this: https://en.wikipedia.org/wiki/Post/Redirect/Get

There is another example, where other methods to achieve what I want are mentioned: http://forum.nginx.org/read.php?2,254664,254664#msg-254664 But, again, it's not clear how to use them if I want POST requests only.

By the way, a redirect may not be what I need. Exactly what I need is the following:

1. The client sends me the first message in the session.
2. I answer with 401 (because I need authentication).
3. The client sends me the same message, but with an authentication header.
4. I need to get the same first message a third time, but at another URL.

The ngx_http_auth_request_module module does something close to what I want, but, again, it has issues with the POST method.

So, I don't know what to do. Is there any hope that nginx can help me, or should I rely on other ways to do it? Or should I forget about doing redirects?

Please help.

stream, it always aborts the first server in upstream (no replies)

A strange 'bug' in stream: it always aborts the first server in the upstream, even though there is nothing wrong with the server.

2015/10/05 12:21:01 [info] 1436#684: *1 client 192.168.xxx.xxx:1994 connected to 0.0.0.0:xxxx
2015/10/05 12:21:01 [info] 1436#684: *1 proxy 192.168.xxx.xxx:1493 connected to 192.168.xxx.200:xxxx
2015/10/05 12:21:03 [info] 1436#684: *1 client disconnected, bytes from/to client:334/192600, bytes from/to upstream:192600/334
>>server1 in upstream aborted after 2 seconds
2015/10/05 12:21:04 [info] 1436#684: *3 client 192.168.xxx.xxx:1998 connected to 0.0.0.0:xxxx
2015/10/05 12:21:04 [info] 1436#684: *3 proxy 192.168.xxx.xxx:1494 connected to 192.168.xxx.200:xxxx
>>server2 (which is the same as server1) connects ok and streams perfect

stream {
    upstream backendst {
        # servers are all the same; when using different servers the problem remains
        # when using only one server the logs say the same: abort on first attempt, a second attempt 2 seconds later works ok
        server 192.168.xxx.200:xxxx;
        server 192.168.xxx.200:xxxx;
    }

    server {
        listen xxxx;
        # extremely tight timeout settings; tested with 10x these values, which made no difference to the issue
        proxy_connect_timeout 10s; # to proxy backend
        proxy_timeout 10s;         # to client
        proxy_next_upstream on;
        proxy_next_upstream_timeout 10;
        proxy_next_upstream_tries 2;
        proxy_pass backendst;
    }
}

nb. I am aware it says "client disconnected", but this is not the case; wget, curl, and a dozen other apps all do and log the same thing. Connecting directly to the upstream servers also works fine on the first try (no reconnect).

Can't find description for "post_action" in documentation (2 replies)

Hello.
I can't find a description for "post_action" in the documentation.
Earlier (in the static version of the documentation) there was a description of this directive.
Is this directive still current?

Nginx stats - why _handled_ value differs (no replies)

Hi,

Stats from server:

$ curl 'http://127.0.0.1/nginx-stats'; sleep 1; curl 'http://127.0.0.1/nginx-stats'
Active connections: 25849
server accepts handled requests
917796 917796 13323443
Reading: 0 Writing: 668 Waiting: 25180
Active connections: 25860
server accepts handled requests
918627 918627 13337317
Reading: 0 Writing: 706 Waiting: 25153

Let's compute the per-second deltas:
accepts: 831 rps
handled: 831 rps
requests: 13874 rps

Why is the last value so much larger than the others? The only idea so far: several requests per keepalive connection. Am I right?
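The arithmetic backs the keepalive theory: the request counter grows roughly 16-17x faster than the connection counters, i.e. each connection served many requests over its lifetime:

```python
# Recompute the deltas between the two samples above (1 second apart)
accepts_delta  = 918627   - 917796    # connections accepted in 1s
handled_delta  = 918627   - 917796    # handled: identical, none dropped
requests_delta = 13337317 - 13323443  # requests completed in 1s

print(accepts_delta, handled_delta, requests_delta)  # 831 831 13874
print(round(requests_delta / accepts_delta, 1))      # 16.7 requests per connection
```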

Regards,
Alex

301 executes before authentication (5 replies)

I have a server block that contains the following:

auth_basic "Please log in.";

location = / {
    return 301 https://$host:$server_port/folder/;
}

I noticed that /folder/ is appended to the URL before the user is
prompted for authentication. Can that behavior be changed?

- Grant


fastcgi_cache / proxy_cache -> Mobile / Desktop (no replies)

Hi,

Currently I use fastcgi_cache / proxy_cache but, sometimes, I have a problem with how this cache is read, causing confusion on some sites when the mobile or desktop version is opened.

The sites/systems check for mobile devices, commonly like http://detectmobilebrowsers.com, but for some unknown reason this information comes out wrong, and mobile users see desktop content.

This doesn't always happen; it's random.

I have already tried everything I know. I put rules on the user agent:
### Map Mobile
map $http_user_agent $iphone_request {
    default 0;
    ~*android|ip(hone|od)|windows\s+(?:ce|phone) 1;
    ~*symbian|sonyericsson|samsung|lg|blackberry 1;
    ~*mobile 1;
}

With this, I put the information from this variable into the cache key...
BUT... something is still buggy...

What more can I do?
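For completeness, the map only helps if its variable actually ends up in the cache key, so mobile and desktop get separate entries. A sketch (the cache path, zone name, and fpm socket are hypothetical):

```nginx
fastcgi_cache_path /var/cache/nginx keys_zone=appcache:10m;  # hypothetical path/zone

server {
    location ~ \.php$ {
        fastcgi_pass unix:/run/php-fpm.sock;  # hypothetical socket
        include fastcgi_params;

        fastcgi_cache appcache;
        # include the $iphone_request flag from the map in the key,
        # so mobile and desktop responses are cached separately
        fastcgi_cache_key "$scheme$host$request_uri$iphone_request";
        fastcgi_cache_valid 200 10m;
    }
}
```

If the backend also varies on user agent, any user-agent pattern the map misses will still leak the wrong variant into the shared `0` bucket, which would explain the randomness.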

Can't make proxy_next_upstream work (no replies)

Hi guys!

I have a configuration that I can't make work. Here is the data. First I
check with geo whether the request comes from location2, so that it uses
the location2 backends. But as there are things that are not in location2,
I need to go to the default location in case there is a 404 or 403, as git
shows me a 403 even though the repo doesn't exist.

This isn't working: I receive the 403 from location2 but it never tries the
backup server. Any help, or any other way to do it, will be greatly appreciated!

geo $upstream {
    default    git_loc1;

    x.x.0.0/16 git_loc1;
    x.x.0.0/16 git_loc2;
}

upstream git_loc1 {
    hash $remote_addr$remote_user;

    server git1.loc1.bla:443;
    server git2.loc1.bla:443;
}

upstream git_loc2 {
    server git.loc2.bla:443;
    server git.loc1.bla:443 backup;
}

server {
    listen 80;
    server_name git.bla;

    error_log  logs/git-error.log debug;
    access_log logs/git_access.log upstreamlog;

    location / {
        proxy_intercept_errors on;
        error_log logs/git-pp-error.log debug;
        proxy_next_upstream error http_403 http_404;
        proxy_pass https://$upstream;
    }
}


Cheers!

merely testing for $ssl_protocol breaks upstream proxy only with IE8 (3 replies)

I am on nginx 1.9.4.
One of my https sites cannot be accessed by IE8 on XP and some IE on Win 7 (they get a 404).
It seems nginx does the try_files locally and gives up, without going to @proxy.
It works fine with other browsers.

I narrowed it down to this sample config:

##### sample config that has issue #####
server {
    listen *:443 ssl default;
    server_tokens off;

    server_name bb2.example.com;

    ssl on;

    ssl_certificate     /etc/nginx/default.crt;
    ssl_certificate_key /etc/nginx/default.key;
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols       SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers         HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    root /var/nginx/www/bb2;

    location / {
        set $unsafe 0;
        if ($ssl_protocol = TLSv1) {
            set $unsafe 1;
        }
        proxy_intercept_errors on;
        proxy_read_timeout 90;
        try_files $uri $uri/index.html @proxy;
        root /var/nginx/www/bb2;
    }

    location @proxy {
        proxy_pass http://127.0.0.1:8888;
    }
}

####### end of sample config ##############

When I access anything that is statically served, it is fine, but when I access anything proxied, I get a 404 on IE8/WinXP and some Win7.
Other browsers are fine.

I found that the problem disappears if I remove the block

if ($ssl_protocol = TLSv1) {
    set $unsafe 1;
}

or if I don't use try_files and go directly to proxy_pass.
But of course then I can no longer locally host static files.

I found that checking for $ssl_protocol = SSLv3 does not cause the problem, only TLSv1.
It doesn't matter what action I put in the "if" block; as soon as I do the test, it breaks.

Can anyone shed light on what is going on here?
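As a side note, the usual way to sidestep the rewrite-module `if` quirks inside a location that also runs try_files is to do the test in an http-level `map` instead; a sketch of that restructuring (whether it fixes this particular IE8 behavior is untested):

```nginx
# http-level map: evaluated per request, without adding a rewrite-module
# "if" block to the location that runs try_files
map $ssl_protocol $unsafe {
    default 0;
    TLSv1   1;
}

server {
    # ... ssl setup as in the sample config above ...
    root /var/nginx/www/bb2;

    location / {
        proxy_intercept_errors on;
        proxy_read_timeout 90;
        try_files $uri $uri/index.html @proxy;
    }

    location @proxy {
        proxy_pass http://127.0.0.1:8888;
    }
}
```

`$unsafe` is then available everywhere (logs, headers, proxied requests) with no conditional block in the location.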

*14177278 readv() failed (104: Connection reset by peer) while reading upstream (no replies)

Hi,

We have a tomcat and nginx setup and are seeing the below error in our
nginx logs.

2015/10/06 11:05:00 [error] 1005#0: *3026220 readv() failed (104:
Connection reset by peer) while reading upstream, client: 10.144.106.221,
server: _, request: "GET /exelate
/usersync?segment=3460,3461,3462,3463,3466,1475,3482,3485,8129,1443,8128,1444,1438,1440,1442,5174,5173,3457,3455,3456,3453,3454,3451,1447,1448,3452,3449,1451,3448,1449,5183,1463,16094,16095,16096,40438,40441,5845,40465,40422,40425,1309,5856,40473,5860,40471,5861,40470,5848,5846,40481,5847,40479,40478,5851,40491,5872,5871,40486,5874,40499,5865,40498,5862,5868,5867,4737,5804,4728,4725,5814,10370,4708,10369,7592,10368,10367,7593,10374,10373,10372,10371,7598,10362,7599,10361,10365,10364,10363,4702,4703,14089,14088,4698,14093,9989,9232,14100,14101,9234,9235,9210,5822,9202,5825,9205,3058,9206,5831,10427,17139,10428,17140,10429,17141,10430,10431,17135,17136,10432,10433,17137,10434,10435,17131,17132,10437,17133,10438,17134,346,10439,17127,10440,17128,344,10441,17129,539,10442,17124,10412,14141,10411,17123,17126,10414,10413,14138,10416,17120,10415,17119,14136,17122,10420,17116,10419,17115,17118,10422,10421,7566,10424,17111,10426,10425,17113,17169,17170,17167,17168,17165,17166,17163,17164,17161,17162,17159,17160,17158,17157,10444,17156,10443,17154,17153,17152,17151,17150,5736,17149,17148,9241,17147,17146,9239,9238,9237,17143,13883,10395,13884,17192,10396,7630,13881,10397,7628,13879,7635,13877,10393,13878,10394,13891,17183,10400,13892,13889,17185,13890,7627,13887,13888,13885,10380,10379,13604,10381,13603,10376,13895,10375,13894,10378,10377,13893,17181,7639,10388,10387,7640,7637,10389,10383,10386,10385,7664,13852,13853,13854,13621,13855,13848,391,13849,13850,13616,13851,13860,10410,13861,13862,10408,13856,13857,13858,25,13859,13866,13865,5794,13868,7683,13867,5796,10405,13864,13863,13874,13873,13875,13870,13869,7678,13872,7679,13871,17013,17016,17017,17019,17020,17021,17024,6985,17027,17028,6996,4877,4876,17029,4875,4874,17031,6992,4873,17034,6994,4871,6993,17035,17038,17037,4866,4863,16981,14169,4891,16982,14170,4892,16979,4893,16986,4888,4889,16984,141

Can someone help me understand what this means, and whether there is a setting we
need to change in either tomcat or nginx? Appreciate your help.


regards,
--
*Harshvardhan Chauhan* | Software Engineer
*GumGum* http://www.gumgum.com/ | *Ads that stick*
310-260-9666 | harsh@gumgum.com

Content-Security-Policy header gets lowercased (2 replies)

Hi,

I noticed that when adding a `Content-Security-Policy` header to the
response, the header name gets lowercased. Neither
`Content-Security-Polic` nor `Content-Security-Policyy`, nor any other
header I came across, gets lowercased. Is this a bug?


Ádám
