Nginx Forum - Nginx Mailing List - English

Nginx, SOAP and POST redirect (2 replies)

Hello,

I use nginx as a frontend for a SOAP service. The service actually speaks the CWMP protocol, which is based on SOAP: a session consists of a sequence of HTTP POST requests and responses whose bodies carry SOAP envelopes.

It's important that I can only use POST requests to communicate with the clients. The problem is that some clients (most of them, actually) have broken HTTP implementations and there is no way to fix them quickly. I want to use redirects (307) within this SOAP session, but some of the broken clients don't follow redirects at all. I know there are ways to make nginx follow redirects on behalf of the client: error_page, X-Accel-Redirect, subrequests, the echo module and so on. But all of these methods have significant drawbacks, mostly because they were designed for GET requests, not POST.

For example, there is a question about doing a redirect in nginx the way I want: http://serverfault.com/questions/423265/how-to-follow-http-redirects-inside-nginx I tried it, but it turns my POST request into a GET request and I lose the body. It may be related to this: https://en.wikipedia.org/wiki/Post/Redirect/Get

There is another thread that mentions other ways to achieve what I want: http://forum.nginx.org/read.php?2,254664,254664#msg-254664 But, again, it's not clear how to use them if I only have POST requests.

By the way, a redirect may not be exactly what I need. What I actually need is the following:

1. The client sends me the first message of the session.
2. I answer with 401 (because I need authentication).
3. The client sends the same message again, this time with an authentication header.
4. I need to receive that same first message a third time, but at a different URL.

The ngx_http_auth_request_module does something close to what I want, but, again, it has issues with the POST method.

So I don't know what to do. Is there any hope that nginx can help me here, or should I rely on something else entirely? Or should I forget about doing redirects?

Please help.
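For reference, a minimal, untested sketch of the error_page/named-location approach: according to the error_page documentation, redirecting into a named location keeps the original request method and body, whereas redirecting to a plain URI turns the request into a GET. The upstream name and URIs below are hypothetical.

location /cwmp {
    proxy_pass http://acs_backend;              # hypothetical upstream
    proxy_intercept_errors on;
    # Intercept the 307 from the backend and replay the request internally.
    error_page 307 = @follow_307;
}

location @follow_307 {
    # Location header of the intercepted 307 response.
    set $saved_location $upstream_http_location;
    # A resolver is only needed if the Location host is a domain name
    # rather than an IP address or a defined upstream group.
    resolver 127.0.0.1;                         # example resolver address
    proxy_pass $saved_location;
}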

stream, it always aborts the first server in upstream (no replies)

A strange 'bug' in the stream module: it always aborts the first server in the upstream, even though there is nothing wrong with that server.

2015/10/05 12:21:01 [info] 1436#684: *1 client 192.168.xxx.xxx:1994 connected to 0.0.0.0:xxxx
2015/10/05 12:21:01 [info] 1436#684: *1 proxy 192.168.xxx.xxx:1493 connected to 192.168.xxx.200:xxxx
2015/10/05 12:21:03 [info] 1436#684: *1 client disconnected, bytes from/to client:334/192600, bytes from/to upstream:192600/334
>>server1 in upstream aborted after 2 seconds
2015/10/05 12:21:04 [info] 1436#684: *3 client 192.168.xxx.xxx:1998 connected to 0.0.0.0:xxxx
2015/10/05 12:21:04 [info] 1436#684: *3 proxy 192.168.xxx.xxx:1494 connected to 192.168.xxx.200:xxxx
>>server2 (which is the same as server1) connects ok and streams perfect

stream {
    upstream backendst {
        # servers are all the same; when using different servers the problem remains
        # when using only one server the logs say the same: abort on the first attempt,
        # after 2 seconds a second attempt works ok
        server 192.168.xxx.200:xxxx;
        server 192.168.xxx.200:xxxx;
    }

    server {
        listen xxxx;
        # extremely tight timeout settings; have tested with 10x these values,
        # which made no difference to the issue
        proxy_connect_timeout 10s;       # to proxied backend
        proxy_timeout 10s;               # to client
        proxy_next_upstream on;
        proxy_next_upstream_timeout 10;
        proxy_next_upstream_tries 2;
        proxy_pass backendst;
    }
}

NB: I am aware the log says "client disconnected", but that is not the case; wget, curl, and a dozen other clients all behave and log the same way. Connecting directly to the upstream servers also works fine on the first try (no reconnect needed).

Can't find description for "post_action" in documentation (2 replies)

Hello.
I can't find a description of the "post_action" directive in the documentation.
Earlier (in the static version of the documentation) this directive was described.
Is this directive still supported?
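For context: as far as I know, post_action still exists but is intentionally left undocumented and its use is generally discouraged. A rough sketch of how it has typically been used (the /done location and logging backend below are hypothetical):

location /download/ {
    # After the response to the client completes, nginx issues an internal
    # subrequest to the URI named here.
    post_action /done;
}

location = /done {
    internal;
    # Hypothetical accounting backend notified after each download.
    proxy_pass http://127.0.0.1:8080/log$request_uri;
}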

Nginx stats - why _handled_ value differs (no replies)

Hi,

Stats from server:

$ curl 'http://127.0.0.1/nginx-stats'; sleep 1; curl 'http://127.0.0.1/nginx-stats'
Active connections: 25849
server accepts handled requests
917796 917796 13323443
Reading: 0 Writing: 668 Waiting: 25180
Active connections: 25860
server accepts handled requests
918627 918627 13337317
Reading: 0 Writing: 706 Waiting: 25153

Let's compute the per-second deltas:
accepts: 831/s
handled: 831/s
requests: 13874/s

Why is the last value so much larger than the other two? The only idea I have so far is multiple requests per keepalive connection (13874 / 831 ≈ 16.7 requests per connection). Am I right?

Regards,
Alex

301 executes before authentication (5 replies)

I have a server block that contains the following:

auth_basic "Please log in.";

location = / {
    return 301 https://$host:$server_port/folder/;
}

I noticed that /folder/ is appended to the URL before the user is
prompted for authentication. Can that behavior be changed?

- Grant
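A possible workaround, untested: return is handled in the rewrite phase, which runs before the access phase where auth_basic is checked, so the redirect can be moved into a named location that is only reached after the access checks have passed. The password file path below is hypothetical.

location = / {
    auth_basic "Please log in.";
    auth_basic_user_file /etc/nginx/.htpasswd;   # hypothetical path
    # try_files runs after the access phase, so auth_basic is enforced
    # before the fallback to @root_redirect is taken.
    try_files /nonexistent @root_redirect;
}

location @root_redirect {
    return 301 https://$host:$server_port/folder/;
}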


fastcgi_cache / proxy_cache -> Mobile / Desktop (no replies)

Hi,

I use fastcgi_cache / proxy_cache, but sometimes I have a problem with how the cache is read, causing confusion on some sites when the mobile or desktop version is opened.

The sites/systems detect mobile devices with a common check like http://detectmobilebrowsers.com, but for some unknown reason this information comes out wrong and mobile users see desktop content.

It doesn't happen every time; it seems random.

I have already tried everything I know. I added user-agent rules:
### Map Mobile
map $http_user_agent $iphone_request {
    default 0;
    ~*android|ip(hone|od)|windows\s+(?:ce|phone) 1;
    ~*symbian|sonyericsson|samsung|lg|blackberry 1;
    ~*mobile 1;
}

With this, I added the variable to the cache key, but something is still buggy.

What more can I do?
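For what it's worth, a sketch of how the cache key needs to look for this approach to work: the map must be defined in the http context, and the $iphone_request variable has to be part of every cache key used for these responses (the exact key layout below is just an example):

# Desktop and mobile variants are stored under different cache keys.
fastcgi_cache_key "$scheme$request_method$host$request_uri$iphone_request";
# The same applies when proxy_cache is used instead:
proxy_cache_key   "$scheme$request_method$host$request_uri$iphone_request";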

Can't make proxy_next_upstream work (no replies)

Hi guys!

I have a configuration that I can't make work. Here is the setup: first I check with geo whether the request comes from location2, so that it uses the location2 backends. But since some things are not present in location2, I need to fall back to the default location when I get a 404 or a 403 (git returns 403 even when the repo doesn't exist).

This isn't working: I receive the 403 from location2, but nginx never tries the backup server. Any help, or any other way to do this, would be greatly appreciated!

geo $upstream {
    default git_loc1;

    x.x.0.0/16 git_loc1;
    x.x.0.0/16 git_loc2;
}

upstream git_loc1 {
    hash $remote_addr$remote_user;

    server git1.loc1.bla:443;
    server git2.loc1.bla:443;
}

upstream git_loc2 {
    server git.loc2.bla:443;
    server git.loc1.bla:443 backup;
}

server {
    listen 80;
    server_name git.bla;

    error_log logs/git-error.log debug;
    access_log logs/git_access.log upstreamlog;

    location / {
        proxy_intercept_errors on;
        error_log logs/git-pp-error.log debug;
        proxy_next_upstream error http_403 http_404;
        proxy_pass https://$upstream;
    }
}


Cheers!

merely testing for $ssl_protocol breaks upstream proxy only with IE8 (3 replies)

I am on nginx 1.9.4.
One of my https sites cannot be accessed by IE8 on XP and some IE versions on Windows 7 (they get a 404).
It seems nginx does the try_files locally and gives up, never going to @proxy.
It works fine with other browsers.

I narrowed it down to this sample config

##### sample config that has issue #####
server {
    listen *:443 ssl default;
    server_tokens off;

    server_name bb2.example.com;

    ssl on;

    ssl_certificate /etc/nginx/default.crt;
    ssl_certificate_key /etc/nginx/default.key;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    root /var/nginx/www/bb2;

    location / {
        set $unsafe 0;
        if ($ssl_protocol = TLSv1) {
            set $unsafe 1;
        }
        proxy_intercept_errors on;
        proxy_read_timeout 90;
        try_files $uri $uri/index.html @proxy;
        root /var/nginx/www/bb2;
    }

    location @proxy {
        proxy_pass http://127.0.0.1:8888;
    }
}

####### end of sample config ##############

When I access anything that is statically served, it is fine, but when I access anything proxied, I get a 404 on IE8/WinXP and on some Win7 IE versions. Other browsers are fine.

I found that the problem disappears if I remove this block:

if ($ssl_protocol = TLSv1) {
    set $unsafe 1;
}

or if I skip try_files and go directly to proxy_pass, but then of course I can no longer serve static files locally.

I also found that checking for $ssl_protocol = SSLv3 does not cause the problem, only TLSv1. It doesn't matter what action I put inside the "if" block; as soon as the test is there, it breaks.

Can anyone shed some light on what is going on here?
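Not an answer to why the test breaks the proxying, but as a side note the same flag can be computed without an "if" inside the location by using a map at the http level, which avoids mixing "if" with try_files altogether. An untested sketch, keeping the variable name from the config above:

# In the http block; evaluated per request, no "if" needed in the location.
map $ssl_protocol $unsafe {
    default 0;
    TLSv1   1;
}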

*14177278 readv() failed (104: Connection reset by peer) while reading upstream (no replies)

Hi,

We have a Tomcat and nginx setup and are seeing the error below in our
nginx logs.

2015/10/06 11:05:00 [error] 1005#0: *3026220 readv() failed (104:
Connection reset by peer) while reading upstream, client: 10.144.106.221,
server: _, request: "GET /exelate
/usersync?segment=3460,3461,3462,3463,3466,1475,3482,3485,8129,1443,8128,1444,1438,1440,1442,5174,5173,3457,3455,3456,3453,3454,3451,1447,1448,3452,3449,1451,3448,1449,5183,1463,16094,16095,16096,40438,40441,5845,40465,40422,40425,1309,5856,40473,5860,40471,5861,40470,5848,5846,40481,5847,40479,40478,5851,40491,5872,5871,40486,5874,40499,5865,40498,5862,5868,5867,4737,5804,4728,4725,5814,10370,4708,10369,7592,10368,10367,7593,10374,10373,10372,10371,7598,10362,7599,10361,10365,10364,10363,4702,4703,14089,14088,4698,14093,9989,9232,14100,14101,9234,9235,9210,5822,9202,5825,9205,3058,9206,5831,10427,17139,10428,17140,10429,17141,10430,10431,17135,17136,10432,10433,17137,10434,10435,17131,17132,10437,17133,10438,17134,346,10439,17127,10440,17128,344,10441,17129,539,10442,17124,10412,14141,10411,17123,17126,10414,10413,14138,10416,17120,10415,17119,14136,17122,10420,17116,10419,17115,17118,10422,10421,7566,10424,17111,10426,10425,17113,17169,17170,17167,17168,17165,17166,17163,17164,17161,17162,17159,17160,17158,17157,10444,17156,10443,17154,17153,17152,17151,17150,5736,17149,17148,9241,17147,17146,9239,9238,9237,17143,13883,10395,13884,17192,10396,7630,13881,10397,7628,13879,7635,13877,10393,13878,10394,13891,17183,10400,13892,13889,17185,13890,7627,13887,13888,13885,10380,10379,13604,10381,13603,10376,13895,10375,13894,10378,10377,13893,17181,7639,10388,10387,7640,7637,10389,10383,10386,10385,7664,13852,13853,13854,13621,13855,13848,391,13849,13850,13616,13851,13860,10410,13861,13862,10408,13856,13857,13858,25,13859,13866,13865,5794,13868,7683,13867,5796,10405,13864,13863,13874,13873,13875,13870,13869,7678,13872,7679,13871,17013,17016,17017,17019,17020,17021,17024,6985,17027,17028,6996,4877,4876,17029,4875,4874,17031,6992,4873,17034,6994,4871,6993,17035,17038,17037,4866,4863,16981,14169,4891,16982,14170,4892,16979,4893,16986,4888,4889,16984,141

Can someone help me understand what this means and whether there is a setting we
need to change in either Tomcat or nginx? I appreciate your help.


regards,
--
Harshvardhan Chauhan | Software Engineer
GumGum http://www.gumgum.com/ | Ads that stick
310-260-9666 | harsh@gumgum.com

Content-Security-Policy header gets lowercased (2 replies)

Hi,

I noticed that when adding a `Content-Security-Policy` header to the
response, the header name gets lowercased. Neither
`Content-Security-Polic` nor `Content-Security-Policyy`, nor any other
header I came across, gets lowercased. Is this a bug?


Ádám


Basic protection for different IPs (2 replies)

Hi All,

I want to keep casual visitors out of a test website, yet let the
developers who come in via ssh use it without entering a password,
and allow invited people in with a password.

In other words, how can I set up basic protection on my test website
when it responds on the main IP of my server, and have no password prompt
when it responds on localhost?

I tried using if (hostname = 'example.com') in spite of if-is-evil, and
could not get it to work.

Regards

Ian
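As far as I know, the standard pattern for this combination is "satisfy any" together with allow/deny and auth_basic: requests from the allowed addresses get in without a password, everyone else is prompted. A rough sketch; the allowed address and password file path are assumptions:

location / {
    satisfy any;                                 # either the IP check or the password is enough

    allow 127.0.0.1;                             # developers coming in via ssh tunnel / localhost
    deny  all;

    auth_basic "Test site";
    auth_basic_user_file /etc/nginx/.htpasswd;   # hypothetical path
}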


Vary header and cache_key (no replies)

Hello!

Before nginx 1.7.7, the file name in the cache was the result of applying the MD5 function to the cache key.

Now, when a Vary header is present in the response from the proxied server, the file name is no longer the MD5 of the cache key.

The requests below generate two different cache files (the response headers include Vary: Accept-Encoding):

curl -H 'Accept-Encoding: gzip' http://example.com/script.js
curl http://example.com/script.js

proxy_cache_key $scheme://$http_host$uri$is_args$args;


How is the file name calculated in this case?

Thanks,

Guilherme
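In case it helps anyone searching the archives: since 1.7.7 nginx stores a separate cache variant for each distinct combination of the request headers listed in Vary, so the on-disk name is no longer simply the MD5 of proxy_cache_key. If the old single-file-per-key behaviour is wanted, the Vary header from the upstream can be ignored (untested sketch; the upstream and cache zone names are hypothetical):

location / {
    proxy_pass http://backend;                   # hypothetical upstream
    proxy_cache my_cache;                        # hypothetical cache zone
    proxy_cache_key $scheme://$http_host$uri$is_args$args;

    # Ignore Vary from the upstream so only one file per key is kept, as
    # before 1.7.7. Only safe if responses really don't differ per client.
    proxy_ignore_headers Vary;
}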

Poor NGINX Performance due to configurations (4 replies)


Chrome says both HTTP/2 and SPDY being used? (2 replies)

I always compile nginx from source. I upgraded to 1.9.5 by including the http2 module and not the spdy module, made the simple http2 change to my conf while removing all spdy-related declarations, and HTTP/2 is now working. Sweet.

However, when I use Chrome to inspect pages on my site at https://jobmob.co.il/, I can see that some of my files are served over h2 while others are served over spdy. Why isn't everything just h2?

Is this normal?

Thanks

Jacob

Http2 Priority (7 replies)

Hi,

This is my first time asking a question here. Is this the right mailing list for such questions?

If so, my question is that nginx 1.9.5 seems to support the dependency tree built into HTTP/2, but in my tests the result is not what I expected. I send 4 requests where A is the parent of B, C and D. The response I receive is part of A's DATA frames, then the whole of B, C and D's DATA frames, and then the rest of A (A is big enough). According to RFC 7540, we should receive all of A's DATA frames before receiving B, C and D's. I don't know the reason (maybe flow control?). But when I tried another server, h2o, it gave me the result I expected: A finished first, then B, C and D.

Best Regards
Muhui Jiang

http/2 needs "weaker" ciphers? (1 reply)

I'm running nginx 1.9.5 and switched from spdy to http/2.
I wonder why I had to change my cipher list and add "weaker" ciphers?

before (worked fine with spdy):
ssl_ciphers 'AES256+EECDH:AES256+EDH';

after:
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
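A note for the archive: as far as I can tell, the added GCM suites are not actually weaker. HTTP/2 (RFC 7540, section 9.2.2 and Appendix A) blacklists the CBC-mode suites that 'AES256+EECDH:AES256+EDH' mostly expands to, and browsers in practice only negotiate h2 over AEAD suites such as ECDHE with AES-GCM, which is what the new list offers. The same idea, with the reasoning as comments:

# EECDH+AESGCM supplies the AEAD suites that HTTP/2 clients will accept;
# the AES256+EECDH:AES256+EDH entries remain for older, non-HTTP/2 clients.
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;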

[ANNOUNCE] ngx_brotli (7 replies)

Hey guys,
I'm pleased to announce ngx_brotli, a set of nginx modules adding support
for the new Brotli compression algorithm from Google.

You can read more about Brotli here:
http://google-opensource.blogspot.com/2015/09/introducing-brotli-new-compression.html

ngx_brotli is available on GitHub:
https://github.com/google/ngx_brotli

Brotli content encoding is already supported by Firefox 44 (Nightly) and
it's expected to be supported by Chrome soon.

Best regards,
Piotr Sikora
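For anyone wanting to try it, a rough sketch of what the configuration looks like; the directive names are taken from the project README and should be double-checked there, since the module is brand new:

# On-the-fly compression (filter module).
brotli            on;
brotli_comp_level 6;
brotli_types      text/plain text/css application/javascript application/json;

# Serving pre-compressed .br files (static module).
brotli_static     on;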

URGENT needed - Nginx + FastCGIwrap = 502 error (3 replies)

Hello,

I am using nginx for a Nagios setup.

nginx is configured correctly and I am able to serve pages, but I get the following error in the error_log:

2015/10/11 00:34:49 [crit] 18092#0: *9 connect() to
unix:/var/run/fcgiwrap.socket failed (13: Permission denied) while
connecting to upstream, client: 192.168.0.1, server: nagios.example.com,
request: "GET /nagios/cgi-bin/status.cgi?hostgroup=all&style=hostdetail
HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "
nagios.example.com", referrer: "http://nagios.example.com/side.php"


sudo service fcgiwrap status

* Checking status of FastCGI wrapper fcgiwrap
[ OK ]

sudo netstat -anp | grep cgi

unix 2 [ ACC ] STREAM LISTENING 2188667357 16645/fcgiwrap /var/run/fcgiwrap.socket


ls -ltr /var/run/fcgiwrap.socket

srwxr-xr-x 1 www-data www-data 0 Oct 11 00:19 /var/run/fcgiwrap.socket


Can you please suggest how to fix this error?


Best regards,
Kasino.
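The "13: Permission denied" usually means that the user the nginx worker processes run as cannot write to the fcgiwrap socket, which here is owned by www-data with mode rwxr-xr-x. An untested sketch of the nginx side, assuming the workers are switched to the socket owner's user (the alternative is to relax the socket's group/permissions on the fcgiwrap side):

# In nginx.conf: run the workers as the owner of /var/run/fcgiwrap.socket.
user www-data;

# In the server block for nagios.example.com:
location ~ \.cgi$ {
    include       fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass  unix:/var/run/fcgiwrap.socket;
}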

nginx does not work (1 reply)

My reverse proxy is not working with Webmin: the CSS is not loaded.

http {
    include mime.types;

    server {
        listen 80;
        server_name router;
        access_log /var/log/nginx/access.log;

        location ^~ /webmin {
            proxy_pass http://192.168.1.5:10000/;
        }
    }
}
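For comparison, a minimal untested sketch with a trailing slash on both sides, so that /webmin/foo maps to /foo on the backend. Note that Webmin emits absolute asset URLs (e.g. /style.css) that bypass the /webmin prefix unless Webmin itself is configured with that prefix, which is usually why the CSS is not found:

location /webmin/ {
    # Trailing slash on both sides: /webmin/foo -> http://192.168.1.5:10000/foo
    proxy_pass http://192.168.1.5:10000/;
    proxy_set_header Host $host;
}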

Error Nginx 502 [upstream prematurely closed connection] (2 replies)

Hello guys this is my first question here.

I have been working with nginx for almost 2 years, but in the last few days I have run into an error that is very complicated to solve.

I'm working on the Amazon stack with an Elastic Load Balancer (using TCP/SSL), which sends requests to nginx (1.8.0) on 2 EC2 instances; each instance runs a Sails project (on port 1337).

The connection is almost fine, and sometimes it actually works, but after 3 to 5 page refreshes it throws the following 502 error:

--- code error ----
2015/10/11 20:14:46 [error] 4623#0: *38 upstream prematurely closed connection while reading response header from upstream, client: 172.31.35.22, server: , request: "GET /socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=2Cn2Vmz2ovRgBxWNAAA3 HTTP/1.1", upstream: "http://127.0.0.1:1337/socket.io/?__sails_io_sdk_version=0.11.0&__sails_io_sdk_platform=browser&__sails_io_sdk_language=javascript&EIO=3&transport=websocket&sid=2Cn2Vmz2ovRgBxWNAAA3", host: "sails-load-balancer-1968414874.us-west-2.elb.amazonaws.com"
----- End code error----

My configuration is here:

https://gist.github.com/Ajaxman/6dd5ea772823e45c1a74

And my virtual host is here:

https://gist.github.com/Ajaxman/7f98f6c8b92a95e55071

I have tried a lot of suggestions, but after a week nothing has worked.

Can someone please help me?
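Since the failing request is a socket.io websocket upgrade, one thing worth checking (I can't see the linked gists, so this is just an untested sketch of the standard websocket proxy settings from the nginx documentation):

location /socket.io/ {
    proxy_pass http://127.0.0.1:1337;

    # Required for the websocket upgrade handshake to reach Sails.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;

    # Idle websocket connections are closed after proxy_read_timeout (default 60s).
    proxy_read_timeout 600s;
}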