Channel: Nginx Forum - Nginx Mailing List - English

Configure Nginx for virtual hosts with same port (1 reply)

Hi all,

I am trying to configure three virtual hosts on a single server sharing one port, but PHP is not working for all of the virtual hosts.

My requirement is as follows:

IP:port-A/
IP:port-A/local
IP:port-A/viewer

I want to configure these three virtual hosts. Only HTML content is displayed in the browser; if I add PHP code, it is not executed (for example, a simple echo "guru"; produces no output).

Below is my configuration file content:

server {

    listen 80 default;
    listen 443 ssl;
    server_name $hostname;
    client_max_body_size 16384M;

    location / {
        root /opt/xxx/yyy/myweb/admin;
        index index.php index.phtml index.html index.htm;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /viewer {
        root /opt/xxx/yyy/myweb/viewer;
        index index.php index.phtml index.html index.htm;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location /local {
        root /opt/xxx/yyy/myweb/local;
        index index.php index.phtml index.html index.htm;
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        root /opt/xxx/yyy/myweb/admin;
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300;
        include fastcgi_params;
    }

    location ~ \.php$ {
        root /opt/xxx/yyy/myweb/viewer;
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300;
        include fastcgi_params;
    }

    location ~ \.php$ {
        root /opt/xxx/yyy/myweb/local;
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_intercept_errors on;
        fastcgi_read_timeout 300;
        include fastcgi_params;
    }
}

Thanks in advance, all.
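One likely culprit above: all three `location ~ \.php$` blocks share the same regex, and nginx stops at the first matching regex location, so every PHP request is resolved against the admin root. A minimal sketch of one alternative, nesting a PHP handler inside each prefix location (paths are the poster's; the structure is an assumption, not a tested fix):

```nginx
location /viewer/ {
    # alias maps /viewer/foo.php to .../viewer/foo.php, avoiding the
    # doubled /viewer/viewer/ path that "root" would produce here
    alias /opt/xxx/yyy/myweb/viewer/;
    index index.php;
    try_files $uri $uri/ /viewer/index.php$is_args$args;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;
        # with alias, $request_filename already holds the mapped file path
        fastcgi_param SCRIPT_FILENAME $request_filename;
    }
}
```

Repeating the same pattern for / and /local keeps each app's scripts under its own directory.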

block google app (1 reply)

I would like to block the Google app from directly downloading images.

access.log:

200 186.155.157.9 - - [20/Jun/2017:00:35:47 +0000] "GET /images/photo.jpg HTTP/1.1" 334052 "-" "com.google.GoogleMobile/28.0.0 iPad/9.3.5 hw/iPad2_5" "-"


My nginx code in the images location:

if ($http_referer ~* (com.google.GoogleMobile)) {
    return 403;
}

So what am I doing wrong?
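In the quoted access.log line, the string "com.google.GoogleMobile" appears in the User-Agent field (the trailing "-" is the empty Referer), so a check against $http_user_agent rather than $http_referer may be what was intended. A hedged sketch:

```nginx
# Assumption: the goal is to match the User-Agent, not the Referer
if ($http_user_agent ~* "com\.google\.GoogleMobile") {
    return 403;
}
```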
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Enabling NGINX to forward static file request to origin server if the file is absent (no replies)

BACKGROUND:
-----------------------
Currently NGINX supports static file serving: if the file is present at the location derived from the configuration, it serves the client directly; otherwise it simply tells the client that the file is not present.

There is no capability to forward the request to the origin server, get the file, save it, and serve it to the client, AFAIK. I am not sure; please correct me if I am wrong.

I need to achieve the above capability along with a few more additions, as described below.

REQUIREMENT:
-------------------------

Currently I have a requirement with the following conditions:
- Both Nginx server and origin server should be on the same machine
- Nginx server should provide the file statically from the static cache when the client requests it
- If the Nginx server does not find the static file in the static file location (generally the location path prepended by root), then it has to forward the request to the origin server asking for the file
- once it gets the file, it has to save it in the said location (static file location)
- after saving the file, it has to serve the same file back to the client

Also:
- care should be taken so that when multiple client requests arrive simultaneously and the file is not yet cached, nginx holds all the requests, fetches the file from the origin server once, and then serves it to all the clients (just as the NGINX proxy handler does with proxy_cache_lock)
- purging support should also be provided.

IMPLEMENTATION:
----------------------------
- Earlier I planned to write an NGINX module myself, but I would have had to take care of all the housekeeping and other machinery already handled by the static handler (ngx_http_static_module.c). This method seemed a bit cumbersome.
- Then I planned to modify the static module itself so that whenever it does not find the file in the said location, the code forwards the request to the upstream server; once the response is obtained, the code saves the file in the said location and also serves it to the clients.
- NOTE: I am not enabling the NGINX proxy handler.
- I also need functionality similar to proxy_cache_lock, where multiple client requests are held in a queue and served

QUESTION
------------------
- Please let me know if the approach I am planning serves the purpose.
- Do you have any alternative approaches? Please do let me know.
- Is there any way to delegate the functionality to the default handler? (For example, if the static file is already present, can my handler delegate further processing to NGINX's default static module?)

Looking forward to your valuable input.

Thank you
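For comparison, stock nginx already covers most of the listed requirements through its proxy cache, without a custom module; the parts it does not give you are a plain on-disk copy under the static root (proxy_store comes closest) and purging (NGINX Plus or the third-party ngx_cache_purge module). A sketch, with hypothetical paths and ports:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:10m inactive=7d;

server {
    listen 80;

    location /static/ {
        proxy_cache static;
        proxy_cache_valid 200 7d;   # keep successful responses for 7 days
        proxy_cache_lock on;        # collapse simultaneous misses into one upstream fetch
        proxy_pass http://127.0.0.1:8080;   # origin server on the same machine
    }
}
```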

how nginx decide which server block to use (no replies)

Hi all,


I am running nginx version 1.12.0. I have the following server block configuration. All requests matching the regular expressions work well, but requests to server s01.example.com return 404. What's wrong? I googled for a while; most articles say nginx first tries to match the literal string, then wildcards, and regular expressions last.


------------------------------
server {
listen 80;
server_name _;
access_log /data/wwwlogs/access_nginx.log combined;
root /data/wwwroot/public_html;
index index.html index.htm index.php;
#error_page 404 /404.html;
#error_page 502 /502.html;
location /nginx_status {
stub_status on;
access_log off;
allow 127.0.0.1;
deny all;
}
location ~ [^/]\.php(/|$) {
#fastcgi_pass remote_php_ip:9000;
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi.conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
expires 30d;
access_log off;
}
location ~ .*\.(js|css)?$ {
expires 7d;
access_log off;
}
location ~ /\.ht {
deny all;
}
}





server {
listen [ip1]:80;
server_name ~^(?<subdomain>[a-z0-9]+)\.(?<domain>[a-z0-9\-]+)\.(?<domext>[a-z]+);
index index.html index.php;


root /home/$domain.$domext/$subdomain;
location / {
try_files $uri $uri/ @apache =404;
}


location ~ (.*)\.html$ {
if (!-f '$document_root/$uri') {
rewrite /(.*)\.html$ /$1.php last;
}
try_files $uri @apache =404;
}


location @apache {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(php|php5|cgi|pl)$ {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
expires 30d;
access_log off;
}
location ~ .*\.(js|css)?$ {
expires 7d;
access_log off;
}
location ~ /\.ht {
deny all;
}
}
server {
listen [ip2]:80;
#server_name ~^(?<subdomain>[a-z0-9]+).(?<domain>[a-z0-9.]+);
server_name ~^(?<subdomain>[a-z0-9]+)\.(?<domain>[a-z0-9\-]+)\.(?<domext>[a-z]+);
#server_name ~^(?<subdomain>[a-z0-9]+).com;
#access_log off;
index index.html index.php;


root /ip100/$domain.$domext/$subdomain;
#add_header aa $document_root;
location / {
try_files $uri $uri/ @apache =404;
}


location ~ (.*)\.html$ {
if (!-f '$document_root/$uri') {
rewrite /(.*)\.html$ /$1.php last;
}
try_files $uri @apache =404;
}


location @apache {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(php|php5|cgi|pl)$ {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
expires 30d;
access_log off;
}
location ~ .*\.(js|css)?$ {
expires 7d;
access_log off;
}
location ~ /\.ht {
deny all;
}
#access_log /home/wwwlogs/$subdomain.$domain.com_access.log access;
#error_log /home/wwwlogs/subdomain.$domain.com_error.log error;
}


server {
listen [ip3]:80;
server_name ~^(?<subdomain>[a-z0-9]+)\.(?<domain>[a-z0-9\-]+)\.(?<domext>[a-z]+);
index index.html index.php;


root /ip155/$domain.$domext/$subdomain;
#add_header aa $document_root;
location / {
try_files $uri $uri/ @apache =404;
}


location ~ (.*)\.html$ {
if (!-f '$document_root/$uri') {
rewrite /(.*)\.html$ /$1.php last;
}
try_files $uri @apache =404;
}


location @apache {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(php|php5|cgi|pl)$ {
fastcgi_pass unix:/dev/shm/php-cgi.sock;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
expires 30d;
access_log off;
}
location ~ .*\.(js|css)?$ {
expires 7d;
access_log off;
}
location ~ /\.ht {
deny all;
}
#access_log /home/wwwlogs/$subdomain.$domain.com_access.log access;
#error_log /home/wwwlogs/subdomain.$domain.com_error.log error;
}




server {
listen [ip3]:80;
server_name s01.example.com;
access_log off;
index index.html index.htm index.php;
root /data/ytginc.com/public;
rewrite /([a-z]+)$ /index.php/$1;
rewrite /([a-z0-9]+)/([a-z]+)/$ /index.php/$1/$2;


location / {
try_files $uri @apache;
}
location @apache {
include fastcgi_conf;
}
location ~ .*\.(php|php5|cgi|pl)?$ {
include fastcgi_conf;
}
location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|flv|mp4|ico)$ {
expires 30d;
access_log off;
}
location ~ .*\.(js|css)?$ {
expires 7d;
access_log off;
}
location ~ /\.ht {
deny all;
}
}

Is proxy_cache_purge directive not available in NGINX free version ? (no replies)

Hi

I am trying to experiment with purging cached content on an nginx server. The nginx I have is the open-source version (not NGINX Plus).

According to one of the documents on the nginx website, in order to enable purging I need to use the proxy_cache_purge directive.

But when I add this directive to the nginx configuration and start the server, I get the following error:


"nginx: [emerg] unknown directive "proxy_cache_purge" in /etc/nginx/nginx.conf:104"


Is the proxy_cache_purge directive not available in the open-source version of nginx? Is it only available in NGINX Plus?
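proxy_cache_purge is indeed an NGINX Plus directive; the open-source binary does not ship it, hence the "unknown directive" error. Open-source builds can get similar behaviour by compiling in the third-party ngx_cache_purge module. A sketch of that module's usage (zone name and key are hypothetical and must match your proxy_cache_key):

```nginx
# Requires nginx compiled with the third-party ngx_cache_purge module
location ~ /purge(/.*) {
    allow 127.0.0.1;
    deny all;
    proxy_cache_purge my_cache "$scheme$host$1";
}
```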

When will primary server come back in http upstream module? (no replies)

Hi,

Referring to http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server :
if all primary servers are unavailable, backup servers will handle requests.

I have two questions:
1. What is the meaning of "unavailable"?
2. When will a primary server come back? After fail_timeout?

Thanks,
Linbo
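As documented for the upstream module, a server is marked unavailable after max_fails failed attempts to communicate with it within fail_timeout; it then sits out for fail_timeout and is afterwards probed again with a live request, being restored on success. A sketch with hypothetical addresses:

```nginx
upstream backend {
    # unavailable after 3 failures within 30s; retried after 30s
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.3:8080 backup;   # used only while all primaries are down
}
```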

Peer closed connection in SSL handshake marking upstream as failed (1 reply)

We're seeing 502 Bad Gateway responses to clients on an nginx load-balanced upstream due to "no live upstreams".

The upstream in question has 2 servers defined with default settings
running over https (proxy_pass https://myupstream).

When this happens we see "no live upstreams while connecting to
upstream" in the nginx error log and just prior to this:
"peer closed connection in SSL handshake (54: Connection reset by peer)
while SSL handshaking to upstream".

We currently believe that the client closing the connection is causing
the upstream to have a failure counted against it.

With the defaults of max_fails=1 and fail_timeout=10 it only takes two
such closes within a 10 second window to take down all upstream nodes
resulting in the "no live upstreams" and hence all subsequent
connections for the next 10 seconds fail instantly with 502 bad gateway.

Does this explanation seem plausible? Is this a bug in nginx?

We're currently testing with max_fails=10 as a potential workaround.

Regards
Steve






FastCGI KeepAlive (2 replies)

What does it take to enable KeepAlive for FastCGI upstream servers?
I've set upstream { keepalive 99; } and location { fastcgi_keep_conn on; }, but nginx is still closing the connection after each request.
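For reference, keepalive to FastCGI backends requires fastcgi_pass to point at a named upstream block containing the keepalive directive, plus fastcgi_keep_conn on; the FastCGI server itself must also honor the keep-alive flag (some PHP-FPM setups close the connection regardless). A sketch with a hypothetical backend address:

```nginx
upstream php_backend {
    server 127.0.0.1:9000;
    keepalive 8;                    # idle connections kept open per worker
}

server {
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php_backend;   # must reference the named upstream
        fastcgi_keep_conn on;       # ask nginx not to close after each request
    }
}
```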

add_header (2 replies)

Hi folks,

As a precaution against CORS issues, I

add_header Access-Control-Allow-Origin *;

outside any location{} block in my server{} definition.

I have recently had a problem where I deliver .css via a CDN, and that CDN references font files also on the CDN; this was triggering a CORS failure, so the fonts weren't loaded.

The solution was to also add that header to the location{} block that I
use to manage the relevant static resources.

This seems rather strange. Is it supposed to work this way?

Cheers,

Steve

--
Steve Holdoway BSc(Hons) MIITP
https://www.greengecko.co.nz/
Linkedin: https://www.linkedin.com/in/steveholdoway
Skype: sholdowa
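Yes, it is supposed to work this way: add_header directives are inherited from the previous level only if there are no add_header directives defined on the current level, so any add_header inside a location block suppresses the server-level ones. A sketch:

```nginx
server {
    add_header Access-Control-Allow-Origin *;

    location /static/ {
        # the presence of any add_header here stops inheritance,
        # so the server-level header must be repeated
        add_header Cache-Control "public, max-age=604800";
        add_header Access-Control-Allow-Origin *;
    }
}
```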


session ticket key rotation (no replies)

Hello,

https://nginx.org/r/ssl_session_ticket_key mentions session ticket key rotation.

Which process reads these files, master or worker?
Must it be readable by root only, or by the nginx user?
Must I signal the nginx processes about the rotation? If yes, how? Via SIGHUP?
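For what it's worth: the key files are read while the configuration is being parsed, i.e. by the master process (normally running as root), so root-only permissions are fine; rotation takes effect on a configuration reload, e.g. SIGHUP to the master or nginx -s reload. A sketch of the usual two-key rotation scheme (file names hypothetical; each file holds 48 random bytes):

```nginx
# the first key encrypts new tickets; the second is still accepted
# for decrypting tickets issued before the last rotation
ssl_session_ticket_key /etc/nginx/ticket.new.key;
ssl_session_ticket_key /etc/nginx/ticket.old.key;
```

To rotate: move new over old, generate a fresh key (e.g. openssl rand 48 > ticket.new.key), then reload.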

thanks for clarification,
Andreas

Nginx not caching the content when the response is just a plain text string (no replies)

SUMMARY:
Nginx not caching the content when the response is just a plain text string

DETAILS:
Below shows the connection between entities:
[ client ] <---> [ serverX ] <---> [ serverY ]

serverX and serverY are both virtual nginx servers on the same machine.

serverX - the proxy handler is enabled using the proxy_cache directive
serverY - a custom handler is used which simply outputs a text string; it listens on a particular port, say 8111, and serverX forwards its output to the client

Below is the sample nginx.conf fragments for each of the server
###############################################################

#serverX (1st server)
server
{
location /cache1/
{
...
proxy_pass http://localhost:8111/custom/;
...
}

}

#serverY (2nd server)
server
{
location /custom/
{
#this is to invoke my custom handler for 2nd server
my_custom_module_directive;
}
}
###############################################################

I am trying to access the link "localhost/cache1/sample.txt", which hits serverX. serverX then finds that the file is not present, takes it as a MISS, and forwards the request, probably as 'localhost:8111/custom/sample.txt'. But since "/custom/" is used as the location filter, my custom module handler gets invoked, which simply puts a text string in the response body; this is then forwarded to the client. I am able to see the response in the HTML.

The issue is that every time I access the file, sample.txt is still reported in the cache log as a MISS (it should have been a HIT, since serverX should previously have saved the text string as sample.txt and served it directly).

Kindly let me know why serverX does not cache such a response from serverY (a text string in the response body).

Please let me know if you need further clarifications.

PS -
the cache has been enabled and verified (using the directives proxy_cache_path, proxy_cache, proxy_cache_valid)
Content-Type and other response headers from serverY are properly assigned.
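One thing worth checking: proxy_cache_valid only applies when the upstream response carries no headers that forbid caching, and responses with Set-Cookie are skipped by default, so a custom handler's output can silently disqualify itself. A diagnostic sketch for serverX (the directives are standard; which headers need ignoring is an assumption about what the custom handler might emit):

```nginx
location /cache1/ {
    proxy_cache my_zone;
    proxy_cache_valid 200 10m;
    # ignore upstream headers that would suppress caching, in case
    # the custom handler sends any of them
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    # expose MISS/HIT in the response for debugging
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://localhost:8111/custom/;
}
```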

[nginx logging module]$Request_time almost show 0.000 with proxy cache configuration (no replies)

Hi guys,

I've configured nginx to cache static content such as JPEG and PNG files. The problem: a request with MISS status shows a non-zero $request_time, but a HIT request shows a $request_time of 0.000.
Is this an nginx bug, and is there any way to resolve it?

My log format

```
log_format cache '$remote_addr - [$time_local] $upstream_cache_status $upstream_addr '
'"$request" $status $body_bytes_sent $request_time ["$upstream_response_time"] "$http_referer" '
'"$http_user_agent" "$host" "$server_port" "$connection"';
```

I read a topic about this, but it was not informative. I've tried setting timer_resolution to 0ms, but nothing changed.

Thanks

bcrypt (no replies)


Changing upstream response headers, before nginx caching decisions (no replies)

Hello everybody,


I have the following working scheme:

Client --> Nginx [caching] --> Apache [backend]


Sometimes the backend returns headers that I want to modify before the nginx caching engine decides how to treat them. One such example is when the backend returns a Vary header.


I want to achieve the following:


[Apache backend returns Vary: User-Agent, Header2] --> [Nginx modifies "Vary:" and removes User-Agent] --> [Nginx caching sees only 'Vary: Header2' (without User-Agent)] --> The final result is that the Nginx cache won't take 'User-Agent' into Vary considerations (no cache object per UA).

That's just an example. I would like to do such modification with other
headers also (for example Cache-Control).

Currently I'm already using Nginx Lua integration, but there is no hook
point before the caching engine.


I would be happy for any suggestions about achieving this scenario.
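If only the Vary header matters, stock nginx (1.7.7+) can already make the cache ignore it, though all-or-nothing: there is no built-in way to strip just User-Agent out of a Vary list without a second proxy layer. A sketch with a hypothetical upstream name:

```nginx
location / {
    proxy_pass http://backend;
    proxy_cache my_zone;
    proxy_ignore_headers Vary;   # cache as if Vary were absent
    proxy_hide_header Vary;      # optionally hide it from clients too
}
```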


Regards


nginx reverse proxy for M/Monit (not monit) (no replies)

Hello all -

Apologies if this has been asked and answered already; I can't find a way to search the mailing list, and I'm largely learning nginx the hard way.

I have an internet-facing nginx https server reverse proxying a number of internal apps on varying servers. In general, they run http internally.

So far, I've been able to get 11 of the 12 working this way. The last one (M/Monit) is proving to be difficult...

My config is below. When I go to https://nginx.serv.er/mmonit/, it comes back with a weird https://nginx.serv.er:2882/mmonit/ URL, making me think my config is just wrong. That said, the config came from the team at M/Monit....

Does anyone have any ideas they could share?

Thanks in advance.

### config start:

add_header Cache-Control public;
server_tokens off;
server {
include /etc/nginx/proxy.conf;
listen 443 ssl;
keepalive_timeout 70;
server_name nginx.serv.er;
ssl on;
ssl_certificate /etc/ssl/localcerts/autosigned.crt;
ssl_certificate_key /etc/ssl/localcerts/autosigned.key;
ssl_session_timeout 5m;
ssl_protocols SSLv3 TLSv1.2;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
add_header X-Frame-Options DENY;
root /var/www/html;
index index.html;
auth_basic "Access Restricted";
auth_basic_user_file "/etc/nginx/.htpasswd";
#limit_conn conn_limit_per_ip 20;
#limit_req zone=req_limit_per_ip burst=20 nodelay;

location /mmonit/ {
#proxy_set_header Host $host;
#proxy_set_header X-Real-IP $remote_addr;
#proxy_set_header X-Forwarded-Host $host:$server_port;
#proxy_set_header X-Forwarded-Server $host;
#proxy_set_header X-Forwarded-Proto $scheme;
#proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_pass http://mmonit.server.local:2882;
proxy_redirect http://mmonit.server.local:2882 /mmonit;
rewrite ^/mmonit/(.*) /$1 break;
proxy_cookie_path / /mmonit/;

# proxy_ignore_client_abort on;
# index index.csp
# auth_basic "Access Restricted";
# auth_basic_user_file "/etc/nginx/.htpasswd";
access_log /var/log/nginx/mmonit.access.log;
error_log /var/log/nginx/mmonit.error.log;
}

###Remainder of working config snipped
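The redirect to https://nginx.serv.er:2882/... suggests M/Monit is building absolute URLs from its own host:port rather than the public name. One common fix is to forward the original Host header and let a trailing-slash proxy_pass do the prefix stripping; this is a sketch, not a tested M/Monit recipe:

```nginx
location /mmonit/ {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    # trailing slashes make nginx strip /mmonit/ itself,
    # replacing the rewrite in the original config
    proxy_pass http://mmonit.server.local:2882/;
    proxy_redirect http://mmonit.server.local:2882/ /mmonit/;
    proxy_cookie_path / /mmonit/;
}
```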


How do I exclude one folder from a try_files? (no replies)

Hi guys,

I have a configuration that rewrites all requests for which a matching file cannot be found to a central index.php file:

location ~* "^/" {
    root /home/$username/www/;
    try_files $uri $uri/ /wiki/index.php$is_args$args;

    location ~ \.php$ {
        try_files $uri $uri/ /wiki/index.php$is_args$args;
        include /etc/nginx/fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm-$username.sock;
    }
}

Now, for one folder, wiki/images/, nothing should be rewritten at all. nginx should just try existing files, and if no file with the requested name is there, I want to get the nginx 404 error page.

I tried with

location ~* wiki/images/ {
    # Nothing here
}

but nginx is still changing the URL.

Can someone tell me how I can make nginx just deliver existing files from the wiki/images/ folder?

Cheers

Jörg
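A plain prefix location with the ^~ modifier may be what's missing: ^~ takes precedence over regex locations such as the location ~* "^/" above, so nothing inside it gets rewritten. A sketch reusing the poster's root:

```nginx
location ^~ /wiki/images/ {
    root /home/$username/www/;   # same document root as the main block
    try_files $uri =404;         # serve the file or return nginx's plain 404
}
```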

Help on proxy_ssl_trusted_certificate (no replies)

Hi,

I am trying to validate the upstream server by enabling proxy_ssl_trusted_certificate and proxy_ssl_verify. I've tried to build the PEM in many ways: just the CA, CA + intermediate, and CA + intermediate + server. But I still keep getting this error message.

2017/06/24 23:56:31 [error] 3512#0: *1 upstream SSL certificate verify error: (20:unable to get local issuer certificate) while SSL handshaking to upstream, client: 127.0.0.1, server: , request: "POST / HTTP/1.1", upstream: "https://203.105.61.190:443/", host: "localhost:8443"

Below are my config file and my current PEM file. I've commented various of these options in and out, but they still don't work.

The test website is https://test.paydollar.com. The pem file is created by downloading it through the browser.

The way I tested this is by issuing a curl request like this:

curl -X POST http://localhost:8443/x

Config File:
--------------------------------------------------
server {
    listen 8443;

    location / {
        # proxy_set_header Host $host;
        # proxy_set_header Host $remote_addr;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-Host $host;
        # proxy_set_header X-Forwarded-Server $host;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://test.paydollar.com;
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/nginx/conf.d/test2.pem;
        # proxy_ssl_name "test.paydollar.com";
        # proxy_ssl_verify_depth 2;
        # proxy_ssl_server_name on;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
-------------------------------------

PEM File:

-----BEGIN CERTIFICATE-----
MIIDxTCCAq2gAwIBAgIBADANBgkqhkiG9w0BAQsFADCBgzELMAkGA1UEBhMCVVMx
EDAOBgNVBAgTB0FyaXpvbmExEzARBgNVBAcTClNjb3R0c2RhbGUxGjAYBgNVBAoT
EUdvRGFkZHkuY29tLCBJbmMuMTEwLwYDVQQDEyhHbyBEYWRkeSBSb290IENlcnRp
ZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTA5MDkwMTAwMDAwMFoXDTM3MTIzMTIz
NTk1OVowgYMxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMRMwEQYDVQQH
EwpTY290dHNkYWxlMRowGAYDVQQKExFHb0RhZGR5LmNvbSwgSW5jLjExMC8GA1UE
AxMoR28gRGFkZHkgUm9vdCBDZXJ0aWZpY2F0ZSBBdXRob3JpdHkgLSBHMjCCASIw
DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAL9xYgjx+lk09xvJGKP3gElY6SKD
E6bFIEMBO4Tx5oVJnyfq9oQbTqC023CYxzIBsQU+B07u9PpPL1kwIuerGVZr4oAH
/PMWdYA5UXvl+TW2dE6pjYIT5LY/qQOD+qK+ihVqf94Lw7YZFAXK6sOoBJQ7Rnwy
DfMAZiLIjWltNowRGLfTshxgtDj6AozO091GB94KPutdfMh8+7ArU6SSYmlRJQVh
GkSBjCypQ5Yj36w6gZoOKcUcqeldHraenjAKOc7xiID7S13MMuyFYkMlNAJWJwGR
tDtwKj9useiciAF9n9T521NtYJ2/LOdYq7hfRvzOxBsDPAnrSTFcaUaz4EcCAwEA
AaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8BAf8EBAMCAQYwHQYDVR0OBBYE
FDqahQcQZyi27/a9BUFuIMGU2g/eMA0GCSqGSIb3DQEBCwUAA4IBAQCZ21151fmX
WWcDYfF+OwYxdS2hII5PZYe096acvNjpL9DbWu7PdIxztDhC2gV7+AJ1uP2lsdeu
9tfeE8tTEH6KRtGX+rcuKxGrkLAngPnon1rpN5+r5N9ss4UXnT3ZJE95kTXWXwTr
gIOrmgIttRD02JDHBHNA7XIloKmf7J6raBKZV8aPEjoJpL1E/QYVN8Gb5DKj7Tjo
2GTzLH4U/ALqn83/B2gX2yKQOC16jdFU8WnjXzPKej17CuPKf1855eJ1usV2GDPO
LPAvTK33sefOT6jEm0pUBsV/fdUID+Ic/n4XuKxe9tQWskMJDE32p2u0mYRlynqI
4uJEvlz36hz1
-----END CERTIFICATE-----



Thanks.

Alf
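Error 20 ("unable to get local issuer certificate") typically means the trust file lacks the issuing intermediate CA: proxy_ssl_trusted_certificate should contain the intermediate CA(s) and root concatenated in PEM form, never the server certificate itself (the PEM above holds only the GoDaddy root). Verifying against a hostname rather than the resolved IP also matters. A sketch (the chain file name is hypothetical):

```nginx
location / {
    proxy_pass https://test.paydollar.com;
    proxy_ssl_verify on;
    proxy_ssl_verify_depth 2;
    # intermediate CA(s) + root CA, concatenated PEM
    proxy_ssl_trusted_certificate /etc/nginx/conf.d/ca-chain.pem;
    proxy_ssl_name test.paydollar.com;   # verify the upstream's hostname
    proxy_ssl_server_name on;            # send SNI to the upstream
}
```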

$request_id not logged in the nginx logs (no replies)

Hello,
I have enabled request-id headers in nginx (which works as a reverse proxy) in the following way:

In nginx.conf my log format includes $request_id as follows:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent $request_id "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';


In the ghost configs I have headers like the following:
location / {
....
add_header X-Request-Id $request_id;
proxy_set_header X-Request-Id $request_id;

I would like to accomplish the following things:

1. In all logs and all requests (access, error, mod_security audit logs), the request_id should be logged (as it should be, but currently it does not work).
2. When I open the site, X-Request-Id should be set in the request headers, not only in the response headers. Currently I have the x-request-id header only in the response headers.
3. When I have been blocked by some mod_security rule with status 403, the header should be present and the id should be logged too. Currently, on a 403 response, I don't have the header in either the request or response headers (only on a normal request).

Can you please explain where I am wrong? Thank you in advance.
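A few points that may explain the symptoms: $request_id exists only in nginx 1.11.0+, the access_log directive must explicitly name the log_format that contains it, the error-log format is fixed and cannot carry it, and location-level add_header is skipped on 4xx/5xx responses unless the always flag (1.7.5+) is given. A sketch:

```nginx
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent $request_id "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;   # must name the format

location / {
    proxy_set_header X-Request-Id $request_id;   # make it reach the backend
    # "always" emits the header on 403 and other error responses too
    add_header X-Request-Id $request_id always;
}
```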

Help! 503 Service Temporarily Unavailable when trying to reverse-proxy wordpress (no replies)

Hello,

I am trying to use nginx to reverse-proxy a wordpress website. The
wordpress website works fine when being accessed without nginx in the
middle.

The problem I am having is that when accessing the home page (which is
about 50k of html alone), nginx responds with "503 Service Temporarily
Unavailable" responses. Using wireshark and tcpdump, it looks like what
happens is that the browser starts requesting elements of the html home
page (css, pictures, etc.) while the html home has not finished
downloading yet.

I can see using tcpdump that while the html home page is downloading,
nginx responds "503 Service Temporarily Unavailable" and does not
forward the subsequent requests to wordpress. The last item to be
requested by the browser is the favicon, which is served properly
because it is requested through the same TCP connection once the home
page has finished downloading. By contrast, the other elements are
requested using other TCP connections.

So it looks like nginx decides to respond 503 instead of forwarding requests to wordpress because another request is already being served.

I am using nginx 1.9.12; it is running in a docker container, and the host is Ubuntu 16.04. Please find the config files below. I can provide the logs as well if necessary. I tried Firefox and Chrome with the same results.

Thanks a lot for any help!

Fabrice



nginx.conf:

user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log debug;
pid /var/run/nginx.pid;

worker_rlimit_nofile 1024;

events {
worker_connections 1024;
}


http {
include /etc/nginx/mime.types;
default_type application/octet-stream;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

access_log /var/log/nginx/access.log main;

sendfile on;
#tcp_nopush on;

keepalive_timeout 65;

gzip off;

include /etc/nginx/conf.d/*.conf;
}
daemon off;



There is only one file in "conf.d/", which is named "default.conf":

# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along
# the scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along
# the server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete
# any Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript
application/json application/x-javascript text/xml application/xml
application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# incise.co
upstream incise.co {
## Can be connect with "bridge" network
# wp
server 172.17.0.4:80;
}
server {
server_name incise.co;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://incise.co;
}
}



"Client closed connection" when using nginx on Windows (no replies)

As part of a project I'm working on, I've been using nginx on Linux systems for a while.
Currently I'm trying to run it on a Windows system.
When I send a request to nginx (using a browser) it fails, and I get this error message in the error log:

2017/06/26 14:15:56 [info] 34092#16900: *1 client closed connection while waiting for request, client: xxx.xxx.xxx.xxx, server: xxx.xxx.xxx.xxx:4322

Note that this message appears immediately upon making the request, not after some timeout period.
Additionally, nginx is configured to just return a 400 error code, so it isn't trying to proxy to some external server.
I wasn't able to detect any problems in Wireshark: SYNs being exchanged, ACK returned, HTTP request, ACK, 2 more SYNs + ACK, then ~30 seconds later the connection closes (FIN).

The relevant config section:
server {
    listen xxx.xxx.xxx.xxx:4322;
    location / {
        return 400;
    }
}

Does anyone have an idea of why this happens?