Nginx Forum - Nginx Mailing List - English

Handling URL with the percentage character (no replies)

Hi,

Is there a way for a URL like

++++++++++++++++++++++++++++++++++++
http://domain.com/%product_cat%/myproduct
++++++++++++++++++++++++++++++++++++

to be passed as-is to an Apache proxy backend?

Currently, nginx throws a 400 Bad Request error (which is correct), but the
Apache httpd behind it runs a PHP script that can handle this. So is there a
way to tell nginx, in effect, "this will be handled someplace else, so just
pass on whatever you get to the upstream"?

Also if I encode the URL with

http://domain.com/%25product_cat%25/myproduct


That works too. So if the first option is not possible, is there a way to
rewrite all % to %25?


--
*Anoop P Alias*

Using the mirror module (4 replies)

Hi,

I’m having trouble using the new mirror module. I want to mirror incoming requests from nginx to two other upstream servers: 1) a production server, and 2) a staging server.

This is my config:

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        mirror /mirror;
        proxy_pass http://www.example.com;
    }

    location /mirror {
        internal;
        proxy_pass http://staging.example.com$request_uri;
    }
}

So, I request http://myserver.com (where nginx is hosted) and it successfully proxies me to www.example.com; however, I don’t see any requests hitting staging.example.com.

What could be the error?
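
For comparison, the pattern shown in the mirror module documentation looks like the sketch below (the upstream name here is illustrative). One thing to keep in mind while debugging: nginx discards the responses to mirror subrequests, so mirrored traffic is only ever visible in the staging server's own access log, never at the client.

upstream staging {
    server staging.example.com;
}

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        # fire-and-forget copy of every request to the /mirror location
        mirror /mirror;
        proxy_pass http://www.example.com;
    }

    location = /mirror {
        internal;
        # $request_uri carries the original URI and query string
        proxy_pass http://staging$request_uri;
    }
}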

$upstream_cache_status output definitions (2 replies)

I read this in the documentation:

$upstream_cache_status
keeps the status of accessing a response cache (0.8.3). The status can be either “MISS”, “BYPASS”, “EXPIRED”, “STALE”, “UPDATING”, “REVALIDATED”, or “HIT”.

But I’m somewhat at a loss as to what the meanings are, specifically HIT and MISS. The only two I really understand are STALE and BYPASS. Does HIT mean the request hit the upstream server, or that it hit the cached copy? I’m basically trying to map these statuses to determine whether a cached copy was served or a fresh copy was pulled from the upstream server, because I am getting results that don't match my settings and realized I might be using incorrect definitions for each status.
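
A note for anyone mapping these statuses: HIT means the response was served from a valid cached copy, and MISS means it was not in the cache, so it was fetched from the upstream (and possibly stored). A simple way to observe the behavior per request is to log the variable; a minimal sketch, with an illustrative format name and log path:

log_format cache_log '$remote_addr [$time_local] "$request" $status '
                     'cache=$upstream_cache_status';

access_log /var/log/nginx/cache.log cache_log;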

Thanks,
-mike



___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


Nginx 1.12.1 Memory Consumption (1 reply)

Hello,

I have encountered what I consider to be interesting behavior. We have
nginx 1.12.1 configured to do SSL termination as well as reverse proxying.
Whenever there is a traffic spike (from 300 req/s to 1,000 req/s, and from
3k to 20k active connections), there is a corresponding spike in nginx
memory consumption, in this case from 500 MB to 8 GB across 10 worker
processes. What is interesting is that nginx never seems to release this
memory after the traffic returns to normal. Is this expected? What is nginx
using this memory for? Is there a configuration that will rotate the
workers based on some metric in order to return memory to the system?

Requests per second:
https://www.dropbox.com/s/cl2yqdxgqk2fn89/Screenshot%202018-03-14%2012.38.10.png?dl=0

Active connections:
https://www.dropbox.com/s/s3j4oux77op3svo/Screenshot%202018-03-14%2012.44.14.png?dl=0

Total Nginx memory usage:
https://www.dropbox.com/s/ihp5zxky2mgd2hr/Screenshot%202018-03-14%2012.44.43.png?dl=0

Thanks,

Matt

redirect to a .php file with try_files if required .php file not found (no replies)

Hello,

I would like to redirect to /virtual_new.php with try_files if the
required .php file is not found. Is this the right way to do so?

location ~ \.php$ {

    if ($args ~ "netcat_files/") {
        expires 7d;
        add_header Cache-Control "public";
    }

    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    try_files $uri /virtual_new.php =404;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
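
A note on the semantics, as far as the try_files documentation describes them: all parameters except the last are checked for existence, and the first one found is used for further processing in the current location; only the last parameter is special, either returning a status code (the =404 form) or making an internal redirect that re-runs location matching. A sketch of the two variants:

# As written above: if neither the requested file nor /virtual_new.php
# exists on disk, return 404; if /virtual_new.php exists, it is used
# for FastCGI processing in this same location.
try_files $uri /virtual_new.php =404;

# Alternative: with /virtual_new.php as the last parameter, nginx makes
# an internal redirect to it (without first checking that it exists).
try_files $uri /virtual_new.php;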

Thank you!


nginx Connections (1 reply)

I want to limit the connections used by nginx from the CLI.

I know that we can set worker_connections to different values in the nginx conf file. But the worker_connections count includes not only client connections to the host; it also includes proxy connections to upstream servers.

If I want to give the user the flexibility to limit connections, the user will not know about proxy connections.

Is there anything in the nginx source code that tells whether a connection established by nginx goes to the proxy server or is a client connection to the host?

Can you please help me with this?

Let me know if more information is needed.

redirect to a .php file with try_files if required .php file not found (no replies)

PS:

Maybe I pasted too much of my config; basically, the important line is:

try_files $uri /virtual_new.php =404;

Does it look legitimate to you? Is it the proper way to redirect in such a
case, or should I rather use rewrite/redirect?

Thank you!




Proxy requests that return a 403 error - issue with sending headers (2 replies)

I hope I can explain this well enough to show what I’m doing wrong.

The problem I am trying to solve is that I am making proxy requests to a site that has IP restrictions. Nginx makes a request to another proxy/URL-rewrite server we use, which then makes the request to the web application. So what happens without any intervention is that the second proxy server makes its request with the nginx server's IP address. So we made some changes to the headers in nginx to pass the client IP, so that it would be forwarded through the second proxy, reach the web app, and satisfy the IP restriction.

I have a block in my global settings that adds these headers:

add_header X-Origin-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Server $hostname;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Origin-Forwarded-For $remote_addr;
proxy_set_header Accept-Encoding identity;

It’s really the X-Origin-Forwarded-For header that I care about. But what seems to be happening is that for any normal request, the client IP address is passed to the web app; yet when I request a page that returns the 403 error because of the IP restriction, none of the headers above are applied to the request, so the web app never receives my custom headers.

My question is whether there is some setting I am missing. I ask this on the assumption that nginx is making a request without sending the headers, getting the 403 error, after which all processing stops and I just get an access-denied page.

Any thoughts on how to handle this problem would be appreciated. I’ve tried numerous things, and the root of the problem seems to be that nginx is not making the full request. My next suspicion is that this global configuration is to blame by having “error” in the list:
proxy_cache_use_stale error timeout updating invalid_header http_500 http_502 http_503 http_504;
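
One documented behavior that can produce exactly this symptom (offered as a hypothesis, not a confirmed diagnosis): proxy_set_header directives are inherited from the enclosing level only if the current level defines none of its own, so a single proxy_set_header inside a more specific location silently discards the entire inherited set. Note also that add_header affects the response sent back to the client; only proxy_set_header changes what is sent upstream. A sketch of the pitfall (location and upstream names are illustrative):

location / {
    # inherits the global proxy_set_header block unchanged
    proxy_pass http://backend;
}

location /restricted/ {
    # defining ANY proxy_set_header here drops ALL inherited ones,
    # so every required header must be repeated at this level
    proxy_set_header Host $host;
    proxy_set_header X-Origin-Forwarded-For $remote_addr;
    proxy_pass http://backend;
}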

Thanks,
-mike

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


What is canonical filter workflow (no replies)

Hello.

I'm working on a zero-copy brotli compression filter. With zero-copy, I wrap the compressor output into a buffer and send it to the next filter in the chain.

The problem is that it is not clear how to properly wait until this buffer is released.

If I just keep asking the next filter to do its work until the buffer is released, it is possible to get into an infinite loop (see https://github.com/eustas/ngx_brotli/issues/9#issuecomment-373737792).

If I return NGX_AGAIN when the next filter is not able to consume more of the buffer data, the previous filter never gets a chance to continue compression (https://github.com/eustas/ngx_brotli/issues/9#issuecomment-371513645).
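
For context, the conventional shape of a body filter is roughly the sketch below (heavily abridged; everything except ngx_http_next_body_filter is illustrative). The usual convention is to propagate NGX_AGAIN upward rather than looping: nginx re-invokes the whole output filter chain from its writer handler once the client connection becomes writable again, which is when busy buffers drain.

/* minimal nginx body-filter skeleton */
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;
                                       /* saved in postconfiguration */

static ngx_int_t
ngx_http_foo_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    ngx_chain_t  *out;

    out = NULL;

    /* ... consume "in", wrap compressor output buffers into "out" ... */

    /* Pass whatever is ready downstream exactly once per call.  If the
     * next filter cannot take it all, it returns NGX_AGAIN; returning
     * that to the caller, instead of spinning here, lets nginx call
     * the chain again after the busy buffers have been flushed. */
    return ngx_http_next_body_filter(r, out);
}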

Fwd: [nginx] The gRPC proxy module. (no replies)

Hello,

Forwarding this for those who don't follow nginx-devel@.

We also published a blog post on this topic:

https://www.nginx.com/blog/nginx-1-13-10-grpc/

-------- Forwarded Message --------
Subject: [nginx] The gRPC proxy module.
Date: Sat, 17 Mar 2018 20:08:27 +0000
From: Maxim Dounin <mdounin@mdounin.ru>
Reply-To: nginx-devel@nginx.org
To: nginx-devel@nginx.org

details: http://hg.nginx.org/nginx/rev/2713b2dbf5bb
branches: changeset: 7233:2713b2dbf5bb
user: Maxim Dounin <mdounin@mdounin.ru>
date: Sat Mar 17 23:04:24 2018 +0300
description:
The gRPC proxy module.

The module allows passing requests to upstream gRPC servers. The
module is built by default as long as HTTP/2 support is compiled in.
Example configuration:

grpc_pass 127.0.0.1:9000;

Alternatively, the "grpc://" scheme can be used:

grpc_pass grpc://127.0.0.1:9000;

Keepalive support is available via the upstream keepalive module.
Note that keepalive connections won't currently work with grpc-go as
it fails to handle SETTINGS_HEADER_TABLE_SIZE.

To use with SSL:

grpc_pass grpcs://127.0.0.1:9000;

SSL connections use ALPN "h2" when available. At least grpc-go
works fine without ALPN, so if ALPN is not available we just
establish a connection without it.

Tested with grpc-c++ and grpc-go.

--
Maxim Konovalov
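
For incoming gRPC traffic the front-end server also needs an HTTP/2 listener in front of grpc_pass, so a fuller sketch looks like this (addresses are illustrative):

server {
    listen 9001 http2;

    location / {
        grpc_pass grpc://127.0.0.1:9000;
    }
}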


Upstream connections using rotating local IPs (no replies)

Hello everyone,

I have a server with 100+ IP addresses, and source IPs for outbound
connections to remote upstreams are rotated in iptables using the method
described at
https://serverfault.com/questions/490854/rotating-outgoing-ips-using-iptables

Is there a way to round-robin through local IPs for remote upstream
connections directly in nginx instead?
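
Not strict round-robin, but one nginx-native approach is to hash clients across the local addresses with split_clients and feed the result to proxy_bind, which accepts variables since 1.11.2. A sketch with illustrative addresses (extend the block to cover all 100+ IPs):

split_clients "$remote_addr$remote_port" $outbound_ip {
    25% 192.0.2.10;
    25% 192.0.2.11;
    25% 192.0.2.12;
    *   192.0.2.13;
}

server {
    location / {
        proxy_bind $outbound_ip;
        proxy_pass http://upstream.example.com;
    }
}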

Aborting malicious requests (5 replies)

Just a thought before I start crafting one: I am creating a location{} block with the intention of populating it with a ton of request patterns I want to terminate immediately with a 444 response. Before I start, I thought I’d ask to see if anyone has a really good one I can use as a base.

For example, we don’t serve PHP, so I’m starting with:

location ~* \.php {
    return 444;
}

Then I can just include this in all my server blocks so I can manage the aborts all in one place (see the include sketch at the end of this post). This alone reduces errors in the logs significantly. But now I will have to start adding in all the WordPress stuff, then phpMyAdmin, etc. I will end up with something like:

location ~* (\.php|wp-admin|my-admin) {
    return 444;
}

I imagine the pattern inside the parentheses is going to get pretty huge, which is why I thought I’d reach out to see if anyone has one already.
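
A sketch of that include pattern (the snippet path and the pattern list are illustrative, not a vetted blocklist):

# /etc/nginx/snippets/drop-malicious.conf
location ~* (\.php|wp-admin|my-admin|xmlrpc\.php|\.env) {
    # 444 is nginx-specific: close the connection without a response
    return 444;
}

Each server block then only needs: include snippets/drop-malicious.conf;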

Thanks,
-mike

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


nginx erroneously reports period character as illegal in request headers (1 reply)

Hello -

Nginx is flagging incoming headers as invalid even though they are RFC-compliant, simply because they use a '.' (a period) within the name.

As an example, I am curling a very basic proxy setup while tailing the error log.

The following is valid:

# curl -vvvH "a-b-c: 999" localhost:81/test/v01
* About to connect() to localhost port 81 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 81 (#0)
> GET /test/v01 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:81
> Accept: */*
> a-b-c: 999
>
< HTTP/1.1 204 No Content
< Server: nginx
< Date: Mon, 19 Mar 2018 22:58:35 GMT
< Content-Length: 0
< Connection: keep-alive
< Cache-Control: max-age=0, no-store
<
* Connection #0 to host localhost left intact
* Closing connection #0
2018/03/19 22:58:35 [info] 432544#432544: *526 client ::1 closed keepalive connection

However, a very similar request, but using a period within the header name, triggers the error:
[root@dtord01stg02p ~]# curl -vvvH "a.b.c: 999" localhost:81/test/v01
* About to connect() to localhost port 81 (#0)
* Trying ::1... connected
* Connected to localhost (::1) port 81 (#0)
> GET /test/v01 HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: localhost:81
> Accept: */*
> a.b.c: 999
>
2018/03/19 22:58:38 [info] 432544#432544: *528 client sent invalid header line: "a.b.c: 999" while reading client request headers, client: ::1, server: , request: "GET /test/v01 HTTP/1.1", host: "localhost:81"
< HTTP/1.1 204 No Content
< Server: nginx
< Date: Mon, 19 Mar 2018 22:58:38 GMT
< Content-Length: 0
< Connection: keep-alive
< Cache-Control: max-age=0, no-store
<
* Connection #0 to host localhost left intact
* Closing connection #0
2018/03/19 22:58:38 [info] 432544#432544: *528 client ::1 closed keepalive connection


I am aware that I can allow illegal requests, but standards compliance is a strict requirement in our enterprise.
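
For reference, the check in question is controlled by the ignore_invalid_headers directive (valid in the http and server contexts); by default nginx only accepts header names made of English letters, digits, hyphens, and optionally underscores, which is why the period is rejected. A minimal sketch of relaxing it for this server:

server {
    listen 81;

    # accept header names nginx would otherwise flag as invalid, e.g. "a.b.c"
    ignore_invalid_headers off;

    location /test/ {
        proxy_pass http://127.0.0.1:8080;   # illustrative backend
    }
}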

nginx-1.13.10 (no replies)

Changes with nginx 1.13.10 20 Mar 2018

*) Feature: the "set" parameter of the "include" SSI directive now
allows writing arbitrary responses to a variable; the
"subrequest_output_buffer_size" directive defines maximum response
size.

*) Feature: now nginx uses clock_gettime(CLOCK_MONOTONIC) if available,
to avoid timeouts being incorrectly triggered on system time changes.

*) Feature: the "escape=none" parameter of the "log_format" directive.
Thanks to Johannes Baiter and Calin Don.

*) Feature: the $ssl_preread_alpn_protocols variable in the
ngx_stream_ssl_preread_module.

*) Feature: the ngx_http_grpc_module.

*) Bugfix: in memory allocation error handling in the "geo" directive.

*) Bugfix: when using variables in the "auth_basic_user_file" directive
a null character might appear in logs.
Thanks to Vadim Filimonov.


--
Maxim Dounin
http://nginx.org/

Re: [nginx-announce] nginx-1.13.10 (no replies)

Hello Nginx users,

Now available: Nginx 1.13.10 for Windows
https://kevinworthington.com/nginxwin11310
(32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/
https://twitter.com/kworthington/


[PATCH] Send Connection: close for draining. (no replies)

Howdy,

First off, sorry if the C code is a bit ugly; it's been a while since I did
some C with nginx.

So I have some very long-running keep-alive connections, and although
keepalive_timeout helps, it is just "not enough" or not "fast enough" when
I need connections to drain. I looked at the headers-more plugin and then
noted that nginx was still adding the Connection header via
ngx_http_header_filter.

So I came up with a patch, feel free to use it.

The patch does two things:

- Adds a new variable ($disable_keep_alive_now).
- Disables keep-alive when we tell nginx to (i.e., it starts
sending Connection: close).

The flow is a little bit like this:

1. You set $disable_keep_alive_now in your nginx location, or wherever you want.
2. If the value is "yes" (set $disable_keep_alive_now "yes"), then
nginx will start sending the Connection: close header rather than
Connection: keep-alive.

For example:

location / {
    if (!-f "healthcheck/path") {
        set $disable_keep_alive_now "yes";
    }
}

That way, if you take your host out of service via a check on a specific
file, you just set the variable, nginx starts sending Connection: close,
and connections start draining faster.

I opted for a variable rather than a setting so that nginx still manages
the keep-alive timeouts, while I keep a "lever" I can pull dynamically to
change the Connection header.

When I googled for this, I noticed a few people asking how to do something
similar, so that's why I opted to publish the patch.

Ah, the URL is:
https://github.com/pfischermx/nginx-keepalive-disable-patch

Thanks
--
Pablo

Only compressed version of file on server , and supporting clients that don't send Accept-Encoding. (1 reply)

Hi,
We have only gzipped files stored on nginx and need to serve clients that:
A) support gzip encoding (> 99% of the clients); they send the
Accept-Encoding: gzip header.
B) don't support gzip encoding (< 1% of the clients); they don't send an
Accept-Encoding header.

There is ample CPU on the nginx servers to support clients of type B), but
I am unable to figure out a config or a reasonable script to help us serve
these clients.

Clients of type A) are served with the following config.
--- Working config that appends .gz in the try_files ----
location /compressed_files/ {
    add_header Content-Encoding "gzip";
    expires 48h;
    add_header Cache-Control private;
    try_files $uri.gz @lua_script_for_missing_file;
}


----- Not working config with gunzip on; likely because the gunzip filter
runs before add_header? -----

location /compressed_files/ {
    add_header Content-Encoding "gzip";
    expires 48h;
    add_header Cache-Control private;
    # gunzip fails to uncompress, likely because it does not notice
    # the add_header directive
    gunzip on;
    gzip_proxied any;
    try_files $uri.gz @lua_script_for_missing_file;
}


I would appreciate any pointers on how to do this; I may be missing some
obvious configuration for such a case. We did discuss keeping both the
unzipped and zipped versions on the server, but unfortunately that is
unlikely to happen.
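
One combination that may fit this case (a sketch; it assumes nginx was built with --with-http_gzip_static_module and --with-http_gunzip_module, and omits the missing-file fallback): gzip_static always serves the precompressed $uri.gz whether or not the client sent Accept-Encoding: gzip, and gunzip decompresses it on the fly for the clients that did not.

location /compressed_files/ {
    gzip_static always;   # always serve "$uri.gz" from disk when present
    gunzip on;            # decompress on the fly for clients without gzip support
    expires 48h;
    add_header Cache-Control private;
}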

Thanks,
Hemant

proxy_cache_key case sensitivity question (no replies)

The question is whether these are cached as different files:
http://myurl.html
http://MyUrl.html

I’m assuming both would be different cache entries, since the MD5 would be different for each, but ideally these would be the same cached file to prevent duplicates.

My question is about the proxy_cache_key: when it is generated, is it case-sensitive? We ran a quick test, and it did seem that changing the case in the URL created a new/different cached version of the page. If our test was accurate and this is how it works, is there a way to make the key used to generate the MD5 always use a lowercase string?

One possible solution is to install the module that changes strings to lower/upper case and then wrap that around the string used for the key. But before I go down that path, I wanted to find out whether I would be wasting my time.
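
One way to get the lowercasing without a third-party string module (a sketch; it assumes nginx was built with the embedded perl module, --with-http_perl_module, and the variable name is illustrative):

# http context
perl_set $uri_lowercase 'sub {
    my $r = shift;
    return lc($r->uri);   # lowercase the path; query arguments keep their case
}';

# server/location context
proxy_cache_key "$scheme$proxy_host$uri_lowercase$is_args$args";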


___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


Redirect question (no replies)

I’m wondering how to achieve this in the config

I have a URL like this:
http://example.com/people/mike

and I want to redirect to
https://www.othersite.com/users/mike

The problem at hand is switching "/people/" to "/users/" while keeping everything else, so that if I had
http://example.com/people/mike/education?page=1
I would still get redirected to
https://www.othersite.com/users/mike/education?page=1

I currently have redirects where I just append $request_uri to the new domain name, but in this case I need to alter $request_uri before I use it. So the question is: how should I approach making this sort of change?
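
A sketch of one standard approach: capture everything after /people/ and rewrite it onto the new host. Because the replacement contains no trailing "?", nginx appends the original query string automatically, so ?page=1 survives the redirect.

location /people/ {
    rewrite ^/people/(.*)$ https://www.othersite.com/users/$1 permanent;
}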

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


Unit 0.7 beta release (no replies)

Hello,

I'm glad to announce a new beta of NGINX Unit with a number of bugfixes
and Ruby/Rack support. Now you can easily run applications like Redmine
with Unit.

The full list of supported languages today is PHP, Python, Go, Perl, and Ruby.
More languages are coming.

Changes with Unit 0.7 22 Mar 2018

*) Feature: Ruby application module.

*) Bugfix: in discovering modules.

*) Bugfix: various race conditions on reconfiguration and during
shutting down.

*) Bugfix: tabs and trailing spaces were not allowed in header field
values.

*) Bugfix: a segmentation fault occurred in the Python module if
start_response() was called outside of the WSGI callable.

*) Bugfix: a segmentation fault might occur in the PHP module if there was
an error during initialization.


Binary Linux packages and Docker images are available here:

- Packages: https://unit.nginx.org/installation/#precompiled-packages
- Docker: https://hub.docker.com/r/nginx/unit/tags/

Packages and images for the new Ruby module will be built next week.

wbr, Valentin V. Bartenev
