Channel: Nginx Forum - Nginx Mailing List - English

domain only reachable with https:// in front (3 replies)

Hi,

I'm using nginx as a reverse proxy for Guacamole. I can only reach my domain as https://pstn.host or https://www.pstn.host; it won't work without https:// in front.

here's my sites-enabled/pstn.host https://pastebin.com/raw/dKiEi72q

any ideas what's wrong or missing?
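Without seeing the pastebin config it's hard to be sure, but a common cause of this symptom is a vhost that only listens on 443, so plain-http requests never reach it. A minimal sketch of a port-80 server that redirects to HTTPS (hostnames taken from the post) would be:

server {
    listen 80;
    server_name pstn.host www.pstn.host;

    # Send every plain-http request to the HTTPS site.
    return 301 https://$host$request_uri;
}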

thanks!

Moving SSL termination to the edge increased the instance of 502 errors (no replies)

Hi All,

We installed nginx as load balancer/failover in front of two upstream web servers.

At first SSL terminated at the web servers and nginx was configured as TCP passthrough on 443.

We rarely experienced 502s, and when we did, it was likely due to tuning/tweaking.

About a week ago we moved SSL termination to the edge. Since then we've been getting daily 502s. A small percentage - never reaching 1%. But with ½ million requests per day, we are starting to get complaints.

Stranger: the percentage seems to be rising.

I have more details and a pretty picture here:

https://serverfault.com/questions/885638/moving-ssl-termination-to-the-edge-increased-the-instance-of-502-errors


Any advice on how to squash those 502s? Should I be worried that nginx is leaking?
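One mitigation often suggested for sporadic 502s when nginx terminates TLS and proxies plain HTTP is to keep upstream connections alive and speak HTTP/1.1 to the backends. A hedged sketch, with placeholder addresses rather than anything taken from the post:

upstream backends {
    server 10.0.0.11:80;   # placeholder backend addresses
    server 10.0.0.12:80;
    keepalive 64;          # pool of idle keepalive connections per worker
}

server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/cert.pem;   # hypothetical paths
    ssl_certificate_key /etc/nginx/cert.key;

    location / {
        proxy_pass http://backends;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
    }
}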

Installing module in Nginx with source, after installing from Repository (no replies)

I have installed the mainline version of Nginx using the Launchpad PPA repository;
Via: add-apt-repository ppa:nginx/development

I would like to install an additional module into it; however, I need to install it from source.

If I were to download the matching version from the Nginx site to my server and build the module from source, will this work OK, or will it mess up what has already been installed?
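For reference, a module built as a dynamic module against the matching nginx source (same version, same ./configure arguments as reported by nginx -V, plus --add-dynamic-module=/path/to/module, then "make modules") can usually be loaded into the packaged binary without reinstalling nginx. A hedged sketch with a hypothetical module name:

# At the top of nginx.conf, outside any block.
# ngx_http_example_module.so is a hypothetical name; the real file is the
# one produced by "make modules" and copied into the nginx modules directory.
load_module modules/ngx_http_example_module.so;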

Thanks

Overridable header values (with map?) (no replies)

We're using nginx for several different types of servers, but we're trying
to unify the configuration to minimize shared code. One stumbling block is
headers. For most requests, we want to add a set of standard headers:

# headers.conf:

add_header Cache-Control $cache_control;
add_header X-Robots-Tag $robots_tag always;
add_header X-Frame-Options $frame_options;

add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options nosniff;
# several more...

Many of the headers are the same for all requests, but the first three are
tweaked for specific resources or target servers.

The first approach I took was to define two files:

# header-vars.conf:

# Values for the $cache_control header. By default, we use $one_day.
set $no_cache "max-age=0, no-store, no-cache, must-revalidate";
set $one_day "public, max-age=86400";
set $one_year "public, max-age=31536000";
set $cache_control $one_day;

# To allow robots, override this variable using `set $robots_tag all;`.
set $robots_tag "noindex, nofollow, nosnippet, noarchive";
set $frame_options "SAMEORIGIN";


....and the headers.conf above. Then, at appropriate contexts (either a
server or location block), different servers would include the files as
follows:

include header-vars.conf;
include headers.conf;

That would give them all of our defaults. If the specific application or
context needs to tweak the caching and robots, it might do something like
this:

include header-vars.conf;
set $cache_control $no_cache;
set $robots_tag all;
include headers.conf;


This was fine, but I recently came across an interesting use of map
https://serverfault.com/a/598106/405305 that I thought I could generalize
to simplify this pattern. My idea was to do something like:

# header-vars.conf:

map $robots $robots_tag {

# Disallowed
default "noindex, nofollow, nosnippet, noarchive";
off "noindex, nofollow, nosnippet, noarchive";

# Allowed
on all;
}

map $frames $frame_options {

# Allow in frames only from the same origin (URL).
default "SAMEORIGIN";

# This isn't a real value, but it will cause the header to be ignored.
allow "ALLOW";
}

map $cache $cache_control {

# no caching
off "max-age=0, no-store, no-cache, must-revalidate";

# one day
default "public, max-age=86400";
1d "public, max-age=86400";

# one year
1y "public, max-age=31536000";
}


I thought this would allow me to include both header-vars.conf and
headers.conf in the http block. Then, within the server or location blocks,
I wouldn't have to do anything to get the defaults. Or, to tweak robots and
caching:

set $cache off;
set $robots on;

Since the variables wouldn't be evaluated till the headers were actually
added, I thought this would work well and simplify things a lot.
Unfortunately, I was mistaken that I would be able to use an undefined
variable in the first position of a map directive (I thought it would just
be empty):

unknown "robots" variable

Of course, I can't set a default value for that variable since I'm
including header-vars.conf at the http level. I'd rather not need to
include defaults in every server (there are many).

Does anyone have any suggestions for how I can better solve this problem?
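One hedged idea: key the first map on a variable that always exists, such as $host, so the default lives in the http block and only the exceptions have to be listed per hostname (the hostnames below are hypothetical). The same pattern would apply to $cache and $frames:

# header-vars.conf (http context)
map $host $robots {
    default         off;   # most servers disallow robots
    app.example.com on;    # hypothetical exception
}

map $robots $robots_tag {
    default "noindex, nofollow, nosnippet, noarchive";
    on      all;
}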

Thanks!

How to control the total requests in Nginx (4 replies)

Hi guys,

I want to use nginx to protect my system, to allow a maximum of 2000 requests sent to my service (http location).
The configs below only limit per client IP, not the total number of requests.
##########method 1##########

limit_conn_zone $binary_remote_addr zone=addr:10m;
server {
location /mylocation/ {
limit_conn addr 2;
proxy_pass http://my_server/mylocation/;
proxy_set_header Host $host:$server_port;
}
}

##########method 2##########

limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;
server {
location /mylocation/ {
limit_req zone=one burst=5 nodelay;
proxy_pass http://my_server/mylocation/;
proxy_set_header Host $host:$server_port;
}
}



How can I do it?
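A hedged sketch: keying the zone on a value that is identical for every request, such as $server_name (as in the limit_conn_zone documentation), counts all clients together instead of per IP. Note that limit_conn caps concurrent connections; for a request rate, limit_req_zone can use the same kind of shared key.

limit_conn_zone $server_name zone=perserver:10m;

server {
    location /mylocation/ {
        limit_conn perserver 2000;   # at most 2000 concurrent requests in total
        proxy_pass http://my_server/mylocation/;
        proxy_set_header Host $host:$server_port;
    }
}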




Tong

Multiple Cache Manager Processes or Threads (2 replies)

Hello,

I have an issue with the cache manager and the way I use it.
When I configure 2 different cache zones, one very huge and one very fast, the cache manager can't delete files quickly enough, which leads to a full partition.

For example:
proxy_cache_path /mnt/hdd/cache levels=1:2:2 keys_zone=cache_hdd:40g max_size=40000g inactive=5d;
proxy_cache_path /mnt/ram/cache levels=1:2 keys_zone=cache_ram:300m max_size=300g inactive=1h;

At the beginning, the RAM cache is correctly purged at around 40GB (+/- input bandwidth * 10 sec), but when the HDD cache begins to fill up, the RAM cache grows over 50GB. I think the cache manager is held back by the slowness of the filesystem/hardware.

I can work around this by running two nginx instances on the same machine, one configured as the RAM cache and the other as the HDD cache, but I wonder if it would be possible to create a cache manager process for each proxy_cache_path directive.

Thanks in advance.

Plan to support proxy protocol v2? (no replies)

Hi,

AWS ELBv2 only works with proxy protocol v2. Is there any plan to support this version in nginx soon?

regards

Return 408 to ELB (1 reply)

I am running into an issue, that I believe was documented here (https://trac.nginx.org/nginx/ticket/1005).

Essentially, I am seeing alerts as our ELBs are sending 504s back to clients with no backend information attached, but when I look through our nginx request logs, I see that we "should have" sent them a 408. However, it appears that nginx is just closing the connection.

We are using keep-alive connections, and I was looking at using the reset_timedout_connection parameter, but based on the documentation it doesn't seem like this will help.

Is there a way to actually send a 408 back to the client using nginx and ELBs?

lua code in log_by_lua_file not executed when the upstream server is down (no replies)

The nginx.conf is as below:

upstream my_server {
server localhost:8095;
keepalive 2000;
}

location /private/rush2purchase/ {
limit_conn addr 20;
proxy_pass http://my_server/private/rush2purchase/;
proxy_set_header Host $host:$server_port;
rewrite_by_lua_file D:/tmp/lua/draw_r.lua;
log_by_lua_file D:/tmp/lua/draw_decr.lua;
}

When I send a request to http://localhost/private/rush2purchase/, it works fine when the upstream server is up,
but when I shut down the upstream server (port 8095), I find that the code in log_by_lua_file (draw_decr.lua) is not executed.

info in nginx access.log:
127.0.0.1 - - [01/Dec/2017:21:03:20 +0800] "GET /private/rush2purchase/ HTTP/1.1" 504 558 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3236.0 Safari/537.36"

error message in nginx error.log:
2017/12/01 21:02:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://[::1]:8095/private/rush2purchase/", host: "localhost"
2017/12/01 21:03:20 [error] 35292#42868: *3298 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while connecting to upstream, client: 127.0.0.1, server: localhost, request: "GET /private/rush2purchase/ HTTP/1.1", upstream: "http://127.0.0.1:8095/private/rush2purchase/", host: "localhost"

How to fix it?
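One thing visible in the error log: "localhost" resolves to both [::1] and 127.0.0.1, so nginx waits out two full upstream timeouts before failing. Pinning the upstream to a single address avoids the double timeout; this is a hedged side note and may not by itself change whether log_by_lua_file runs.

upstream my_server {
    server 127.0.0.1:8095;   # single address instead of "localhost"
    keepalive 2000;
}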




Tong

debian package nginx-naxsi (no replies)

Hi,

Debian Wheezy had a package called nginx-naxsi that had the NAXSI WAF compiled in. I've not seen this in recent versions of Debian.

Does this still exist or was this moved to another repository?

Kind regards, Sophie





location ~* .(a|b|c)$ {} caused an error (2 replies)

Hi,

When I put this location block for case-insensitive matching into the vhost, it produced an error.

nginx: [emerg] location "/" is outside location ".(jpg|jpeg|png|gif|ico|css|js)$" in /etc/nginx/sites-enabled/example.conf:13

My vhost has this:

server {

location ~* .(jpg|jpeg|png|gif|ico|css|js)$ {
expires 65d;

location / {
allow all;
limit_req zone=app burst=5;
limit_rate 64k;
}

}

Did I misread the http://nginx.org/en/docs/http/ngx_http_core_module.html#location docs?

Quote "A location can either be defined by a prefix string, or by a regular expression. Regular expressions are specified with the preceding “~*” modifier (for case-insensitive matching),"


Thanks, Sophie.





Re: memory leak in django app? (no replies)

Hi Antonis,

My development server appears unaffected by this problem. Plus I can get
heap stats using guppy, which is pretty cool. :)

Both my production and development servers have __debug__ enabled in
Python 2.7.13.

However when using ab -c 100 to benchmark my nginx server I get:

SSL handshake failed (1).

Do you have any ideas how to configure nginx to allow a minimum of 100
concurrent connections when using SSL encryption?

I have worker_connections set to 512 in my nginx.conf

Regards,

Etienne


Le 2017-12-06 à 12:17, Antonis Christofides a écrit :
> Does this happen only in production? What about when you run a development
> server? What is the memory usage of your development server?
>
> Antonis Christofides
> http://djangodeployment.com
>
>
> On 2017-12-06 15:05, Etienne Robillard wrote:
>> Hi Antonis,
>>
>> Thank you for your reply. I installed the htop utility and found that 2 of my
>> 4 uWSGI processes are using 882M (42.7%) of resident memory each. These two
>> processes take about 85% of the available RAM! That can explain why
>> I get "out of memory" errors when no more memory is available for sshd.
>>
>> I tried to debug memory allocation with guppy following instructions here:
>> https://www.toofishes.net/blog/using-guppy-debug-django-memory-leaks/
>>
>> But I get "hp.Nothing" when I attempt to get the heap stats from the master
>> uWSGI process.
>>
>> Example:
>>>>> hp.setref()
>>>>> hp.heap()
>> hp.Nothing
>>
>> Any help would be appreciated!
>>
>> Etienne
>>
>>
>> Le 2017-12-06 à 07:53, Antonis Christofides a écrit :
>>> Hello,
>>>
>>> the amount of memory you need depends on what Django does and how many workers
>>> (instances of Django) you run (which usually depends on how many requests you
>>> are getting and how I/O intensive your Django application is). For many
>>> applications, 512 MB is enough.
>>>
>>> Why are you worried? The only symptom you describe is that your free memory is
>>> decreasing. This is absolutely normal. The operating system doesn't like RAM
>>> that is sitting down doing nothing, so it will do its best to make free RAM
>>> nearly zero. Whenever there's much RAM available, it uses more for its caches.
>>>
>>> How much memory is your Django app consuming? You can find out by executing
>>> "top" and pressing "M" to sort by memory usage.
>>>
>>> Regards,
>>>
>>> Antonis
>>>
>>> Antonis Christofides
>>> http://djangodeployment.com
>>>
>>> On 2017-12-06 14:04, Etienne Robillard wrote:
>>>> Hi all,
>>>>
>>>> I'm struggling to understand how django/python may allocate and unallocate
>>>> memory when used with uWSGI.
>>>>
>>>> I have a Debian system running Python 2.7 and uwsgi with 2GB of RAM and 2 CPUs.
>>>>
>>>> Is that enough RAM memory for a uWSGI/Gevent based WSGI app running Django?
>>>>
>>>> I'm running uWSGI with the --gevent switch in order to allow cooperative
>>>> multithreading, but my free RAM memory is always decreasing when nginx is
>>>> running.
>>>>
>>>> How can I debug memory allocation in a Django/uWSGI app?
>>>>
>>>> I also defined "gc.enable()" in my sitecustomize.py to allow garbage
>>>> collection, but it does not appear to make any difference.
>>>>
>>>> Can you recommend any libraries to debug/profile memory allocation in Python
>>>> 2.7 ?
>>>>
>>>> Is Django more memory efficient with --pymalloc or by using the default linux
>>>> malloc() ?
>>>>
>>>> Thank you in advance,
>>>>
>>>> Etienne
>>>>

--
Etienne Robillard
tkadm30@yandex.com
https://www.isotopesoftware.ca/


simple reverse web proxy need a little help (no replies)

I'm new to nginx but needed a solution like this. It's very cool but I'm a newbie with a small problem.

I'm using nginx as a simple reverse web proxy. I have 3 domains on 2 servers, and I'm using 3 files in sites-enabled called your-vhost1.conf, your-vhost2.conf, and so on. The standalone domain is vhost1. The problem is that one of the domains on the server that has two isn't resolving correctly from the outside world. It only resolves correctly when you use just http://domain.com; if you use http://www.domain.com it resolves to the vhost1 domain. I tried shuffling the vhost1, 2, & 3 files to different domains, but that breaks it.

A bit more info: I've got an A record in DNS for www for the domain in question. It is hosted on a Windows server with IIS7, and I also have www in the site bindings. This server was standalone before we added the 3rd domain on the second server. It did resolve correctly before we added the nginx server, so I'm fairly certain I just don't have the syntax right. The standalone server is Debian with a WordPress site. Here are the vhost files:

VHOST1 (Standalone)

server {

server_name domain1.com;

set $upstream 192.168.7.8;

location / {

proxy_pass_header Authorization;
proxy_pass http://domain1.com;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;

}
}


VHOST2

server {

server_name domain2.com;

set $upstream 192.168.7.254;

location / {

proxy_pass_header Authorization;
proxy_pass http://www.domain2.com;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;

}
}

VHOST3

server {

server_name domain3.com;

set $upstream 192.168.7.254;

location / {

proxy_pass_header Authorization;
proxy_pass http://domain3.com;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-forwarded-For $proxy_add_x_forwarded_for;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_buffering off;
client_max_body_size 0;
proxy_read_timeout 36000s;
proxy_redirect off;

}
}
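A hedged guess from the configs above: requests for www.domain2.com and www.domain3.com don't match any server_name, so nginx falls back to the default server, which is the first one defined (vhost1). Listing the www names as well should keep those requests on the right vhost, for example (backend address taken from the otherwise unused $upstream variable in the post):

server {

    server_name domain2.com www.domain2.com;

    location / {
        proxy_pass http://192.168.7.254;
        proxy_set_header Host $host;
    }
}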

Can Nginx used as a reverse proxy send HTTP(S) requests through a forward proxy? (no replies)

Hi,

I'm wondering if it's possible to do what's described in the mail subject?
I've had a look through the Internet and the docs but haven't been able to figure
it out. The question is similar to the one asked here:
https://stackoverflow.com/questions/45900356/how-to-configure-nginx-as-reverse-proxy-for-the-site-which-is-behind-squid-prox,
but that thread doesn't provide an answer.
I've been able to do this with Apache and its ProxyRemote directive, but I
can't figure out if this is doable with Nginx.


Thanks,

Nicolas


Can I use another function like more_set_input_headers (no replies)

hi everybody,

I have a question about more_set_input_headers. I use the default dist
package from Fedora 27, but I am missing the headers-more module. I would
like to convert these two directives:

more_set_headers -s 401 'WWW-Authenticate: Basic
realm="server.domain.tld"';
more_set_input_headers 'Authorization: $http_authorization';

I have tried with proxy_set_header, but it's not working.

Does anyone have an idea how I can set this up using only built-in nginx directives?

Best wishes

Alex


safari websocket using different ssl_session_id (no replies)

I am trying to use a Cookie along with ssl_session_id to identify
connections. This seems to work fine in Chrome and Firefox, but Safari
looks like it uses a different ssl_session_id when it makes a
websocket connection. Is there something else I can use to uniquely
tie the cookie to a connection?

Multiple HTTP2 reverse proxy support ? (no replies)

Hello


I'm trying to configure a reverse proxy for multiple domains with a single nginx server.

Is that possible?

e.g.

<Clients> --[HTTP/2]--> <Nginx> --[HTTP/1.1]--> <OriginServer>
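For what it's worth, nginx can speak HTTP/2 towards clients on each listen socket while proxy_pass to the origin stays on HTTP/1.1 (nginx does not proxy to upstreams over HTTP/2), and multiple domains are handled with one server block each. A minimal hedged sketch with hypothetical names and paths:

server {
    listen 443 ssl http2;              # HTTP/2 towards clients
    server_name site1.example.com;

    ssl_certificate     /etc/ssl/site1.crt;   # hypothetical paths
    ssl_certificate_key /etc/ssl/site1.key;

    location / {
        proxy_pass http://origin1.internal;   # HTTP/1.1 towards the origin
        proxy_http_version 1.1;
        proxy_set_header Host $host;
    }
}

# ...one similar server block per additional domain.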

Problem using `proxy_redirect` to rewrite the same Location 2-or-more times? (no replies)

Hi! I am using Nginx 1.12.2 in a large and complex reverse-proxy
configuration, with lots of content-rewriting (subs_filter, lua, ...).

Problem:

- the client connects to my proxy
- my proxy forwards the request to the origin
- the origin responds with a 302:

"Location: http://www.foo.com/" ...or
"Location: http://www.bar.com/" ...or
"Location: http://www.baz.com/"

....and I have the following dynamic rewriting that I want to perform:

* foo.com -> ding.org
* bar.com -> dong.org
* baz.com -> dell.org
* ...more?

So I have the following three (or more) rules:

http {
# ...etc
proxy_redirect ~*^(.*?)\\b{foo\\.com}\\b(.*)$ $1ding.org$2;
proxy_redirect ~*^(.*?)\\b{bar\\.com}\\b(.*)$ $1dong.org$2;
proxy_redirect ~*^(.*?)\\b{baz\\.com}\\b(.*)$ $1dell.org$2;
# ...more?

....and these work well most of the time.

However: these do not function as-desired when the origin produces a 302
which mentions two or more *different* rewritable site names:

"Location: http://www.foo.com/?next=%2F%2Fcdn.baz.com%2F" <-- INPUT

....which I *want* to be rewritten as:

"Location: http://www.ding.org/?next=%2F%2Fcdn.dell.org%2F" <-- WANTED

....but instead I get:

"Location: http://www.ding.org/?next=%2F%2Fcdn.baz.com%2F" <-- ACTUAL

i.e. the location is converted from `foo.com` to `ding.org`, but no further
processing happens to convert `baz.com` in this example.


The issue seems to be that `proxy_redirect` stops after executing the first
rule that succeeds?

Is this intended behaviour, please? And is there a way to achieve what I
want, e.g. via options-flags or Lua? I am making heavy use of subs_filter,
proxy_cookie_domain, etc.
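Since the setup already uses the Lua module, one hedged possibility is to post-process the Location header in a header filter, where every substitution can be applied instead of stopping at the first matching rule. A sketch (the domain pairs are taken from the post; header_filter_by_lua_block is from lua-nginx-module, not stock nginx):

header_filter_by_lua_block {
    local loc = ngx.header["Location"]
    if loc then
        -- apply every mapping, not just the first one that matches
        loc = loc:gsub("foo%.com", "ding.org")
        loc = loc:gsub("bar%.com", "dong.org")
        loc = loc:gsub("baz%.com", "dell.org")
        ngx.header["Location"] = loc
    end
}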

I've put one of my Nginx configuration files at
https://gist.github.com/alecmuffett/f5cd8abcf161dbdaffd7a81ed8a088b9 if
you'd like to see the issue in context.

Thanks!

- alec

--
http://dropsafe.crypticide.com/aboutalecm

-e vs -f and -d (1 reply)

Hi,

Considering that I don't have symbolic links, why do these configs work differently?

===config one===
if (!-f $request_filename) {
rewrite ^/(.*)$ /init.php;
}
if (!-d $request_filename) {
rewrite ^/(.*)$ /init.php;
}
===config one===

This one above works, rewrite happens.

When changed to this, it stops working, with all other lines left intact:
===config two===
if (!-e $request_filename) {
rewrite ^/(.*)$ /init.php;
}
===config two===

This is inside the location / {}.
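As an aside, the same intent (serve the file or directory if it exists, otherwise hand the request to init.php) is commonly written without if at all, which sidesteps the -e/-f/-d question. A hedged sketch:

location / {
    try_files $uri $uri/ /init.php;
}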

Thanks.

"sub_filter_once off" not working as advertised? (2 replies)
