Channel: Nginx Forum - Nginx Mailing List - English

keepalive_requests default 100 (1 reply)

Does anybody have any history/rationale on why keepalive_requests
uses a default of 100 requests in nginx? The same default is also used in
Apache, but it seems very small by today's standards.

http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
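
For anyone who just wants to raise it, this is a one-line change at the http, server, or location level (the value below is only an illustration):

```nginx
http {
    # allow many more requests per keep-alive connection than the default 100
    keepalive_requests 1000;
    keepalive_timeout  65;
}
```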

Regards,
Tolga
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

MP4 progressive download needs hint tracks in MP4 file? (no replies)

I'm wondering whether RTP hint tracks in MP4 files are necessary to provide progressive download functionality with my NGINX server configuration. I read somewhere (I can't remember where) that these additional tracks help the server with seek operations (byte-range requests).
Thanks for your feedback, Hannes

mp4 recording using nginx rtmp module (no replies)

We are using the Wowza streaming engine to record live TV shows, which produces recorded output in MP4 format. We are evaluating the Nginx RTMP module to do the same MP4 recording; however, it appears that this module records only in FLV format. Is there any way to record a live stream directly in MP4 format?

Re: combining map (3 replies)

If you are going to use it inside the proxy_no_cache directive, you can
combine proxy_cache_methods (POST is not included by default) with
'proxy_no_cache $query_string$cookie__mcnc'.
The latter will skip the cache whenever there is a query string or the
cookie has a value set.
So basically, it looks like you can avoid using maps in this case.
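
For illustration, the combination described above might look like this (the STATIC cache zone and the backend upstream are placeholders):

```nginx
location / {
    proxy_pass  http://backend;          # hypothetical upstream
    proxy_cache STATIC;                  # hypothetical cache zone

    # GET/HEAD is already the default, so POST responses are never cached:
    proxy_cache_methods GET HEAD;

    # skip caching (and the cache lookup) whenever there is a query string
    # or the _mcnc cookie carries a value:
    proxy_no_cache     $query_string$cookie__mcnc;
    proxy_cache_bypass $query_string$cookie__mcnc;
}
```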

On 09.03.2017 10:01, Anoop Alias wrote:
> Hi,
>
> I have 3 maps defined
> ############################
> map $request_method $requestnocache {
> default 0;
> POST 1;
> }
>
> map $query_string $querystringnc {
> default 1;
> "" 0;
> }
>
> map $http_cookie $mccookienocache {
> default 0;
> _mcnc 1;
> }
> ###############################
>
> I need to create a single variable that is 1 if either of the 3 above
> is 1 and 0 if all are 0. Will the following be enough
>
> map "$requestnocache$querystringnc$mccookienocache" {
> default 0;
> ~1 1;
> }
>
>
>
> Thanks,
> --
> *Anoop P Alias*
>
>
>

Conflict between form-input-nginx-module and nginx-auth-request-module? (no replies)

Hi,
I was able to get both form-input-nginx-module and nginx-auth-request-module to work fine, individually, with nginx-1.9.9. Putting them together, HTTP POST requests just time out (I never get any response back). Has anyone had any experience using both of them in the same location block?
Thanks.
Yongtao

Fastcgi_cache permissions (no replies)

Hello, I was searching for an answer to this question quite a bit, but unfortunately I was not able to find one, so any help is much appreciated.

The issue is the following: I have enabled fastcgi_cache for my server and noticed that the cache files have very restrictive permissions, 700 to be precise. I need to change those permissions, but I am not able to do so. I do not see any configuration variable responsible for this, nor does the nginx process use the umask value when generating the permissions for those files. If someone has an idea how I can make nginx use custom permissions for the cache, that would be great.

Thanks a lot.

Regards.

configuration nginx server block [virtual host] with Ipv6. (2 replies)

Hi, I have installed nginx + php-fpm (php5.4 / php5.6) and I'm trying to set everything up for IPv6 on CentOS 7.3, installed from the official nginx repo:

[/etc/nginx/nginx.conf]:

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;

keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}

[/etc/nginx/conf.d/default.conf]:
server {
listen [::]:80;

server_name localhost;

location ~ \.php$ {
root html;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
try_files $uri =404;
fastcgi_pass [::]:9056;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
include fastcgi_params;
}

location / {
root /usr/share/nginx/html;
index index.php index.html index.htm;
}

error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/html;
}

}


[domain1.conf]:

# create new
server {

listen [::]:80;

root /home/domain1/public_html;
index index.php index.html index.htm;

server_name domain1 www.domain1;

location / {
try_files $uri $uri/ =404;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass [::]:9056;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 4k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;

}
}

[subdomain.domain1.conf]:

# create new
server {

listen [::]:80;

root /home/domain1/public_html/subdomain;
index index.php index.html index.htm;

server_name subdomain.domain1 www.subdomain.domain1;

location / {
try_files $uri $uri/ =404;
}

location ~ \.php$ {
try_files $uri =404;
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass [::]:9056;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
include /etc/nginx/fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_script_name;
fastcgi_buffer_size 128k;
fastcgi_buffers 256 4k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;

}
}

If in [domain1.conf] I change to:

listen 80;
fastcgi_pass 127.0.0.1:9056;

it works perfectly. What am I doing wrong with the IPv6 configuration?
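
One thing worth checking: since nginx 1.3.4 a `[::]` listener is IPv6-only by default, and php-fpm itself has to listen on an IPv6 address for an IPv6 fastcgi_pass to be reachable. A dual-stack sketch (addresses are illustrative):

```nginx
server {
    listen 80;          # IPv4
    listen [::]:80;     # IPv6 (ipv6only is on by default since nginx 1.3.4)

    location ~ \.php$ {
        try_files $uri =404;
        # php-fpm must also listen on an IPv6 address, e.g. listen = [::1]:9056
        fastcgi_pass [::1]:9056;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```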

thank you in advance for your answers,

Wilmer.

Re: map directive doubt (no replies)

On Thu, Mar 09, 2017 at 11:01:38AM +0530, Anoop Alias wrote:

Hi there,

> Just have a doubt in map directive
>
> map $http_user_agent $upstreamname {
> default desktop;
> ~(iPhone|Android) mobile;
> }
>
> is correct ?

That doesn't look too hard to test.

==
server {
listen 8880;
return 200 "user agent = $http_user_agent; map = $upstreamname\n";
}
==

$ curl -H User-Agent:xxAndroidxx http://localhost:8880/x

f
--
Francis Daly francis@daoine.org

proxy_cache_use_stale based on IP address (no replies)

Hi!

Is it possible to send a stale version of the website based on the IP address of the client?

Thank you in advance

upload xml file (no replies)

Hello,

I am new with web servers and nginx.
I would like to ask whether nginx supports XML, and what it means to
upload XML to a web server.
Does the server just keep the XML as a file in some directory, or does it
parse the XML file and perform some actions?
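
As a rough sketch of both meanings: nginx can serve a stored XML file as-is (it never parses it, apart from the optional xslt module), while accepting an upload requires an application behind nginx. All paths and ports below are made up:

```nginx
server {
    listen 80;
    include mime.types;              # maps .xml to text/xml

    # serve stored XML files verbatim, e.g. /var/www/files/data.xml
    location /files/ {
        root /var/www;
    }

    # an upload has to be handled by a backend application
    location /upload {
        client_max_body_size 10m;
        proxy_pass http://127.0.0.1:8080;   # hypothetical app that stores/parses the XML
    }
}
```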

Thank you,
Ran

Random rewrite/proxy_pass timeouts (no replies)

I'm seeing some strange behavior with nginx. I'm using it to proxy requests from one domain to another. What I'm seeing is that at times requests to help.example.com start hanging. To fix it I need to either reload or restart the nginx service, which makes me think it's a resource issue, but I thought I'd check here first. Has anyone experienced anything like this before?

Here are a few technical details:
- nginx version: nginx/1.10.1
- t2.small in AWS
- Fronted by a classic ELB

I've also attached my nginx.conf and site.conf for reference. I have a few other sites in use that also use proxy_pass, and they work just fine. So I'm thinking it may have something to do with the rewrite, but I'm not 100% sure.

nginx.conf
include /usr/share/nginx/modules/*;

# nginx Configuration File
# http://wiki.nginx.org/Configuration

# Run as a less privileged user for security reasons.
user www-data;

# How many worker threads to run;
# "auto" sets it to the number of CPU cores available in the system, and
# offers the best performance. Don't set it higher than the number of CPU
# cores if changing this parameter.

# The maximum number of connections for Nginx is calculated by:
# max_clients = worker_processes * worker_connections
worker_processes 1;

# Maximum open file descriptors per process;
# should be > worker_connections.
worker_rlimit_nofile 8192;

events {
# When you need > 8000 * cpu_cores connections, you start optimizing your OS,
# and this is probably the point at which you hire people who are smarter than
# you, as this is *a lot* of requests.
worker_connections 8000;
}

# Default error log file
# (this is only used when you don't override error_log on a server{} level)
# options are also notice and info
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

http {

# Hide nginx version information.
server_tokens off;

# Define the MIME types for files.
include /etc/nginx/mime.types;
default_type application/octet-stream;

# Update charset_types due to updated mime.types
charset_types text/xml text/plain text/vnd.wap.wml application/x-javascript application/rss+xml text/css application/javascript application/json;

# Format to use in log files
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';

# Default log file
# (this is only used when you don't override access_log on a server{} level)
access_log /var/log/nginx/access.log main;

# How long to allow each connection to stay idle; longer values are better
# for each individual client, particularly for SSL, but means that worker
# connections are tied up longer. (Default: 65)
keepalive_timeout 35;

# Speed up file transfers by using sendfile() to copy directly
# between descriptors rather than using read()/write().
sendfile on;

# Tell Nginx not to send out partial frames; this increases throughput
# since TCP frames are filled up before being sent out. (adds TCP_CORK)
tcp_nopush on;

# Compression
# Enable Gzip compressed.
gzip on;

# Compression level (1-9).
# 5 is a perfect compromise between size and cpu usage, offering about
# 75% reduction for most ascii files (almost identical to level 9).
gzip_comp_level 5;

# Don't compress anything that's already small and unlikely to shrink much
# if at all (the default is 20 bytes, which is bad as that usually leads to
# larger files after gzipping).
gzip_min_length 256;

# Compress data even for clients that are connecting to us via proxies,
# identified by the "Via" header (required for CloudFront).
gzip_proxied any;

# Tell proxies to cache both the gzipped and regular version of a resource
# whenever the client's Accept-Encoding capabilities header varies;
# Avoids the issue where a non-gzip capable client (which is extremely rare
# today) would display gibberish if their proxy gave them the gzipped version.
gzip_vary on;

# Compress all output labeled with one of the following MIME-types.
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rdf+xml
application/rss+xml
application/schema+json
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-javascript
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/eot
font/opentype
image/bmp
image/svg+xml
image/vnd.microsoft.icon
image/x-icon
text/cache-manifest
text/css
text/javascript
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy
text/xml;
# text/html is always compressed by HttpGzipModule

# This should be turned on if you are going to have pre-compressed copies (.gz) of
# static files available. If not it should be left off as it will cause extra I/O
# for the check. It is best if you enable this in a location{} block for
# a specific directory, or on an individual server{} level.
# gzip_static on;

# For behind a load balancer
real_ip_header X-Forwarded-For;
set_real_ip_from 0.0.0.0/0;

client_max_body_size 100m;

# default blank catch-all
server {
listen 80 default_server;
root /var/www/html/default;
index index.html;
}

# Include files in the sites-enabled folder. server{} configuration files should be
# placed in the sites-available folder, and then the configuration should be enabled
# by creating a symlink to it in the sites-enabled folder.
# See doc/sites-enabled.md for more info.
include /etc/nginx/sites-enabled/*;
}

site.conf
server {
listen 80;

server_name help.example.com;

access_log /var/log/nginx/access.log access;
error_log /var/log/nginx/error.log error;

location /assets/ {
proxy_pass https://site.help/assets/;
proxy_set_header Host site.help;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Scheme https;
}

location /_api/ {
proxy_pass https://site.help/_api/;
proxy_set_header Host site.help;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Scheme https;
}

location / {
rewrite ^/(.*)/$ /example/$1/ permanent;
proxy_pass https://site.help;
proxy_set_header Host help.site.com;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Proto https;
}

# Deny config files
location ~* \.(sh|lock|json)$ {
deny all;
}

# Deny any hidden and old version of files
location ~ /\. { access_log off; log_not_found off; deny all; }
location ~ ~$ { access_log off; log_not_found off; deny all; }
}

proxy_pass and weird behaviour (no replies)

Hi —

(This is nginx 1.11.10 and up to date FreeBSD STABLE-11)

I recently implemented LE certificates for my virtual domains, which are served by two hosts accessed via round-robin DNS, i.e. two IP addresses. To get the acme challenges running, I implemented the following configuration:

Host A and Host B:

# port 80
server {
include include/IPs-80;
server_name example.com;
location / {
# redirect letsencrypt ACME challenge requests to local-at-host-A.lan
location /.well-known/acme-challenge/ {
proxy_pass http://local-at-host-A.lan;
}
# all other requests are redirect to https, permanently
return 301 https://$server_name$request_uri;
}
}

# port 443
[snip]


Server local-at-host-A.lan (LE acme) finally serves the acme challenge directory:

server {
include include/IPs-80;
server_name local-at-host-A.lan;
# redirect all letsencrypt ACME challenges to one global directory
location /.well-known/acme-challenge/ {
root /var/www/acme/;
}
}



Well, that is working, somehow, except: if the LE server addresses Host A, the challenge file is retrieved instantaneously. If the LE server addresses Host B, only every *other* request is served instantaneously:

1. access: immediately download
2. access: 60 s wait, then download
3. access: immediately download
4. access: 60 s wait, then download
etc.


Hmm, default proxy_connect_timeout is 60s, I know. But why every other connect?
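
One pattern that produces exactly this alternation (an assumption, since DNS isn't shown): local-at-host-A.lan resolving to two addresses, one of which does not answer on port 80. nginx resolves the proxy_pass hostname at startup and rotates through the addresses, so every other attempt waits out proxy_connect_timeout. A quick test is to pin a single address and fail fast (the IP is a placeholder):

```nginx
upstream acme_backend {
    server 192.0.2.10:80;   # hypothetical: the one address that actually serves the challenges
}

location /.well-known/acme-challenge/ {
    proxy_pass http://acme_backend;
    proxy_connect_timeout 5s;       # surface connect failures quickly instead of 60 s
}
```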

Every feedback on how to solve/debug that issue is highly welcome.

Thanks and regards,
Michael

Nginx serving extra ssl certs (no replies)

Hello nginx world,

I hope you can help me track down my issue.

First, I'm running:

Centos 7.3.1611
Nginx 1.11.10
Openssl 1.0.1e-fips

My issue is I run 11 virtual sites, all listening on both ipv4 & 6, same two addresses, so obviously I rely on SNI. One site also listens on tor.

When I check the SSL responses using either the SSL Labs server test or openssl s_client, my sites work fine but also serve an extra, second certificate meant for the wrong hostname. I'm confused, as I see no issue with my config files.
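
One common explanation (an assumption here, since the config isn't shown): a client that sends no SNI, or an unknown name, receives the certificate of whichever server block is the default for that address, which scanners report as an "extra" cert. Making the fallback explicit helps rule that out (paths are placeholders):

```nginx
# explicit catch-all so non-SNI / unknown-name handshakes get a known cert
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/ssl/fallback.crt;   # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/fallback.key;
    return 444;   # close the connection without a response
}
```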

I've attached a sample of my config files for one site for your perusal.

You can also check this domain for yourself:

server1.garbage-juice.com

Thanks for your help.


--
Thanks.
Fabian S.

OAuth Access token validation (1 reply)

Hi,

Does Nginx provide support for verifying the access token in an incoming request against an Identity Server?
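
There is no built-in OAuth introspection, but the stock auth_request module can delegate validation to the identity server on every request. A sketch, with the upstream names and the introspection endpoint invented for illustration:

```nginx
location /api/ {
    auth_request /_validate_token;        # subrequest must return 2xx to proceed
    proxy_pass http://app_backend;        # hypothetical application upstream
}

location = /_validate_token {
    internal;
    proxy_pass http://identity_server/introspect;   # hypothetical endpoint
    proxy_pass_request_body off;                    # the token check needs headers only
    proxy_set_header Content-Length "";
    proxy_set_header Authorization $http_authorization;
}
```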


Regards, Santos

proxy_bind with hostname from /etc/hosts possible? (no replies)

Hi!

is it possible to use a hostname from the local /etc/hosts as the proxy_bind value?

Background:
We use nginx 1.8.1 as a reverse proxy.
To overcome ephemeral port exhaustion (64k+ connections), we use proxy_bind to iterate over all locally available IP addresses and assign them as source IPs (see https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/).
To have a generic nginx configuration for all of our nginx instances, we don't want to hard-code server-specific IPs in nginx.conf but instead use hostnames defined in the local /etc/hosts.

You can see our current configuration below.
Unfortunately, nginx cannot resolve the hostname (localip0 etc.); there is an error log entry: invalid local address "localip0".
We also tested using an upstream directive, with the same result.
I worry that I can only use explicit IP addresses in this situation. Or do you have an alternative solution?

/etc/hosts:
192.168.1.130 localip0
192.168.1.132 localip1
...

nginx.conf:

split_clients "${remote_addr}${remote_port}AAAA" $source_ip {
10% localip0;
10% localip1;
...
}

server {
listen 443;
proxy_bind $source_ip;
...
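
proxy_bind expects an address, which matches the "invalid local address" error: names from /etc/hosts are not resolved there. A fallback sketch with the addresses written out, plus a catch-all bucket so the variable is never empty (note: if memory serves, proxy_bind only accepts variables at all in newer nginx versions, so the 1.8.1 version is worth checking too):

```nginx
split_clients "${remote_addr}${remote_port}AAAA" $source_ip {
    10%  192.168.1.130;   # literal IPs instead of the /etc/hosts names
    10%  192.168.1.132;
    # ... further addresses ...
    *    192.168.1.130;   # catch-all so $source_ip is never empty
}

server {
    listen 443;
    proxy_bind $source_ip;
}
```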

adding LUA module into yum repo? (no replies)

Hello,

Are there any plans to add the Lua module to the official nginx Yum repository?

Thanks a lot!


upstream sent unexpected FastCGI record: 3 while reading response header from upstream (5 replies)

I am getting random "upstream sent unexpected FastCGI record: 3 while reading response header from upstream" errors while using Nginx with PHP 7. So far I was waiting for bug https://bugs.php.net/bug.php?id=67583 to be fixed by PHP, which they now have, but I still get this error.

Everything works fine for a few days and then suddenly I start getting this error. How can I debug this in detail? Is there a way I could see or log the response from PHP (FastCGI) when this error occurs?
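
For context: in the FastCGI spec, record type 3 is FCGI_END_REQUEST, so PHP-FPM is ending the request before nginx has seen complete response headers. Two ways to capture the exchange when it recurs (port 9000 is an assumption about the PHP-FPM listener):

```nginx
# nginx side: a debug-level error log records each FastCGI record nginx reads
# (requires an nginx binary built with --with-debug)
error_log /var/log/nginx/debug.log debug;

# shell side, for the raw bytes PHP-FPM sent:
#   tcpdump -i lo -s0 -w fcgi.pcap port 9000
# also check PHP-FPM's own log for worker crashes/restarts at the same timestamps
```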

nginx with ssl as reverse proxy (1 reply)

$
0
0
Hello,

I have followed the article which explains how to configure nginx with
ssl as reverse proxy for Jenkins:
https://www.digitalocean.com/community/tutorials/how-to-configure-nginx-with-ssl-as-a-reverse-proxy-for-jenkins

Yet, I don't understand one thing.
The SSL key is configured only for nginx:

client ------- nginx proxy ---- Jenkins server
<ssl key>

Doesn't the Jenkins server need to be provided with keys too?
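
Not necessarily: in that tutorial nginx terminates TLS, so the nginx-to-Jenkins hop is plain HTTP and Jenkins needs no keys of its own; only if you re-encrypt toward the backend (proxy_pass https://...) would Jenkins need a certificate. A minimal sketch of the terminating side (names and paths invented):

```nginx
server {
    listen 443 ssl;
    server_name jenkins.example.com;                  # hypothetical
    ssl_certificate     /etc/nginx/ssl/jenkins.crt;   # hypothetical paths
    ssl_certificate_key /etc/nginx/ssl/jenkins.key;

    location / {
        # TLS ends here; the hop to Jenkins is plain HTTP
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```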

Thank you,
Ran

HTTP/2 seems slow (1 reply)

Hi guys,

I'm working on an HTTP/2 website solution, and in testing we can see that HTTP/2 works.

The thing is that everything loads at the same time, but very slowly, while bandwidth cannot be the problem.

In the end, HTTP/2 is a bit slower than HTTP/1.1.

For example:

With HTTP/1.1 I have 5 files loading one after another. The time to first byte per file is between 13 and 20 ms (plus some loading time).
With HTTP/2 the same 5 files load at the same time, but the time to first byte per file is between 40 and 60 ms (plus some loading time).

Where do I need to look?

200ms delay when serving stale content and proxy_cache_background_update enabled (no replies)

Hi,

I noticed a delay of approx. 200 ms when proxy_cache_background_update
is used and Nginx sends stale content to the client.

Current setup:
- Apache webserver as backend, serving a slow response delay.php that simply
waits for 1 second: <?php usleep(1000000); ?>
- Nginx in front to cache the response and send stale content if the cache
needs to be refreshed.
- wget sending a request from another machine

Nginx config-block:
location /delay.php {
proxy_pass http://backend;
proxy_next_upstream error timeout invalid_header;
proxy_redirect http://$host:8000/ http://$host/;
proxy_buffering on;
proxy_connect_timeout 1;
proxy_read_timeout 30;
proxy_cache_background_update on;

proxy_http_version 1.1;
proxy_set_header Connection "";

proxy_cache STATIC;
proxy_cache_key "$scheme$host$request_uri";
proxy_cache_use_stale error timeout invalid_header updating http_500
http_502 http_503 http_504;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Accept-Encoding "";

# Just to test if this caused the issue, but it doesn't change
tcp_nodelay on;
}

Wget request: time wget --server-response --output-document=/dev/null "
http://www.example.com/delay.php?teststales=true"
Snippet of wget output: X-Cached: STALE
Output of time command: real 0m0.253s

Wget request: time wget --server-response --output-document=/dev/null "
http://www.example.com/delay.php?teststales=true"
Snippet of wget output: X-Cached: UPDATING
Output of time command: real 0m0.022s

So a cache HIT (not shown) or an UPDATING response is fast; sending a STALE
response takes some time.
Tcpdump showed that all HTML content and headers are sent immediately after
the request has been received, but the last packet is delayed; that's
why I tested the tcp_nodelay option in the config.

I'm running version 1.11.10 with the patch provided by Maxim:
http://hg.nginx.org/nginx/rev/8b7fd958c59f

Any ideas on this?

Thanks,

Jean-Paul