Channel: Nginx Forum - Nginx Mailing List - English

Different Naxsi rulesets (no replies)

Hi!

I'm using Nginx together with Naxsi, so I'm not sure if this is the correct
place for this post, but I'll give it a try.

I want to configure two detection thresholds: a strict ruleset for
'far away' countries, and a less strict one for local countries.
I'm using a setup like:

location /strict/ {
    include /usr/local/nginx/naxsi.rules.strict;

    proxy_pass http://app-server/;
}

location /not_so_strict/ {
    include /usr/local/nginx/naxsi.rules.not_so_strict;

    proxy_pass http://app-server/;
}

location / {
    # REMOVED BUT THIS WORKS:
    # include /usr/local/nginx/naxsi.rules.not_so_strict;
    set $ruleSet "strict";
    if ( $geoip_country_code ~ (TRUSTED_CC_1|TRUSTED_CC_2|TRUSTED_CC_3) ) {
        set $ruleSet "not_so_strict";
    }

    rewrite ^(.*)$ /$ruleSet$1 last;
}

location /RequestDenied {
    return 403;
}


The naxsi.rules.strict file contains the check rules:
CheckRule "$SQL >= 8" BLOCK;
etc.

For some reason this doesn't work. The syntax is OK and I can reload
Nginx, but the firewall never triggers. If I uncomment the include in
the location / block, it works perfectly.
Any ideas why this doesn't work, or a better setup for selecting different
rulesets based on some variable?
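
Not a Naxsi expert, but one way to sidestep the if/regex pitfalls is to pick the ruleset name with an http-level map instead of an if block. This is only a sketch of the variable selection (country codes and paths are the placeholders from the post); the Naxsi locations stay exactly as above:

```nginx
# http-level: choose the ruleset per country code
map $geoip_country_code $ruleSet {
    default         strict;
    TRUSTED_CC_1    not_so_strict;
    TRUSTED_CC_2    not_so_strict;
    TRUSTED_CC_3    not_so_strict;
}

server {
    location / {
        # internal redirect into the matching ruleset location
        rewrite ^(.*)$ /$ruleSet$1 last;
    }
}
```

A map is evaluated lazily and avoids `if` inside `location`, which is the usual source of surprises in setups like this; it also makes a missing `|` in an alternation impossible.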

Thanks,

JP
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

SIGIO only means readable or writable; how does the channel event avoid writable? (no replies)


Nginx proxy with rewrite and limit_except (no replies)

Would any have solutions to the problem described here:

https://stackoverflow.com/questions/47255564/nginx-proxy-with-rewrite-and-limit-except-not-working

Re: SIGIO only means readable or writable; how does the channel event avoid writable? (Zhang Chao) (no replies)

Thank you for your reply, but each side of the channel can read or write,
unlike a pipe, so it's not read-only but both readable and writable. Please
correct me if anything here is wrong.

Track egress bandwidth for a server block (1 reply)

Is there a way to measure and store the amount of egress bandwidth, in GB, that a given server{} block uses over a certain number of days? It needs to be somewhat performant. Using NGINX Unit or Lua are both possible, I just have no idea how to implement it.
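
Stock nginx doesn't expose a per-server{} byte counter, but one low-tech approach that tends to be performant is a dedicated access_log for that server block writing only $bytes_sent, aggregated offline. A sketch (the log path, format name, and sample data are made up for illustration):

```shell
# In the server{} block in question (illustrative):
#   log_format egress '$bytes_sent';
#   access_log /var/log/nginx/egress.log egress;

# Stand-in for real log data: one byte count per line
printf '%s\n' 512 1024 2048 > /tmp/egress.log

# Sum the column and convert to GB (run e.g. daily from cron, then rotate the log)
awk '{ sum += $1 } END { printf "%.6f GB\n", sum / (1024 * 1024 * 1024) }' /tmp/egress.log
```

With ngx_http_lua you could instead accumulate $bytes_sent into a shared dict from a log_by_lua handler, which avoids the extra log file entirely.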

Re: SIGIO only means readable or writable; how does the channel event avoid writable? (no replies)

I wrote a small C program and tested it on my machine. At the beginning, the
fd is writable as soon as it is opened, but the process doesn't receive SIGIO.
I'm confused: a lot of papers and books say that the process will receive
SIGIO when the fd becomes writable or readable, but in fact it doesn't. Any
idea?

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <signal.h>

/* NB: printf() is not async-signal-safe; fine for a quick test only */
void ngx_signal_handler(int signo, siginfo_t *siginfo, void *ucontext)
{
    printf("%d\n", signo);
}

int main(void)
{
    int sv[2];

    if (socketpair(PF_LOCAL, SOCK_STREAM, 0, sv) < 0) {
        perror("socketpair");
        return 0;
    }

    struct sigaction sa;
    memset(&sa, 0, sizeof(struct sigaction));
    sa.sa_sigaction = ngx_signal_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGIO, &sa, NULL);

    pid_t id = fork();

    if (id == 0) {              /* child */
        close(sv[0]);

        while (1) {
            /* write(sv[1], "i'm child\n", 10); */
            sleep(1);
        }

    } else {                    /* parent */
        close(sv[1]);

        int on = 1;

        /* enable asynchronous (SIGIO) notification on this end ... */
        if (ioctl(sv[0], FIOASYNC, &on) == -1) {
            return 1;
        }

        /* ... and deliver the signal to this process */
        if (fcntl(sv[0], F_SETOWN, getpid()) == -1) {
            return 1;
        }

        while (1) {
            /* read(sv[0], buf, sizeof(buf) - 1); */
        }
    }

    return 0;
}

[ANN] OpenResty 1.13.6.1 released (no replies)

Hi there,

I am excited to announce the new formal release, 1.13.6.1, of the
OpenResty web platform based on NGINX and LuaJIT:

https://openresty.org/en/download.html

The (portable) source code distribution, the Win32 binary distribution,
and the pre-built binary packages for the common Linux distributions are
all provided on this Download page.

Special thanks go to all our developers and contributors! And thanks
OpenResty Inc. for sponsoring a lot of the OpenResty core development
work.

We have the following highlights in this release:

1. Based on the latest mainline nginx core 1.13.6.

2. Included the new component ngx_stream_lua_module, which lets you script
nginx TCP servers in Lua:

https://github.com/openresty/stream-lua-nginx-module#readme

3. New ttl(), expire(), free_space(), and capacity() Lua methods for
lua_shared_dict objects:

https://github.com/openresty/lua-nginx-module/#ngxshareddictttl
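
A minimal sketch of the new shared-dict methods (the zone name and values here are made up; the method behavior is as documented in ngx_lua):

```nginx
http {
    lua_shared_dict dogs 1m;

    server {
        location = /t {
            content_by_lua_block {
                local dogs = ngx.shared.dogs
                dogs:set("Tom", 56, 0.2)              -- value with a 0.2s TTL
                ngx.say("ttl: ", dogs:ttl("Tom"))     -- remaining TTL in seconds
                dogs:expire("Tom", 10)                -- reset the TTL to 10s
                ngx.say("free: ", dogs:free_space())  -- bytes in free pages
                ngx.say("cap: ", dogs:capacity())     -- total zone size in bytes
            }
        }
    }
}
```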

4. New resty.limit.count module in lua-resty-limit-traffic for doing
GitHub API style limiting:

https://github.com/openresty/lua-resty-limit-traffic/blob/master/lib/resty/limit/count.md#readme

5. Added JIT controlling command-line options to the resty command-line utility:

https://github.com/openresty/resty-cli#synopsis

6. Wildcard support in the more_clear_input_headers directive:

https://github.com/openresty/headers-more-nginx-module/#more_clear_headers

7. HTTP/2 support in the opm client tool (through curl).

The complete change log since the last (formal) release, 1.11.2.5:

* upgraded the Nginx core to 1.13.6.

* see the changes here: http://nginx.org/en/CHANGES

* bundled the new component, ngx_stream_lua_module 0.0.4, which is
also enabled by default. One can disable this 3rd-party Nginx C
module by passing "--without-stream_lua_module" to the
"./configure" script. We provide compatible Lua API with ngx_lua
wherever it makes sense. Currently we support content_by_lua*,
preread_by_lua* (similar to ngx_lua's access_by_lua* ),
log_by_lua*, and balancer_by_lua* in the stream subsystem.
thanks Mashape Inc. for sponsoring the OpenResty Inc. team to do
the development work on rewriting ngx_stream_lua for recent
nginx core version.

* change: applied a patch to the nginx core to make sure the
"server" header in HTTP/2 responses shows "openresty" when the
"server_tokens" directive is turned off.

* feature: added the nginx core patches needed by
ngx_stream_lua_module's balancer_by_lua*
(https://github.com/openresty/stream-lua-nginx-module).

* win32: upgraded PCRE to 8.41.

* upgraded ngx_lua to 0.10.11.

* feature: shdict: added pure C API for getting free page size
and total capacity for lua-resty-core. thanks Hiroaki
Nakamura for the patch.

* feature: added pure C functions for shdict:ttl() and
shdict:expire() API functions. thanks Thibault Charbonnier
for the patch.

* bugfix: *_by_lua_block directives might break nginx config
dump ("-T" switch). thanks Oleg A. Mamontov for the patch.

* bugfix: segmentation faults might happen when pipelined http
requests are used in the downstream connection. thanks Gao
Yan for the report.

* bugfix: the ssl connections might be drained and reused
prematurely when ssl_certificate_by_lua* or
ssl_session_fetch_by_lua* were used. this might lead to
segmentation faults under load. thanks guanglinlv for the
report and the original patch.

* bugfix: tcpsock:connect(): when the nginx resolver's
"send()" immediately fails without yielding, we didn't clean
up the coroutine ctx state properly. This might lead to
segmentation faults. thanks xiaocang for the report and root
for the patch.

* bugfix: added fallthrough comment to silence GCC 7's
"-Wimplicit-fallthrough". thanks Andriy Kornatskyy for the
report and spacewander for the patch.

* bugfix: tcpsock:settimeout, tcpsock:settimeouts: throws an
error when the timeout argument values overflow. Here we
only support timeout values no greater than the max value of
a 32 bits integer. thanks spacewander for the patch.

* doc: added "413 Request Entity Too Large" to the possible
short circuit response list. thanks Datong Sun for the
patch.

* upgraded lua-resty-core to 0.1.13.

* feature: ngx.balancer now supports the ngx_stream_lua; also
disabled all the other FFI APIs for the stream subsystem for
now.

* feature: resty.core.shdict: added new methods
shdict:free_space() and shdict:capacity(). thanks Hiroaki
Nakamura for the patch.

* feature: implemented the ngx.re.gmatch function with FFI.
thanks spacewander for the patch.

* bugfix: ngx.re: fix an edge-case where re.split() might not
destroy compiled regexes. thanks Thibault Charbonnier for
the patch.

* feature: implemented the shdict:ttl() and shdict:expire()
API functions using FFI.

* upgraded lua-resty-dns to 0.20.

* feature: allows "RRTYPE" values larger than 255. thanks
Peter Wu for the patch.

* upgraded lua-resty-limit-traffic to 0.05.

* feature: added new module resty.limit.count for GitHub API
style request count limiting. thanks Ke Zhu for the original
patch and Ming Wen for the followup tweaks.

* bugfix: resty.limit.traffic: we might not uncommit previous
limiters if a limiter got rejected while committing a state.
thanks Thibault Charbonnier for the patch.

* bugfix: resty.limit.conn: we incorrectly specified the
exceeded connection count as the initial value for the
shdict key decrement, which may lead to deadlocks when the
key has been evicted on very busy systems. This bug had
appeared in v0.04.

* upgraded resty-cli to 0.20.

* feature: resty: implemented the "-j off" option to turn off
the JIT compiler.

* feature: resty: implemented the "-j v" and "-j dump" options
similar to luajit's.

* feature: resty: added new command-line option "-l LIB" to
mimic lua and luajit -l parameter. thanks Michal Cichra for
the patch.

* bugfix: resty: handle "SIGPIPE" ourselves by simply killing
the process. thanks Ingy dot Net for the report.

* bugfix: resty: hot looping Lua scripts failed to respond to
the "INT" signal.

* upgraded opm to 0.0.4.

* bugfix: opm: when curl uses HTTP/2 by default opm would
complain about "bad response status line received". thanks
Donal Byrne and Andrew Redden for the report.

* debug: opm: added more details in the "bad response status
line received from server" error.

* upgraded ngx_headers_more to 0.33.

* feature: add wildcard match support for
more_clear_input_headers.

* doc: fixed more_clear_input_headers usage examples. thanks
Daniel Paniagua for the patch.

* upgraded ngx_encrypted_session to 0.07.

* bugfix: fixed one potential memory leak in an error
condition. thanks dyu for the patch.

* upgraded ngx_rds_json to 0.15.

* bugfix: fixed warnings with C compilers without variadic
macro support.

* doc: added context info for all the config directives.

* upgraded ngx_rds_csv to 0.08.

* tests: various changes in the test suite.

* upgraded LuaJIT to v2.1-20171103:
https://github.com/openresty/luajit2/tags

* optimize: use more aggressive JIT compiler parameters as the
default to help large OpenResty Lua apps. We now use the
following jit.opt defaults: "maxtrace=8000 maxrecord=16000
minstitch=3 maxmcode=40960" (maxmcode is in KB).

* imported Mike Pall's latest changes:

* "LJ_GC64": Make "ASMREF_L" references 64 bit.

* "LJ_GC64": Fix ir_khash for non-string GCobj.

* DynASM/x86: Fix potential "REL_A" overflow. Thanks to
Joshua Haberman.

* MIPS64: Hide internal function.

* x64/"LJ_GC64": Fix type-check-only variant of SLOAD.
Thanks to Peter Cawley.

* PPC: Add soft-float support to JIT compiler backend.
Contributed by Djordje Kovacevic and Stefan Pejic from
RT-RK.com. Sponsored by Cisco Systems, Inc.

* x64/"LJ_GC64": Fix fallback case of "asm_fuseloadk64()".
Contributed by Peter Cawley.

The HTML version of the change log with lots of helpful hyper-links
can be browsed here:

https://openresty.org/en/changelog-1013006.html

OpenResty is a full-fledged web platform built by bundling the standard
Nginx core, Lua/LuaJIT, many 3rd-party Nginx modules and Lua libraries,
as well as most of their external dependencies. See OpenResty's homepage
for details:

https://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and
ensured that all the components (including the Nginx core) play well
together. The latest test report can always be found here:

https://qa.openresty.org/

We also always run our OpenResty Edge commercial software based on the
latest open source version of OpenResty in our own global CDN network
(dubbed "mini CDN") powering our openresty.org and openresty.com
websites. See https://openresty.com/ for more details.

Have fun!

Best regards,
Yichun
---
President & CEO of OpenResty Inc.

Question about reverse proxy for Prometheus (no replies)

Hi,
How do I set up a reverse proxy with TLS for an application like
Prometheus/node_exporter, which is running on port 9100?
The issue I am facing: when I update the listen directive in the server
block to port 9090, I am NOT able to start the node_exporter process, as
Nginx seems to be blocking that port.
I want to take all the traffic on port 9100 and reroute it through HTTPS.
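
Without seeing the full config this is only a guess, but the symptom sounds like two processes trying to bind the same port. A common shape is to leave node_exporter on 9100 and have nginx terminate TLS on a different port, proxying to it; the server name and cert paths below are hypothetical:

```nginx
server {
    listen 9090 ssl;                  # nginx owns 9090; node_exporter keeps 9100
    server_name metrics.example.com;  # hypothetical name

    ssl_certificate     /etc/nginx/certs/metrics.crt;  # hypothetical paths
    ssl_certificate_key /etc/nginx/certs/metrics.key;

    location / {
        proxy_pass http://127.0.0.1:9100;
    }
}
```

Two processes cannot bind the same address:port, so nginx has to listen on a port node_exporter doesn't use (and ideally 9100 would then be firewalled off from the outside).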

Ganesh




Nginx drops a server from our LB if it sees HTTP 400 (no replies)

Good day Guys

I'm sitting with a very peculiar problem, and I was hoping someone could
be of assistance.

Right now everything is a theory, but when I switch back to LVS
everything works. The reason I like and want Nginx is the reverse
caching (caching of images).

As said, I'm using Nginx for reverse caching and load balancing, and I'm
seeing the following in the Nginx error log.

2017/11/16 10:16:38 [error] 75140#75140: *27952 no live upstreams while
connecting to upstream, client: 52.169.148.4, server:
REMOVEDCLIENTDOMAIN, request: "GET /1298310/SNIPPET_OF_URL HTTP/1.1",
upstream: "https://sslloadbalance/1298310/SNIPPET_OF_URL", host:
"REMOVEDCLIENTDOMAIN".

I understand that Nginx is saying it can't connect to the backend servers,
but the backend servers are 100% fine.

My theory is: whenever a URL is called and an HTTP 400 is returned,
Nginx picks this up, doesn't like it, and drops the server out of
rotation, e.g.

REMOVED_IP_OF_LB - - [16/Nov/2017:10:16:38 +0200] "GET
/wp-admin/admin-ajax.php?action=yop_poll_load_js&#038;id=-1&#038;location=page&#038;unique_id=_yp5a0d4965c79ac&#038;ver=5.5
HTTP/1.0" 400 226 "-" "-"


My configuration(s) is very standard / basic.

https://pastebin.com/B6rFwnb6

https://pastebin.com/wsdGPC74

I would have liked to use 'health_check', but that is only
available in Nginx Plus.

If I run a tcptraceroute to port 80 in a while loop against the backend
servers, everything is OK; and going back to LVS, everything is OK.

I would like to ask if there is a way to either ignore the 400 error
status, or what a better way would be to manage and handle this.
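
Not sure this is the cause, but "no live upstreams" usually points at the passive health accounting (max_fails/fail_timeout) on the upstream peers. A sketch of the knobs involved (addresses and names are placeholders, not the poster's config):

```nginx
upstream backend {
    # max_fails=0 disables the passive failure accounting entirely,
    # so a burst of failed attempts can't take a peer out of rotation
    server 10.0.0.11:80 max_fails=0;
    server 10.0.0.12:80 max_fails=0;
}

server {
    location / {
        # retry the next peer only on connection-level problems,
        # never on HTTP status codes returned by the backend
        proxy_next_upstream error timeout;
        proxy_pass http://backend;
    }
}
```

Whether a plain 400 response counts as a "failure" at all depends on what proxy_next_upstream includes; by default only connection errors and timeouts do, so it's worth checking what those directives are set to in the pastebinned config.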


Many thanks, regards

Brent




Regex on Variable ($servername) (3 replies)

Hello,

I'm trying to set up a catch-all proxy server with nginx.
I want to catch domains like the one below, but have only the domain name
(without the subdomain) in $domain.

In this example from the nginx docs, $domain gets the full name:

server {
    server_name ~^(www\.)?(?<domain>.+)$;
    root /sites/$domain;
}

servername: www.example.com -> $domain should be example.com
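
A sketch of a pattern that keeps only the last two labels in $domain (this won't handle multi-part TLDs like .co.uk; paths are the ones from the example above):

```nginx
server {
    # www.example.com, foo.bar.example.com, example.com -> $domain = example.com
    server_name ~^(?:.+\.)?(?<domain>[^.]+\.[^.]+)$;
    root /sites/$domain;
}
```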

Best Regards

Nginx optimal speed in limit_rate for video streams (no replies)

When dealing with mp4 and similar video streams, what is the best rate at which to transfer files to people so that it does not cause delays in latency / lagging in the video?

My current :

location /video/ {
    mp4;
    limit_rate_after 1m;
    limit_rate 1m;
}


On other sites, when I download / watch videos, they seem to transfer files at speeds of 200k/s.

Should I lower my rates?

Nginx dynamic proxy_pass keeps redirecting to wrong domain (3 replies)

I am using the following config:

http {
    server {
        listen 80;

        location / {
            resolver 127.0.0.11;

            auth_request /auth;
            auth_request_set $instance $upstream_http_x_instance;

            proxy_pass http://$instance;
        }

        location = /auth {
            internal;
            proxy_pass http://auth;
            proxy_pass_request_body off;
            proxy_set_header Content-Length "";
            proxy_set_header X-Original-URI $request_uri;
        }
    }
}

I want to auth all routes (location /) to this server. It is a content
server.

The proxy_pass http://auth; call does the real authentication and talks to a
Go server. The response to this request also sets a header, X-Instance,
which holds the name of a docker service, for example instance-001.

If authentication succeeds auth_request_set is set with the value of the
header X-Instance for example instance-001.

Now I want to serve content from this instance using proxy_pass
http://$instance;. I have read a lot about dynamic proxy_pass and what
to do, but nothing has worked.

The problem is, when I go to http://example.com/cdn/test/test.jpg in the
browser, it redirects me to http://instance-001/cdn/test/test.jpg, which is
of course not correct. It should proxy to the docker service named
instance-001.

I have looked into proxy_redirect, but it isn't clear to me how to set it
correctly. I also tried a rewrite like rewrite ^(/.*) $1 break; in location
= /auth, but I still get the annoying redirect to
http://instance-001/cdn/test/test.jpg. I've been struggling with this for a
very long time and can't find a solid solution.

why is proxy_cache_lock not working? (no replies)

I am running nginx server in front of httpd server.

nginx -V
nginx version: nginx/1.10.2
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E'



Below are the parts of the nginx configuration


proxy_cache_path /tmp/nginx/cache
                 levels=1:2
                 keys_zone=my_cache:1m
                 max_size=10g
                 inactive=30s
                 use_temp_path=off;

location ~* \.(ts)$ {

    ##### Proxy cache settings

    proxy_http_version 1.1;
    proxy_cache my_cache;
    proxy_cache_revalidate on;
    proxy_cache_key $uri;
    proxy_cache_use_stale updating;
    proxy_cache_valid any 1m;
    proxy_cache_min_uses 1;
    proxy_cache_lock on;
    proxy_cache_lock_age 5s;
    proxy_cache_lock_timeout 1h;
    proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
    proxy_ignore_headers Set-Cookie;

    proxy_pass http://192.168.2.225:8080;
}


Even with proxy_cache_lock on, the origin server is getting hit on every MISS request, per the logs below.

nginx access logs:

HOST82 - MISS [17/Nov/2017:00:22:03 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (298.00E04108A)" "-"
HOST225 - MISS [17/Nov/2017:00:22:04 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (508.00E03138A)" "-"
HOST187 - MISS [17/Nov/2017:00:22:05 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992 "-" "Roku/DVP-7.70 (047.70E04135A)" "-"
HOST125 - MISS [17/Nov/2017:00:22:06 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (248.00E04108A)" "-"
HOST80 - MISS [17/Nov/2017:00:22:08 -0600] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (288.00E04108A)" "-"

HOST80 - MISS [17/Nov/2017:00:22:15 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (288.00E04108A)" "-"
HOST82 - MISS [17/Nov/2017:00:22:16 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (298.00E04108A)" "-"
HOST225 - MISS [17/Nov/2017:00:22:18 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 206 262144 "-" "Roku/DVP-8.0 (508.00E03138A)" "-"
HOST187 - MISS [17/Nov/2017:00:22:19 -0600] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604 "-" "Roku/DVP-7.70 (047.70E04135A)" "-"


Origin httpd logs

192.168.2.226 - - [17/Nov/2017:06:29:40 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:41 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:42 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:43 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992
192.168.2.226 - - [17/Nov/2017:06:29:45 +0000] "GET profile1/76861/HD_profile1_00001.ts HTTP/1.1" 200 4696992

192.168.2.226 - - [17/Nov/2017:06:29:52 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:53 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:55 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:56 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604
192.168.2.226 - - [17/Nov/2017:06:29:56 +0000] "GET profile1/76861/HD_profile1_00002.ts HTTP/1.1" 200 4527604

192.168.2.226 - - [17/Nov/2017:06:30:03 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:05 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:05 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:07 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004
192.168.2.226 - - [17/Nov/2017:06:30:08 +0000] "GET profile1/76861/HD_profile1_00003.ts HTTP/1.1" 200 4866004


What am I missing?

Nginx can't reload even when config is OK (1 reply)

Hello everybody,

I have been working with Nginx for quite a long time, but I have stumbled upon a very strange error.
Nginx is compiled from source on CentOS 7.

All systemctl commands work, except the reload function.
Job for nginx.service failed because the control process exited with error code. See "systemctl status nginx.service" and "journalctl -xe" for details.

I have already tried performing the reload manually, which seemed to work:
/bin/kill -s HUP $MAINPID
(with the real master PID in place of $MAINPID)

systemctl status nginx.service -l gives me:
nginx.service - The nginx HTTP and reverse proxy server
Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
Active: active (running) (Result: exit-code) since Sat 2017-11-18 08:17:14 EST; 1min 34s ago
Process: 22036 ExecReload=/bin/kill -s HUP (code=exited, status=1/FAILURE)
Process: 22025 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
Process: 22020 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
Process: 22018 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
Main PID: 22028 (nginx)
CGroup: /system.slice/nginx.service
22028 nginx: master process /usr/sbin/ngin
22029 nginx: worker proces

Nov 18 08:17:21 host kill[22036]: -s, --signal <sig> send specified signal
Nov 18 08:17:21 host kill[22036]: -q, --queue <sig> use sigqueue(2) rather than kill(2)
Nov 18 08:17:21 host kill[22036]: -p, --pid print pids without signaling them
Nov 18 08:17:21 host kill[22036]: -l, --list [=<signal>] list signal names, or convert one to a name
Nov 18 08:17:21 host kill[22036]: -L, --table list signal names and numbers
Nov 18 08:17:21 host kill[22036]: -h, --help display this help and exit
Nov 18 08:17:21 host kill[22036]: -V, --version output version information and exit
Nov 18 08:17:21 host kill[22036]: For more details see kill(1).
Nov 18 08:17:21 host systemd[1]: nginx.service: control process exited, code=exited status=1
Nov 18 08:17:21 host systemd[1]: Reload failed for The nginx HTTP and reverse proxy server.

nginx -t passes successfully. The logs don't provide any information. The config is the default config.

Does anybody know where I have to look now?
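
One hint in the status output above: ExecReload ran `/bin/kill -s HUP` with no PID at all, which is why kill printed its usage text and exited 1. That suggests $MAINPID is missing (or not being expanded) in the unit's ExecReload line. For comparison, a typical unit for this setup would contain something like the following (paths inferred from the status output, so treat them as assumptions):

```ini
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStartPre=/usr/bin/rm -f /run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
```

Note that $MAINPID must appear literally in the unit file; systemd substitutes the master PID at reload time, so it cannot be expanded by a shell beforehand.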

Thanks!

Nginx reverse proxy with Sharepoint web (1 reply)

$
0
0
Hi Guys,

I am facing an issue with SharePoint sub-site authentication through nginx as a reverse proxy. The primary site authenticates fine against the upstream with NTLM; however, the sub-sites return 401 and 404 errors.

Does anyone have any use case or working configuration with sharepoint and nginx as reverse proxy?

Thanks and Regards,
Blason
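Without the actual configuration it is hard to say, but two things commonly bite SharePoint behind a reverse proxy: NTLM authenticates the underlying TCP connection, so upstream keepalive is required (nginx Plus additionally offers an `ntlm` directive in the upstream block), and SharePoint builds sub-site links from the Host header. A sketch along those lines (server names and addresses are placeholders, not taken from the post):

```nginx
upstream sharepoint {
    server 10.0.0.5:80;
    # NTLM authenticates the TCP connection, so upstream connections
    # must be kept alive and reused (nginx Plus also has "ntlm" here)
    keepalive 16;
}

server {
    listen 443 ssl;
    server_name portal.example.com;

    location / {
        proxy_pass http://sharepoint;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # enable upstream keepalive
        # SharePoint generates sub-site URLs from the Host header,
        # so pass the original one through unchanged
        proxy_set_header Host $host;
    }
}
```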

confused about ngx_write_file (no replies)

Hi
I'm writing a module that uses ngx_write_file, and I found that the file's offset is tracked incorrectly in this case:
```
u_char av = 0x01 | 0x04;
// file->offset is currently 4096
ngx_write_file(file, &av, 1, 4);
```
After this call, file->offset becomes 4097, but the end of the written data is still at 4096.

I'm confused by this code:
192 ssize_t
193 ngx_write_file(ngx_file_t *file, u_char *buf, size_t size, off_t offset)
194 {
........
221
222 file->offset += n;
223 written += n;
224
225 if ((size_t) n == size) {
226 return written;
227 }
228
229 offset += n;
230 size -= n;
231 }

should it be:
222 written += n;
223 offset += n;
224
225 if (offset > file->offset) {
226 file->offset = offset;
227 }
228
229 if ((size_t) n == size) {
230 return written;
231 }
232
233 size -= n;
234 }


Thanks!

Issue with flooded warning and request limiting (no replies)

Hello

We are using nginx as a proxy server in front of our IIS servers.

We have a client who needs to call us up to 200 times per second. Due to
the round-trip time, the client opens 16 simultaneous connections, and
each connection is used independently to send an https request, wait for
x ms and then send again.



I have been doing some tests and looked into the throttle logic in the
nginx code. It seems that a request limit of 200/sec is actually
interpreted as “minimum 5 ms per call”. If we receive 2 calls at the same
time, the warning log will show an “excess” message and the call will be
delayed to ensure a minimum of 5 ms between calls (and if no burst is
set, an error message is logged and an error is returned to the client).



We have set burst to 20, meaning that when our client sends only one
request at a time per connection, it will never get an error reply from
nginx; instead nginx just delays the call. I conclude that this is by design.



The issue, however, is that a client using multiple connections naturally
often won't be able to time the calls between connections. And even
though our burst has been set to 20, our log is swamped with warning
messages which I do not think should be warnings at all. There is a
difference between sending 2 calls at the same time and sending a total
of 201 requests within a second, the latter being the only case I would
expect to be logged as a warning.



Instead of calculating the throttling by simply looking at the last call
time and enforcing a minimum timespan between the last call and the
current call, I would like the logic to be that nginx keeps a counter of
the number of requests within the current second, and when the second
expires, the counter is reset.



I know this would actually change the behavior of nginx, so I understand
why it would be a breaking change if the solution were simply to replace
the existing logic. However, being able to configure which logic is used
would be of huge value to us. It would allow us to keep the warning log
for things that should actually be warned about, and not for “10 calls
per second which happened to fall within a few milliseconds”.
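In the meantime, the existing directives can get close to this behavior: `nodelay` processes burst requests immediately instead of pacing them 5 ms apart (so no "delaying request" messages are generated), and `limit_req_log_level` lowers the severity of the remaining messages (delays are logged one level below the configured level). A sketch, with the zone name, key and backend being illustrative:

```nginx
limit_req_zone $binary_remote_addr zone=perclient:10m rate=200r/s;

server {
    location / {
        # burst=20 absorbs same-millisecond arrivals from the 16
        # connections; nodelay forwards them immediately instead of
        # spacing them 5 ms apart
        limit_req zone=perclient burst=20 nodelay;

        # rejections log at notice; delays would log one level lower
        # (info), keeping both out of the warning log
        limit_req_log_level notice;

        proxy_pass http://iis_backend;
    }
}
```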


I hope you will read this mail; please let me know if I need to explain
anything in more detail.

---

Med venlig hilsen / Best Regards
Stephan Ryer Møller
Partner & CTO

inMobile ApS
Axel Kiers Vej 18L
DK-8270 Højbjerg

Dir. +45 82 82 66 92
E-mail: sr@inmobile.dk

Web: www.inmobile.dk
Tel: +45 88 33 66 99

Issue with AWS NLB and nginx (no replies)

Hi all,

I was hoping someone might have an idea here. I have a number of nginx
instances doing load balancing, sitting behind AWS's network load
balancers (TCP), which seem to support only TCP health checks.

Recently a few have stopped working / frozen: they still accept a TCP
connection from the NLB, so the health check does not fail, but they
cannot process requests internally and you cannot even ssh into the
machine. A reboot is required, and that takes longer than normal.

I think the failure is related to a disk issue, since the only errors in
the logs were disk-related. (error logs below)

Ideally, if nginx or the OS fails, the port would just close. I've
considered writing a small daemon that monitors nginx via http locally
and keeps a port open only while everything is ok.

These machines have been running for months now without any issues until
now.

Anyone have an idea?

Thanks!

----

[4161960.544106] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds.

[4161960.551035] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4161960.556118] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4161960.562846] INFO: task monit:13224 blocked for more than 120 seconds.

[4161960.567394] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4161960.571120] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162080.576076] INFO: task dhclient:696 blocked for more than 120 seconds.

[4162080.579596] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162080.582355] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162080.586470] INFO: task monit:13224 blocked for more than 120 seconds.

[4162080.589847] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162080.592654] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162200.596100] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds.

[4162200.599646] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162200.602422] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162200.606423] INFO: task dhclient:696 blocked for more than 120 seconds.

[4162200.610118] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162200.613093] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162200.617889] INFO: task monit:13224 blocked for more than 120 seconds.

[4162200.621641] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162200.624506] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162244.551431] systemd[1]: Failed to start Journal Service.

[4162320.628099] INFO: task jbd2/xvda1-8:271 blocked for more than 120 seconds.

[4162320.631942] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162320.635012] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162320.639647] INFO: task dhclient:696 blocked for more than 120 seconds.

[4162320.643241] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162320.646233] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162320.650712] INFO: task monit:13224 blocked for more than 120 seconds.

[4162320.654190] Not tainted 4.4.0-1022-aws #31-Ubuntu

[4162320.657183] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs"
disables this message.

[4162334.801390] systemd[1]: Failed to start Journal Service.

[4162425.051503] systemd[1]: Failed to start Journal Service.

[4162515.301393] systemd[1]: Failed to start Journal Service.