Channel: Nginx Forum - Nginx Mailing List - English

$upstream_addr returning "-" only on requests with "del" in them (4 replies)

Hi guys,

I have a problem with some of the requests sent to my Nginx load balancer: the access_log (configured to show $upstream_addr) reports $upstream_addr as "-", but only in the odd case where the post contains the word "del".

I'm using Nginx 1.10.0 as packaged in Ubuntu 16.04.4, in a development cluster of VMs where Nginx acts as a load balancer in front of a bunch of Drupal 7 sites served by Apache+mod_php (I could use Nginx+PHP-FPM, but that's not the point here).

So it's a web-facing VM with Nginx that passes to another VM with Apache (through proxy_pass). No "effective" load balancing (only one upstream server in the backend block).

I've tried to maintain customizations to a reasonable minimum to avoid introducing too many variables.

Inside Drupal 7 (which I installed under the Apache backend server), I have nodes that I would like to edit.

Now, on several nodes, when I edit a textarea with whatever I like, everything works fine. The request is passed to Nginx, then to Apache, and I can see that in the access logs for both.

However, if the textarea contains the word "del" (I know... weird), then the request gets to Nginx, $upstream_addr is logged as "-", and no request reaches the upstream server.

How can I debug that?
I've tried setting error_log to "debug", but apparently this is not an error.
The access_log gives me this odd $upstream_addr = '-' entry, but that's all I get...
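One way to narrow this down (a sketch; the format name and log path are placeholders) is a log_format that records the response status next to the upstream variables, which shows whether the request was answered before proxy_pass ever ran:

```nginx
# hypothetical debug format; name and path are placeholders
log_format upstream_debug '$remote_addr "$request" status=$status '
                          'upstream=$upstream_addr '
                          'upstream_status=$upstream_status '
                          'request_time=$request_time';

access_log /var/log/nginx/upstream_debug.log upstream_debug;
```

If $status shows e.g. 403 while $upstream_addr stays "-", then something ahead of the proxy (a security/WAF module, or another location/if block matching the request) is rejecting it before it reaches Apache.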

Thanks for your help!

Is it possible to analyze result and query a second server? (no replies)

Hi,

Tried to search on this for a couple of hours, but had no luck, hoping you
guys can help.

I have a use case where I need to proxy the request to server-A first; if it
returns 200, then query server-B and return that result. If it returns
anything other than 200, just return 404. Something like this:

function pseudoCode() {
    if (server-a.process() == 200) {
        return server-b.process()
    }

    // Got a non-200 from server-a, just return 404
    return 404
}

Is there a straightforward way of doing this in nginx?
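One common pattern for this is the auth_request module (nginx must be built --with-http_auth_request_module). A minimal sketch, with the upstream names assumed:

```nginx
# sketch: auth_request treats a 2xx from server-A as "allow",
# 401/403 as "deny"; denials are mapped to 404 here
location / {
    auth_request /check_a;
    error_page 401 403 =404 /404.html;
    proxy_pass http://server-b;
}

location = /check_a {
    internal;
    proxy_pass http://server-a;
    # auth subrequests are sent without a body
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```

One caveat: auth_request only distinguishes 2xx (allow) from 401/403 (deny); any other 2xx code also passes, so a strict "exactly 200" check would need njs or a Lua module instead.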

Thanks!
Jason
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Page loading is very slow with ngx_http_auth_pam_module (no replies)

Hello,

I built nginx with ngx_http_auth_pam_module, set up linux-pam for local
passwords with the pam_unix module, and configured nginx to use this PAM
config. The linux-pam config file is below:

auth sufficient pam_unix.so nullok
account required pam_unix.so

With this in place, page loading is very slow. If I remove this config
and simply set up nginx for basic authentication (with auth_basic), page
loading returns to normal.

Has anyone observed the same thing? Any information would be helpful.

Kind regards,
Cumali Ceylan

different Memory consumption for H1 and H2 (1 reply)

Hi

Recently, I ran an experiment to test the memory consumption of nginx. I
requested a large static zip file and examined nginx's debug log.

For HTTP/2, below is part of the log. I noticed that the server allocates
65536 bytes every time; as I increase the number of connections, the
server's memory consumption reaches a threshold and then grows very
slowly:
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 HEADERS frame
00000000026155F0 was sent
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame sent:
00000000026155F0 sid:1 bl:1 len:119
2017/05/11 04:54:20 [debug] 29451#0: *10499 http output filter
"/image/test.zip?"
2017/05/11 04:54:20 [debug] 29451#0: *10499 http copy filter:
"/image/test.zip?"
2017/05/11 04:54:20 [debug] 29451#0: *10499 malloc: 0000000002699A80:65536
2017/05/11 04:54:20 [debug] 29451#0: *10499 read: 14, 0000000002699A80,
65536, 0
2017/05/11 04:54:20 [debug] 29451#0: *10499 http postpone filter
"/image/test.zip?" 0000000002616098
2017/05/11 04:54:20 [debug] 29451#0: *10499 write new buf t:1 f:1
0000000002699A80, pos 0000000002699A80, size: 65536 file: 0, size: 65536
2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter: l:0 f:1
s:65536
2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter limit 0
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 create DATA frame
00000000026155F0: len:1 flags:0
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame out:
00000000026155F0 sid:1 bl:0 len:1
2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 9
2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 1
2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL to write: 138
2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL_write: 138
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 DATA frame
00000000026155F0 was sent
2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame sent:
00000000026155F0 sid:1 bl:0 len:1
2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter
00000000026160A8
2017/05/11 04:54:20 [debug] 29451#0: *10499 malloc: 00000000026A9A90:65536

For HTTP/1.1, below is part of the debug log; no malloc is observed during
the sendfile process. Even when I increase the number of connections to a
very large value, nginx's memory consumption stays very low:
2017/05/11 22:29:06 [debug] 29451#0: *11015 http run request:
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 http writer handler:
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 http output filter
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 http copy filter:
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 http postpone filter
"/image/test.zip?" 0000000000000000
2017/05/11 22:29:06 [debug] 29451#0: *11015 write old buf t:0 f:1
0000000000000000, pos 0000000000000000, size: 0 file: 72470952, size: 584002
2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter: l:1 f:0
s:584002
2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter limit 0
2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: @72470952 584002
2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: 260640 of 584002
@72470952
2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter
0000000002670F70
2017/05/11 22:29:06 [debug] 29451#0: *11015 http copy filter: -2
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 http writer output filter: -2,
"/image/test.zip?"
2017/05/11 22:29:06 [debug] 29451#0: *11015 event timer: 3, old:
1494513006630, new: 1494513006763

I hope to get your comments on the differences between nginx's memory
allocation mechanisms for HTTP/2 and HTTP/1.1. Many thanks.
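For context, a plausible explanation (an assumption based on these logs, not a confirmed diagnosis) is that over plain-text HTTP/1.1 nginx can hand the file to the kernel via sendfile() with no userspace copy, while under TLS (which HTTP/2 effectively requires here) it must read the file into per-connection buffers, hence the repeated 64 KiB mallocs. Directives along these lines bound those copies; the values are illustrative only:

```nginx
# illustrative values; defaults differ between nginx versions
output_buffers 1 32k;    # buffers used when sendfile cannot be used
http2_chunk_size 8k;     # max size of HTTP/2 DATA frame chunks
```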

Regards
Muhui

Reverse-proxying: Flask app with Bokeh server on Nginx (1 reply)

I have created a website with Flask that serves a Bokeh app on a
DigitalOcean VPS. Everything worked fine until I secured the server with
Let's Encrypt, following this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-secure-nginx-with-let-s-encrypt-on-ubuntu-14-04

In step 3 of the tutorial the Nginx configuration file is changed, which
might be the crux of the problem I'm getting:

When I go on the website, the Flask content is rendered perfectly. However,
the Bokeh app is not running. In the Inspection Console I get the following
Error (note that I hashed out the IP address of my website):

Mixed Content: The page at 'https://example.com/company_abc/' was
loaded over HTTPS,
but requested an insecure script
'http://###.###.###.##:5006/company_abc/autoload.js?bokeh-autoload-element=f…aab19c633c95&bokeh-session-id=AvWhaYqOzsX0GZPOjTS5LX2M7Z6arzsBFBxCjb0Up2xP'.
This request has been blocked; the content must be served over HTTPS.

I understand that I might have to use a method called reverse proxying,
which is described here
<http://bokeh.pydata.org/en/latest/docs/user_guide/server.html#reverse-proxying-with-nginx-and-ssl>.
However, I wasn't able to get it to work.

Does anybody have an idea how to solve this? A similar problem was
described here:
http://stackoverflow.com/questions/38081389/bokeh-server-reverse-proxying-with-nginx-gives-404/38505205#38505205
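For what it's worth, the reverse-proxy approach usually boils down to a location like the sketch below inside the HTTPS server block (assuming Bokeh listens on 127.0.0.1:5006 and is started with a matching --prefix); the browser then loads everything over HTTPS and the mixed-content block goes away:

```nginx
# sketch: assumes Bokeh on 127.0.0.1:5006 with --prefix=/bokeh/
location /bokeh/ {
    proxy_pass http://127.0.0.1:5006;
    # Bokeh uses websockets, so upgrade the connection
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```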

Here are my modified server files:

'/etc/nginx/sites-available/default':

upstream flask_siti {
    server 127.0.0.1:8118 fail_timeout=0;
}

server {
    listen 443 ssl;

    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    charset utf-8;
    client_max_body_size 75M;

    access_log /var/log/nginx/flask/access.log;
    error_log /var/log/nginx/flask/error.log;

    keepalive_timeout 5;

    location / {
        # checks for static file, if not found proxy to the app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://flask_siti;
    }
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

'/etc/supervisor/conf.d/bokeh_serve.conf':

[program:bokeh_serve]
command=/opt/envs/virtual/bin/bokeh serve company_abc.py
company_xyz.py --allow-websocket-origin=www.example.com
--allow-websocket-origin=example.com --host=###.###.###.##:5006
--use-xheaders
directory=/opt/webapps/flask_telemetry
autostart=false
autorestart=true
startretries=3
user=nobody

'/etc/supervisor/conf.d/flask.conf':

[program:flask]
command=/opt/envs/virtual/bin/gunicorn -b :8118 website_app:app
directory=/opt/webapps/flask_telemetry
user=nobody
autostart=true
autorestart=true
redirect_stderr=true

And here is my Flask app (Note that I hashed out security related info):

from flask import Flask
from flask import render_template, request, redirect, url_for
from flask_sqlalchemy import SQLAlchemy
from flask_security import Security, SQLAlchemyUserDatastore, UserMixin, \
    RoleMixin, login_required, roles_accepted, current_user
from flask_security.decorators import anonymous_user_required
from flask_security.forms import LoginForm
from bokeh.embed import autoload_server
from bokeh.client import pull_session
from wtforms import StringField
from wtforms.validators import InputRequired
from werkzeug.contrib.fixers import ProxyFix

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://###:###@localhost/telemetry'
app.config['SECRET_KEY'] = '###'
app.config['SECURITY_REGISTERABLE'] = True
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECURITY_USER_IDENTITY_ATTRIBUTES'] = 'username'
app.config['SECURITY_POST_LOGIN_VIEW'] = '/re_direct'
app.debug = True
db = SQLAlchemy(app)

# Define models
roles_users = db.Table('roles_users',
    db.Column('user_id', db.Integer(), db.ForeignKey('user.id')),
    db.Column('role_id', db.Integer(), db.ForeignKey('role.id')))

class Role(db.Model, RoleMixin):
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(80), unique=True)
    description = db.Column(db.String(255))

class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(255), unique=True)
    password = db.Column(db.String(255))
    active = db.Column(db.Boolean())
    confirmed_at = db.Column(db.DateTime())
    roles = db.relationship('Role', secondary=roles_users,
                            backref=db.backref('users', lazy='dynamic'))

class ExtendedLoginForm(LoginForm):
    email = StringField('Username', [InputRequired()])

# Setup Flask-Security
user_datastore = SQLAlchemyUserDatastore(db, User, Role)
security = Security(app, user_datastore, login_form=ExtendedLoginForm)

# Views
@app.route('/')
@anonymous_user_required
def index():
    return render_template('index.html')

@app.route('/re_direct/')
@login_required
def re_direct():
    identifier = current_user.username
    print(identifier)
    return redirect(url_for(identifier))

@app.route('/index/')
@login_required
@roles_accepted('admin')
def admin():
    return render_template('admin.html')

@app.route("/company_abc/")
@login_required
@roles_accepted('company_abc', 'admin')
def company_abc():
    url = 'http://###.###.###.##:5006'
    session = pull_session(url=url, app_path="/company_abc")
    bokeh_script = autoload_server(None, app_path="/company_abc",
                                   session_id=session.id, url=url)
    return render_template("company_abc.html", bokeh_script=bokeh_script)

@app.route("/company_xyz/")
@login_required
@roles_accepted('company_xyz', 'admin')
def company_xyz():
    url = 'http://###.###.###.##:5006'
    session = pull_session(url=url, app_path="/company_xyz")
    bokeh_script = autoload_server(None, app_path="/company_xyz",
                                   session_id=session.id, url=url)
    return render_template("company_xyz.html", bokeh_script=bokeh_script)

app.wsgi_app = ProxyFix(app.wsgi_app)

if __name__ == '__main__':
    app.run()

Re: fastcgi cache background update of SSI subrequests (no replies)

https://mailman.nginx.org/mailman/listinfo/nginx-ru
---
*B. R.*

2017-05-10 19:18 GMT+02:00 Roman Arutyunyan <arut@nginx.com>:

> Good afternoon,
>
> On Wed, May 10, 2017 at 12:04:39PM -0400, metalfm1 wrote:
> > Greetings!
> >
> > The fastcgi_cache_background_update directive behaves strangely with
> > SSI subrequests.
> > There is a service with complex business logic; the main page takes 1 s
> > to load, on an nginx + php-fpm stack. While optimizing page load time,
> > we decided to move generation of the slowest part of the page into a
> > separate SSI subrequest and cache it for 1 hour. Caching is controlled
> > by the PHP fastcgi server via the Cache-Control header.
> >
> > Caching itself works fine: nginx successfully caches the /ssi_dev/
> > subrequests and stores them on disk. The problems start when the cache
> > goes stale.
> >
> > Current nginx behavior:
> > - if an element is in the cache, it is served successfully (HIT)
> > - if an element is in the cache but stale, the client is served the
> > stale version (STALE) and a cache-warming subrequest is made (EXPIRED)
> >
> > The problem is that the cache-warming subrequest runs in blocking mode,
> > i.e. the main request waits for the subrequest to finish.
> > The problem described above does not occur if the whole page is cached;
> > then nginx behaves as documented: the client is served the old version
> > of the content and a non-blocking update subrequest is made.
>
> At the moment, background update is implemented such that it blocks the
> main request when it is started from a subrequest. That is exactly your
> case.
> In ticket https://trac.nginx.org/nginx/ticket/1249 I attached a patch
> that should fix this.
>
> [..]
>
> --
> Roman Arutyunyan

Last roadblock changing from Apache: SSL & PHP (5 replies)

People,

If I can solve this last problem (that I have just spent all night on),
I can completely replace Apache with Nginx. I am using RoundCubeMail as
my Webmail client - it is written in PHP (the only PHP thing on my
server) but it has been working happily with Apache for many years. I
have RCM in an SSL protected directory:

/home/ssl/webmail

When I couldn't get that working I tried testing the setup with a
simple:

/home/ssl/index.php

file that outputs PHP info (attached) - but I had exactly the same
problem with that: a blank screen except for a green block cursor in
the bottom right of the screen, i.e. no text output in the browser and no
errors in any of the logs.

I also attach:

/etc/nginx/conf.d/php-fpm.conf

and:

/etc/php-fpm.d/www.conf

I would _really_ appreciate it if anyone could tell me what is wrong
with my configuration . . (running on Fedora 25 x86_64).

Thanks,

Phil.
--
Philip Rhoades

PO Box 896
Cowra NSW 2794
Australia
E-mail: phil@pricom.com.au

# PHP-FPM FastCGI server
# network or unix domain socket configuration

upstream php {
    server unix:/run/php-fpm/www.sock;
}

server {
    listen 443;
    root /home/ssl;
    index index.php index.html index.htm;
    server_name pricom.com.au;

    ssl on;
    # ssl_certificate should be pointed to the file with combined certificates (file you created in step 2)
    ssl_certificate /etc/nginx/ssl/cert_chain.crt;
    # ssl_certificate_key should be pointed to the Private Key that has been generated with the CSR code that you have used for activation of the certificate.
    ssl_certificate_key /etc/nginx/ssl/www.pricom.com.au.key;

    location / {
        try_files $uri $uri/ =404;
    }

    location ~ .php$ {
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_pass php;
    }
}

# server {
# listen 443 ;
# root /home/ssl ;
# index index.php index.html index.htm;
# server_name pricom.com.au ;
#
# ssl on ;
# # ssl_certificate should be pointed to the file with combined certificates (file you created in step 2)
# ssl_certificate /etc/nginx/ssl/cert_chain.crt ;
# # ssl_certificate_key should be pointed to the Private Key that has been generated with the CSR code that you have used for activation of the certificate.
# ssl_certificate_key /etc/nginx/ssl/www.pricom.com.au.key ;
#
# location / {
# try_files $uri $uri/ @missing;
# }
#
# location @missing {
# rewrite (.*) /index.php;
# }
#
# location ~ .php$ {
# fastcgi_index index.php;
# include fastcgi_params;
# fastcgi_pass php;
# }
# }
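For reference, blank PHP output with no errors is very often caused by SCRIPT_FILENAME never reaching PHP-FPM (the stock fastcgi_params file on some distros does not set it), and the regex `.php$` should have its dot escaped. A hedged sketch of how that location block might look (same paths as above; not a verified fix for this setup):

```nginx
location ~ \.php$ {
    try_files $uri =404;
    fastcgi_index index.php;
    include fastcgi_params;
    # some distros omit this from fastcgi_params; without it PHP-FPM
    # has no script to execute and returns empty output
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass php;
}
```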




nginx ssl_verify_client on leads to segmentation fault (no replies)

Hello,
I'm running nginx from git HEAD, when I add the following two lines to a
https server:

ssl_client_certificate /tmp/ca.crt;
ssl_verify_client on;

and connect to the website, I get:

2017/05/15 08:12:04 [alert] 9109#0: worker process 12908 exited on signal 11 (core dumped)
2017/05/15 08:12:04 [alert] 9109#0: worker process 12909 exited on signal 11 (core dumped)
2017/05/15 08:12:10 [alert] 9109#0: worker process 12916 exited on signal 11 (core dumped)

I enabled cores and get:

(infra) [/tmp] gdb /local/nginx/sbin/nginx core
Reading symbols from /local/nginx/sbin/nginx...done.
[New LWP 12916]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `nginx: worker process '.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fbd9b8653db in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
(gdb) bt
#0 0x00007fbd9b8653db in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
#1 0x00007fbd9c5c2a16 in ngx_ssl_remove_cached_session (ssl=0x0, sess=0x7fbd9eb7ccf0) at src/event/ngx_event_openssl.c:2698
#2 0x00007fbd9c5d3633 in ngx_http_process_request (r=r@entry=0x7fbd9e67d6b0) at src/http/ngx_http_request.c:1902
#3 0x00007fbd9c5d3a2a in ngx_http_process_request_headers (rev=rev@entry=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:1358
#4 0x00007fbd9c5d3ceb in ngx_http_process_request_line (rev=rev@entry=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:1031
#5 0x00007fbd9c5d4092 in ngx_http_wait_request_handler (rev=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:506
#6 0x00007fbd9c5d4142 in ngx_http_ssl_handshake_handler (c=0x7fbd9ec7b4c0) at src/http/ngx_http_request.c:814
#7 0x00007fbd9c5c1714 in ngx_ssl_handshake_handler (ev=<optimized out>) at src/event/ngx_event_openssl.c:1389
#8 0x00007fbd9c5beb6d in ngx_epoll_process_events (cycle=<optimized out>, timer=<optimized out>, flags=<optimized out>) at src/event/modules/ngx_epoll_module.c:902
#9 0x00007fbd9c5b6102 in ngx_process_events_and_timers (cycle=cycle@entry=0x7fbd9ec39cd0) at src/event/ngx_event.c:242
#10 0x00007fbd9c5bcdb4 in ngx_worker_process_cycle (cycle=cycle@entry=0x7fbd9ec39cd0, data=data@entry=0x2) at src/os/unix/ngx_process_cycle.c:749
#11 0x00007fbd9c5bb473 in ngx_spawn_process (cycle=cycle@entry=0x7fbd9ec39cd0, proc=0x7fbd9c5bcd3a <ngx_worker_process_cycle>, data=0x2, name=0x7fbd9c64b42d "worker process", respawn=respawn@entry=4) at src/os/unix/ngx_process.c:198
#12 0x00007fbd9c5bd818 in ngx_reap_children (cycle=0x7fbd9ec39cd0) at src/os/unix/ngx_process_cycle.c:621
#13 ngx_master_process_cycle (cycle=0x7fbd9ec39cd0) at src/os/unix/ngx_process_cycle.c:174
#14 0x00007fbd9c5988a0 in main (argc=<optimized out>, argv=<optimized out>) at src/core/nginx.c:375

I attached the ca.crt. It is self-signed, with not all fields filled
out.

Please advise if I should do any more testing.

Cheers,
Thomas
-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAJiHhD7iXgUPMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV
BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX
aWRnaXRzIFB0eSBMdGQwHhcNMTcwNTEyMTEzNTQyWhcNMjcwNTEwMTEzNTQyWjBF
MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50
ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB
CgKCAQEAtEfckobyI1uk4n+rqJUiVjKhGt3e98zjGaAZQ49S1Lc+0ZRm5Pch9c7N
koscg6UiR7xPIuGl6GeqRar6vsoSeLXK1ZOA2pEDgRznrISB2NC8kuNL/GQG+Qey
VeVj+to/pi+y3zL7vSX68iM3L8Kn6Ekh5qlOA2f7Jf7ie8evlKx3uLIMiBEddpUz
JJHcNLxIpqXHJbHziyXXrXdFvNm7P34/Qr0ZEu8wPj9qUJbMd/FQ3t5DCDgC5R6w
9P8Mb/yD8EXATRPf0z4LBUmomNvnYgI2azCrxciGwhrwj3w5BQl0Vz5h2tewQjMf
clMkQKu5/6ATJ1SbMXNpLt+rBOPFyQIDAQABo1AwTjAdBgNVHQ4EFgQUBUoxjdMM
JB989mnoEHmEnjOfQjAwHwYDVR0jBBgwFoAUBUoxjdMMJB989mnoEHmEnjOfQjAw
DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAmBfdIoSWvxsrHeRoXSHR
4x/Ec/Y/UF9Zc42RouDhtki8MnFz2HY9BqpMpRY87ECEnTgqzoUEgQe2sd3B1fu8
sfKZ0VSxoWX6ltVK9oB+ThSe1bOQesNrzBjj42d+wHAfjNUBjEEpvmvClu2sl4XF
vwxkRUvDh/zCdnCKp549fhjuBGZYy+I9ETgunyJ1+e7SD9zuMQhqra+HGABhAFs+
+us4gdQd8vB5SV4j0L1Ib+vjPWcO93Vybxtl2ispGt1WkzLYgtaYQ9KsAnP3LMoS
lQeJC2ELGblpZxkA7Lpr8hfW5e9WzK1YhnOs9N2PgUEgVLPnsD2UNpBCQSHB7/Zz
CQ==
-----END CERTIFICATE-----

Re: Re: Re: Reverse-proxying: Flask app with Bokeh server on Nginx (no replies)

>
> Message: 4
> Date: Fri, 12 May 2017 18:26:39 +0300
> From: "Reinis Rozitis" <r@roze.lv>
> To: <nginx@nginx.org>
> Subject: Re: Re:Reverse-proxying: Flask app with Bokeh server on Nginx
> Message-ID: <437D05EFD1A24D9292CCE7BE45B2127C@Neiroze>
> Content-Type: text/plain; format=flowed; charset="UTF-8";
> reply-type=original
>
> > 3. in the Flask app, I changed the URL
> > to:url='https://138.197.132.46:5006/bokeh/'
> > Now, when I open the app in the browser I get a 502 Bad Gateway, and the
> > Flask log file says the following:
> > raise IOError("Cannot pull session document because we failed to connect
> > to the server (to start the server, try the 'bokeh serve' command)")
>
> Well seems that the Flask app uses the url also for background requests.
>
> You can't mix 'https://' and :5006 port in same url - this way the
> request
> goes to port 5006 but it expects to be also encrypted but if I understand
> correctly bokeh doesn't support SSL.
>
>
> p.s. for best performance you could tweak that the Flask->bokeh requests go
> through http but for the html template/output sent to clients there is
> another variable or relative paths.
>
>
> rr
>
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 12 May 2017 18:33:47 +0300
> From: "Reinis Rozitis" <r@roze.lv>
> To: <nginx@nginx.org>
> Subject: Re: Re:Reverse-proxying: Flask app with Bokeh server on Nginx
> Message-ID: <24A35BED74E7436B9515F58950D01034@Neiroze>
> Content-Type: text/plain; format=flowed; charset="UTF-8";
> reply-type=original
>
> What I forgot to add you need to change the 'url' (note the domain part)
> to:
>
> url='https://yourdomain/bokeh/'
>
> by looking at your error messages it seems that the 'url' is also directly
> used for client requests (probably placed in the html templated) - which
> means you can't use plain IP because then the browser most likely will just
> generate a SSL certificate and domain mismatch.
>
>
> rr
>


Thanks for answering again.

I followed your advice and changed the Flask app script so that I have one
URL to pull the Bokeh session and another one to create the HTML script:

def company_abc():
    url = 'http://127.0.0.1:5006/bokeh'
    session = pull_session(url=url, app_path="/company_abc")
    url_https = 'https://www.example.com'
    bokeh_script = autoload_server(None, app_path="/company_abc",
                                   session_id=session.id, url=url_https)
    return render_template("company_abc.html", bokeh_script=bokeh_script)


This, however, results in the following error in Chrome:

GET
https://www.geomorphix.net/geomorphix/autoload.js?bokeh-autoload-element=dd
…6035f61fef5e&bokeh-session-id=hLR9QX79ofSg4yu7DZb1oHFdT14Ai7EcVCyh1iArcBf5


There is no further explanation, and neither the Flask nor the Bokeh log
files contain error messages.



> ------------------------------
>
> Message: 6
> Date: Fri, 12 May 2017 21:46:30 +0100
> From: Francis Daly <francis@daoine.org>
> To: J K via nginx <nginx@nginx.org>
> Cc: J K <cas.xyz@googlemail.com>
> Subject: Re: Reverse-proxying: Flask app with Bokeh server on Nginx
> Message-ID: <20170512204630.GC10157@daoine.org>
> Content-Type: text/plain; charset=us-ascii
>
> On Fri, May 12, 2017 at 04:28:12PM +0200, J K via nginx wrote:
>
> Hi there,
>
> > > location /bokeh/ {
> > > proxy_pass http://127.0.1.1:5006;
> > >
> > > # .. with the rest of directives
> > > }
> > >
> > > relaunch the Bokeh app with
> > >
> > > --prefix=/bokeh/
> > >
> > > and (if takes part in the url construction rather than application
> > > background requests) change the url variable in the Flask app
> > >
> > > url='http://###.###.###.##:5006'
> > > to
> > > url='https://yourserver/bokeh/'
>
> > 1. in '/etc/nginx/sites-available/default' I added a new location as
> follow:
> >
> > location /bokeh/ {
> >
> > proxy_pass http://127.0.0.1:5006; # you suggested 127.0.1.1, but I figured that was a typo
>
> The proxy_pass address should be wherever your "bokeh" http server is
> actually listening.
>
> Which probably means that whatever you use up there...
>
> > command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py
> > geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com
> > --allow-websocket-origin=example.com --host=138.197.132.46:5006
> > --use-xheaders
>
> you should also use up there as --host.
>
> I suspect that making them both be 127.0.0.1 will be the easiest
> way of reverse-proxying things; but I also suspect that the
> "--allow-websocket-origin" part suggests that you may want to configure
> nginx to reverse proxy the web socket connection too. Notes are at
> http://nginx.org/en/docs/http/websocket.html
>
> It will be helpful to have a very clear picture of what talks to what,
> when things are working normally; that should make it easier to be
> confident that the same links are in place with nginx in the mix.
>
>
Hi Francis,

Thanks for your answer!

As you suggested, I did the following:

1. in '/etc/supervisor/conf.d/bokeh_serve.conf' I changed the host to
127.0.0.1:

[program:bokeh_serve]
command=/opt/envs/virtual/bin/bokeh serve company_abc.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com --allow-websocket-origin=example.com --host=127.0.0.1:5006 --use-xheaders
directory=/opt/webapps/flask_telemetry
autostart=false
autorestart=true
startretries=3
user=nobody

2. I configured nginx to reverse-proxy the web socket connection by adding
the following lines to each location block in
'/etc/nginx/sites-available/default':

proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

3. In the Flask web app code I changed the URL of the route accordingly to
127.0.0.1:

@app.route("/company_abc/")
@login_required
@roles_accepted('company_abc', 'admin')
def geomorphix():
    url = 'http://127.0.0.1:5006/bokeh'
    session = pull_session(url=url, app_path="/company_abc")
    bokeh_script = autoload_server(None, app_path="/geomorphix",
                                   session_id=session.id, url=url)
    return render_template("geomorphix.html", bokeh_script=bokeh_script)


When I enter the website with the Bokeh script in my browser, I get a
connection refused error:

GET http://127.0.0.1:5006/bokeh/example/autoload.js?bokeh-autoload-element=…
9cf799610fb8&bokeh-session-id=8tvMFfJwtVFccTctGHIRPPsT3h6IF6nUFkJ8l6ZQALXl
net::ERR_CONNECTION_REFUSED

Looking at the log file of the Bokeh server, everything seems to be fine:

2017-05-15 08:56:19,267 Starting Bokeh server version 0.12.4
2017-05-15 08:56:19,276 Starting Bokeh server on port 5006 with
applications at paths ['/company_abc']
2017-05-15 08:56:19,276 Starting Bokeh server with process id: 28771
2017-05-15 08:56:24,530 WebSocket connection opened
2017-05-15 08:56:25,304 ServerConnection created

Also the Flask log file shows no error:

[2017-05-15 08:56:13 +0000] [28760] [INFO] Starting gunicorn 19.6.0
[2017-05-15 08:56:13 +0000] [28760] [INFO] Listening at: http://0.0.0.0:8118
(28760)
[2017-05-15 08:56:13 +0000] [28760] [INFO] Using worker: sync
[2017-05-15 08:56:13 +0000] [28765] [INFO] Booting worker with pid: 28765



The Nginx error log '/var/log/nginx/flask/error.log' is empty.

Reload of NGinX doesn't kill some of the older worker processes (no replies)

I am facing an issue where, once I issue a reload to the NGinX binary, a few of the older worker processes do not die. They remain orphaned.

This is the configuration before issuing a reload:

[poduser@ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx
poduser 12540 22030 0 06:39 ? 00:00:00 nginx: worker process
poduser 12541 22030 0 06:39 ? 00:00:00 nginx: worker process
poduser 12762 11601 0 06:41 pts/0 00:00:00 grep nginx
poduser 22030 1 1 May12 ? 00:49:01 nginx: master process /u01/app/Oracle_Nginx/sbin/nginx
poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process
poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process

Configuration after issuing a reload:

[poduser@ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx
poduser 13280 22030 2 06:45 ? 00:00:00 nginx: worker process
poduser 13281 22030 2 06:45 ? 00:00:00 nginx: worker process
poduser 13323 11601 0 06:45 pts/0 00:00:00 grep nginx
poduser 22030 1 1 May12 ? 00:49:02 nginx: master process /u01/app/Oracle_Nginx/sbin/nginx
poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process
poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process

If you notice, there are two orphaned worker processes with PIDs 23528 and 24950. Could someone please explain why a few of the worker processes are left orphaned?
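Old workers normally stay alive after a reload only while they still hold open connections (e.g. websockets or long polling) and exit once those close. Since nginx 1.11.11 there is a directive to cap that grace period; a sketch, with an illustrative value:

```nginx
# force old workers to close lingering connections and exit
# at most 30s after a reload (available since nginx 1.11.11)
worker_shutdown_timeout 30s;
```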

behavior of cache manager in version 1.10.3 (no replies)

Hi,

The documentation for proxy_cache_path states:
The data is removed in iterations configured by manager_files, manager_threshold, and manager_sleep parameters (1.11.5).

I was wondering what the behavior of the cache manager was prior to release 1.11.5 (specifically, in version 1.10.3).
How often does the cache manager wake up to clean?

Thanks
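For comparison, since 1.11.5 the iteration is tunable; a sketch showing the documented defaults for the three parameters (the path and zone names are placeholders, not taken from the original post):

```nginx
# A sketch for nginx 1.11.5+; the manager_* values shown are the
# documented defaults: at most 100 files deleted per iteration, each
# iteration limited to 200ms, with a 50ms pause between iterations.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=one:10m
                 max_size=10g inactive=60m
                 manager_files=100 manager_threshold=200ms manager_sleep=50ms;
```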

worker_rlimit_nofile is for total of all worker processes or single worker process? (1 reply)

Hello

I'm confused whether the worker_rlimit_nofile directive applies to the total across all worker processes or to a single worker process. As far as I know, worker_connections is per worker process. Say I have two worker processes and worker_connections 512; should I then set worker_rlimit_nofile to 512 or 1024?

Thanks
Xiaofeng
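For reference: `worker_rlimit_nofile` sets the RLIMIT_NOFILE limit per worker process, so it is not summed across workers. A minimal sketch of the relationship (the numbers are illustrative):

```nginx
# A sketch: the limit applies to each worker process individually,
# so it does not need to be multiplied by worker_processes. When
# proxying, one client connection can use two descriptors (client
# side plus upstream side), so leave headroom above worker_connections.
worker_processes 2;
worker_rlimit_nofile 1024;   # per worker, not 2 x 512 summed

events {
    worker_connections 512;
}
```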

Not having resume ability on secure links (1 reply)

Hello my friends
My problem is that resume (byte-range) support works on a direct link, but when I change it to a secure link this ability stops working. For example:

Direct link
http://www.mydomain.com/uploads/myfolder/1.rar

change to Secure link
http://www.mydomain.com/vfm-admin/vfm-downloader.php?q=dXBsb2Fkcy8xMS4yMi42My8xLnJhcg==&h=4737ee045e8b3b9be5d1fa4caf7de8e9&sh=7774b265a13e0664872a2bbf9d00b40f

What do you think?
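A likely cause: resume relies on HTTP Range requests, and a PHP downloader script that reads and echoes the file usually ignores the Range header, so partial downloads are lost. One alternative, sketched below under assumptions (the `/uploads/` path and the `mysecret` key are placeholders), is to let nginx serve the file itself and protect it with ngx_http_secure_link_module, so byte ranges keep working:

```nginx
# A sketch using ngx_http_secure_link_module: nginx serves the file
# directly, so Range requests (and therefore download resume) work.
# "mysecret" and the URI layout are assumptions for illustration.
location /uploads/ {
    secure_link $arg_h,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri mysecret";

    if ($secure_link = "")  { return 403; }  # hash missing or wrong
    if ($secure_link = "0") { return 410; }  # link expired
}
```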

UDP Load balancer does not scale (no replies)

Hi

I am trying to set up a UDP load balancer using nginx. Initially, I configured 4 upstream servers, with two server processes running on each of them.
This gave a throughput of around 24,000 queries per second when tested with dnsperf. When I try to add two more upstream servers, the throughput does not increase as expected. In fact, it deteriorates to around 5,000 queries per second, with the following errors:

[warn] 5943#0: *10433175 upstream server temporarily disabled while proxying connection, udp client: xxx.xxx.xxx.29, server: 0.0.0.0:53, upstream: "xxx.xxx.xxx.224:53", bytes from/to client:80/0, bytes from/to upstream:0/80
[error] 5943#0: *10085077 no live upstreams while connecting to upstream, udp client: xxx.xxx.xxx.224, server: 0.0.0.0:53, upstream: "dns_upstreams", bytes from/to client:80/0, bytes from/to upstream:0/0

I understand that the above error appears when nginx doesn't receive responses from an upstream in time, and the upstream is temporarily marked as unavailable. I used to get this error even with 4 upstream servers, but it was resolved after adding the following configuration:

user nginx;
worker_processes 4;
worker_rlimit_nofile 65535;

load_module "/usr/lib64/nginx/modules/ngx_stream_module.so";

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 10240;
}

stream {
    upstream dns_upstreams {
        server xxx.xxx.xxx.0:53 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.0:6363 max_fails=2000 fail_timeout=0s;
        server xxx.xxx.xxx.187:53 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.187:6363 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.183:53 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.183:6363 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.212:53 max_fails=2000 fail_timeout=30s;
        server xxx.xxx.xxx.212:6363 max_fails=2000 fail_timeout=30s;
    }

    server {
        listen 53 udp;
        proxy_pass dns_upstreams;
        proxy_timeout 1s;
        proxy_responses 1;
    }
}

Even though this configuration works fine with 4 upstream servers, it doesn't help when I increase the number of servers.

The nginx server has enough memory and CPU capacity remaining when running with 4 upstream servers as well as with 6. The dnsperf client is not a bottleneck here, because it can generate much more load in a different setup. Also, each individual upstream server can serve a bit more than 5,000 requests per second on its own.

I am trying to get some hints about why I observe more upstream failures, and eventual unavailability, when I add more servers. If anybody has faced a similar issue in the past and can give me some pointers to solve it, that would be of great help.

Thanks,
Ajmal

Auto refresh for expired content? (1 reply)

Hi Folks,

I'm using nginx as a proxy for my mobile app, which has worked pretty well so far!

My main cache has the following config:
proxy_cache_path /var/cache/nginx/spieldaten levels=1:2 keys_zone=spieldaten:100m max_size=150m inactive=5m use_temp_path=off;

proxy_cache_valid 200 302 5m;

If a request is not cached by nginx, it can take about 5 seconds until the response comes back from the configured proxy_pass backend; once it is cached, it's below 1 second :)

Now my question: is it possible for nginx to automatically fetch a fresh copy from the proxy_pass backend when it recognizes that a cached response has expired, so that no user ever has to wait about 5 seconds for fresh content and everyone always gets fast data from the nginx cache?

Thanks in advance!

Regards,
Maik
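This is what stale-while-revalidate style caching does in nginx. A sketch, assuming nginx 1.11.10 or later (where `proxy_cache_background_update` was introduced) and reusing the `spieldaten` zone from the post:

```nginx
# A sketch for nginx 1.11.10+: serve the stale cached copy
# immediately while a background subrequest refreshes the cache
# entry, so no client waits the ~5s origin response time.
proxy_cache spieldaten;
proxy_cache_valid 200 302 5m;
proxy_cache_use_stale updating error timeout;
proxy_cache_background_update on;
```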

request_buffering gotchas? (2 replies)

Hi!

I'm new to nginx, and working on streaming an upload using a multipart form through nginx to uwsgi and on to my app. My client posts the request, and I expect nginx to begin forwarding it to uwsgi as soon as data starts coming in... but no matter what I do, uwsgi is not called until after the upload has buffered in nginx. The entire upload completes, and only *then* is the uwsgi call made.

I am sure the correct option is

uwsgi_request_buffering off;

...but I'm wondering if there are any other requirements or options that need to be set, or gotchas that might be thwarting my attempt to make this work. It ain't working and I'm at my wit's end.

Thanks in advance for any advice!

~Sean
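For reference, `uwsgi_request_buffering off` (available since nginx 1.7.11) is indeed the relevant directive, and it must be set in the location that does the `uwsgi_pass`. A minimal sketch under assumptions (the path and socket are placeholders, not from the original post):

```nginx
# A sketch: disable request buffering for the upload location only.
# The socket path and URI are placeholders for illustration.
location /upload {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/app.sock;
    uwsgi_request_buffering off;
    # with buffering off, nginx still reads the body through this
    # buffer, but forwards each chunk to uwsgi immediately
    client_body_buffer_size 16k;
}
```

Note that the application server must also be willing to read the body incrementally; if uwsgi itself buffers the whole request before invoking the app, the effect is the same as nginx buffering.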

Occasional successful upstreamed requests that don't get picked up (1 reply)

Hello, I believe I have a tuning issue with NGINX - hoping someone can point me in the right direction!

About 1 in 60,000 requests being proxied through Kong/NGINX are timing out. These requests are getting to the upstreamed host and are successfully logged in the load balancer in front of this upstreamed host. So either there's a network issue between that load balancer and NGINX or NGINX is simply not able/willing to process the response.

Assuming this is an NGINX tuning issue, these are my settings (note these hosts have 32 cores). Traffic is not all that high, less than 10 req/sec per instance and requests are usually satisfied in less than a second:

worker_processes auto;
worker_priority -10;
daemon off;
pid pids/nginx.pid;
error_log logs/error.log debug;

worker_rlimit_nofile 20000;

events {
    worker_connections 10000;
    multi_accept on;
}

Almost all other config settings are at their default values. There's nothing in the Kong logs indicating these dropped responses were processed by Kong, and there's no indication there aren't enough workers. These timeouts do not happen in clusters; they are more like singletons.

Any advice on things I should look at or diagnosis possibilities? Thanks very much, Ryan
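One way to narrow this down is to log per-request timing, so a timed-out request shows whether nginx ever connected to the upstream and where the time went. A sketch, assuming nginx 1.9.2 or later (for `$upstream_connect_time`); the format name and log path are placeholders:

```nginx
# A sketch: a "-" in uct for a timed-out request would point at a
# connect-level (network) problem, while a large urt with a small
# uct would point at a slow or lost response.
log_format timing '$remote_addr "$request" $status '
                  'rt=$request_time uct=$upstream_connect_time '
                  'uht=$upstream_header_time urt=$upstream_response_time '
                  'upstream=$upstream_addr';
access_log logs/timing.log timing;
```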

nginx binaries with auth_request module (1 reply)

Hi

Is there any Linux binary version of nginx built *with* the http_auth_request_module? The documentation says the source has to be compiled with a special flag, but it seems that the Windows build already includes it.

Thanks.
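For reference: you can check whether a given binary was built with the module via `nginx -V 2>&1 | grep http_auth_request_module`; many distribution and nginx.org packages do include it. Once present, usage looks roughly like this sketch (the `/auth` endpoint and backend address are placeholders):

```nginx
# A sketch of auth_request usage: each request to /protected/ is
# allowed only if the internal subrequest to /auth returns 2xx.
location /protected/ {
    auth_request /auth;
}

location = /auth {
    internal;
    proxy_pass http://127.0.0.1:9000/verify;
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
```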


why doesn't reuseport increase throughput? (no replies)

Hello

The article https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/ shows that the reuseport feature, new in v1.9.1, can increase QPS 2-3x compared to accept_mutex on or off. But the result was disappointing when we tested it in our production environment with v1.11.2.2. Not only was there no improvement, throughput dropped by 10%, from 42K QPS (with accept_mutex off) to 38K QPS (with reuseport enabled), although latency did improve. The two test cases are identical except that the latter has reuseport enabled. I wonder if I have missed some special configuration.

Thanks.
Xiaofeng
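One thing worth double-checking: `reuseport` is set per listen socket, once per address:port pair, and with it enabled each worker gets its own listening socket, so `accept_mutex` no longer applies. A minimal sketch (the port and server_name are placeholders):

```nginx
# A sketch: reuseport goes on the listen directive itself. With it
# enabled, the kernel distributes incoming connections across the
# per-worker listening sockets; gains depend on whether accept
# contention was actually the bottleneck on your kernel.
server {
    listen 80 reuseport;
    server_name example.com;
}
```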

Upstream block: backup with max_fails=0 does not appear to work as expected (no replies)

Hello,

I have an upstream block with two servers as follows:

upstream {
    server foo.com;
    server bar.com max_fails=0 backup;
}

My desired use case is that foo.com is hit for all requests and can be marked as down by nginx if it starts serving errors. In that case nginx should fall back to hitting bar.com; however, bar.com should never be allowed to be marked down by nginx.

What is actually happening is that the "max_fails=0" setting is essentially
being ignored, causing the error message "no live upstreams while connecting
to upstream" in my logs.

Is there a configuration that achieves my desired use case?

Thank you,
~Jonathan
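One alternative worth sketching: instead of a backup server inside the same upstream block, retry failed requests against a second upstream via `error_page`, so bar.com is never subject to the primary group's failure accounting at all. The upstream names and port below are placeholders, not from the original post:

```nginx
# A sketch: requests go to "primary"; if it yields a gateway error,
# the named location retries against "fallback", which nginx never
# marks down because it is only reached on explicit retry.
upstream primary  { server foo.com; }
upstream fallback { server bar.com; }

server {
    listen 80;

    location / {
        proxy_pass http://primary;
        # intercept 5xx bodies returned by the upstream, not only
        # connect failures, so error_page applies to both
        proxy_intercept_errors on;
        error_page 502 503 504 = @fallback;
    }

    location @fallback {
        proxy_pass http://fallback;
    }
}
```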

--
Jonathan Simowitz | Jigsaw | Software Engineer | simowitz@google.com |
631-223-8608