Channel: Nginx Forum - Nginx Mailing List - English

Logfile formatting (1 reply)

I'm currently looking at swapping out some of our Apache web servers for
Nginx to act as a reverse proxy.

One of my issues is that I need, at least in the short term, for the log
format to remain the same.

I have two issues that are cropping up.

The first is that with my current configuration I am getting the following
error if I try to start nginx:

nginx: [emerg] unknown "bytes_received" variable

I am using the latest version available in the nginx repo:

# nginx -V
nginx version: nginx/1.14.0
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-18) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-compat --with-file-aio --with-threads --with-http_addition_module
--with-http_auth_request_module --with-http_dav_module
--with-http_flv_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_mp4_module
--with-http_random_index_module --with-http_realip_module
--with-http_secure_link_module --with-http_slice_module
--with-http_ssl_module --with-http_stub_status_module
--with-http_sub_module --with-http_v2_module --with-mail
--with-mail_ssl_module --with-stream --with-stream_realip_module
--with-stream_ssl_module --with-stream_ssl_preread_module
--with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic -fPIC'
--with-ld-opt='-Wl,-z,relro -Wl,-z,now -pie'

Secondly, I am unable to find equivalents for the %f, %R, and %l Apache log format specifiers.

This is the log file I am trying to replicate:
LogFormat
"%v,%V,%h,%l,%u,%t,\"%m\",\"%U\",\"%q\",\"%H\",\"%{UNIQUE_ID}e\",%>s,\"%{Referer}i\",\"%{User-Agent}i\",\"%{SSL_PROTOCOL}x\",\"%{SSL_CIPHER}x\",%p,%D,%I,%O,%B,\"%R\",\"%f\""
vhostcombined

and what I have so far:

log_format proxylog
'$server_name,$hostname,$remote_addr,-,$remote_user,[$time_local],'
'"$request_method","$request_uri","$query_string",'
'"$server_protocol","$request_id","$status","$http_referer,"'
'"$http_user_agent","$ssl_protocol","$ssl_cipher",$server_port,'
'$request_time,$bytes_received,$bytes_sent,"proxy-server"';
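As far as I know, $bytes_received is a stream-module variable and is not defined in the http context, which would explain the [emerg]. In http, $request_length holds the size of the full request (request line, headers, and body), so a rough approximation of the same format might be (this also moves the stray comma out of the $http_referer field):

```
log_format proxylog
    '$server_name,$hostname,$remote_addr,-,$remote_user,[$time_local],'
    '"$request_method","$request_uri","$query_string",'
    '"$server_protocol","$request_id","$status","$http_referer",'
    '"$http_user_agent","$ssl_protocol","$ssl_cipher",$server_port,'
    '$request_time,$request_length,$bytes_sent,"proxy-server"';
```

Note that $request_time is in seconds with millisecond resolution, while Apache's %D is in microseconds, so those two fields are not directly comparable.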

Any pointers for the above issues would be gratefully received.
--
Callum
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Nginx Lua caching and removing unwanted arguments for a higher HIT ratio (no replies)

So my issue is mostly directed towards Yichun Zhang (agentzh) if he is still active here. I hope so.


My problem is that I am trying to increase my cache HIT ratio by removing fake/unwanted arguments from the URL and ordering the remaining arguments alphabetically (the same order every time), for a higher cache HIT ratio.

Here is my code.

location ~ \.php$ { ## within the PHP location block

    ## Create fastcgi_param vars

    # remove duplicate values like index.php?variable=value&&&&& etc. from URLs for a higher cache hit ratio
    set_by_lua_block $cache_request_uri {
        local function fix_url(s, C)
            for c in C:gmatch(".") do
                s = s:gsub(c .. "+", c)
            end
            return s
        end

        return string.lower(fix_url(ngx.var.request_uri, "+/=&?;~*@$,:"))
    }

    # TODO: order request body variables so they are in the same order, for a higher cache hit ratio
    set_by_lua_block $cache_request_body { return ngx.var.request_body }

    # Order argument variables for a higher cache hit ratio and remove any custom arguments that users may be using to bypass the cache in a DoS attempt.
    set_by_lua_block $cache_request_uri {
        ngx.log(ngx.ERR, "before error: ", ngx.var.request_uri)
        ngx.log(ngx.ERR, "test: ", ngx.var.uri)
        local function has_value(tab, val)
            for index, value in ipairs(tab) do
                -- We grab the first index of our sub-table instead
                if string.lower(value) == string.lower(val) then
                    return true
                end
            end

            return false
        end

        -- Anti-DDoS: remove arguments from URLs
        local args = ngx.req.get_uri_args()
        local remove_args_table = { -- table of blacklisted arguments to remove from the URL, to stop DoS attempts and increase the cache HIT ratio
            "rnd",
            "rand",
            "random",
            "ddos",
            "dddddooooossss",
            "randomz",
        }
        for key, value in pairs(args) do
            if has_value(remove_args_table, value) then
                --print 'Yep'
                --print(value .. " ")
                ngx.log(ngx.ERR, "error: ", key .. " | " .. value)
                args[key] = nil -- remove the argument from the args table
            else
                --print 'Nope'
            end
        end
        --ngx.req.set_uri_args(args)
        --for k,v in pairs(args) do --[[print(k,v)]] ngx.log(ngx.ERR, "error: ", k .. " | " .. v) end
        ngx.log(ngx.ERR, "after error: ", ngx.var.request_uri)
        --return ngx.req.set_uri_args(args)
        return ngx.var.uri .. args
        -- Anti-DDoS: remove arguments from URLs
    }

    fastcgi_cache microcache;
    fastcgi_cache_key "$scheme$host$cache_request_uri$request_method$cache_request_body";
    fastcgi_param REQUEST_URI $cache_request_uri; # need to make sure the web application's URI has been modified by Lua
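One possible sketch for the argument-ordering part of the question, assuming the lua-nginx-module `set_by_lua_block` context: sort the argument names alphabetically and rebuild the query string, so equivalent URLs always yield the same cache key (the variable name `$normalized_request_uri` is illustrative, and the serialization here does no percent-escaping):

```
# sketch (assumes lua-nginx-module): canonicalize the query string by
# sorting argument names so the fastcgi_cache_key is stable
set_by_lua_block $normalized_request_uri {
    local args = ngx.req.get_uri_args()
    local names = {}
    for name in pairs(args) do
        names[#names + 1] = name
    end
    table.sort(names)  -- same order every time

    local parts = {}
    for _, name in ipairs(names) do
        local value = args[name]
        if type(value) == "table" then       -- repeated argument: a=1&a=2
            value = table.concat(value, ",")
        elseif value == true then            -- bare argument without a value
            value = ""
        end
        parts[#parts + 1] = name .. "=" .. value
    end

    if #parts == 0 then
        return ngx.var.uri
    end
    return ngx.var.uri .. "?" .. table.concat(parts, "&")
}
```

Sorting explicitly matters because, as far as I know, neither the args table iteration order nor ngx.encode_args guarantees a stable ordering.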

Nginx runs out of memory with large value for 'keepalive_requests' (no replies)

Hi all,
I'm using nginx as a reverse proxy to a service (A). nginx receives a large number of persistent connections from a single client service (B).
Service B sends a lot of requests (2K rps) over these persistent connections.

The amount of memory nginx uses seems to increase as a function of 'keepalive_requests 2147483647'. The memory used keeps rising until the machine runs out of memory (4 GB, AWS instance), while a smaller 'keepalive_requests 8192' doesn't exhibit the problem.

Some additional observations:
When I reload nginx the memory usage comes down and then slowly starts building up.
When I test nginx with the Gatling test tool as a client, this behaviour is not observed.
When I use the actual service(B), this behaviour seems to reappear.

I'm curious to know what exactly is happening and how I can fix this issue of high memory usage.

my nginx server side configuration looks like:

server {
    listen 443 ssl default_server;
    ...
    ...

    location / {
        # keepalive_timeout 14400s;
        # keepalive_requests 2147483647; ----> over 10 hrs, memory usage goes to 4 GB

        keepalive_timeout 600s;
        keepalive_requests 8192;

        proxy_pass http://ingress;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
    ..
}
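As far as I know, nginx makes some per-request allocations from the connection pool that are only released when the connection is closed, so on effectively unlimited keepalive connections that memory accumulates with every request; keepalive_requests exists partly to bound that growth by recycling connections periodically. A sketch of a bounded compromise (the numbers are illustrative, not recommendations):

```
location / {
    # keep connections long-lived, but recycle them often enough that
    # per-connection memory cannot grow without bound
    keepalive_timeout 3600s;
    keepalive_requests 10000;

    proxy_pass http://ingress;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```

This would also be consistent with the observation that memory drops on reload: the reload closes the old connections and frees their pools.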


Thanks for all the help,

Unit 1.3 release (no replies)

Hello,

I'm glad to announce a new release of NGINX Unit.

Changes with Unit 1.3 13 Jul 2018

*) Change: UTF-8 characters are now allowed in request header field
values.

*) Feature: configuration of the request body size limit.

*) Feature: configuration of various HTTP connection timeouts.

*) Feature: Ruby module now automatically uses Bundler where possible.

*) Feature: http.Flusher interface in Go module.

*) Bugfix: various issues in HTTP connection errors handling.

*) Bugfix: requests with body data might be handled incorrectly in PHP
module.

*) Bugfix: individual PHP configuration options specified via control
API were reset to previous values after the first request in
application process.


Here's an example configuration with new parameters:

{
    "settings": {
        "http": {
            "header_read_timeout": 30,
            "body_read_timeout": 30,
            "send_timeout": 30,
            "idle_timeout": 180,
            "max_body_size": 8388608
        }
    },

    "listeners": {
        "127.0.0.1:8034": {
            "application": "mercurial"
        }
    },

    "applications": {
        "mercurial": {
            "type": "python 2",
            "module": "hgweb",
            "path": "/data/hg"
        }
    }
}


All timeout values are specified in seconds.
The "max_body_size" value is specified in bytes.

Please note that the parameters of the "http" object in this example are
set to their default values. So, there's no need to set them explicitly
if you are happy with the values above.

Binary Linux packages and Docker images are available here:

- Packages: https://unit.nginx.org/installation/#precompiled-packages
- Docker: https://hub.docker.com/r/nginx/unit/tags/

Also, please follow our blog posts to learn more about new features in
the recent versions of Unit:

- https://www.nginx.com/blog/tag/nginx-unit/

wbr, Valentin V. Bartenev


env TZ : timezone setting (no replies)

Has this been addressed with a new release?

https://forum.nginx.org/read.php?2,214494,214536#msg-214536

"env TZ=Asia/Shanghai".

It still does not work for me with nginx 1.13.x on CentOS 7
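For reference, the env directive is only valid in the main (top-level) context of nginx.conf; placed inside http{} or server{} it is rejected, which is one possible reason the setting appears to have no effect:

```
# nginx.conf, main context (env is not allowed inside http{} or server{})
env TZ=Asia/Shanghai;
```

A full restart (not just a reload) may also be needed for worker processes to pick up the new environment.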

thanks,
Kamalkishor.

redirect based on file content (1 reply)

I want to have files in the filesystem that specify the response code and
redirect location instead of relying on the nginx configuration for it.

Imagine a file foo.ext looking like:

301 https://some.host.com/foo.bla

On a GET of foo.ext it should result in a 301 to
https://some.host.com/foo.bla

So far I haven't found a module for this. I presume it should not be too
terribly hard to write a module for it but maybe I missed something? So I
thought I rather double check if there is an easier route.

Any thoughts?
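One module-free route, sketched under the assumption that the per-file data can be converted (e.g. by a small script) into a map include file; the filenames and target here are taken from the example above, and the status code is fixed at 301 rather than read from the file:

```
# /etc/nginx/redirects.map, generated from the redirect files, one entry per line:
#   /foo.ext https://some.host.com/foo.bla;

map $uri $redirect_target {
    default "";
    include /etc/nginx/redirects.map;
}

server {
    listen 80;

    location / {
        if ($redirect_target) {
            return 301 $redirect_target;
        }
        # ... normal handling ...
    }
}
```

If the response code must also come from the file, a second map keyed on $uri could hold it, but serving truly arbitrary codes would likely need a module or an embedded-scripting approach.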

cheers,
Torsten
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

UDP load balancing and ephemeral ports (no replies)

Hello,

A couple of questions regarding UDP load balancing. If a UDP listener is configured to expect a response from its upstream nodes, is it possible to have another IP outside of the pool of upstream nodes send a response to the ephemeral port where nginx is expecting a response? I'm pretty sure the answer is no and the response has to come from the IP where the request was forwarded, but wanting to verify. We have a use case where another part of our backend system could possibly send that response if coded to do so, but I'm pretty sure this simply will not work.

Secondly, how long will nginx keep the ephemeral port open waiting for a response from the upstream node where the request is sent and is this configurable? It looks like proxy_responses might be helpful in quickly terminating a session after the desired number of responses are received?
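On the second question, proxy_timeout bounds how long the session (and hence the ephemeral port) stays open between datagrams, and proxy_responses ends the session as soon as the expected number of response datagrams has arrived. A sketch (the listener port and upstream addresses are illustrative):

```
stream {
    upstream udp_backend {
        server 10.0.0.10:5353;
        server 10.0.0.11:5353;
    }

    server {
        listen 5353 udp;
        proxy_pass udp_backend;
        proxy_responses 1;   # terminate the session after one response datagram
        proxy_timeout 10s;   # otherwise give up after 10s of inactivity
    }
}
```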

Thanks.

Dynamic module is not binary compatible (no replies)

Hello!

I want to add a dynamic module to nginx, but I got "module is not binary compatible".
environment:
Ubuntu 16.04
Nginx 1.12.1 (apt-get)

Nginx itself parameters:
nginx -V
nginx version: nginx/1.12.1
built with OpenSSL 1.0.2g 1 Mar 2016
TLS SNI support enabled
configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-auth-pam --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-dav-ext-module --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-echo --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/nginx-upstream-fair --add-dynamic-module=/build/nginx-aqArPM/nginx-1.12.1/debian/modules/ngx_http_substitutions_filter_module

I downloaded nginx-1.12.1.tar.gz and https://github.com/leev/ngx_http_geoip2_module.git, and then unpacked them.

$ ./configure --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -fPIC -Wdate-time -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now -fPIC' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --modules-path=/usr/lib/nginx/modules --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_v2_module --with-http_dav_module --with-http_slice_module --with-threads --with-http_addition_module --with-http_geoip_module=dynamic --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module=dynamic --with-http_sub_module --with-http_xslt_module=dynamic --with-stream=dynamic --with-stream_ssl_module --with-stream_ssl_preread_module --with-mail=dynamic --with-mail_ssl_module --add-dynamic-module=../ngx_http_geoip2_module-master
$ make modules && sudo cp objs/ngx_http_geoip2_module.so /usr/share/nginx/modules

Then I configured the load_module directive in nginx and tested:

nginx: [emerg] module "/usr/share/nginx/modules/ngx_http_geoip2_module.so" is not binary compatible in /etc/nginx/nginx.conf


Can someone help me?
Thank you in advance.

limit_req applied to upstream auth_request requests? (2 replies)

Hi, I currently have an nginx configuration that uses the limit_req directive to throttle upstream content requests. Now I'm trying to add similar rate limiting for auth requests, but I haven't been able to get the auth throttle to kick in during testing (whereas the content throttle works as expected). Is there some known limitation of using limit_req against auth_request requests, or do I simply have a problem in my configuration? Thank you.

http {

    map $request_uri $guid {
        default "unknown";
        ~^/out/(?P<id>.+?)/.+$ $id;
    }

    map $http_x_forwarded_for $last_client_ip {
        default $http_x_forwarded_for;
        ~,\s*(?P<last_ip>[\.\d]+?)\s*$ $last_ip;
    }

    limit_req_zone $guid zone=content:20m rate=500r/s;
    limit_req_zone $guid zone=auth:20m rate=100r/s;

    server {

        location /out/ {
            auth_request /auth;

            proxy_pass $upstream_server;
            proxy_cache content_cache;
            set $cache_key "${request_path}";
            proxy_cache_key $cache_key;
            proxy_cache_valid 200 301 302 10s;

            # Throttling works here <---
            limit_req zone=content burst=50 nodelay;
            limit_req_status 429;
        }

        location /auth {
            internal;

            proxy_pass_request_body off;
            proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip;

            proxy_cache auth_cache;
            set $auth_cache_key "${guid}|${last_client_ip}";
            proxy_cache_key $auth_cache_key;
            proxy_cache_valid 200 301 302 5m;
            proxy_cache_valid 401 403 404 5m;

            # Throttling seems not to work here <---
            limit_req zone=auth burst=50 nodelay;
            limit_req_status 429;
        }
    }
}

How are you managing CI/CD for your nginx configs? (1 reply)

Last year I gave a talk at nginx.conf describing some success we have had using Octopus Deploy as a CD tool for nginx configs. The particular Octopus features that make this good are

* Octopus gives us a good variable replacement / template system so that I can define a template along with variables for different environments (which really helps me ensure consistency between environments)
* Octopus has good abstractions for grouping servers into roles and environments (So say, DMZ and APP servers living in DEV, TEST, and PROD environments)
* Octopus has a good release model and great visibility of "which release is deployed to which environment". As in "1.2.2 is in dev, 1.2.1 is in test, 1.1.9 is in production"
* Octopus has good security controls so I can control who is allowed to "push the button" to deploy dev->test->prod
* Octopus can be driven via APIs and supports scripting (particularly powershell) that can be used to interact with other APIs. When I demoed this at nginx conf I was using mono on the nginx VM to invoke bash scripts.

The only problem is that Octopus is a very Windows-centric product. I'm interested in doing this same sort of management using a "linux-centric" toolchain and would be interested to hear what tool chains others might be using. Ansible? Jenkins? Puppet/Chef?

The process I describe above is what we do with servers that are relatively long-lived. I would also be curious what toolchains you've found to be effective when servers are more transient. E.g. do you build server images that have the nginx config "baked in"? Or do you stand up the VM and push configs / certs in a secondary deployment step.

Thanks!
Jason

Redirect without an SSL certificate (2 replies)

We have a problem where we have a large number of vanity domain names that are redirected. For example we have surgery.yale.edu which redirects to medicine.yale.edu/surgery. This works fine until someone tries to request https://surgery.yale.edu. For administrative reasons, I cannot get a wildcard certificate to handle *.yale.edu and make this simple to solve.

My question is whether there is any way to redirect requests on both port 80 and port 443 while bypassing the SSL certificate warning. I would assume the order of operations with HTTPS is to first validate the certificate, but I really want the 301 redirect to take place before the SSL cert is verified.

I’m open to ideas but we are limited in what we can actually do so as it stands the only solution we have is to request a certificate for each of the 600+ domains.

___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.edu/


Upload large files via Nginx reverse proxy (no replies)

We have Nginx as a reverse proxy in front of a Pydio backend running on Apache2. We are attempting to upload a 50 GB file. The Nginx server, 1.14.0, attempts to write the file locally to /var/lib/nginx/body/ rather than sending it directly to the backend. Our proxy server has a very small disk footprint of just 13 GB.

Is there an option to send the file directly to the backend without writing locally?

Thank you.
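If the goal is to stream the body to the backend instead of buffering it to disk, proxy_request_buffering (available since nginx 1.7.11) looks like the relevant knob; a sketch, with the upstream name assumed:

```
location / {
    proxy_pass http://pydio_backend;   # assumed upstream name
    proxy_http_version 1.1;            # needed for chunked transfer upstream
    proxy_request_buffering off;       # pass the body through as it arrives
    client_max_body_size 0;            # lift the request body size limit
}
```

Note that with buffering off, nginx cannot retry the request on another upstream if the first attempt fails mid-body.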

Need help with regex (1 reply)

I have a regex that works in an online tool but when I put this into my configuration file it is not working.

The problem is that I want all URLs that start with /info to be redirected, with the exception of one unfortunately named PDF file. This regex tests perfectly in an online tool:
^/info(\/)?(?!\.pdf)
The tool shows that /information, /info, and /info/ all redirect, while /informationforFamiliesAlliesPacket_298781_284_5_v1.pdf does not.
But when I put this into action, requests for the PDF are still redirected like any other /info call.

I use a config file with a number of redirects so the full location block is simply:
location ~* ^/info(\/)?(?!\.pdf) { return 301 https://www.yalemedicine.org/conditions/; }

My thought process was to still redirect unless “.pdf” existed in the URL; in case we upload more “info….pdf” documents into the system, I didn’t want to make the exception too specific.

Any thoughts on this would be great, my regex skills are good enough most of the time but failing me right now.
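One possible explanation: the lookahead in ^/info(\/)?(?!\.pdf) only tests the characters immediately after /info (or /info/), and in /informationfor….pdf what follows /info is rmatio…, not .pdf, so the lookahead passes and the URL matches anyway. Anchoring the exclusion to the end of the URI may be closer to the intent; a sketch (assumes nginx is built with PCRE, which location ~* already requires):

```
# redirect anything starting with /info, except URIs ending in .pdf
location ~* ^/info(?!.*\.pdf$) { return 301 https://www.yalemedicine.org/conditions/; }
```

Since location regexes match against the normalized URI without the query string, the $ anchor is safe here.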


___________________________________________
Michael Friscia
Office of Communications
Yale School of Medicine
(203) 737-7932 - office
(203) 931-5381 - mobile
http://web.yale.eduhttp://web.yale.edu/


Caching result of auth_request_set? (no replies)

I'm currently using the auth_request directive and caching the result based on a guid + IP address:

>location /auth {
> internal;
>
> proxy_pass_request_body off;
> proxy_pass $upstream_server/auth?id=$guid&requestor=$last_client_ip;
>
> proxy_cache auth_cache;
> set $auth_cache_key "${guid}|${last_client_ip}";
> proxy_cache_key $auth_cache_key;
> proxy_cache_valid 200 301 302 5m;
> proxy_cache_valid 401 403 404 5m;
>
>}

It would be very convenient for me to return back a bit of metadata associated with the guid from the upstream auth request, and send that bit of metadata along with the actual request that will follow if the auth subrequest succeeds. It looks like this is possible via the auth_request_set directive, but I am not sure how auth_request_set would interact with proxy_cache.

For auth requests, is proxy_cache only caching the HTTP response code? Or is it caching the full response, including the headers that auth_request_set reads? In other words, will auth_request_set still work correctly to set a variable when the auth response is cached? Thank you!
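As far as I understand it, proxy_cache stores the complete upstream response, headers included, so the $upstream_http_* variables are populated from the cached response too and auth_request_set should keep working on cache hits. A sketch of the pattern in the protected location (the X-Auth-Meta header name is hypothetical):

```
location /out/ {
    auth_request /auth;
    # copy a response header from the (possibly cached) auth subrequest
    auth_request_set $auth_meta $upstream_http_x_auth_meta;

    proxy_pass $upstream_server;
    # forward the metadata with the actual content request
    proxy_set_header X-Auth-Meta $auth_meta;
}
```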

mail proxy (IMAP/POP3): balancing between workers (no replies)

Hi,

I run NginX as mail proxy (IMAP/POP3) and have a setup with

worker_processes 8;
worker_rlimit_nofile 32768;

events {
    worker_connections 4096;
    multi_accept on;
}


I upgraded this setup from Linux 3.16 and NginX 1.10 to Linux 4.9 and
NginX 1.14.
After this upgrade I ran into trouble: after reaching a maximum of approximately 2600 proxied connections I get the following error messages:
4096 worker_connections are not enough
or
4096 worker_connections are not enough while in http auth state


I found out that nearly all connections were proxied by the first worker process, while the other worker processes seemed to be mostly inactive.

But I want to balance the connections between the workers, otherwise
the worker_rlimit_nofile and the worker_connections are too low.


As a first workaround I defined
accept_mutex on;
whose default changed in 1.11.3 from "on" to "off".
This seems to mitigate the issue for me (at least all worker processes now use the CPU again, according to ps output). I'm not sure whether the balancing is as good as with the old setup, but it looks much better than before the workaround.


But what's the correct way to tell NginX that it should balance the connections between all worker processes? According to the manual, "accept_mutex on" isn't needed with EPOLLEXCLUSIVE, which should be active on my system with Linux 4.9 and glibc 2.24.
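For reference, the workaround described above as a complete events block (whether EPOLLEXCLUSIVE alone should make it unnecessary is exactly the open question):

```
events {
    worker_connections 4096;
    multi_accept on;
    accept_mutex on;   # serialize accept() so connections spread across workers
}
```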

Greetings
Roland

Is Nginx the ideal reverse proxy for deploying cPanel and Exchange 2016 behind one single public IP address? (no replies)

Good morning from Singapore,


Is Nginx the ideal reverse proxy for deploying cPanel and Exchange 2016 behind one single public IP address?

I am planning to use nginx to be the reverse proxy for DNS, IMAP, IMAP/S, POP3, POP3/S, SMTP, and SMTP/S protocols for cPanel and Exchange 2016 behind one single public IP address. cPanel will use one domain name and Exchange 2016 groupware will use another domain name.

Can nginx do that? Please point me to the best installation and configuration guides for all of my requirements above.

Thank you very much.
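One hedged sketch of a piece of this: for the TLS-wrapped mail protocols (IMAPS, POP3S, SMTPS), the stream module with ssl_preread can route connections on one IP by SNI to either backend. The domain names and backend addresses below are placeholders:

```
stream {
    # route TLS mail connections by SNI (placeholder names and addresses)
    map $ssl_preread_server_name $imaps_backend {
        mail.cpanel-domain.example     192.0.2.10:993;
        mail.exchange-domain.example   192.0.2.20:993;
        default                        192.0.2.10:993;
    }

    server {
        listen 993;
        ssl_preread on;
        proxy_pass $imaps_backend;
    }
}
```

This only works where the client sends SNI; plain IMAP/POP3/SMTP and DNS carry no hostname at connection time, so they cannot be disambiguated this way on a single IP and port.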


===BEGIN SIGNATURE===

Turritopsis Dohrnii Teo En Ming's Academic Qualifications as at 30 Oct 2017

[1] https://tdtemcerts.wordpress.com/
[2] http://tdtemcerts.blogspot.sg/
[3] https://www.scribd.com/user/270125049/Teo-En-Ming

===END SIGNATURE===

Have problems adding header 'Set-Cookie' to headers_out in my nginx sub request module (no replies)

**nginx version: `1.10.3`**

Here's my code to add 'Set-Cookie' to headers:

void add_headers_out(ngx_http_request_t *r, char *cookies)
{
    ngx_table_elt_t *h;
    ngx_str_t k = ngx_string("Set-Cookie");
    ngx_str_t v = ngx_string(cookies);

    h = ngx_list_push(&r->headers_out.headers);
    if (h == NULL) {
        return;
    }

    h->hash = ngx_hash_key_lc(k.data, k.len);
    h->key.len = k.len;
    h->key.data = k.data;
    h->value.len = v.len;
    h->value.data = v.data;
}
When I call `add_headers_out` in my parent request handler:

static void multipost_post_handler(ngx_http_request_t *r)
{
    ...
    ///////// fill up headers and body

    //// body
    int bodylen = body.len;
    ngx_buf_t *b = ngx_create_temp_buf(r->pool, bodylen);
    b->pos = body.data;
    b->last = b->pos + bodylen;
    b->last_buf = 1;

    ngx_chain_t out;
    out.buf = b;
    out.next = NULL;

    //// headers
    r->headers_out.content_type = myctx->content_type;
    r->headers_out.content_length_n = bodylen;
    r->headers_out.status = myctx->status_code;

    // myctx->cookie1: "PHPSESSID=1f74a78647e192496597c240de765d45;"
    add_headers_out(r, myctx->cookie1);

    // Test: checking additional headers by iterating headers_out.headers
    get_headers_out(r);
    // returns: "Set-Cookie : PHPSESSID=1f74a78647e192496597c240de765d45;"

    // Send response to client
    r->connection->buffered |= NGX_HTTP_WRITE_BUFFERED;
    ngx_int_t ret = ngx_http_send_header(r);
    ret = ngx_http_output_filter(r, &out);
    ngx_http_finalize_request(r, ret);
    return;
}
It seems there is no problem in my code, but when I use my nginx module as a reverse proxy to some sites, I find that `Set-Cookie` is different. For example, I can only see a small part of the original value, `Set-Cookie: PHPSES` (followed by nothing), in Chrome. I do not know what causes that problem. Thanks for helping!

nginx-1.15.2 (no replies)

Changes with nginx 1.15.2 24 Jul 2018

*) Feature: the $ssl_preread_protocol variable in the
ngx_stream_ssl_preread_module.

*) Feature: now when using the "reset_timedout_connection" directive
nginx will reset connections being closed with the 444 code.

*) Change: a logging level of the "http request", "https proxy request",
"unsupported protocol", and "version too low" SSL errors has been
lowered from "crit" to "info".

*) Bugfix: DNS requests were not resent if initial sending of a request
failed.

*) Bugfix: the "reuseport" parameter of the "listen" directive was
ignored if the number of worker processes was specified after the
"listen" directive.

*) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to
switch off "ssl_prefer_server_ciphers" in a virtual server if it was
switched on in the default server.

*) Bugfix: SSL session reuse with upstream servers did not work with the
TLS 1.3 protocol.


--
Maxim Dounin
http://nginx.org/

Re: [nginx-announce] nginx-1.15.2 (no replies)

Hello Nginx users,

Now available: Nginx 1.15.2 for Windows
https://kevinworthington.com/nginxwin1152
(32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin
based builds of Nginx. Officially supported native Windows binaries are at
nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
https://kevinworthington.com/
https://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jul 24, 2018 at 1:28 PM, Maxim Dounin <mdounin@mdounin.ru> wrote:

> Changes with nginx 1.15.2 24 Jul
> 2018
>
> *) Feature: the $ssl_preread_protocol variable in the
> ngx_stream_ssl_preread_module.
>
> *) Feature: now when using the "reset_timedout_connection" directive
> nginx will reset connections being closed with the 444 code.
>
> *) Change: a logging level of the "http request", "https proxy
> request",
> "unsupported protocol", and "version too low" SSL errors has been
> lowered from "crit" to "info".
>
> *) Bugfix: DNS requests were not resent if initial sending of a request
> failed.
>
> *) Bugfix: the "reuseport" parameter of the "listen" directive was
> ignored if the number of worker processes was specified after the
> "listen" directive.
>
> *) Bugfix: when using OpenSSL 1.1.0 or newer it was not possible to
> switch off "ssl_prefer_server_ciphers" in a virtual server if it was
> switched on in the default server.
>
> *) Bugfix: SSL session reuse with upstream servers did not work with
> the
> TLS 1.3 protocol.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce@nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>

question: http2_push_preload request with cookie (no replies)

Hello.

I've recently experimented with the `http2_push_preload` directive to preemptively
submit a response to an XHR request. I've noticed that in the request that nginx
performs to fetch the hinted resource, no cookies are submitted. However, Chrome
does not consider the cached response a candidate for serving the actual XHR that is later
sent by the client, which contains `withCredentials=true` and does contain cookies.

This is problematic in scenarios where cookies are required to be present. For example,
assume the following case:

- a logged in user visits page A that we know will trigger an XHR to B.json
- information about the session of the user is persisted in a cookie
- B.json can only be served to logged in users
- we want to push B.json to the client using an early hint, since we know it'll be needed

what happens now is the following:

1) Chrome requests page A; nginx responds with page A and an early hint for B.json
2) nginx requests B.json *without* sending any cookies
3) Chrome fetches the responses for A and B.json
4) Chrome performs an XHR (withCredentials=true) to fetch B.json and does not use B.json from the push cache,
since it considers it a different request altogether

My question is: how are we supposed to treat such a case? Are there any plans to support this?

Thanks in advance,

P.S. The ruby script I've used is the following and can be run with `bundle exec rackup test.rb` (requires ruby and bundler):

```
require 'rack'
require 'webrick'

XHR = "/foo.json"

body = %{
<html>
<head></head>
<body>
I'm the homepage and I'm performing an XHR

<script>
var oReq = new XMLHttpRequest();
oReq.open("GET", "#{XHR}");

// if set to true, it doesn't work in Chrome
oReq.withCredentials = false;
oReq.send();
</script>
</body>
</html>
}

require 'pp'
app = Proc.new do |env|
  puts
  if env["PATH_INFO"].include?(".json")
    ['200', {'Content-Type' => 'application/json'}, ['{"foo":"bar"}']]
  else
    ['200', {'Content-Type' => 'text/html', "Link" => "<#{XHR}>; rel=preload; as=fetch; crossorigin"}, [body]]
  end
end

Rack::Handler::WEBrick.run(app, Port: 8123)
```

OS: Darwin 17.4.0 Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 2017; root:xnu-4570.41.2~1/RELEASE_X86_64 x86_64

nginx version: nginx/1.15.1
built by clang 9.1.0 (clang-902.0.39.2)
built with OpenSSL 1.0.2o 27 Mar 2018
TLS SNI support enabled
configure arguments: --prefix=/usr/local/Cellar/nginx/1.15.1 --sbin-path=/usr/local/Cellar/nginx/1.15.1/bin/nginx --with-cc-opt='-I/usr/local/opt/pcre/include -I/usr/local/opt/openssl/include' --with-ld-opt='-L/usr/local/opt/pcre/lib -L/usr/local/opt/openssl/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/usr/local/var/run/nginx.pid --lock-path=/usr/local/var/run/nginx.lock --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/usr/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/usr/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/usr/local/var/run/nginx/uwsgi_temp --http-scgi-temp-path=/usr/local/var/run/nginx/scgi_temp --http-log-path=/usr/local/var/log/nginx/access.log --error-log-path=/usr/local/var/log/nginx/error.log --with-debug --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_degradation_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_st
atic_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-ipv6 --with-mail --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module

