Channel: Nginx Forum - Nginx Mailing List - English

Targeting homepage (not sub pages/dirs/) (1 reply)

What should I add to this directive to also target the home page (the root of
the website, e.g. website.com)?
I already have 'index.php' in the directive, but what if I visit
website.com (without /index.php)?

if ($request_uri ~*
"(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)")

I think I need to add another if, this time not with ~* though but with =
Something like

if ($request_uri = "(/)")

...?
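
For what it's worth, a minimal sketch (untested): with the "=" operator the value is compared as a literal string, not a regex, so the parentheses would be matched literally. Note also that $request_uri includes the query string, so a request like /?foo=bar would not match an exact "/".

if ($request_uri = "/") {
    # same action as in the regex block above
}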

Thanks in advance!

Open Text Summarizer Upstream Module 1.0 Release (no replies)

Hi,
I have developed a highly efficient version of OTS, the popular open-source text summarizer software. For some documents where OTS takes about 40 ms to produce a text summary, my version takes only around 8 ms. I created a service using my version that listens for summary requests and provides summaries, and I have developed an nginx upstream module for this service.
You can use it on web sites that show summaries of documents and care about performance at scale.
Performance note: the service uses select and non-blocking socket I/O for communicating with clients.
Nginx upstream module for Summarizer: https://github.com/reeteshranjan/summarizer-nginx-module
Highly efficient version of OTS: https://github.com/reeteshranjan/summarizer
Original OTS: https://github.com/neopunisher/Open-Text-Summarizer and http://libots.sourceforge.net/
Regards,
Reetesh

Rate limiting intervals (3 replies)

Hello,

With the rate limiting module you can easily rate limit based on seconds or on minutes.
What I would like to do, however, is rate limit on a 100 millisecond or 10 millisecond interval.

That way you do not get a burst of requests at the beginning of each second, but a more continuous flow of requests.

Is this possible with nginx?
If not, can anyone point me to the right location in the source where the rate limiting is actually done?

(I assume it has something to do with ngx_http_limit_req_lookup in ngx_http_limit_req_module.c)
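
For reference, the standard configuration looks like this (a sketch; rates can only be expressed in requests per second or per minute, and the zone name and numbers are just illustrative):

http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=5;
        }
    }
}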

Thanks in advance,
Pieter

try_files (source) (no replies)

In ngx_http_core_try_files_phase (ngx_http_core_module.c) I can see how $uri $uri/ =404 are handled, but, for example, in:

try_files /test.html $uri $uri/ =404;

where (or how) is /test.html handled?

nested location, proxy, alias and multiple "/" locations (1 reply)

Ciao,

I'm setting up several isolated Laravel (http://laravel.com/) apps in sub-folders of a single site. Laravel's web root folder is called "public", and I want to access such an installation at the URI "/app1/". There are static files, maybe a few custom PHP files, and a single entry point `/index.php`.

So I came up with a config like this:
[code]
location ^~ /app1 {
    root /var/www/apps.mydomain.com/Laravel_app1/public;
    rewrite ^/app1/?(.*)$ /$1 break;

    location ~* \.(jpg|gif|png)$ {
        try_files $uri =404;
        ...
    }

    location ~* !(\.(jpg|gif|png))$ {
        proxy_pass http://127.0.0.1:8081;
        ...
    }
}
[/code]

Two questions:

1. what happens to an "alias" inside a "^~" location like "location ^~ /app1 { ... }" – seems like $uri is not changed and "/abcdef" part remains in place.

2. How can I write a nested default "/" location after a rewrite and a regexp location? I got [emerg] errors when trying to write it like this:
[code]
location ^~ /app1 {
    rewrite ^/app1/?(.*)$ /$1 break;
    location ~* \.(jpg|gif|png)$ { ...static files instructions... }
    location / { proxy_pass ...php files and folders go to Laravel... }
}
[/code]
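
One possible direction, sketched and untested: the [emerg] is most likely because a nested "location /" falls outside the enclosing "/app1" prefix, but a nested prefix location that stays inside "/app1" should at least parse:

location ^~ /app1 {
    root /var/www/apps.mydomain.com/Laravel_app1/public;
    rewrite ^/app1/?(.*)$ /$1 break;

    location ~* \.(jpg|gif|png)$ {
        try_files $uri =404;
    }

    # catch-all for everything else under /app1
    location /app1/ {
        proxy_pass http://127.0.0.1:8081;
    }
}

Whether the rewrite then interacts with the nested locations the way you want is a separate question.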


Serge.

understanding proxy_buffering (3 replies)

Hi
I'm trying to set up a reverse proxy for some private downloads. Here is our setup:
3 storage servers with high-capacity but slow HDDs, running nginx
1 load-balancing server with an SSD and a fast internet uplink, running nginx
file sizes are several hundred megabytes (500+ MB, up to 2 GB)
downloaders are on slow connections, using download managers with up to 16 connections per file

Here is what I want to do:
A user sends a request to the SSD server; the SSD server requests the file from the slow servers, caches the response on its fast disk, and serves it to the client. But if I use proxy_cache, serving has to wait until the file has been completely transferred and cached on the SSD disk, which (if several files are requested at the same time) results in slow connections and timeouts or other errors on the client side. So this is not an option.

However, I think proxy_buffering is the answer to my problem; I think it means each part of the requested file (defined by the Range header) is buffered independently.
1. Am I right?
If I am:
2. How can I tell nginx to buffer, say, 5 MB of the requested part in memory (and the excess on the SSD disk), serve that to the client until the client has downloaded the part, and then request another 5 MB? I'm looking for a setting like output_buffers 1 5m; but for the proxied file (see the sketch after these questions).
3. Is there a better solution?
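
For question 2, a rough sketch of the buffering directives involved (values are only illustrative, the upstream name is hypothetical, and buffering happens per request/connection rather than per Range part):

location /downloads/ {
    proxy_pass               http://storage_backend;   # hypothetical upstream group
    proxy_buffering          on;
    proxy_buffer_size        64k;       # buffer for the response header
    proxy_buffers            80 64k;    # roughly 5 MB of in-memory buffers per connection
    proxy_max_temp_file_size 1024m;     # anything beyond that can spill to a temporary file on disk
}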

Regards

cookie bomb - how to protect? (5 replies)

very interesting read: http://homakov.blogspot.de/2014/01/cookie-bomb-or-lets-break-internet.html

From the blog post:
"TL;DR I can craft a page "polluting" CDNs, blogging platforms and other major networks with my cookies. Your browser will keep sending those cookies and servers will reject the requests, because Cookie header will be very long. The entire Internet will look down to you.
I have no idea if it's a known trick, but I believe it should be fixed. Severity: depends. I checked only with Chrome.

We all know a cookie can only contain 4k of data.
How many cookies can I create? **Many!**
What cookies is browser going to send with every request? **All of them!**
How do servers usually react if the request is too long? **They don't respond**
"

I checked it, and it works; I get the following error back:

400 Bad Request

Request Header Or Cookie Too Large

My question: is there a generic way to check the size of such headers (cookies etc.)
and to cut them off, or do we have to live with such malicious behaviour?
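
The 400 above is governed by nginx's request-header buffer limits; a sketch of the directives that set the cutoff (the values shown are the documented defaults, quoted from memory):

server {
    client_header_buffer_size   1k;     # initial buffer for the request line and headers
    large_client_header_buffers 4 8k;   # number and size of the larger fallback buffers
}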




regards,


mex

duplicate Vary: Accept-Encoding header (2 replies)

Hello,

I use nginx/1.4.4 with gunzip on and gzip_vary on. This leads to a
duplicate Vary header.

gzip_vary should do nothing if the header is already present:

moki@mysrv:~$ curl -I http://192.168.1.196/home.html
HTTP/1.1 200 OK
Server: nginx/1.4.4
Date: Sun, 19 Jan 2014 11:30:59 GMT
Content-Type: text/html
Connection: keep-alive
Vary: Accept-Encoding
Vary: Accept-Encoding
Location: home.html

I have no control over the upstream server; it may or may not send a Vary
header. To be safe I would like to keep gzip_vary on in order to prevent
any problems here.
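
For reference, the relevant part of the configuration as described above is roughly this (the upstream address is a placeholder):

location / {
    proxy_pass http://upstream.example.com;
    gunzip     on;
    gzip_vary  on;
}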

This issue is in the standard ngx_http_header_filter_module, so can anyone
suggest a solution?

Thanks,
Makailol
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

How to define dynamic proxy_cache_path directory. (1 reply)

Hello,

I use nginx/1.4.4 as a reverse proxy caching server for multiple sites. So
far I have been using the same proxy_cache_path for all sites. Now I want to
use a separate cache path for each site.

Is there any way to make proxy_cache_path dynamic, i.e. using some variable
such as $host in proxy_cache_path?
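
One workaround sketch, in case variables turn out not to be supported there: declare one cache zone per site and reference it statically (the paths, zone names and upstream below are hypothetical):

proxy_cache_path /var/cache/nginx/site1 keys_zone=site1:10m;
proxy_cache_path /var/cache/nginx/site2 keys_zone=site2:10m;

server {
    server_name site1.example.com;
    location / {
        proxy_pass  http://backend1;
        proxy_cache site1;
    }
}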

Thanks,
Makailol

"ssl_session_cache" not working on windows Version (1 reply)

Hello,
I have nginx in front of a Windows IIS to delete some headers sent by my IIS from a proprietary application.
Now I try to use the following parameters with nginx running on Windows:

# SSL session cache
#ssl_session_cache shared:SSL:10m; # a 1MB cache can hold about 4000 sessions, so we can hold 40000 sessions
#ssl_session_cache builtin:1000 shared:SSL:10m;

Neither of them works.
When I start nginx it closes immediately, without any error on the command line.
What's going on here?
Can someone confirm this behaviour?

I have tried versions 1.4.4 and 1.5.8 from http://nginx.org/en/download.html.

Regards,
Basti


Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again (4 replies)

Hello
Is there a way we can achieve the following when nginx is acting as a reverse proxy
1. Client sends HTTP request with Accept-Encoding as gzip
2. Nginx proxy forwards the request with the request header intact
3. Origin server sends a compressed response
4. At the nginx proxy, we *decompress* the response, apply transformations on the response body and then *again* compress it
In other words, is there a way to use the functionality of the gzip and gunzip modules simultaneously for processing a response, and in a particular order?
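
A commonly suggested alternative, sketched here under the assumption that the transformation can be expressed with something like sub_filter (the upstream address and strings are placeholders): ask the origin for an uncompressed response, transform it, and let gzip compress it again for the client.

location / {
    proxy_pass        http://origin.example.com;
    proxy_set_header  Accept-Encoding "";      # origin sends an uncompressed body
    sub_filter        'old-text' 'new-text';   # example transformation
    sub_filter_once   off;
    gzip              on;                      # compress again for the client
}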

imap proxy limited to about 210 connections (no replies)

$
0
0
I have nginx proxying IMAP and POP between 3 different backend servers, but it seems to be limited to about 210 concurrent connections. Requests beyond this get a connection timed out. I tried adding more worker processes, but that didn't do anything. I have multi_accept on and have raised the number of worker_connections, but still no luck. I see rate limiting and connection bandwidth limiting, but these appear to apply to the HTTP protocol and not to IMAP/POP. What parameters do I adjust to increase the number of concurrent IMAP/POP sessions?
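
For reference, the knobs mentioned above plus the per-worker file-descriptor limit, which can also cap the number of concurrent proxied connections (numbers are only illustrative):

worker_processes      4;
worker_rlimit_nofile  8192;   # per-worker limit on open files/sockets

events {
    worker_connections  4096;
    multi_accept        on;
}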

Issue with multipart response compression. (no replies)

Hello

I use nginx/1.4.4 as a reverse proxy and my backend webserver generates
multipart responses with a dynamic boundary.

I use the nginx gzip module to send compressed data to the client, but it is
unable to compress this multipart response, which contains a dynamic boundary
in the Content-Type.

If I use gzip_types as below, it doesn't work.
gzip_types 'multipart/mixed';

If I include the boundary in gzip_types, it works fine, but the boundary is
dynamic in my case.
gzip_types 'multipart/mixed; boundary="Ajm,e3pN"' ;

Can someone suggest a solution for this?

Thanks,
Makailol

Help required in Setting up FTP Load Balancer in NGINX- (1 reply)

I require help configuring an FTP load balancer in NGINX.

Can you all please help me with the necessary steps, or any link which explains the same?

Note: My incoming FTP requests will come in over the FTP protocol only. We cannot configure FTP-over-HTTP in our application.

If NGINX does not support this, can you please suggest any load balancer which will serve my purpose?

Nested location block for merging config returns 404? (6 replies)

I want a location to proxy to another service and need to add an extra header for certain file types, but when I try to merge the configuration with a nested location instead of duplicating it, nginx returns a 404.

For example, this configuration works:

location ~ /(dir1/)?dir2/ {
    add_header X-My-Header value1;
    proxy_pass http://myproxy;
}

location ~ /(dir1/)?dir2/.*\.(txt|css) {
    add_header X-My-Static value2;
    add_header X-My-Header value1;
    proxy_pass http://myproxy;
}

Passing valid URLs to the above config works perfectly, and I'll get the extra header set if it's a txt or css file (examples for the sake of argument). However, what I want to accomplish is to merge these two blocks into one nested location block to save on the duplication. When I do that, I just get a 404 returned for the previously working URLs:

location ~ /(dir1/)?dir2/ {
    location \.(txt|css) {
        add_header X-My-Static value2;
    }
    add_header X-My-Header value1;
    proxy_pass http://myproxy;
}

Can location blocks actually be nested in this way? I'm wondering if the 404 is because it's only parsing the specific nested block and doesn't fall back onto the remaining config underneath (and therefore the request never gets sent to the proxy, and it's nginx returning the 404 that would be expected for that URL).
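
A sketch of a merged version (untested). Two assumptions worth flagging: the nested location probably needs the "~" regex modifier, and proxy_pass is not inherited by nested locations, so it has to be repeated there; add_header directives are likewise only inherited when the inner level defines none, which limits how much duplication this actually removes:

location ~ /(dir1/)?dir2/ {
    add_header X-My-Header value1;
    proxy_pass http://myproxy;

    location ~ \.(txt|css) {
        add_header X-My-Header value1;
        add_header X-My-Static value2;
        proxy_pass http://myproxy;
    }
}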

Rewriting GET request parameters while configured as a reverse proxy (1 reply)

Hello
Is there a way to make nginx rewrite the GET request parameters while configured as a reverse proxy? E.g. if nginx receives a request GET /foo.html?abc=123, can nginx rewrite it to GET /foo.html?abc=456 (the nginx admin specifies that 123 be changed to 456) and then do a proxy_pass to the origin server?
I did a test run using $args along the lines of

if ($args ~ post=140) {
    rewrite ^ http://example.com/ permanent;
}

as explained at http://wiki.nginx.org/HttpRewriteModule. However, this seems to work only when nginx is the web server; it tries to fetch the content from the local nginx html folder.

Please provide inputs.
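
One possible approach, sketched and untested (the backend address is a placeholder): $args is writable with set, and passing $uri and $args explicitly to proxy_pass helps ensure the modified query string is what actually gets forwarded, since proxy_pass without a URI part may otherwise reuse the original request line.

location / {
    # Replace abc=123 with abc=456 in the query string before proxying.
    if ($args ~ "^(?<head>.*)abc=123(?<tail>.*)$") {
        set $args "${head}abc=456${tail}";
    }
    proxy_pass http://127.0.0.1:8080$uri$is_args$args;
}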

Re: proxy_redirect can't be def'd in an include file; ok in main config? (1 reply)

On Tue, Jan 21, 2014, at 11:10 AM, Maxim Dounin wrote:
....
> And it's certainly a wrong list to write such questions. Thank
> you for cooperation.
....

> Hello!
>
> On Tue, Jan 21, 2014 at 10:55:58AM -0800, rand@sent.com wrote:
>
> > i've nginx 1.5.8
> >
> > If I check a config containing
> >
> > cat sites/test.conf
> > ...
> > location / {
> > proxy_pass http://VARNISH;
> > include includes/varnish.conf;
> > }
> > ...
> > cat includes/varnish.conf
> > proxy_redirect default;
> > proxy_connect_timeout 600s;
> > proxy_read_timeout 600s;
> > ...
> >
> > I get an error
> >
> > nginx: [emerg] "proxy_redirect default" should be placed after
> > the "proxy_pass" directive in
> > //etc/nginx/includes/varnish.conf:1
> > nginx: configuration file //etc/nginx/nginx.conf test failed
> >
> > but if I change to,
> >
> > cat sites/test.conf
> > ...
> > location / {
> > proxy_pass http://VARNISH;
> > + proxy_redirect default;
> > include includes/varnish.conf;
> > }
> > ...
> > cat includes/varnish.conf
> > - proxy_redirect default;
> > + #proxy_redirect default;
> > proxy_connect_timeout 600s;
> > proxy_read_timeout 600s;
> > ...
> >
> > then config check returns
> >
> > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
> > nginx: configuration file /etc/nginx/nginx.conf test is
> > successful
> >
> > Why isn't the proxy_redirect viable in the include file? Intended, or a
> > bug?
>
> Most likely, there are other uses of the "includes/varnish.conf"
> include file in your config, and they cause error reported.

No, there aren't. There is only one site enabled, and only one instance
of includes/varnish.conf, in that site config.


Trying to set-up a local development environment (no replies)

I am a complete newbie to the server side of things. My background has
been installing wamp (http://www.wampserver.com/en/), dumping files
into the www folder, and testing via localhost. It all just worked.

Recently I switched to Linux (Arch Linux) and am trying to set up a
local development environment. This is my project structure:

www
├── process.php
├── css
│ └── registration.css
├── images
│ ├── bg.png
│ ├── bg_content.jpg
│ ├── in_process.png
│ ├── in_use.png
│ ├── okay.png
│ └── status.gif
├── index.html
├── registration.html
└── thanks.html

This is my nginx.conf:

user http;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    server {
        listen 80;
        server_name localhost;

        location / {
            root /srv/www;
            index index.html;
        }
    }
}

Now here is where I am stuck, and what I think nginx is doing according to my nginx.conf:

1. The root directive maps a request to localhost/ to the local directory /srv/www in the filesystem and serves up index.html.

2. But when I access localhost/registration.html, the HTML file loads OK, but the corresponding registration.css fails to take effect. When I check it in the Chrome inspector, the GET request gets a 304 Not Modified (even in incognito mode) or a 200 OK (from cache).

What am I doing wrong?


nginx-1.5.9 (no replies)

Changes with nginx 1.5.9 22 Jan 2014

*) Change: now nginx expects escaped URIs in "X-Accel-Redirect" headers.

*) Feature: the "ssl_buffer_size" directive.

*) Feature: the "limit_rate" directive can now be used to rate limit
responses sent in SPDY connections.

*) Feature: the "spdy_chunk_size" directive.

*) Feature: the "ssl_session_tickets" directive.
Thanks to Dirkjan Bussink.

*) Bugfix: the $ssl_session_id variable contained full session
serialized instead of just a session id.
Thanks to Ivan Ristić.

*) Bugfix: nginx incorrectly handled escaped "?" character in the
"include" SSI command.

*) Bugfix: the ngx_http_dav_module did not unescape destination URI of
the COPY and MOVE methods.

*) Bugfix: resolver did not understand domain names with a trailing dot.
Thanks to Yichun Zhang.

*) Bugfix: alerts "zero size buf in output" might appear in logs while
proxying; the bug had appeared in 1.3.9.

*) Bugfix: a segmentation fault might occur in a worker process if the
ngx_http_spdy_module was used.

*) Bugfix: proxied WebSocket connections might hang right after
handshake if the select, poll, or /dev/poll methods were used.

*) Bugfix: the "xclient" directive of the mail proxy module incorrectly
handled IPv6 client addresses.
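
For context, a quick sketch of how a few of the new directives are used (the values are illustrative, not recommendations):

ssl_buffer_size     4k;    # size of the buffer used for sending TLS records
ssl_session_tickets off;   # enable or disable TLS session tickets
spdy_chunk_size     8k;    # maximum size of chunks the response body is sliced into for SPDY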


--
Maxim Dounin
http://nginx.org/en/donation.html


Implementing CONNECT in nginx (1 reply)

Hello everyone,
I would like to extend nginx with a CONNECT statement which connects to
a TCP socket. Could someone walk me through which source files I need to
modify and which functions I should have a look at?

Or if there is anything else that can give me a quickstart?

My use case is that I would like to share one TCP port between a webserver that I already have and an SSL VPN. The SSL VPN does the following:

CONNECT /CSCOSSLC/tunnel HTTP/1.1
Host: lync.gmvl.de
User-Agent: Cisco AnyConnect VPN Agent for Windows 3.0.07059
Cookie: webvpn=02F9D1@12288@188C@D7B405A4A46480CF364F1A6FD51998A0025DC727
X-CSTP-Version: 1
X-CSTP-Hostname: lenovo
X-CSTP-MTU: 1306
X-CSTP-Address-Type: IPv6,IPv4
X-DTLS-Master-Secret: D40F07275F15A18F5872905B79FDAC4FD8C33EA13503DF29878C10FE6DA1D025B1128C66AB06E3EB1CEBBBFFF00CBC08
X-DTLS-CipherSuite: AES256-SHA:AES128-SHA:DES-CBC3-SHA:DES-CBC-SHA
X-DTLS-Accept-Encoding: lzs
X-CSTP-Accept-Encoding: lzs,deflate
X-CSTP-Protocol: Copyright (c) 2004 Cisco Systems, Inc.

References:
http://www.infradead.org/ocserv/
http://article.gmane.org/gmane.network.vpn.openconnect.devel/1040

Cheers,
Thomas
