Channel: Nginx Forum - Nginx Mailing List - English

Convert Apache .htaccess rewrite to nginx (no replies)

Hello,
I am new to nginx and am having trouble converting .htaccess rewrites to nginx rewrites. Please help me convert the following mod_rewrite rules, with a brief explanation. Also, should I put the nginx rewrites back into the .htaccess file, or is there a designated config file for the nginx location blocks?

<IfModule mod_rewrite.c>
<IfModule mod_negotiation.c>
Options -MultiViews
</IfModule>

RewriteEngine On

# Redirect Trailing Slashes...
RewriteRule ^(.*)/$ /$1 [L,R=301]

# Handle Front Controller...
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^ index.php [L]
</IfModule>
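For reference, a rough nginx equivalent might look like the following. This is an unverified sketch: nginx does not read .htaccess files, so these directives belong in the site's server block (e.g. under /etc/nginx/sites-available/), and try_files replaces the front-controller RewriteCond/RewriteRule pair:

```nginx
server {
    # ... listen, server_name, root, etc. ...

    # Redirect trailing slashes (equivalent of RewriteRule ^(.*)/$ /$1 [L,R=301])
    rewrite ^/(.*)/$ /$1 permanent;

    # Front controller: serve the file or directory if it exists,
    # otherwise hand the request to index.php
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }
}
```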

Caching fastcgi url (3 replies)

Hello,

I am looking for advice. I am using nginx to terminate SSL and forward
requests to PHP via FastCGI. Of all the requests I am forwarding to FastCGI,
there is one particular URL that I want to cache, hopefully bypassing
communication with the FastCGI and PHP processes altogether.

- Would I need to define a separate location stanza for the URL I want to
cache and duplicate all of the fastcgi configuration that is normally
required? Or is there a way to indicate that of all the fastcgi requests
only the one matching /xyz is to be cached?

- If multiple requests for the same URL arrive at around the same time, and
the cache is stale, they will all wait on the one request that is
refreshing the cache, correct? So I should only see one request for the
cached location per worker per minute on the backend?

- Since my one URI is fairly small, can I indicate that no file backing is
needed?
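For what it's worth, a hedged sketch of one way to do this without duplicating the FastCGI block: keep a single PHP location and drive fastcgi_cache_bypass/fastcgi_no_cache from the URI, so only /xyz is cached. The zone name, socket path, and times below are illustrative; fastcgi_cache_lock addresses the concurrent-refresh question (one request populates the cache while the others wait):

```nginx
fastcgi_cache_path /var/cache/nginx/fcgi keys_zone=appcache:10m max_size=64m;

map $request_uri $skip_cache {
    default  1;   # bypass the cache for everything...
    /xyz     0;   # ...except the one URL we want cached
}

server {
    # ... listen, server_name, etc. ...

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/run/php-fpm.sock;   # assumed backend

        fastcgi_cache appcache;
        fastcgi_cache_key $scheme$request_method$host$request_uri;
        fastcgi_cache_valid 200 1m;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache_lock on;   # collapse simultaneous misses into one upstream request
    }
}
```

As far as I know the cache is always file-backed; for a single small hot URI, putting the fastcgi_cache_path on tmpfs is the usual workaround.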
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

502 Error Issue (1 reply)

Hi All,

We recently migrated from Apache to nginx.

OS - CentOS 6
Nginx - 1.6.2 (4 CPU, 8 GB RAM)
PHP-FPM (php 5.4.37) (4 CPU 8 GB RAM)
APC (Code Cache APC 3.1.13 beta)
Memcache (data cache)

I have an upstream of 4 PHP servers for the php-fpm service.

I am facing two issues:

1. I am getting 502 status codes in the access log, meaning nginx is generating 502 Bad Gateway errors.

2. Connections randomly pile up on nginx, up to 2500, while on the php-fpm servers we do not see any load in top (only 1-2 processes seem to be running during that time). After 2-3 minutes it starts working normally again and connections go back down to normal, without any restart.


Please see the nginx configuration inline:

user nguser;
worker_processes 4;

pid /var/run/nginx.pid;
worker_rlimit_nofile 70000;

events {
worker_connections 1024;
multi_accept on;
use epoll;
}

http {
##
# Basic Settings
##

sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_requests 1000;
keepalive_timeout 65;
send_timeout 15;
types_hash_max_size 2048;
server_tokens off;
client_max_body_size 50M;
client_body_buffer_size 1m;
client_body_timeout 15;
client_header_timeout 15;
server_names_hash_bucket_size 64;
server_name_in_redirect off;

include /etc/nginx/mime.types;
default_type application/octet-stream;

##
# Logging Settings
##
#access_log on;

access_log /var/log/nginx/access.log combined;
#error_log /dev/null crit;
error_log /var/log/nginx/error.log error;


#gzip on;
gzip on;
gzip_static on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_min_length 512;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/css text/javascript text/xml text/plain text/x-component
application/javascript application/x-javascript application/json
application/xml application/rss+xml font/truetype application/x-font-ttf
font/opentype application/vnd.ms-fontobject image/svg+xml;

open_file_cache max=2000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 5;
open_file_cache_errors off;

fastcgi_buffers 256 16k;
fastcgi_buffer_size 128k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_connect_timeout 300;
fastcgi_send_timeout 300;
fastcgi_read_timeout 300;
fastcgi_intercept_errors on;
reset_timedout_connection on;

upstream fastcgiservers {
least_conn;
server xxx.xxx.xx.xx:9000;
server xxx.xxx.xx.xx:9000;
server xxx.xxx.xx.xx:9000;
server xxx.xxx.xx.xx:9000;
}

include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}

location ~ \.php$ {
try_files $uri =404;
include fastcgi_params;
fastcgi_pass fastcgiservers;
fastcgi_index index.php;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

---------------------------------------------------------PHP-FPM Settings ------------------------------------------------------------------

pm.max_children = 150
pm.start_servers = 90
pm.min_spare_servers = 70
pm.max_spare_servers = 100
pm.max_requests = 1500
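One hedged avenue to explore for the connection pile-ups (illustrative only; the redacted IPs are kept as placeholders): persistent upstream connections between nginx and PHP-FPM, which require fastcgi_keep_conn:

```nginx
upstream fastcgiservers {
    least_conn;
    server xxx.xxx.xx.xx:9000;
    server xxx.xxx.xx.xx:9000;
    keepalive 32;              # pool of idle connections kept per worker
}

location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_keep_conn on;      # required for upstream keepalive with FastCGI
    fastcgi_pass fastcgiservers;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```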

Any help or suggestion would be greatly appreciated.

Thanks

ModSecurity compile, "WARNING: APR util was not compiled with crypto support" (1 reply)

Hi,

When compiling ModSecurity, I came across the following:

"configure: WARNING: APR util was not compiled with crypto support. SecRemoteRule will not support the parameter 'crypto'"

Basically, the rhel6 apr-devel rpm does not have crypto support. I am trying to determine what the ramifications are here.

Might anyone know what this means? I am having difficulty finding out what this SecRemoteRule is.

Thanks

Nginx support for WeedFS (no replies)

Hi,

We're deploying the WeedFS distributed filesystem for thumbnail storage and
scalability. WeedFS is composed of two layers (master, volume). The master
server does all the metadata mapping to track the corresponding volume server
for a requested file, whereas the volume server is the actual storage that
serves the requested files back to the user via HTTP. Currently, WeedFS's
default webserver is being used for HTTP, but it would be better to have
nginx on the volume servers for its low footprint, stability and
robust response times for static .jpg files.

So we need to know whether we can use nginx with WeedFS. The following is the
GitHub project we found, but we need to confirm whether it will fulfill our needs:

https://github.com/medcl/lua-resty-weedfs

Thanks in advance.

Regards.
Shahzaib
_______________________________________________
nginx mailing list
nginx@nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

uWSGI - upstream prematurely closed connection while reading response header from upstream (1 reply)

Hi,

I have a script which runs for 70 seconds. I have NGINX connecting to it via uWSGI.

I have set "uwsgi_read_timeout 90;". However, NGINX drops the connection at exactly 60 seconds:

"upstream prematurely closed connection while reading response header from upstream"

My script continues to run and completes after 70 seconds, however my browser connection has died before then (502 error).

The option "uwsgi_read_timeout" does its job for anything less than 60 seconds (i.e. uwsgi_read_timeout 30;) and terminates with a 504 error as expected.

I don't quite understand what is causing the 502 Bad Gateway error even though I have instructed nginx to permit a uwsgi read timeout of 90.

Other -

I also have set "keepalive_timeout 300 300;"

Any ideas as to the cause? I see numerous posts on the internet advising to use "uwsgi_read_timeout"
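For reference, a sketch of the nginx side (paths illustrative). It may also be worth checking whether the 60-second cutoff is enforced by the uWSGI server itself (e.g. its harakiri worker timeout), since "upstream prematurely closed connection" means the backend closed first rather than nginx timing out:

```nginx
location / {
    include uwsgi_params;
    uwsgi_pass unix:/run/uwsgi/app.sock;   # assumed socket path
    uwsgi_read_timeout 90;                 # allow the 70s script to finish
    uwsgi_send_timeout 90;
    # if the 502 persists, inspect the uWSGI config for a ~60s limit
    # (e.g. "harakiri = 60" in uwsgi.ini, which kills the worker mid-request)
}
```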

Thanks

trac.nginx.org incorrect https (no replies)

I noticed that trac.nginx.org has HTTPS/SNI configured for the host
but no actual SSL configuration (how do you even do that?):

$ openssl s_client -connect trac.nginx.org:443 -servername trac.nginx.org
CONNECTED(00000003)
140010415498912:error:14077410:SSL
routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake
failure:s23_clnt.c:770:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 318 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
---

Relevant (which is how I noticed it in the first place):

https://github.com/EFForg/https-everywhere/pull/1993


Dash in request url messes up regex? (1 reply)

I am requesting the URL /magento-check.php and it gives me the PHP
source instead of running it through FPM. Other PHP files work just
fine. It seems like the dash is interfering with the PHP regex location, so
the request goes through the root location with try_files, serving the PHP source.

location ~* \.php(/.*)?$
{
if (!-e $request_filename) { return 404; }

fastcgi_pass unix:/var/run/php-fpm/blah.sock;
fastcgi_split_path_info ^(.*\.php)(/.*)?$;
include fastcgi.conf;

expires off;
}

location /
{
try_files $uri $uri/ =404;
expires 28d;
}


Live long and prosper,

Christ-Jan Wijtmans
https://github.com/cjwijtmans
http://facebook.com/cj.wijtmans
http://twitter.com/cjwijtmans


Using threads and poll in nginx module (no replies)

Dear all,

I have run into the following problem when writing a module for nginx under Linux.

Within my module I have to use a library which internally uses multiple threads (pthreads) as well as poll.

When using/calling methods of this library in the main initialization handler of my nginx module, everything works fine.

Problem: but when I try to execute the same code within other handler methods, the library no longer works, and method calls that internally use poll and threads seem to hang forever.

So my questions are:

1) Is it possible to use a library in an nginx module that internally uses poll and multiple threads?
2) If not, is there any approach to using such a library within an nginx module?

Thanks for your help.
Best regards Kline

Requests are not concurrent with SPDY and proxy_cache modules (1 reply)

Hi all,

I'm experiencing strange behaviour in my SPDY tests.

I use nginx as a reverse proxy for a Node.js application.

I enabled the spdy module and proxy_cache (nginx.conf file below) and I actually see no difference in request speed. Requests are handled by the SPDY protocol (one TCP connection and multiple streams, captured with tcpdump), but while they should be concurrent, they are actually sequential, just as without SPDY support.

Here are two screenshots of the waterfalls to see the behaviour
1) With proxy_cache support and SPDY not enabled:
https://lut.im/9QKXlT5T/LVFNWqxN

2) With proxy_cache support and SPDY enabled:
https://lut.im/DmJ2wqFp/43ZOyhTE

Here is the configuration file I use to test on my laptop:
8------------------------------------------------------------------------------------------------------------------------------------>
...
####### tests SPDY
keepalive_requests 1000;
keepalive_timeout 10;

client_body_timeout 10;
client_header_timeout 10;
types_hash_max_size 2048;

server_tokens off;

server_names_hash_bucket_size 128;
server_name_in_redirect off;

proxy_http_version 1.1;

fastcgi_buffers 256 4k;

gzip_static on;
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
gzip_vary on;
gzip_http_version 1.1;

ssl_session_timeout 5m;
ssl_session_cache shared:SSL:20m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128:AES256:AES:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1;
ssl_dhparam /etc/nginx/ssl/dhparam.pem;

upstream node_servers {
keepalive 64;
server node-server1;
server node-server2;
}

# cache options
proxy_buffering on;
proxy_cache_path /etc/nginx/cache/ levels=1:1:2 keys_zone=nodes:256m inactive=2048m max_size=4096m;
proxy_ignore_client_abort off;
proxy_intercept_errors on;
proxy_ignore_headers X-Accel-Expires Expires Cache-Control;

proxy_cache_methods GET;
spdy_keepalive_timeout 10s; # inactivity timeout after which the SPDY connection is closed
spdy_recv_timeout 4s; # timeout if nginx is currently expecting data from the client but nothing arrives
server {
listen 127.0.0.1:443 ssl spdy;
listen [::1]:443 ssl spdy;

server_name beta.mydomain.com;

ssl_certificate /etc/nginx/ssl/cert.pem;
ssl_certificate_key /etc/nginx/ssl/key.pem;

add_header Alternate-Protocol "443:npn-spdy/3.1";
location / {
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_redirect off;
proxy_pass http://node_servers;
proxy_cache nodes;
proxy_cache_valid any 1h;
proxy_cache_min_uses 10;
add_header X-Nginx-Cached $upstream_cache_status;
}
}
...
<------------------------------------------------------------------------------------------------------------------------------------8

More info: nginx-1.8.0-4.fc22.x86_64

So my question is: can those two modules work together, or am I missing something in the conf?

Thank you

Malware in /tmp/nginx_client (1 reply)

The software maldet discovered some malware in the /tmp/nginx_client directory, like this:

> {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0050030641
> {HEX}php.cmdshell.unclassed.357 : /tmp/nginx_client/0060442670

I did some research, and found out that indeed, there was some malicious code in them.

I did an extensive search of the sites, and nothing malicious was found, including the code that appeared in the tmp files.

Around the time the files were created, there were similar requests to non-existent WordPress plugins, and to a file of the WordPress backend.

Digging up a little, I found this: blog.inurl.com.br/2015/03/wordpress-revslider-exploit-0day-inurl.html

Basically an exploit for a WordPress plugin vulnerability (it doesn't affect my sites, though) that makes similar requests to the ones I found.

One of those is a POST request that includes an attacker's PHP file, which thanks to this vulnerability will be uploaded to the site, where it can be run by the attacker.

So what seems to be happening is that nginx is caching POST requests with malicious code, which is later found by the antimalware software.

Could this be the case? I have read that nginx doesn't cache POST requests by default, so it seems odd.

Is there a way to know whether those tmp files are caching internal or external content?

I will be thankful for any info about it.

Nginx is working as reverse proxy only.


This is a bit of another file that was marked as malware:

>
> --13530703071348311
> Content-Disposition: form-data; name="uploader_url"
>
> http:/MISITE/wp-content/plugins/wp-symposium/server/php/
> --13530703071348311
> Content-Disposition: form-data; name="uploader_uid"

> 1
> --13530703071348311
> Content-Disposition: form-data; name="uploader_dir"
>
> ./NgzaJG
> --13530703071348311
> Content-Disposition: form-data; name="files[]"; filename="SFAlTDrV.php"
> Content-Type: application/octet-stream

Documentation of buf struct (no replies)

I am not very clear on the purpose of the different data members within the buf structure (appended below).

After looking through the code, I can figure out the purpose of 

- pos,last (sliding window) 

- file_pos, file_last, start, end ,(data start and end)

- tag, (which module owns this buf)

- file (name of the file if any associated with the data)

- memory(cannot be released by any module that processes the buf).

- mmap (buf is memory map)

- last_in_chain(last in the chain of bufs)

- last_buf(last in the response)

- For temporary: can the temporary buffer be released by any module that processes it, or can it be released only by the module that owns it, as indicated in the tag?



It would be good if the purpose of the other data members were described as well. Thanks for any input.


struct ngx_buf_s {
    u_char          *pos;
    u_char          *last;
    off_t            file_pos;
    off_t            file_last;

    u_char          *start;         /* start of buffer */
    u_char          *end;           /* end of buffer */
    ngx_buf_tag_t    tag;
    ngx_file_t      *file;
    ngx_buf_t       *shadow;

    /* the buf's content could be changed */
    unsigned         temporary:1;

    /*
     * the buf's content is in a memory cache or in a read only memory
     * and must not be changed
     */
    unsigned         memory:1;

    /* the buf's content is mmap()ed and must not be changed */
    unsigned         mmap:1;

    unsigned         recycled:1;
    unsigned         in_file:1;
    unsigned         flush:1;
    unsigned         sync:1;
    unsigned         last_buf:1;
    unsigned         last_in_chain:1;

    unsigned         last_shadow:1;
    unsigned         temp_file:1;

    /* STUB */ int   num;
};


Nginx and FastCGI with C (1 reply)

Where can I find good tutorials on writing FastCGI applications with nginx
and C? (Googling didn't help much.)

How do I handle HTTP requests received by the nginx server with C code?

Please point out some resources for learning this.

Thanks

Small bug in src/stream/ngx_stream_proxy_module.c (no replies)

(1066) : warning C4244: '=' : conversion from 'off_t' to 'size_t', possible loss of data

diff at line 1066:

  if (size > (size_t) limit) {
-     size = limit;
+     size = (size_t) limit;
  }

Nginx-1.9.2 fatal code 2 !! (no replies)

Hi,

We've just compiled the latest nginx-1.9.2 on Debian Wheezy 7 in order to
utilize the "aio threads" directive for our storage, but nginx started to crash
once we enabled aio threads. Following are the compile options and the
log of the crash:

root@archive3:/usr/local/nginx/conf/vhosts# nginx -V
nginx version: nginx/1.9.2
built by gcc 4.7.2 (Debian 4.7.2-5)
configure arguments: --sbin-path=/usr/local/sbin/nginx
--with-http_flv_module --with-http_mp4_module --with-threads --with-stream
--with-debug

error_log :

2015/06/30 04:14:07 [alert] 32076#32076: worker process 11097 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:03 [alert] 32079#32079: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:07 [alert] 32076#32076: worker process 17232 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 18584 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 595 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32121 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 7557 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 16852 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32083 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 5933 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:07 [alert] 32076#32076: worker process 32079 exited with
fatal code 2 and cannot be respawned
2015/06/30 04:14:03 [alert] 25360#25360: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 18540#18540: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 11093#11093: pthread_create() failed (11:
Resource temporarily unavailable)
2015/06/30 04:14:03 [alert] 23953#23953: pthread_create() failed (11:
Resource temporarily unavailable)

Thanks in advance.

Regards.
Shahzaib

Serving from cache even when the origin server goes down (no replies)

Is it possible to configure Nginx to serve from cache even when the origin
server is not accessible?

I am trying to figure out whether I can use a replicated Nginx instance that has
the cache files rsynced (lsyncd, https://code.google.com/p/lsyncd/)
from the primary instance, and serve from the replicated instance (DNS
switch) if the primary goes down.
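For comparison, nginx itself can serve stale cache entries when the origin is unreachable, via proxy_cache_use_stale; a minimal sketch (upstream and zone names are assumptions):

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=origin_cache:10m inactive=7d;

server {
    listen 80;

    location / {
        proxy_pass http://origin_backend;      # assumed upstream
        proxy_cache origin_cache;
        proxy_cache_valid 200 10m;
        # serve whatever is cached if the origin errors out or times out
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
    }
}
```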
- Cherian

Reverse proxy setup problem (no replies)

I have created a reverse proxy for 2 web servers, each running about 10 sites; they used to be on separate external IPs but now need to be behind the same one.

Using nginx I set up a reverse proxy on a separate VM. It seems to work, but one site refuses to go to the correct server.

I have nothing in /conf.d/; I am using /sites-available and /sites-enabled, and each site has its own config file. I'm using proxy_pass with the server's IP address. What am I doing wrong?

server {
listen 10.0.0.125:80;
server_name .example.com;
location / {
proxy_pass http://10.0.0.110/;
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;
client_max_body_size 10m;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
}
}
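One common cause of a site landing on the wrong backend is a request whose Host header matches none of the server_name values: nginx then routes it to the default server for that listen address, which is otherwise simply the first server block defined. A hedged sketch of making that fallback explicit (the return code and hostname are illustrative):

```nginx
# Catch-all: requests whose Host matches no configured site end up here
# instead of at an arbitrary first-defined server block.
server {
    listen 10.0.0.125:80 default_server;
    return 444;   # close the connection without a response
}

server {
    listen 10.0.0.125:80;
    server_name .example.com;
    location / {
        proxy_pass http://10.0.0.110/;
    }
}
```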

Can thread pool improve performance for such scenario (3 replies)

Hi All:
I am using nginx as a reverse proxy which provides a web API (HTTP GET) to clients. The backend application gets requests from nginx, does some time-consuming processing (1-2 seconds), then responds to nginx, and nginx returns the result to the client. I think this is a synchronous operation.

As I understand it, nginx has introduced a thread pool feature; is it useful for my scenario, and would it improve performance?
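As far as I understand, nginx thread pools (1.7.11+) offload blocking *file* I/O, not upstream waits; proxied requests are already handled asynchronously, so a 1-2 second backend response should not tie up a worker. A sketch of what the feature is actually for (pool name and location are illustrative):

```nginx
# Defined in the main context...
thread_pool disk_io threads=32 max_queue=65536;

http {
    server {
        location /static/ {
            # ...and used so large file reads don't block the event loop
            aio threads=disk_io;
        }
    }
}
```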

--
Rejoice,I Desire!


slow response times (no replies)

We are currently using nginx/1.7.10 on Ubuntu 12.04 to serve a high-load
webapp with about 12-15k QPS of mostly tiny requests in HTTP/1.1 keep-alive
mode. Nginx is used as a proxy for HTTP backends. From time to time we
notice traffic gaps, during which nginx answers very slowly
(up to 20-25s) across all virtual hosts. There are no error
messages in the general error log, and the server has plenty of resources
available. By the nature of the service, the HTTP backends have very low
read/connect proxy timeout values (40ms) and nginx should be able to send
a static file during error interception; those timeouts happen very often
(2-3k per second). But unfortunately we can't make it stable enough
to provide a fast static answer during backend slowness periods, since it
hangs and answers very slowly.

Location configuration:

access_log off;

error_page 500 501 502 503 504 408 404 =200 /file.dat;

include proxy_params;
proxy_intercept_errors on;
proxy_cache off;
proxy_redirect off;
proxy_pass_request_body on;
proxy_pass_request_headers on;
proxy_next_upstream off;
proxy_read_timeout 40ms;
proxy_connect_timeout 40ms;
proxy_send_timeout 40ms;

set $args src=g2;
proxy_pass http://upstream/;
proxy_http_version 1.1;
proxy_set_header Connection "";
}




Stub_status during normal operation:
Active connections: 831
server accepts handled requests
919758 919758 945641607
Reading: 0 Writing: 44 Waiting: 787

I can share any needed debug/stats info for sure.

Thank you for your cooperation


Static content (3 replies)

Hi,
I have nginx 1.8.0 installed successfully and have configured it with the upstream servers provided. Nginx and the services are deployed on separate machines. Now, when a request is made via nginx, the service is invoked, resulting in a UI with no static content loaded.

I have tried the root and rewrite directives to specify the static content path on another server (workspace.corp.test.no), which didn't help. I might be wrong with the configuration. Please assist.


server {
listen 80;
server_name workspace.corp.test.no;

location ~"*\.(js|jpg|png|css)$" {
root /workspace/WEB-INF/classes/static;
expires 30d;
}

location /{
proxy_pass http://workspace.corp.test.no/workspace/agentLogin/;
}
}


And, the above configuration expects the static content to be on the server where nginx is installed when a request is made. But I expect the static content to come from the remote server.
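For reference, a hedged sketch of how the static location might be corrected: the regex match operator needs a space after it (`location ~* \.` rather than `location ~"*\.`), and since the static files live on the remote server they should be proxied rather than looked up via a local root. This assumes the backend host resolves somewhere other than this nginx instance itself; otherwise a separate upstream address is needed:

```nginx
location ~* \.(js|jpg|png|css)$ {
    # fetch static assets from the remote server instead of local disk
    proxy_pass http://workspace.corp.test.no;
    expires 30d;
}
```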



Best regards,
Maddy