Channel: Nginx Forum - Nginx Mailing List - English

nginx upstream closed when spdy is active (1 reply)

Hi there,
we have a problem downloading files from our website while spdy is enabled.

The download won't start, and after a few seconds it runs into a timeout.
If I disable spdy, the download works.

What could that be?
The log files show me the following error:

2015/06/10 09:26:14 [error] 2538#0: *6497287 upstream prematurely closed connection while reading upstream, client: x.x.x.x, server: www.x.com, request: "GET /xxx/report.do?action=ApprovedProductsCSV&xxxx&dateFrom=14.04.2015 HTTP/1.1", upstream: "http://xxxx:xxxx/xxx/report.do?action=ApprovedProductsCSV&xxx&dateFrom=14.04.2015", host: "www.xxx.com", referrer: "https://www.xxx.com/xxx/report.do?action=ApprovedProducts"

SO_REUSEPORT (8 replies)

Hello,
I have rebuilt nginx 1.9.1 from source to use SO_REUSEPORT on my wheezy
install with kernel 3.16 (from backports).
(The packages from http://nginx.org/packages/mainline/debian/ do not include
SO_REUSEPORT.)

Some errors are still present:

[emerg] 19351#19351: duplicate listen options for 0.0.0.0:80 in ...
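
For reference, my understanding is that reuseport (like backlog and other socket-level parameters) is given on the "listen" directive only once per address:port pair; repeating it in a second server block that listens on the same address:port produces exactly this "duplicate listen options" error, and it applies to the whole listening socket rather than to individual locations. A minimal sketch with placeholder names:

http {
    server {
        listen 80 reuseport;        # socket-level options set once for 0.0.0.0:80
        server_name site-a.example.com;
        root /var/www/site-a;
    }

    server {
        listen 80;                  # same address:port, no extra listen options here
        server_name site-b.example.com;
        root /var/www/site-b;
    }
}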

Is there a way to use "reuseport" for multiple locations?
How can I test whether it works for a specific location?
Is there a header sent, or something else? Or is the only way to compare
results with a "stress test" tool like siege?


Regards,
Basti


nginx plus with ssl on TCP load balance not work (12 replies)

Hi,

I’m using nginx plus with SSL on the TCP load balancer, configured like the documentation, but it does not work. (None of the IPs below are real.)
I have web servers behind it and I want to use SSL offloading, so I chose TCP load balancing: listen on 443 and proxy to the web servers' port 80.

Page access always reports ERR_TOO_MANY_REDIRECTS.

Error log
2015/06/11 03:00:32 [error] 8362#0: *361 upstream timed out (110: Connection timed out) while connecting to upstream, client: 10.0.0.1, server: 0.0.0.0:443, upstream: "10.0.0.2:443", bytes from/to client:656/0, bytes from/to upstream:0/0

10.0.0.2 is the nginx server's own IP; why is it being used as the upstream?

The configuration is like this (real IPs removed):

server {
listen 80 so_keepalive=30m::10;
proxy_pass backend;
proxy_upstream_buffer 2048k;
proxy_downstream_buffer 2048k;

}

server {
listen 443 ssl;
proxy_pass backend;
#proxy_upstream_buffer 2048k;
#proxy_downstream_buffer 2048k;
ssl_certificate ssl/chained.crt;
#ssl_certificate ssl/4582cfef411bb.crt;
ssl_certificate_key ssl/zoomus20140410.key;
#ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
#ssl_ciphers HIGH:!aNULL:!MD5;
ssl_handshake_timeout 3s;
#ssl_session_cache shared:SSL:20m;
#ssl_session_timeout 4h;

}


upstream backend {
server *.*.*.*:80;
server *.*.*.*:80;
}



nginx -v
nginx version: nginx/1.7.11 (nginx-plus-r6-p1)

And I’m using amazon linux
uname -a
Linux ip-*.*.*.* 3.14.35-28.38.amzn1.x86_64 #1 SMP Wed Mar 11 22:50:37 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux


BTW, how do I set up an access log for the TCP (stream) servers?

auth_basic plain password in html (1 reply)

All,

I have set up auth_basic on my nginx webserver. Whenever I authenticate, the username and password are sent as plain text in the HTTP request from my web browser. Is there an easy solution for this? Or should I switch to the non-default nginx_http_auth_digest module?
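
For context, as far as I understand it, basic auth always sends the credentials Base64-encoded (effectively plain text), so the usual mitigation is to only offer the protected location over TLS rather than to change the auth scheme. A minimal sketch with placeholder paths:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/nginx/ssl/example.crt;    # placeholder paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    location /protected/ {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
}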

Thanks,

Tom


Upcoming NGINX Events (no replies)

Hi All!

NGINX has lots of upcoming events and we're looking for a variety of talks for them, ranging from beginner to advanced – topics on and around NGINX and its use cases, from APIs, to resilient modern web architecture, to microservice implementations, to well-won war stories. This is your opportunity to share your insights and tell us what you're working on.

If you're newer to NGINX and wouldn't feel comfortable speaking, please join us as a Summit attendee. We'll host our NGINX Fundamentals course in the morning at each of the Summits and have guest speakers in the afternoon. Past speakers have included Jeff Kaufman of Google, Andrew Stein of Distil Networks, Vanessa Ramos and Nicolas Grenié of 3Scale, Andrew Fong of Dropbox, John Wason of Disqus, Dustin Whittle of AppDynamics, and Chris Richardson, creator of the original CloudFoundry.com.

After all the content, stick around for snacks, drinks, and social time with fellow NGINX users and the NGINX team.

The NGINX summit series
(CFP here - https://docs.google.com/forms/d/1qwjBzyqVYjoeBKnSpby51cBjX55ss-HWbD3B0J2QUNA/viewform)

Raleigh, NC – July 9
— https://www.eventbrite.com/e/nginx-summit-training-raleigh-tickets-16979353704
Portland, OR – July 19 (co-locating with OSCON, Sunday before OSCON kicks off)
— https://www.eventbrite.com/e/nginx-summit-training-portland-tickets-17031615019
Boston, MA – August 18
— https://www.eventbrite.com/e/nginx-summit-training-boston-tickets-17031867775
New York City, NY – August 20
— https://www.eventbrite.com/e/nginx-summit-training-new-york-city-tickets-17032035276
Chicago, IL – August 25
— https://www.eventbrite.com/e/nginx-summit-training-chicago-tickets-17032251924
Denver, CO – August 27 (coming soon)
Austin, TX – October 21
— https://www.eventbrite.com/e/nginx-summit-training-austin-tickets-17032667166
Los Angeles, CA – early November (coming soon)

And please do mark your calendar for the NGINX user conference at Fort Mason, San Francisco, September 22-24. Additional details are on our call for proposals page: https://nginxconf15.busyconf.com/proposals/new

I look forward to seeing your submission(s) and meeting you at the events.

Sarah


A bit confused... (11 replies)

I'm trying to make some sense out of this and am left a bit cold! What
could cause this:

( I've left out any attempt at anonymising in case I hide something )

From the docroot...

$ ls -l images/models/Lapierre/Overvolt*
-rw-r--r-- 1 right-bike right-bike 342373 Jun 11 20:09
images/models/Lapierre/Overvolt FS.png
-rw-r--r-- 1 right-bike right-bike 318335 Jun 11 20:09
images/models/Lapierre/Overvolt HT.png


$ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ FS.png
HTTP/1.1 200 OK
Server: nginx/1.9.1
Date: Fri, 12 Jun 2015 01:47:14 GMT
Content-Type: image/png
Last-Modified: Thu, 11 Jun 2015 10:09:52 GMT
ETag: "55795e70-53965"
Expires: Sat, 13 Jun 2015 01:47:14 GMT
Cache-Control: max-age=86400
Accept-Ranges: bytes
Content-Length: 342373
Connection: Keep-Alive

$ curl -I http://backend.right.bike/images/models/Lapierre/Overvolt\ HT.png
HTTP/1.1 400 Bad Request
Server: nginx/1.9.1
Date: Fri, 12 Jun 2015 01:47:05 GMT
Content-Type: text/html
Content-Length: 172
Connection: close

The second one shows no entry at all in the access log but I can't find
any reason why they're processed differently at all.

Suggestions please!

--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa


[ANN] Windows nginx 1.9.2.1 Lizard (no replies)

19:19 12-6-2015 nginx 1.9.2.1 Lizard

Based on nginx 1.9.2 (9-6-2015) with;
+ Openssl-1.0.1o (upgraded 12-6-2015)
+ Naxsi WAF v0.53-3 (upgraded 12-6-2015)
+ Openssl-1.0.1n (CVE-2015-4000, CVE-2015-1788, CVE-2015-1789,
CVE-2015-1790, CVE-2015-1792, CVE-2015-1791)
+ pcre-8.37b-r1566 (upgraded 10-6-2015, overflow fixes)
+ nginx-module-vts (fix for 32bit overflow counters including totals)
+ nginx-auth-ldap (upgraded 9-6-2015)
+ nginx-module-vts, fixes for 1.9.1 (upgraded 19-5-2015)
+ LuaJIT-2.0.4 (upgraded 18-5-2015) Tnx to Mike Pall for his hard work!
+ lua51.dll (upgraded 18-5-2015) DO NOT FORGET TO REPLACE THIS FILE !
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'

Builds can be found here:
http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

config parsing (fastrouter) (no replies)

I am trying to configure the fastrouter through environment variables and
running into trouble.

1. A blank loop still seems to run. I would expect that no subscription would
take place?

[uwsgi]
....
fastrouter_keys=
fastrouter_ip=
fastrouter_port=

# Subscribe this instance to a fastrouter
for=%(fastrouter_keys)
subscribe-to=%(fastrouter_ip):%(fastrouter_port):%(_)
endfor=

and the log ...

subscribing to ::
send_subscription()/sendto(): Invalid argument [core/subscription.c line
665]
send_subscription()/sendto(): Invalid argument [core/subscription.c line
665]
send_subscription()/sendto(): Invalid argument [core/subscription.c line
665]



2. A list of values is treated as a single value?

export FASTROUTER_KEYS="a b c"

[uwsgi]
fastrouter_keys=$(FASTROUTER_KEYS)
fastrouter_ip=...
fastrouter_port=...

# Subscribe this instance to a fastrouter
for=%(fastrouter_keys)
subscribe-to=%(fastrouter_ip):%(fastrouter_port):%(_)
endfor=

And logs on fastrouter

[uwsgi-subscription for pid 5] new pool: a b c (hash key: 3007)
fastrouter_1 | [uwsgi-subscription for pid 5] a b c => new node:
172.17.1.37:56481

I was expecting to see three separate subscribes.


Any help appreciated.

error_page at http context (no replies)

I can't get error_page to override the default error pages when using it in the http context (rather than in server {} / location {}).
Can someone share a real-world working example?
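
For reference, this is roughly the kind of setup I am trying to get working (paths are placeholders, untested); my understanding is that error_page set at the http level is inherited by servers and locations that do not define an error_page of their own, and that defining one at a lower level replaces the whole inherited set:

http {
    error_page 404             /errors/404.html;
    error_page 500 502 503 504 /errors/50x.html;

    server {
        listen 80;
        root /var/www/site;

        location /errors/ {
            internal;   # the error pages cannot be requested directly
        }
    }
}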

Help secure my location block (no replies)

I have files that are served by the backend web app at
/xxx/File?file=yyy.png. These files are stored at /storage/files on
the server. So, I wrote a location block to serve these files from
storage directly from the web server.

Here is my first take:

location /xxx/File {
    if ($request_method = POST) {
        proxy_pass http://backend;
    }

    alias /storage/files/;
    try_files $arg_file =404;
}

The issue is that I can do something like /xxx/File?file=../../etc/foo.bar
and nginx will serve the foo.bar file for me. So, I switched to the
following:

location /xxx/File {
    if ($request_method = POST) {
        proxy_pass http://backend;
    }
    if ($arg_file ~ \.\.) { return 403; }
    alias /storage/files/$arg_file;
}

Can someone point me to any corner cases that can be exploited and what
is the best practice for situations like these?
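
For illustration, the strictest variant I could come up with whitelists the file name shape instead of blacklisting ".." (untested sketch; it assumes the stored files are simple "name.ext" values):

location /xxx/File {
    if ($request_method = POST) {
        proxy_pass http://backend;
    }

    # reject anything that is not a plain base name plus extension, so
    # "..", "/", "%2F" and empty values never reach the filesystem
    if ($arg_file !~ "^[A-Za-z0-9_-]+\.[A-Za-z0-9]+$") { return 403; }

    alias /storage/files/$arg_file;
}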

--
Abhi


Dynamic configuration (no replies)

Hi there. These days I reinstalled my Windows and ... I discovered Vagrant. I really like this stuff; I installed Ubuntu with nginx, php5-fpm and many other things, but I'm still missing the basic configuration.
In my projects directory I have many projects - symfony2, wordpress, my own framework, a facebook app and simple test files. With my current nginx configuration these do not actually work, because (I think) of the root directory.
I want every project to open in a separate sub-directory (e.g. /localhost/projects/wordpress1, or /localhost/development/symfony2/web/). Is this possible with nginx?
Can you give me some example configuration like that?
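
For the record, this is roughly the shape of configuration I am imagining, with one location per project (the paths are guesses, and the PHP/fastcgi handling for each project is left out here):

server {
    listen 80;
    server_name localhost;

    location /projects/wordpress1/ {
        alias /home/vagrant/projects/wordpress1/;
        index index.php index.html;
    }

    location /development/symfony2/web/ {
        alias /home/vagrant/projects/symfony2/web/;
        index app.php;
    }
}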

sendfile_max_chunk breaking unbuffered php-fcgi (no replies)

Hi,

I was having the problem that if a single client on the local LAN is downloading a large static file, the download effectively monopolizes nginx, and no other requests are handled simultaneously.
Reading the manual I came across the sendfile_max_chunk option, which sounded like it might fix it:

==
Syntax: sendfile_max_chunk size;
Default: sendfile_max_chunk 0;
Context: http, server, location

When set to a non-zero value, limits the amount of data that can be transferred in a single sendfile() call. Without the limit, one fast connection may seize the worker process entirely.
==


However I noticed that if I enable that, PHP scripts running without buffering suddenly no longer work properly.


nginx.conf:

==
events {
worker_connections 1024;
}

http {
include mime.types;
default_type application/octet-stream;

server {
listen 80;
server_name $hostname;
sendfile on;
sendfile_max_chunk 8192;

root /var/www;

location / {
index index.php index.html index.htm;
}

location ~ \.php$ {
try_files $uri =404;

fastcgi_buffering off;
fastcgi_pass unix:/var/run/php-fpm.sock;
include fastcgi.conf;
}
}
}
==


t2.php for testing purposes:

==
<?php

for ($i = 0; $i < 10; $i++)
{
echo "test!\n";
flush();
sleep(1);
}
==

When retrieving that, the connection stalls after the first flush:

==
$ telnet 192.168.178.26 80
Trying 192.168.178.26...
Connected to 192.168.178.26.
Escape character is '^]'.
GET /t2.php HTTP/1.0

HTTP/1.1 200 OK
Server: nginx/1.6.3
Date: Sun, 14 Jun 2015 13:21:53 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/5.6.9

test!
==

If I remove either the "sendfile_max_chunk 8192;" or "fastcgi_buffering off;" line it does work, and I do get all 10 test! messages:

==
telnet 192.168.178.26 80
Trying 192.168.178.26...
Connected to 192.168.178.26.
Escape character is '^]'.
GET /t2.php HTTP/1.0

HTTP/1.1 200 OK
Server: nginx/1.6.3
Date: Sun, 14 Jun 2015 13:22:23 GMT
Content-Type: text/html; charset=UTF-8
Connection: close
X-Powered-By: PHP/5.6.9

test!
test!
test!
test!
test!
test!
test!
test!
test!
test!
Connection closed by foreign host.
==

Am I doing something wrong, or is this a bug?

Redirect on specific threshold !! (no replies)

Hi,

We're using nginx to serve videos from one of our storage servers (containing
mp4 videos), and due to the high volume of requests we're planning to add a
separate caching node based on fast SSD drives to serve "hot" content in
order to reduce load on the storage. We're planning the following method
for caching:

If there are more than 1K requests for http://storage.domain.com/test.mp4,
nginx should construct a redirect URL for the rest of the requests for
test.mp4, i.e. http://cache.domain.com/test.mp4, and serve the remaining
requests for test.mp4 from the caching node, while the long tail would still
be served from storage.

So, can we achieve this approach with nginx, or with something else like Varnish?

Thanks in advance.

Regards.
Shahzaib

TCP-Loadbalancer and allow/deny (2 replies)

Hello,

happily testing the stream{} feature and load-balancing mechanism with nginx 1.9,
and it works very smoothly; looks like we can use nginx as an http LB as well as a tcp LB
in production very soon; thank you, nginx team!

Is there something like allow/deny planned for the stream {} module?
http://nginx.org/en/docs/http/ngx_http_access_module.html#allow

At the moment we use a packet filter, but having this feature in the nginx stream {} module would be
a great addition.
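
For reference, the kind of syntax we have in mind simply mirrors the http access module inside a stream server; a sketch with a placeholder upstream (this is what we would hope for, not something we have running):

stream {
    upstream backend_db {
        server 10.0.0.10:3306;
    }

    server {
        listen 3306;

        allow 192.168.0.0/24;
        deny  all;

        proxy_pass backend_db;
    }
}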



thanx in advance,


mex

Deploying newly compiled nginx from test server to production (1 reply)

Hello

What is a good method for deploying a newly compiled nginx binary with an extra module (mod_security)?

I can get it all to compile OK. However, I do not want to compile on my production server; there are too many dependencies (i.e. httpd for mod_security).

In the case of mod_security, it seems only the Apache Portable Runtime (apr-util) is required if I manually move the binary over.

I tried building my own RPM but hit some issues.

Page with ssl doesn't open from safari (no replies)

Good afternoon

A UCC SSL certificate was bought from GoDaddy for 5 domain names.

One of the sites (hosted on a separate server) doesn't open from Safari: "Safari can't open the page https://sendy.mysite.com because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again." At the same time it opens normally in all other browsers.

Debug log of nginx (nginx 1.8.0, ubuntu 14.04):

2015/06/15 09:48:27 [debug] 15611#0: *6 SSL NPN advertised
2015/06/15 09:48:27 [debug] 15611#0: *6 SSL_do_handshake: -1
2015/06/15 09:48:27 [debug] 15611#0: *6 SSL_get_error: 2
2015/06/15 09:48:27 [debug] 15611#0: *6 reusable connection: 0
2015/06/15 09:48:27 [debug] 15611#0: *6 SSL handshake handler: 0
2015/06/15 09:48:30 [debug] 29320#0: *7 SSL_do_handshake: -1
2015/06/15 09:48:30 [debug] 29320#0: *7 SSL_get_error: 2
2015/06/15 09:48:30 [debug] 29320#0: *7 reusable connection: 0
2015/06/15 09:48:31 [debug] 29320#0: *7 SSL handshake handler: 0
2015/06/15 09:48:33 [debug] 29322#0: *8 SSL_do_handshake: -1
2015/06/15 09:48:33 [debug] 29322#0: *8 SSL_get_error: 2
2015/06/15 09:48:33 [debug] 29322#0: *8 reusable connection: 0
2015/06/15 09:48:33 [debug] 29322#0: *8 SSL handshake handler: 0

Config vhost:
server {
listen 80;
server_name sendy.mysite.com;

location / {
rewrite ^(.*) https://sendy.mysite.com$1 permanent;
}
}


server
{
listen 443;
server_name sendy.mysite.com;

ssl on;
ssl_certificate /etc/nginx/ssl/www.mysite2.com.crt;
ssl_certificate_key /etc/nginx/ssl/www.mysite2.com.key;

index index.php index.html;
root /home/ubuntu/sendy;
access_log /var/log/nginx/sendy.access.log;
error_log /var/log/nginx/sendy.error.log debug;
proxy_buffers 8 32k;
proxy_buffer_size 64k;
fastcgi_buffers 16 16k;
fastcgi_buffer_size 32k;

location = / {
index index.php; }

location / {
if (!-f $request_filename){
rewrite ^/([a-zA-Z0-9-]+)$ /$1.php last;}
}

location /l/ {
rewrite ^/l/([a-zA-Z0-9/]+)$ /l.php?i=$1 last; }

location /t/ {
rewrite ^/t/([a-zA-Z0-9/]+)$ /t.php?i=$1 last; }

location /w/ {
rewrite ^/w/([a-zA-Z0-9/]+)$ /w.php?i=$1 last; }

location /unsubscribe/ {
rewrite ^/unsubscribe/(.*)$ /unsubscribe.php?i=$1 last; }

location /subscribe/ {
rewrite ^/subscribe/(.*)$ /subscribe.php?i=$1 last; }

location ~* \.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$ {
expires max;
log_not_found off; }

location ~ \.php {
fastcgi_index index.php;
include fastcgi_params;
keepalive_timeout 0;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
fastcgi_pass unix:/var/run/php5-fpm.sock; }
}

All other sites / domains are located on another server with the same UCC certificate, and they open in Safari without any problems.

What is the cause of the problem?
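
For what it's worth, one common cause of a single browser failing the TLS handshake while others succeed is a certificate file that is missing the intermediate chain; if that is the issue here, the fix would be a certificate file containing the server certificate followed by the GoDaddy intermediates (the file name below is a placeholder):

server {
    listen 443 ssl;
    server_name sendy.mysite.com;

    ssl_certificate     /etc/nginx/ssl/www.mysite2.chained.crt;   # cert + intermediates concatenated
    ssl_certificate_key /etc/nginx/ssl/www.mysite2.com.key;
}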

nginx-1.9.2 (1 reply)

Changes with nginx 1.9.2 16 Jun 2015

*) Feature: the "backlog" parameter of the "listen" directives of the
mail proxy and stream modules.

*) Feature: the "allow" and "deny" directives in the stream module.

*) Feature: the "proxy_bind" directive in the stream module.

*) Feature: the "proxy_protocol" directive in the stream module.

*) Feature: the -T switch.

*) Feature: the REQUEST_SCHEME parameter added to the fastcgi.conf,
fastcgi_params, scgi_params, and uwsgi_params standard configuration
files.

*) Bugfix: the "reuseport" parameter of the "listen" directive of the
stream module did not work.

*) Bugfix: OCSP stapling might return an expired OCSP response in some
cases.


--
Maxim Dounin
http://nginx.org/


Writing a new auth module - request for comments (1 reply)

Hi,

I'm writing a new module (out-of-tree) for supporting authentication
using Stormpath's user management API (https://stormpath.com/).

Basically, the module makes one or more HTTP requests to the
Stormpath API to determine if the client request should be authorized
to access a location or not.

Since this is somewhat different than other modules I could learn from, and
since all my knowledge about nginx internals is from looking at how other
modules & core is written, I'm wondering if anyone could comment on how I
designed the module and raise any issues if I did anything problematic,
wrong or weird.

For reference, the work-in-progress code for the module is available
here: https://github.com/stormpath/stormpath-nginx-module

Since I have to contact the external API I'm using the upstream module to
do it. But I don't want the users (admins) to have to define an upstream
block in nginx.conf, so my module creates and configures an upstream
configuration internally instead.

https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L864

I haven't seen any other module do that, but I don't see that
it's possible to avoid users having to define upstream manually otherwise.

For the above reasons (wanting to handle everything invisibly to the user),
I'm not using ngx_http_proxy_module, but implementing the upstream handler
(create_request & friends) myself. But since I have to construct an HTTP
request, parse the status line, parse headers, and parse the body (e.g. if it's
chunked transfer-encoding), I end up duplicating a lot of functionality already
in http proxy (although greatly simplified, because I know exactly how to
talk to the upstream server and what to expect in return).

One example is I parse the headers manually, because I haven't found a way
to init the http_upstream header parser hash, and to reuse the parser
(originally the init is done in ngx_http_upstream_init_main_conf).

https://github.com/stormpath/stormpath-nginx-module/blob/master/src/ngx_http_auth_stormpath_module.c#L304

(I'll also hit similar problems with caching the requests to the upstream.
I'd like to reuse the caching functionality already in nginx, but it seems
to me like http_proxy_module does a lot of manual heavy lifting in that
regard that I'd have to reimplement (or *shudder* copy-paste) to
support it?)

Does the above make sense? Is there an obvious way to do it differently that
I've missed? Are there any guides or documentation on how this should be
done (besides Evan Miller's obsolete-but-useful guides I went through
already)?

Any comments, suggestions, warnings or flames are welcome.

Thanks,
Senko


Move old apache project (1 reply)

Hi again, guys. I have another problem with my own old projects based on large .htaccess rewrite rules.
That's my configuration http://pastebin.com/7JCmiaSm in the server block for one site, and everything works fine except when I open a URL without "index.php" in it. Nginx returns "Access denied", and I can't understand why.
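
For reference, the usual nginx front-controller pattern that replaces this kind of .htaccess rewrite looks roughly like the following (the php-fpm socket path is a guess), and I am not sure yet where my pastebin config diverges from it:

location / {
    # send anything that is not an existing file or directory to index.php
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    try_files $uri =404;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;    # socket path is a guess
}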

Trying to mirror nginx repository for centos6/7 (8 replies)

Hi,
I'm trying to sync the nginx repositories to my local mirror, but all the old
packages are no longer available. See the log from the reposync command below.

Would it be possible to either update the package list in the repo metadata so it
no longer includes the packages that are not available, or to put the old
packages back in your repository?

Thank you.

for centos7
/usr/bin/reposync --repoid=nginx7 --norepopath -p /opt/data/repos/nginx/7/
nginx-1.6.0-2.el7.ngx.x86_64.r FAILED
nginx-1.6.1-1.el7.ngx.x86_64.r FAILED
nginx-1.6.2-1.el7.ngx.x86_64.r FAILED
nginx-1.6.3-1.el7.ngx.x86_64.r FAILED
nginx-debug-1.6.0-2.el7.ngx.x8 FAILED
nginx-debug-1.6.1-1.el7.ngx.x8 FAILED
nginx-debug-1.6.2-1.el7.ngx.x8 FAILED
nginx-debug-1.6.3-1.el7.ngx.x8 FAILED
nginx-debuginfo-1.6.0-2.el7.ng FAILED
nginx-debuginfo-1.6.1-1.el7.ng FAILED
nginx-debuginfo-1.6.2-1.el7.ng FAILED
nginx-debuginfo-1.6.3-1.el7.ng FAILED
1:nginx-debug-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
1:nginx-debuginfo-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.0-2.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
1:nginx-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
1:nginx-debug-1.6.2-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
1:nginx-debuginfo-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
1:nginx-1.6.3-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.1-1.el7.ngx.x86_64: [Errno 256] No more mirrors to try.


and for centos6
/usr/bin/reposync --repoid=nginx6 --norepopath -p /opt/data/repos/nginx/6/
nginx-1.0.5-1.el6.ngx.x86_64.r FAILED
nginx-1.0.6-1.el6.ngx.x86_64.r FAILED
nginx-1.0.7-1.el6.ngx.x86_64.r FAILED
nginx-1.0.8-1.el6.ngx.x86_64.r FAILED
nginx-1.0.8-2.el6.ngx.x86_64.r FAILED
nginx-1.0.9-1.el6.ngx.x86_64.r FAILED
nginx-1.0.10-1.el6.ngx.x86_64. FAILED
nginx-1.0.11-1.el6.ngx.x86_64. FAILED
nginx-1.0.12-1.el6.ngx.x86_64. FAILED
nginx-1.0.13-1.el6.ngx.x86_64. FAILED
nginx-1.0.14-1.el6.ngx.x86_64. FAILED
nginx-1.0.15-1.el6.ngx.x86_64. FAILED
nginx-1.2.0-1.el6.ngx.x86_64.r FAILED
nginx-1.2.1-1.el6.ngx.x86_64.r FAILED
nginx-1.2.2-1.el6.ngx.x86_64.r FAILED
nginx-1.2.3-1.el6.ngx.x86_64.r FAILED
nginx-1.2.4-1.el6.ngx.x86_64.r FAILED
nginx-1.2.5-1.el6.ngx.x86_64.r FAILED
nginx-1.2.6-1.el6.ngx.x86_64.r FAILED
nginx-1.2.7-1.el6.ngx.x86_64.r FAILED
nginx-1.2.8-1.el6.ngx.x86_64.r FAILED
nginx-1.4.0-1.el6.ngx.x86_64.r FAILED
nginx-1.4.1-1.el6.ngx.x86_64.r FAILED
nginx-1.4.2-1.el6.ngx.x86_64.r FAILED
nginx-1.4.3-1.el6.ngx.x86_64.r FAILED
nginx-1.4.4-1.el6.ngx.x86_64.r FAILED
nginx-1.4.5-1.el6.ngx.x86_64.r FAILED
nginx-1.4.6-1.el6.ngx.x86_64.r FAILED
nginx-1.4.7-1.el6.ngx.x86_64.r FAILED
nginx-1.6.0-1.el6.ngx.x86_64.r FAILED
nginx-1.6.0-2.el6.ngx.x86_64.r FAILED
nginx-1.6.1-1.el6.ngx.x86_64.r FAILED
nginx-1.6.2-1.el6.ngx.x86_64.r FAILED
nginx-1.6.3-1.el6.ngx.x86_64.r FAILED
nginx-debug-1.0.9-1.el6.ngx.x8 FAILED
nginx-debug-1.0.10-1.el6.ngx.x FAILED
nginx-debug-1.0.11-1.el6.ngx.x FAILED
nginx-debug-1.0.12-1.el6.ngx.x FAILED
nginx-debug-1.0.13-1.el6.ngx.x FAILED
nginx-debug-1.0.14-1.el6.ngx.x FAILED
nginx-debug-1.0.15-1.el6.ngx.x FAILED
nginx-debug-1.2.0-1.el6.ngx.x8 FAILED
nginx-debug-1.2.1-1.el6.ngx.x8 FAILED
nginx-debug-1.2.2-1.el6.ngx.x8 FAILED
nginx-debug-1.2.3-1.el6.ngx.x8 FAILED
nginx-debug-1.2.4-1.el6.ngx.x8 FAILED
nginx-debug-1.2.5-1.el6.ngx.x8 FAILED
nginx-debug-1.2.6-1.el6.ngx.x8 FAILED
nginx-debug-1.2.7-1.el6.ngx.x8 FAILED
nginx-debug-1.2.8-1.el6.ngx.x8 FAILED
nginx-debug-1.4.0-1.el6.ngx.x8 FAILED
nginx-debug-1.4.1-1.el6.ngx.x8 FAILED
nginx-debug-1.4.2-1.el6.ngx.x8 FAILED
nginx-debug-1.4.3-1.el6.ngx.x8 FAILED
nginx-debug-1.4.4-1.el6.ngx.x8 FAILED
nginx-debug-1.4.5-1.el6.ngx.x8 FAILED
nginx-debug-1.4.6-1.el6.ngx.x8 FAILED
nginx-debug-1.4.7-1.el6.ngx.x8 FAILED
nginx-debug-1.6.0-1.el6.ngx.x8 FAILED
nginx-debug-1.6.0-2.el6.ngx.x8 FAILED
nginx-debug-1.6.1-1.el6.ngx.x8 FAILED
nginx-debug-1.6.2-1.el6.ngx.x8 FAILED
nginx-debug-1.6.3-1.el6.ngx.x8 FAILED
nginx-debuginfo-1.6.0-1.el6.ng FAILED
nginx-debuginfo-1.6.0-2.el6.ng FAILED
nginx-debuginfo-1.6.1-1.el6.ng FAILED
nginx-debuginfo-1.6.2-1.el6.ng FAILED
nginx-debuginfo-1.6.3-1.el6.ng FAILED
nginx-debug-1.2.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.12-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.13-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.11-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.13-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.15-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.10-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.10-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.11-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.12-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.14-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.4-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.0.9-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.0-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.8-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.9-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.2.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.14-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.3-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.6-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.15-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.5-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debuginfo-1.6.2-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.4.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-debug-1.6.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.0.8-2.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.6.1-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.4.0-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.
nginx-1.2.7-1.el6.ngx.x86_64: [Errno 256] No more mirrors to try.

--
Jean-Sébastien Frerot