Nginx proxy_pass with absolute URI

Nginx has a nasty habit of stripping away everything before the first slash after the hostname when it passes a URI to an upstream server. The HTTP/1.1 standard requires that clients send an absolute URI when talking to a proxy. As a result, proxy_pass is by default unable to talk to backend proxies. Luckily, there is a workaround.
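For reference, a request sent to a proxy uses the absolute form in the request line (example.com is only an illustration), while an origin server normally sees just the path:

```
GET http://example.com/path?query=1 HTTP/1.1
Host: example.com
```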

Below is a minimal example of how to send the full GET URI to an upstream proxy via proxy_pass.

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache0:10M max_size=100G use_temp_path=off;

server {

 location / {
 proxy_cache cache0;

 # Rebuild an absolute URI: the first rewrite glues the scheme separator,
 # host, path and query string together; the second prepends the scheme.
 # Only the second rewrite may have "break" - a "break" on the first one
 # would stop rewrite processing before the URI is complete.
 rewrite ^(.*)$ "://$http_host$uri$is_args$args";
 rewrite ^(.*)$ "http$uri$is_args$args" break;

 # upstream proxy address below is a placeholder
 proxy_pass http://127.0.0.1:8118;
 }
}


URL query parameter statistics with awk

Let's say we have the log file of a proxy or web server and we are interested in how the length of a certain query parameter is distributed over all requests. This is useful, for example, with string-type stick tables in HAproxy, where we must specify the string length for the table.

The shorter the string we store, the lower the memory consumption. Even if some tags get truncated, that is not a problem as long as the string is still unique after truncation.

# awk '$0=$2' FS=tag\= RS=\& logfile.log | awk '{print length}' | sort -n | uniq -c | sort -nr
  35344 8
  32520 6
  27032 7
   9655 10
   7608 9
   6498 21
   4509 13
   3957 22
   3824 11
   3557 12
   2736 23
   2417 14
   2005 24
   1304 25
   1199 15
   1023 20
    735 26
    646 19
    309 27
    296 28
    274 5
    243 16
    218 30
    149 17
    148 4
    110 18
     44 29
      7 31
      2 32
      1 3

In the output above, the first column is the occurrence count and the second one is the tag length.
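To see what the pipeline does, here is a run against a small made-up sample (the log lines and the /tmp path are only for illustration):

```shell
# create a tiny sample log with a "tag" query parameter of varying length
cat > /tmp/sample.log <<'EOF'
GET /?tag=abc&x=1 HTTP/1.1
GET /?tag=abcdef&y=2 HTTP/1.1
GET /?tag=xyzdef&y=2 HTTP/1.1
EOF

# split records on '&', fields on 'tag=', print each tag value's length,
# then count how often each length occurs
awk '$0=$2' FS=tag\= RS=\& /tmp/sample.log | awk '{print length}' | sort -n | uniq -c | sort -nr
# prints "2 6" then "1 3": two tags of length 6, one of length 3
```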

It makes sense to limit the stick-table string length to 32 characters.
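A stick table sized accordingly could look like this (the backend name and the "tag" parameter are assumptions based on the example above):

```
backend local_backend
  stick-table type string len 32 size 1m expire 30m
  stick on url_param(tag)
```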

RHEL/CentOS 7 add default gateway that is outside of the network

ip route add x.x.x.x dev ens192
ip route add default via x.x.x.x dev ens192

# ip r
default via x.x.x.x dev ens192
x.x.x.x dev ens192 scope link

To kickstart a RHEL/CentOS 7 machine that has a point-to-point network configuration and uses the remote point as its gateway, use this as the kickstart networking configuration.
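The anaconda network line could look roughly like this (an untested sketch; the addresses follow the x.x.x.x placeholders above, and the /32 netmask is what makes the link point-to-point):

```
network --device=ens192 --bootproto=static --ip=x.x.x.x --netmask=255.255.255.255 --gateway=x.x.x.x --activate
```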


Gateway and peer in this setup have exactly the same value.



How to filter lines with awk

Task: from a list of lines, keep only those that consist solely of hexadecimal characters and are exactly 24 characters long. The resulting list should contain only lines that are exactly 24 characters long and consist only of 0-9 and a-f or A-F.

Luckily, awk can help us a lot here.

awk '$1 ~ /^[[:xdigit:]]{24}$/ { print $1 }' file.txt


  • $1 is the first field of the line (the default awk field separator is whitespace).
  • ~ is the regular expression matching operator in awk.
  • / marks the beginning of the regular expression pattern.
  • ^ anchors the pattern to the beginning of the string, meaning matching must start at the very first character.
  • [[:xdigit:]] is a character class that matches only hexadecimal digits.
  • {24} — a number in braces denotes how many times the preceding regular expression must repeat.
  • $ matches the end of the string. We use it because we want to be sure the string is exactly 24 characters long.
  • / marks the end of the regular expression pattern.

HAproxy remote backend health check

It's trivial to health-check a local backend, but what if we want to know whether the server we are failing over to actually has healthy backends, or whether its backends are down too? This setup must also make sure that clients get redirected to the remote servers, not passed through the local instance.
Let's say we have 3 servers. For configuration management simplicity, all of them must have identical configuration.

frontend main_frontend
  mode http
  option httplog
  bind *:443 ssl crt /path/cert.pem
  acl local_server_dead nbsrv(local_backend) lt 1
  use_backend remote_servers if local_server_dead
  default_backend local_backend

frontend health_status
  mode http
  bind *:1443 ssl crt /path/cert.pem ca-file /path/ca-file.crt verify required
  acl local_backend_down nbsrv(local_backend) lt 1
  monitor-uri /testfile.html
  monitor fail if local_backend_down

backend local_backend
  mode http
  option httplog
  balance leastconn
  # server names and addresses below are placeholders
  server local1 x.x.x.x:80 check
  server local2 x.x.x.x:80 check

backend remote_servers
  mode http
  option httplog
  option httpchk HEAD /testfile.html HTTP/1.1\r\nHost:\
  balance roundrobin
  # addresses and redirect prefixes below are placeholders; health checks hit
  # the health_status frontend on port 1443, traffic is redirected to port 443
  server server1 x.x.x.x:1443 redir https://server1 check ssl crt /path/cert.pem ca-file /path/ca-file.crt verify required
  server server2 x.x.x.x:1443 redir https://server2 check ssl crt /path/cert.pem ca-file /path/ca-file.crt verify required
  server server3 x.x.x.x:1443 redir https://server3 check ssl crt /path/cert.pem ca-file /path/ca-file.crt verify required

There are two different ACLs which do the same thing. This is because haproxy allows acl statements only inside frontend, listen and backend sections, and an acl defined in one frontend cannot be used in another frontend.

HTTPS connections to main_frontend are proxied to local_backend servers with leastconn balancing, so all servers should carry an equal number of connections.
If there are fewer than 1 healthy servers in local_backend, connections are redirected to the remote_servers servers in roundrobin fashion. Because we redirect, we have no idea how many connections a remote server currently has, and roundrobin is the most even distribution possible.
Redirection is done only to servers in the UP state.

When all servers in local_backend are down, frontend health_status answers health check requests from the other nodes' remote_servers backends with 503 instead of 200. This puts that server into the DOWN state, and it no longer receives any redirections.

Choosing fastest aes-ni ciphers for internal use

When choosing ciphers for a public service, it is important to be compatible with as many clients as possible without dropping below a certain level of security. What that required level is, is a matter of company policy.
On the other hand, when securing communication between internal service components, the only restriction is protocol support in software that is under the company's control. Usually this means we can upgrade it to recent versions and use the most efficient and secure ciphers for internal communication.
Obviously, that would be TLS 1.2, but not all ciphers in that protocol version are created equal. AES-NI support is widespread on server CPUs nowadays and gives a huge speed increase with supported ciphers.
I only use AES128 ciphers, because they are faster than AES256 ciphers and offer practically the same security. Of the AES128 ciphers, I only use AES-GCM, because it is the most efficient on AES-NI CPUs. And obviously, all possible anonymous ciphers are disabled.

$ openssl ciphers 'AES128+AESGCM:!ADH:!AECDH' -v
DHE-DSS-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=DSS  Enc=AESGCM(128) Mac=AEAD
DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH       Au=RSA  Enc=AESGCM(128) Mac=AEAD
AES128-GCM-SHA256       TLSv1.2 Kx=RSA      Au=RSA  Enc=AESGCM(128) Mac=AEAD
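One way to apply this cipher list, for example in HAproxy's global section (a sketch; the equivalent nginx directive is ssl_ciphers):

```
global
  ssl-default-bind-ciphers AES128+AESGCM:!ADH:!AECDH
  ssl-default-server-ciphers AES128+AESGCM:!ADH:!AECDH
```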

Some tests to prove that CBC is slower than GCM.
CBC is faster only with very small blocks (16 bytes). To make the comparison easier, I added spaces to the numbers.

$ openssl speed -evp aes-128-cbc
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-cbc     731 744.96k   815 436.80k   826 966.36k   831 090.01k   834 852.47k
$ openssl speed -evp aes-128-gcm
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
aes-128-gcm     467 229.56k  1 211 511.74k  1 679 008.09k  1 806 843.89k  1 838 764.74k