Flawless Application Delivery
James Tacker
Technology Consultant & Content Developer
Previous Training Work:
If you haven't done so already, please take the time to SSH into your EC2 instances (Windows users: use PuTTY).
Check your email for the login credentials; if you don't see them, check your spam folder!
ssh student<number>@<ec2-server-hostname>
This module reviews the following topics:
upstream
proxy_pass
health_check
upstream myServers {
    server localhost:8080;
    server localhost:8081;
    server localhost:8082;
}

server {
    listen 80;
    root /usr/share/nginx/html;

    location / {
        proxy_pass http://myServers;
    }
}
Load-balancing methods:
ip_hash
hash
least_conn
least_time
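Any of these goes at the top of the upstream block; a minimal sketch with placeholder server names (only one method may be active at a time):

upstream myServers {
    least_conn;                      # fewest active connections wins
    # ip_hash;                       # or: pin each client IP to one server
    # hash $request_uri consistent;  # or: key-based, consistent hashing
    # least_time header;             # or (NGINX Plus): lowest average latency
    server backend1:8080;
    server backend2:8080;
}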
Lab:
Create main.conf in /etc/nginx/conf.d with a server that listens on 80
Add an upstream block (ask your instructor for the backend URLs)
Add a location prefix with proxy_pass to your upstream group
Add an error_log with a level of info and an access_log using the combined format
Use the access_log to see the destination of each request
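A minimal sketch of the finished main.conf, with backend addresses and log paths assumed:

# /etc/nginx/conf.d/main.conf
upstream myServers {
    server backend1.example.com:8080;   # placeholders; use the URLs from your instructor
    server backend2.example.com:8080;
}

server {
    listen 80;
    error_log /var/log/nginx/main.error.log info;
    access_log /var/log/nginx/main.access.log combined;

    location / {
        proxy_pass http://myServers;
    }
}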
F5 BIG-IP LTM | NGINX+ |
---|---|
Self-IP address | N/A (NGINX uses underlying OS networking) |
Management IP addresses and port | Linux host IP (primary interface) |
Virtual Server | server and location |
Pool and node list | upstream |
iRules | server, location, NGINX Lua, or nginScript modules |
High Availability | nginx-ha-keepalived |
# create pool test_pool members add { 10.10.10.10:80 10.10.10.20:80 }
# create virtual test_virtual { destination 192.168.10.10:80 pool test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }
# save /sys config
upstream test_pool {
    server 10.10.10.10:80;
    server 10.10.10.20:80;
}

server {
    listen 192.168.10.10:80;

    location / {
        proxy_pass http://test_pool;
    }
    ...
}
# create pool ssl_test_pool members add { 10.10.10.10:443 10.10.10.20:443 }
# create virtual test_ssl_virtual { destination 192.168.10.10:443 pool ssl_test_pool source-address-translation { type automap } ip-protocol tcp profiles add { http } }
# save /sys config
# create profile client-ssl test_ssl_client_profile cert test.crt key test.key
# modify virtual test_ssl_virtual profiles add { test_ssl_client_profile }
# save /sys config
# create profile server-ssl test_ssl_server_profile cert test.crt key test.key
# modify virtual test_ssl_virtual profiles add { test_ssl_server_profile }
# save /sys config
upstream ssl_test_pool {
    server 10.10.10.10:443;
    server 10.10.10.20:443;
}

server {
    listen 192.168.10.10:443 ssl;
    ssl_certificate /etc/nginx/ssl/test.crt;
    ssl_certificate_key /etc/nginx/ssl/test.key;

    location / {
        proxy_pass https://ssl_test_pool;   # server-ssl profile equivalent: re-encrypt to the backend
    }
}
return
rewrite
sub_filter
try_files
#F5 iRule
when HTTP_REQUEST {
    HTTP::redirect "https://[getfield [HTTP::host] ":" 1][HTTP::uri]"
}
----------------------------------------------------------------------
#NGINX
location / {
    return 301 https://$host$request_uri;
}
#F5 iRule
when HTTP_REQUEST {
    if {[string tolower [HTTP::uri]] matches_regex {^/music/([a-z]+)/([a-z]+)/?$} } {
        set myuri [string tolower [HTTP::uri]]
        HTTP::uri [regsub {^/music/([a-z]+)/([a-z]+)/?$} $myuri "/mp3/\\1-\\2.mp3"]
    }
}
-------------------------------------------------------------------------------
#NGINX
location ~* ^/music/[a-z]+/[a-z]+/?$ {
    rewrite ^/music/([a-z]+)/([a-z]+)/?$ /mp3/$1-$2.mp3 break;
    proxy_pass http://music_backend;
}
#F5 iRule
when HTTP_RESPONSE {
    if {[HTTP::header value Content-Type] contains "text"} {
        STREAM::expression {@/mp3/@/music/@}
        STREAM::enable
    }
}
--------------------------------------------------------------
#NGINX
location / {
    sub_filter '/mp3/' '/music/';
    proxy_pass http://default_backend;
}
NetScaler | NGINX+ |
---|---|
NetScaler IP (NSIP) | NGINX+ host IP |
Subnet IP (SNIP) | NGINX+ host IP |
Virtual IP (VIP) | Same Concept |
Virtual Servers | server, server_name, and location |
Server, Service, Service Group | upstream |
High Availability | nginx-ha-keepalived |
add lb vserver myvserver HTTP 10.0.0.99 80
server {
    listen 10.0.0.99:80;
    server_name .example.com;
    ...
}
add serviceGroup myapp HTTP
bind serviceGroup myapp 10.0.0.100 80
bind serviceGroup myapp 10.0.0.101 80
bind serviceGroup myapp 10.0.0.102 80
upstream myapp {
    server 10.0.0.100:80;
    server 10.0.0.101:80;
    server 10.0.0.102:80;
}
NGINX does ALL Load Balancing
NGINX Works in Parallel with Legacy Hardware
NGINX Sits behind Legacy Hardware
The if directive is bad practice; the try_files directive is usually a better choice.

The if Directive
An if inside a location block has to run on every request. The only directives that are safe inside it are:
return ...;
rewrite ... last/permanent;
if ($request_method = POST) {
    return 405;
}
---------------------------------------------------
if ($args ~ post=140) {
    rewrite ^ http://example.com/ permanent;
}
The try_files Directive
Checks for the existence of files using the $uri variable, then falls back to an internal redirect:
location / {
    try_files $uri $uri/ @proxy;
}
location @proxy {
    proxy_pass http://backend/index.php;
}
The error_page Directive
Set a root for each error_page location:
error_page 404 /404.html;
location = /404.html {
    root /usr/share/nginx/html;
}

error_page 500 502 503 504 /50x.html;
location /50x.html {
    root /usr/share/nginx/html;
}
This module enables you to:
Understand the stream and http contexts
Configure load balancing in the stream context
Compare http request handling vs. stream

stream Context: Key Differences
proxy_pass is relegated to the server context
health_checks work differently than in an http load balancer
proxy_protocol and Direct Server Return (DSR) instead of proxy_set_header
Retain the source IP during a TCP (or HTTP) reverse proxy to an application server
Use the proxy_bind directive with the transparent parameter:
stream {
    server {
        listen 3306;
        proxy_bind $remote_addr transparent;
        proxy_pass mysql_db_upstream;    # stream proxy_pass takes an address or upstream name, not a URL
    }
}
The proxy_protocol Directive
Carries the original client connection information from proxy servers/load balancers:
stream {
    server {
        listen 12345;
        proxy_pass example.com:12345;
        proxy_protocol on;
    }
}
proxy_protocol Example
# 'combined' is predefined in the http context, so the custom format needs its own name
log_format proxy_combined '$proxy_protocol_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent"';

server {
    listen 80 proxy_protocol;
    listen 443 ssl proxy_protocol;

    set_real_ip_from 192.168.1.0/24;
    real_ip_header proxy_protocol;

    proxy_set_header X-Real-IP $proxy_protocol_addr;
    proxy_set_header X-Forwarded-For $proxy_protocol_addr;
}
With DSR, health_checks no longer work (return traffic bypasses NGINX):
server {
    listen 53 udp;
    proxy_bind $remote_addr:$remote_port transparent;
    proxy_responses 0;
    # proxy_timeout 1s;
    proxy_pass dns_upstream;    # upstream name is a placeholder; stream proxy_pass is required
}
The preread feature can inspect incoming SSL/TLS and determine the target
Use map for more complex routing (a sketch follows the example below)
stream {
    resolver 10.0.0.2;    # DNS server address assumed; needed to resolve the SNI name at run time
    server {
        listen 443;
        ssl_preread on;
        proxy_pass $ssl_preread_server_name:443;
    }
}
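The map-based routing mentioned above might look like this sketch; the hostnames and pool names are assumptions:

# inside the stream context
map $ssl_preread_server_name $backend_pool {
    app.example.com    app_backend;      # SNI name -> upstream group
    admin.example.com  admin_backend;
    default            app_backend;
}
server {
    listen 443;
    ssl_preread on;
    proxy_pass $backend_pool;
}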
stream access_log
Use an access_log to inspect data rates, protocols, error conditions, etc.:
log_format tcp_log '$remote_addr [$time_local] '
                   '$protocol $status $bytes_sent $bytes_received '
                   '$upstream_session_time $upstream_addr $proxy_protocol_addr';
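To apply it, reference the format from an access_log inside a stream server (log path assumed):

server {
    listen 90;
    access_log /var/log/nginx/stream.access.log tcp_log;
    proxy_pass tcp_backend;
}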
Other stream directives (see the sketch below):
allow, deny
proxy_download_rate, proxy_upload_rate
limit_conn, limit_conn_zone
slow-start to prevent overload
drain, backup, down
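A sketch combining several of these in one stream server; the subnet, rates, and zone names are assumptions (limit_conn_zone is the current name of the older limit_zone):

stream {
    limit_conn_zone $binary_remote_addr zone=per_ip:10m;

    server {
        listen 90;
        allow 192.168.1.0/24;        # permit the lab subnet
        deny all;                    # reject everyone else
        limit_conn per_ip 10;        # max 10 concurrent connections per client IP
        proxy_download_rate 1m;      # throttle bytes sent to the client
        proxy_upload_rate 512k;      # throttle bytes read from the client
        proxy_pass tcp_backend;
    }
}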
Lab: In the tcp directory, create/open lb.conf. Create a server that listens on port 90 and proxies to tcp_backend:
stream {
    upstream tcp_backend {
        zone tcp_upstream 64k;
        server backend1:8080;
        server backend2:8080;
        server backend3:8080;
    }
    server {
        listen 90;
        proxy_pass tcp_backend;
    }
}
Add a server that listens on 53 and append the udp parameter. Use proxy_pass to proxy to a new upstream, udp_backend:
upstream udp_backend {
    zone udp_upstream 64k;
    server ec-2:53;
    server ec-2:53;
    server ec-2:53;
}
server {
    listen 53 udp;
    proxy_pass udp_backend;
}
health_check
Parameters: interval, passes, fails, and match
In a match block:
send: a text string or hexadecimal to send
expect: a literal string or regex the response data must match
health_check works for udp and tcp upstreams
Lab: using the backend's status.html, create a match block that uses a GET request to confirm the TCP connection:
match http {
    send "GET / HTTP/1.0\r\nHost: localhost:8080\r\n\r\n";
    expect ~* "200 OK";
}
server {
    listen 90;
    health_check interval=10 passes=5 fails=5 match=http;
    proxy_pass tcp_backend;
}
Lab: load balance MySQL (port 3306):
stream {
    upstream db {
        server db1:3306;
        server db2:3306;
        server db3:3306;
    }
    server {
        listen 3306;
        proxy_pass db;
    }
}
db1 receives connections and replicates across the other nodes
db2 acts as a backup
db3 is a silent partner to db1 and db2 (marked down)
Set proxy_connect_timeout to a low value (1 second or less) to catch early failures:
upstream db {
    server db1:3306;
    server db2:3306 backup;
    server db3:3306 down;
}
server {
    listen 3306;
    proxy_pass db;
    proxy_connect_timeout 1s;
}
This module enables you to:
Deploy the nginx-ha-keepalived solution
Configure keepalived

nginx-ha-keepalived
Based on keepalived
A separate daemon from NGINX

keepalived Configuration
Key directives: unicast_src_ip, unicast_peer, priority, notify, vrrp_instance
global_defs {
    vrrp_version 3
}
vrrp_script chk_manual_failover {
    script "/usr/libexec/keepalived/nginx-ha-manual-failover"
    interval 10
    weight 50
}
vrrp_script chk_nginx_service {
    script "/usr/libexec/keepalived/nginx-ha-check"
    interval 3
    weight 50
}
vrrp_instance VI_1 {
    interface eth0
    priority 101
    virtual_router_id 51
    advert_int 1
    accept
    garp_master_refresh 5
    garp_master_refresh_repeat 1
    unicast_src_ip 192.168.100.100
    unicast_peer {
        192.168.100.101
    }
    virtual_ipaddress {
        192.168.100.150
    }
    track_script {
        chk_nginx_service
        chk_manual_failover
    }
    notify "/usr/libexec/keepalived/nginx-ha-notify"
}
No fencing mechanism
chk_nginx_service options: weight, interval, rise, fall
vrrp_script chk_manual_failover {
    script "/usr/libexec/keepalived/nginx-ha-manual-failover"
    interval 10
    weight 50
}
vrrp_script chk_nginx_service {
    script "/usr/libexec/keepalived/nginx-ha-check"
    interval 3
    weight 50
}
Note: the script path should be on one line
$ apt-get install nginx-ha-keepalived
$ nginx-ha-setup
vrrp_script chk_nginx_service {
    script "/usr/libexec/keepalived/nginx-ha-check"
    interval 3
    weight 50
}
Documentation:
The virtual_ipaddress block replicates the syntax of the ip utility
virtual_ipaddress {
    192.168.100.150
    192.168.100.200
}
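For comparison, the same address could be assigned manually with the ip utility; the interface name and prefix length here are assumptions:

$ ip address add 192.168.100.150/24 dev eth0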
Feb 27 14:42:04 centos7-1 systemd: Starting LVS and VRRP High Availability Monitor...
Feb 27 14:42:04 Keepalived [19242]: Starting Keepalived v1.2.15 (02/26,2015)
Feb 27 14:42:04 Keepalived [19243]: Starting VRRP child process, pid=19244
Feb 27 14:42:04 Keepalived_vrrp [19244]: Registering Kernel netlink reflector
Feb 27 14:42:04 Keepalived_vrrp [19244]: Registering Kernel netlink command channel
Feb 27 14:42:04 Keepalived_vrrp [19244]: Registering gratuitous ARP shared channel
Feb 27 14:42:05 systemd: Started LVS and VRRP High Availability Monitor.
Feb 27 14:42:05 Keepalived_vrrp [19244]: Opening file '/etc/keepalived/keepalived.conf'.
Feb 27 14:42:05 Keepalived_vrrp [19244]: Truncating auth_pass to 8 characters
Feb 27 14:42:05 Keepalived_vrrp [19244]: Configuration is using: 64631 Bytes
Feb 27 14:42:05 Keepalived_vrrp [19244]: Using LinkWatch kernel netlink reflector...
Feb 27 14:42:05 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) Entering BACKUP STATE
Feb 27 14:42:05 Keepalived_vrrp [19244]: VRRP sockpool: [ifindex(2), proto(112), unicast(1), fd(14,15)]
Feb 27 14:42:05 nginx-ha-keepalived: Transition to state 'BACKUP' on VRRP instance 'VI_1'.
Feb 27 14:42:05 Keepalived_vrrp [19244]: VRRP_Script(chk_nginx_service) succeeded
Feb 27 14:42:06 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) forcing a new MASTER election
Feb 27 14:42:06 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) forcing a new MASTER election
Feb 27 14:42:07 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) Transition to MASTER STATE
Feb 27 14:42:08 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) Entering MASTER STATE
Feb 27 14:42:08 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) setting protocol VIPs.
Feb 27 14:42:08 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.100.150
Feb 27 14:42:08 nginx-ha-keepalived: Transition to state 'MASTER' on VRRP instance 'VI_1'.
Feb 27 14:42:13 Keepalived_vrrp [19244]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.100.150
vrrp_script chk_nginx_service {
    script "/usr/lib/keepalived/nginx-ha-check"
    interval 3
    weight 50
}
vrrp_instance VI_1 {
    interface eth0
    state BACKUP
    priority 101
    virtual_router_id 51
    advert_int 1
    accept
    unicast_src_ip 192.168.10.10
    unicast_peer {
        192.168.10.11
    }
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        chk_nginx_service
    }
    notify "/usr/lib/keepalived/nginx-ha-notify"
}
vrrp_instance VI_2 {
    interface eth0
    state BACKUP
    priority 100
    virtual_router_id 61
    advert_int 1
    accept
    unicast_src_ip 192.168.10.10
    unicast_peer {
        192.168.10.11
    }
    virtual_ipaddress {
        192.168.10.101
    }
    track_script {
        chk_nginx_service
    }
    notify "/usr/lib/keepalived/nginx-ha-notify"
}
Must have a facility to determine mastership
Method | HA Type | Address Type |
---|---|---|
ELB | Active-active | Dynamic; requires CNAME delegation |
Route 53 | Active-active or active-passive | Static; DNS hosted in Route 53 |
Elastic IPs (keepalived) | Active-passive | Static; DNS hosted anywhere |
Elastic IP w/ Lambda | Active-passive | Static; DNS hosted anywhere |
For applications that require state data on backend servers
NGINX supports the following methods:
sticky cookie
sticky learn
sticky route
sticky cookie
Syntax: sticky cookie name [expires=time] [domain=domain] [path=path];
upstream myServers {
    server backend1;
    server backend2;
    server backend3;
    sticky cookie my_srv expires=1h domain=example.com path=/cart;
}
sticky learn
Syntax: sticky learn create=$variable lookup=$variable zone=name:size;
upstream myServers {
    server backend1;
    server backend2;
    server backend3;
    sticky learn create=$upstream_cookie_sessionid lookup=$cookie_sessionid zone=client_sessions:1m;
}
server {
    location / {
        proxy_pass http://myServers;
    }
}
sticky route
upstream myServers {
    zone backend 64k;
    server backend1 route=backend1;
    server backend2 route=backend2;
    server backend3 route=backend3;
    sticky route $route_cookie $route_uri;
}
The client is assigned a route by the backend proxied server the first time it sends a request
Routing information is stored in a cookie or in the URI
On subsequent requests, NGINX examines the routing information to determine which server to send the request to
The sticky directive uses the route parameter followed by one or more variables
The values of those variables determine the route
The first non-empty variable is used to find the matching server
map $cookie_JSESSIONID $route_cookie {
    ~.+\.(?P<route>\w+)$ $route;
}
map $request_uri $route_uri {
    ~JSESSIONID=.+\.(?P<route>\w+)$ $route;
}
Lab: Open main.conf. In the http context, create a log_format called sticky that logs the following:
log_format sticky "$request \t $status \t "
                  "Client: $remote_addr \t "
                  "Upstream IP: $upstream_addr \t "
                  "Route URI: $route_uri \t "
                  "Route Cookie: $route_cookie \t";
Set the access_log format to sticky:
access_log /var/log/nginx/main.access.log sticky;
Note to instructor: if you haven't already, please spin up the following Tomcat instances, which are pre-configured with the jvmRoute parameter.
Command to access the AMI (assuming you have the public key):
ssh -i AWS_service_rocket_key.pub ec2-user@ec2-**-***-***-**.compute-1.amazonaws.com
(Look up the password) and spin up the following machines:
ngx-launch-class ubuntu-backend1 1
ngx-launch-class ubuntu-backend2 1
ngx-launch-class ubuntu-backend3 1
Copy the URL for each machine and give it to the students. You won't have to log into any of these machines, so don't worry about the student numbers and passwords.
Lab Solution
log_format sticky "$request \t Upstream: $upstream_addr \t Route URI: $route_uri \t Routing Cookie: $route_cookie \t All Cookies: $http_cookie \t ";
server {
    ...
    access_log /var/log/nginx/main.access.log sticky;
    #access_log /var/log/nginx/main.access.log combined;
    ...
}
Add sticky route with two variables: $route_cookie $route_uri;
Add the route parameter and a shared memory zone:

zone backend 64k;
server <backend_url>:8080 route=backend1;
server <backend_url>:8080 route=backend2;
server <backend_url>:8080 route=backend3;
Add the maps:
map $cookie_jsessionid $route_cookie {
    ~.+\.(?P<route>\w+)$ $route;
}
map $request_uri $route_uri {
    ~jsessionid=.+\.(?P<route>\w+)$ $route;
}
Lab Solution Continued
upstream myServers {
    zone backend 64k;
    server <backend1> route=backend1;
    server <backend2> route=backend2;
    server <backend3> route=backend3;
    sticky route $route_cookie $route_uri;
}
map $cookie_jsessionid $route_cookie {
    ~.+\.(?P<route>\w+)$ $route;
}
map $request_uri $route_uri {
    ~jsessionid=.+\.(?P<route>\w+)$ $route;
}
server {
    listen 8080;
    ...
    access_log /var/log/nginx/upstream.access.log sticky;
}

Send curl requests: curl http://<localhost>:8080
Run the tail -f command on your upstream_access.log
Visit <localhost>/examples/servlets/servlet/SessionExample
Solution
You can test the URIs below in a browser (recommended, so you can show students the source code once you hit the SessionExample), or you can use the curl requests below.
Make sure you run a tail command in a separate or tabbed shell to show the new log_format:
sudo tail -f /var/log/nginx/upstream.access.log
curl http://localhost:8080/
curl http://localhost:8080/examples/
curl http://localhost:8080/examples/servlets/
curl http://localhost:8080/examples/servlets/servlet/
curl http://localhost:8080/examples/servlets/servlet/SessionExample/
Active-Active + sticky sessions
This module enables you to:
Use the resolver and resolve directives
Compare Monolithic vs. Microservices Architecture

resolver and resolve
resolver will re-resolve the domain name
The valid parameter overrides the re-resolution frequency
resolve re-resolves the individual servers in an upstream
resolver 10.0.0.2 valid=10s;

#example 1
server {
    location / {
        set $backend_servers backends.example.com;
        proxy_pass http://$backend_servers:8080;
    }
}

#example 2
upstream myServers {
    server backend1 resolve;
    server backend2 resolve;
}
The resolver directive forces NGINX to re-resolve the domain name after the TTL expires by querying the DNS server.
For those who are unfamiliar or want a refresher: the TTL value tells local resolving name servers how long a record should be stored locally before a new copy must be retrieved from DNS. The record storage is known as the DNS cache, and the act of storing records is called caching.
The valid parameter overrides the TTL value and forces NGINX to re-resolve at the frequency you set.
resolver is useful when you set your domain name in a variable, and also when using a service discovery method such as ZooKeeper or Consul.
The resolve parameter acts like resolver, but for the individual servers in your upstream.
resolver Example
http {
    resolver 10.xxx.xxx.2 valid=30s;

    server {
        set $elb "{{ lp_app_elb }}";
        location / {
            proxy_pass http://$elb/;
        }
    }
}
A lot of customers use NGINX to reverse proxy traffic to their app servers in a few places (for example, cache servers that also use NGINX as a reverse proxy to the backend servers, while the backend servers themselves reverse proxy traffic to other application servers). In a service architecture like this, all of these services are often containerized and hosted in a private cloud such as AWS, and to take advantage of some of the cool features AWS offers, such as auto-scaling, we throw an Elastic Load Balancer in front of everything.
Like a lot of people on the interwebs, you may have found out that NGINX will only resolve the ELB DNS name once, on startup, and then won't refresh it. Holy crap, right? This can make you think you've been hacked, or your servers may break because AWS changes the IPs of ELBs on the fly.
The usually suggested workaround is the NGINX resolver directive, which defines a DNS server for NGINX to use for name lookups AND, more importantly, a time of validity for the DNS lookups via the valid parameter.
We also get a lot of questions about DNS spoofing. You must define a trusted DNS infrastructure if you're going to lock it down. NGINX recommends using only Active Directory Domain Services zones and pointing NGINX directly at that name server's IP address.
How the Demo Works
The hello image simulates the backends.
Docker uses copy-on-write, which essentially means that every instance of your Docker image uses the same files until one of them needs to change a file. At that point it copies the files and creates its own version. This means that a Docker image often does not need to write anything to disk to spawn its process. That makes Docker fast! We're talking "100 milliseconds" fast.
Once you start a process in Docker from an Image, Docker fetches the image and its Parent Image, and repeats the process until it reaches the Base Image. Then the Union File System adds a read-write layer on top. That read-write layer, plus the information about its Parent Image and some additional information like its unique id, networking configuration, and resource limits is called a container.
A Dockerfile, when used with the docker build command, automates command-line instructions:
FROM ubuntu:12.04
MAINTAINER jtack4970 version: 0.1
ADD ./mysql-setup.sh /tmp/mysql-setup.sh
RUN /bin/sh /tmp/mysql-setup.sh
EXPOSE 3306
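To build and run this image, a sketch (the image tag and port mapping are assumptions):

$ docker build -t mysql-demo .
$ docker run -d -p 3306:3306 mysql-demo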
$ docker pull
$ docker run
$ docker build
$ docker create
$ docker push
$ sudo apt-get install docker.io
$ sudo docker images
$ sudo docker pull nginx:1.12.0
$ sudo docker run -d nginx:1.12.0
$ sudo docker run -d nginx:1.11.0
$ sudo docker run -d nginx:1.10.0
$ sudo docker ps
$ sudo docker ps
$ sudo docker inspect <container ID>
$ curl <container ip>
$ sudo docker stop <ID>
$ sudo docker rm <ID>
$ sudo docker stop $(docker ps -a -q)
$ sudo docker rm -v $(docker ps -a -q)
# -v flag removes volumes on file system
Deep dive into architecture
$ gcloud compute zones list
$ gcloud config set compute/zone <VALUE>
$ gcloud container clusters create <my-cluster>
$ kubectl run nginx --image=nginx:1.12.0
$ kubectl expose deployment nginx --port 80 --type LoadBalancer
$ kubectl get services
$ kubectl create -f <path/to/.yaml>
$ kubectl get pods
$ kubectl describe pods <POD NAME>
Containers, especially in a cluster, have dynamic IPs. A service like Kubernetes can implement service discovery to make sure incoming traffic from the load balancer is routed correctly.
OR we can use this:
resolver kube-dns.kube-system.svc.cluster.local valid=5s;
upstream backend {
    zone upstream-backend 64k;
    server webapp-svc.default.svc.cluster.local service=_http._tcp resolve;
}
Advanced Use Case: using NGINX as an Ingress ReplicationController
This is perhaps the most daunting aspect of microservice design:
Dynamic Re-Configuration Recap
The upstream_conf API modifies server parameters on the fly:

curl 'http://server/upstream_conf?upstream=myServers&id=0&weight=5'
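For upstream_conf to work, the upstream needs a shared memory zone and a location exposing the handler; a sketch (the port and ACL are assumptions):

upstream myServers {
    zone myServers 64k;              # shared memory zone required by upstream_conf
    server 10.0.0.100:80;            # this server is id=0
}
server {
    listen 8081;
    location /upstream_conf {
        upstream_conf;               # NGINX Plus on-the-fly reconfiguration handler
        allow 127.0.0.1;             # restrict access to the API
        deny all;
    }
}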
This model focuses entirely on inbound traffic and ignores the whole inter-process communication problem. Basically, think of it as putting NGINX on a public agent and letting the services on the private agents fend for themselves. The good thing is that:
This model works well for a simple, flat API, or for a monolith with some basic microservices attached. For Kubernetes we have an open source Ingress Controller that allows you to easily implement this system using our OSS or commercial version.
NGINX Plus gives you dynamic upstreams, active health checks, robust monitoring, and WAF.
The next model is called the router mesh. Like the proxy model, it has NGINX running in front of the system to manage inbound traffic, and it gives you all of the benefits of the proxy model.
Where it differs is in the introduction of a centralized load balancer between the services. When services need to communicate with other services, they route through this centralized load balancer, and the traffic is distributed to the other instances.
The Deis Router with NGINX/NGINX Plus works in this manner.
Service discovery happens through DNS and by monitoring the service event stream in the registry, but the disadvantage here is that it exacerbates the performance problem by adding another hop in the network connection, thus requiring another SSL handshake to make it work.
So instead of a 9-step SSL handshake, you need to do an 18-step SSL handshake.
The final model is what we call the fabric model.
Like the other two models, you have a public proxy in front of the system to handle incoming HTTP traffic. Where it differs from the other models is that:
So let's look at why the fabric model is so good, but first let's look at the normal process.
Say you have two services that need to talk to each other. In this diagram, the investment manager needs to talk to the user manager to get user data.
The investment manager will create a new instance of an HTTP client. The client will do a DNS request to the service registry (Mesos DNS sitting on top of ZooKeeper).
It will get back an IP address for the service. It will then go through the 9-step SSL handshake.
Once the data is transferred, it will close down the connection and garbage collect the HTTP client. Service discovery depends on the application being able to query and understand DNS requests (good luck with SRV records).
The load balancing is dependent on the service registry and is typically the dumbest load-balancing option, round-robin DNS. Each and every request has to go through the SSL negotiation process; even if you don't do CA authentication, it is a 4-step process at minimum.
So let's look in detail at how the fabric model works between microservices. The first thing you will notice is that NGINX Plus runs in each service, and the application code talks locally to NGINX Plus. Because these are localhost network connections, FastCGI, or even file socket connections, they don't need to be encrypted.
You will also notice that NGINX Plus is connecting to the other microservices' NGINX Plus instances.
Finally, you will notice that NGINX Plus connects to the service registry to do service discovery. We will go through each of these steps in detail in just a moment.
Having NGINX Plus deal with service discovery is beneficial on a bunch of levels:
When NGINX Plus gets back the list of User Manager instances, it puts them in the load-balancing pool to distribute requests.
NGINX has a variety of load-balancing schemes that are user definable.
But here is where the real benefit comes in: stateful, persistent connections between microservices.
Remember the first diagram and how the service instance goes through the process of:
Here NGINX creates a connection to the other microservice and, using keepalive connection functionality, maintains that connection across application code requests.
Essentially, these are mini VPNs created from service to service (see the sketch below).
In our initial testing we have seen a 77% increase in connection speed
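A sketch of what those persistent inter-service connections look like in NGINX Plus terms; the service name, ports, and connection count are assumptions:

upstream user_manager {
    zone user_manager 64k;
    server user-manager.service.consul:443 resolve;   # discovered via DNS; requires a resolver directive
    keepalive 32;                    # keep idle connections open between requests
}
server {
    listen 127.0.0.1:8001;           # local side the application code talks to
    location / {
        proxy_pass https://user_manager;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
    }
}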
As an added benefit, you can build the circuit breaker pattern into your microservices using NGINX Plus active health checks. You define an active health check for your service that queries a health-check endpoint, and you can have a variety of responses that NGINX evaluates using our regex functionality. If the system is marked as unhealthy, we throttle back traffic to that instance until it has time to recover. We even go beyond Martin Fowler's circuit breaker pattern by providing alternate solutions for a variety of circumstances: a 500 error response, backup server options, and even a slow-start feature.
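A sketch of that circuit breaker idea with an active health check; the endpoint, thresholds, and match conditions are assumptions:

upstream user_manager {
    zone user_manager 64k;
    server 10.0.0.20:80 slow_start=30s;   # ramp traffic back up after recovery
}
match healthy {
    status 200;
    body !~ "unhealthy";
}
server {
    location / {
        proxy_pass http://user_manager;
        health_check uri=/health interval=5s fails=2 passes=2 match=healthy;
    }
}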