Wednesday, November 29, 2017

How to quickly set up a Kubernetes dev box on Ubuntu.

This post shows you how to set up a KAIO (Kubernetes All In One) dev box on a VirtualBox Ubuntu 16.04 VM.

1. Prepare vagrant file - Vagrantfile

$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.define "ss03" do |ss03|
    ss03.vm.box = "ubuntu/xenial64"
    ss03.vm.box_version = "20171011.0.0"
    ss03.ssh.insert_key = false
    ss03.vm.hostname = "ss03.swiftstack.idv"
    ss03.vm.network "private_network", ip: "172.28.128.43", name: "vboxnet0"
    ss03.vm.provider :virtualbox do |vb|
        vb.memory = 4096
        vb.cpus = 2
    end
  end
end

$ vagrant up

$ vagrant ssh ss03


2. Prepare k8s setup file - k8s.sh

$ cat k8s.sh

sudo apt-get update -y && sudo apt-get install -y apt-transport-https
sudo apt install docker.io -y
sudo systemctl start docker
sudo systemctl enable docker
sudo apt install python-swiftclient -y
echo "================================="
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
sudo kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
kubectl get pods --all-namespaces

kubectl get nodes

PS: if you don't know how to prepare k8s.sh:

$ vi k8s.sh
then copy the script above (without the "$ cat k8s.sh" line) and paste it.
:wq or :x to save and exit.

$ sudo chmod +x k8s.sh

$ sudo ./k8s.sh


That's it!

You might need to wait a while for all the k8s components to spin up properly.

Then you should be able to see this:


$ sudo kubectl get pods --all-namespaces
NAMESPACE     NAME                           READY     STATUS    RESTARTS   AGE
kube-system   etcd-ss03                      1/1       Running   0          1m
kube-system   kube-apiserver-ss03            1/1       Running   0          40s
kube-system   kube-controller-manager-ss03   1/1       Running   0          1m
kube-system   kube-dns-545bc4bfd4-h722h      3/3       Running   0          1m
kube-system   kube-proxy-m7m6q               1/1       Running   0          1m
kube-system   kube-scheduler-ss03            1/1       Running   0          31s
kube-system   weave-net-jlqx9                2/2       Running   0          1m

$ sudo kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
ss03      Ready     master    1m        v1.8.4
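If you'd rather not eyeball the output, here is a minimal sketch (my own helper, not part of KAIO) that checks every node in the `kubectl get nodes` output is Ready, assuming the column layout shown above:

```shell
# Check that every node reported by "kubectl get nodes" is Ready
# (skips the header row; STATUS is assumed to be column 2).
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

# On the live box:
# kubectl get nodes | all_nodes_ready && echo "cluster is up"

# Demo with the sample output above:
printf 'NAME STATUS ROLES AGE VERSION\nss03 Ready master 1m v1.8.4\n' | all_nodes_ready && echo "cluster is up"
```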


Reference:
https://github.com/chianingwang/KAIO/tree/master



Tuesday, November 28, 2017

How to setup Varnish to connect Object Storage with SSL

This article shows you how to leverage caching to speed up your transactions. The solution we picked is Varnish; however, Varnish doesn't support SSL unless you pay for Varnish Plus. So we combine a reverse proxy and stunnel to make client -> SSL -> Nginx -> Varnish -> Stunnel -> SSL -> Swift (object storage) happen.

Idea

  Client -> Nginx (SSL 443) -> Varnish (80) -> Stunnel (8080) -> Swift Endpoint (SSL 443)

Outline

  • Setup / Configure Nginx Reverse Proxy with SSL ( 443 to 80 )
  • Setup / Configure Varnish for cache ( 80 to 8080 )
  • Setup / Configure Stunnel for SSL tunnel ( 8080 to 443 )
PS: if you would like to let Varnish connect to the Swift endpoint over SSL directly, you would need to buy Varnish Plus. The other option is to connect to the Swift endpoint without SSL.


Setup / Configure Nginx Reverse Proxy with SSL ( 443 to 80 )

Here I use a wildcard cert "*.ss.org"; ss02 is the Swift endpoint, and ss03 is the nginx, varnish and stunnel server.
$ apt install nginx -y

Comment out the default server block in /etc/nginx/sites-available/default, since we don't need the default web page running on port 80.


$ vi /etc/nginx/sites-enabled/default

server {
        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;

        server_name <your nginx + varnish + stunnel hostname>;
        ssl_prefer_server_ciphers  on;
        ssl_ciphers  'ECDH !aNULL !eNULL !SSLv2 !SSLv3';
        ssl_certificate /etc/nginx/ssl/star_ss_org.pem;
        ssl_certificate_key /etc/nginx/ssl/star_ss_org.key;

        location / {
            proxy_pass http://127.0.0.1;
            proxy_set_header X-Real-IP  $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
            proxy_set_header X-Forwarded-Port 443;
            proxy_set_header Host $host;
        }
}
Move your *.pem and *.key files under /etc/nginx/ssl/. PS: if the directory doesn't exist, mkdir /etc/nginx/ssl first. Then restart nginx:
$ service nginx restart

Setup / Configure Varnish for cache ( 80 to 8080 )

$ apt install varnish -y


Edit /etc/default/varnish. In the file you'll find some text that looks like this:
DAEMON_OPTS="-a :6081 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s default,256m"
Change it to:
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s default,256m"

/etc/varnish/default.vcl
vcl 4.0;
# List of upstream proxies we trust to set X-Forwarded-For correctly.
acl upstream_proxy {
     "127.0.0.1";
}

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Set the X-Forwarded-For header so the backend can see the original
    # IP address. If one is already set by an upstream proxy, we'll just re-use that.
    if (client.ip ~ upstream_proxy && req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For;
    } else {
        set req.http.X-Forwarded-For = regsub(client.ip, ":.*", "");
    }
}

sub vcl_hash {
    # URL and hostname/IP are the default components of the vcl_hash
    # implementation. We add more below.
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }

    # Include the X-Forwarded-Proto header, since we want to treat HTTPS
    # requests differently, and make sure this header is always passed
    # properly to the backend server.
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
    #return (hash);
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
    set beresp.ttl = 60s;
}
Restart the varnish service:
$ service varnish restart
You can double-check via:
$ varnishlog
or
$ varnishstat
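As a quick sanity check on caching, here is a hedged sketch (my own helper) that computes the hit ratio from `varnishstat -1` style output, assuming the usual `MAIN.cache_hit` / `MAIN.cache_miss` counter names with the value in the second column:

```shell
# Compute the Varnish cache hit ratio from "varnishstat -1" style output
# (counter name in column 1, value in column 2).
hit_ratio() {
  awk '$1 == "MAIN.cache_hit"  { hit  = $2 }
       $1 == "MAIN.cache_miss" { miss = $2 }
       END { if (hit + miss > 0) printf "%.1f%%\n", 100 * hit / (hit + miss) }'
}

# On the live server:
# varnishstat -1 | hit_ratio

# Demo with sample counters:
printf 'MAIN.cache_hit 75 0.00 Cache hits\nMAIN.cache_miss 25 0.00 Cache misses\n' | hit_ratio   # 75.0%
```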

Setup / Configure Stunnel for SSL tunnel ( 8080 to 443 )

$ apt-get install stunnel4 -y

$ cat /etc/nginx/ssl/star_ss_org.key /etc/nginx/ssl/star_ss_org.pem >> /etc/nginx/ssl/stunnel.pem

$ vi /etc/stunnel/stunnel.conf
client = yes
[varnish]
accept = 8080
; connect to the swift endpoint, e.g. ss02.ss.org
connect = <swift endpoint>:443
cert = /etc/nginx/ssl/stunnel.pem
Restart the stunnel service (on Ubuntu, also make sure ENABLED=1 is set in /etc/default/stunnel4, or the service won't start):
$ service stunnel4 restart
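To confirm stunnel came up, one option is to check that something is listening on 8080. A small sketch (my own helper, assuming `ss -lnt` output with the local address:port in column 4):

```shell
# Check that something is listening on a given port (column 4 of "ss -lnt"
# output is the local address:port).
listening_on() {
  awk -v p="$1" 'NR > 1 && $4 ~ (":" p "$") { found = 1 } END { exit !found }'
}

# On the live node:
# ss -lnt | listening_on 8080 && echo "stunnel is listening on 8080"

# Demo with sample output:
printf 'State Recv-Q Send-Q Local:Port Peer:Port\nLISTEN 0 128 *:8080 *:*\n' | listening_on 8080 && echo listening
```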

Final test

Test for your clients
$ swift --debug -A https://ss03.ss.org/auth/v1.0 -U ss -K ss stat -v
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): ss03.ss.org
DEBUG:requests.packages.urllib3.connectionpool:"GET /auth/v1.0 HTTP/1.1" 200 0
DEBUG:swiftclient:REQ: curl -i https://ss03.ss.org/auth/v1.0 -X GET
DEBUG:swiftclient:RESP STATUS: 200 OK
DEBUG:swiftclient:RESP HEADERS: {u'Content-Length': u'0', u'X-Varnish': u'32912', u'Set-Cookie': u'X-Auth-Token=AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05; Path=/', u'Age': u'0', u'X-Trans-Id': u'tx6fb7e69b98a64cf1b1c02-005a0b88db', u'Server': u'nginx/1.10.3 (Ubuntu)', u'Connection': u'keep-alive', u'Via': u'1.1 varnish-v4', u'Accept-Ranges': u'bytes', u'X-Storage-Token': u'AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05', u'Date': u'Wed, 15 Nov 2017 01:05:32 GMT', u'X-Storage-Url': u'https://ss03.ss.org/v1/AUTH_ss', u'X-Auth-Token': u'AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05', u'Content-Type': u'text/plain; charset=UTF-8', u'X-Openstack-Request-Id': u'tx6fb7e69b98a64cf1b1c02-005a0b88db'}
INFO:requests.packages.urllib3.connectionpool:Starting new HTTPS connection (1): ss03.ss.org
DEBUG:requests.packages.urllib3.connectionpool:"HEAD /v1/AUTH_ss HTTP/1.1" 200 0
DEBUG:swiftclient:REQ: curl -i https://ss03.ss.org/v1/AUTH_ss -I -H "X-Auth-Token: AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05"
DEBUG:swiftclient:RESP STATUS: 200 OK
DEBUG:swiftclient:RESP HEADERS: {u'X-Openstack-Request-Id': u'tx2b3cce143b4c459c864fe-005a0b88db', u'Content-Length': u'122', u'X-Account-Storage-Policy-Standard-Replica-Container-Count': u'4', u'Age': u'0', u'Accept-Ranges': u'bytes', u'X-Account-Storage-Policy-Standard-Replica-Object-Count': u'11', u'Via': u'1.1 varnish-v4', u'X-Account-Bytes-Used': u'1155343', u'Server': u'nginx/1.10.3 (Ubuntu)', u'Connection': u'keep-alive', u'X-Varnish': u'108', u'X-Timestamp': u'1510255401.40488', u'X-Account-Meta-Temp-Url-Key': u'380a104a-6b76-4b4f-8588-1c02ff3b25cc', u'X-Trans-Id': u'tx2b3cce143b4c459c864fe-005a0b88db', u'Date': u'Wed, 15 Nov 2017 01:05:32 GMT', u'X-Account-Storage-Policy-Standard-Replica-Bytes-Used': u'1155343', u'X-Account-Container-Count': u'4', u'Content-Type': u'text/plain; charset=utf-8', u'X-Account-Object-Count': u'11'}
                             StorageURL: https://ss03.ss.org/v1/AUTH_ss
                             Auth Token: AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05
                                Account: AUTH_ss
                             Containers: 4
                                Objects: 11
                                  Bytes: 1155343
Containers in policy "standard-replica": 4
   Objects in policy "standard-replica": 11
     Bytes in policy "standard-replica": 1155343
                      Meta Temp-Url-Key: 380a104a-6b76-4b4f-8588-1c02ff3b25cc
                 X-Openstack-Request-Id: tx2b3cce143b4c459c864fe-005a0b88db
                                    Via: 1.1 varnish-v4
                          Accept-Ranges: bytes
                                 Server: nginx/1.10.3 (Ubuntu)
                                    Age: 0
                             Connection: keep-alive
                              X-Varnish: 108
                            X-Timestamp: 1510255401.40488
                             X-Trans-Id: tx2b3cce143b4c459c864fe-005a0b88db
                           Content-Type: text/plain; charset=utf-8

You might see some varnish info attached to the headers:
    Server: nginx/1.10.3 (Ubuntu)
       Age: 0
Connection: keep-alive
 X-Varnish: 108
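Those headers are enough to tell a cache hit from a miss: a nonzero Age usually means Varnish served the object from cache. A rough sketch (my own helper) that classifies a response from its headers:

```shell
# Classify a response as cache hit or miss from its headers: a nonzero
# Age header usually means Varnish served the object from cache.
cache_state() {
  awk -F': ' 'tolower($1) == "age" { age = $2 + 0 }
              END { if (age > 0) print "HIT (cached, age " age "s)"; else print "MISS or fresh" }'
}

# Live check (URL from the example above):
# curl -sI https://ss03.ss.org/auth/v1.0 | cache_state

printf 'X-Varnish: 108\nAge: 42\n' | cache_state   # HIT (cached, age 42s)
```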


From the nginx, varnish and stunnel node:

nginx access or error log
$ tail -f /var/log/nginx/<node name>.access.log
or
$ tail -f /var/log/nginx/<node name>.error.log

varnish log
$ varnishlog

*   << BeReq    >> 109
-   Begin          bereq 108 fetch
-   Timestamp      Start: 1510707932.316241 0.000000 0.000000
-   BereqMethod    HEAD
-   BereqURL       /v1/AUTH_ss
-   BereqProtocol  HTTP/1.0
-   BereqHeader    X-Real-IP: 172.28.128.1
-   BereqHeader    X-Forwarded-Proto: https
-   BereqHeader    X-Forwarded-Port: 443
-   BereqHeader    Host: ss03.ss.org
-   BereqHeader    user-agent: python-swiftclient-3.4.0
-   BereqHeader    x-auth-token: AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05
-   BereqHeader    X-Forwarded-For: 172.28.128.1, 127.0.0.1
-   BereqMethod    GET
-   BereqProtocol  HTTP/1.1
-   BereqHeader    Accept-Encoding: gzip
-   BereqHeader    X-Varnish: 109
-   VCL_call       BACKEND_FETCH
-   VCL_return     fetch
-   BackendOpen    18 boot.default 127.0.0.1 8080 127.0.0.1 60674
-   Timestamp      Bereq: 1510707932.316325 0.000084 0.000084
-   Timestamp      Beresp: 1510707932.336352 0.020111 0.020027
-   BerespProtocol HTTP/1.1
-   BerespStatus   200
-   BerespReason   OK
-   BerespHeader   Content-Length: 122
-   BerespHeader   X-Account-Container-Count: 4
-   BerespHeader   X-Account-Storage-Policy-Standard-Replica-Object-Count: 11
-   BerespHeader   X-Account-Object-Count: 11
-   BerespHeader   Accept-Ranges: bytes
-   BerespHeader   X-Timestamp: 1510255401.40488
-   BerespHeader   X-Account-Meta-Temp-Url-Key: 380a104a-6b76-4b4f-8588-1c02ff3b25cc
-   BerespHeader   X-Account-Storage-Policy-Standard-Replica-Bytes-Used: 1155343
-   BerespHeader   X-Account-Storage-Policy-Standard-Replica-Container-Count: 4
-   BerespHeader   Content-Type: text/plain; charset=utf-8
-   BerespHeader   X-Account-Bytes-Used: 1155343
-   BerespHeader   X-Trans-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   BerespHeader   X-Openstack-Request-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   BerespHeader   Date: Wed, 15 Nov 2017 00:22:51 GMT
-   TTL            RFC 120 10 -1 1510707932 1510707932 1510705371 0 0
-   VCL_call       BACKEND_RESPONSE
-   TTL            VCL 60 10 0 1510707932
-   VCL_return     deliver
-   Storage        malloc s0
-   ObjProtocol    HTTP/1.1
-   ObjStatus      200
-   ObjReason      OK
-   ObjHeader      Content-Length: 122
-   ObjHeader      X-Account-Container-Count: 4
-   ObjHeader      X-Account-Storage-Policy-Standard-Replica-Object-Count: 11
-   ObjHeader      X-Account-Object-Count: 11
-   ObjHeader      X-Timestamp: 1510255401.40488
-   ObjHeader      X-Account-Meta-Temp-Url-Key: 380a104a-6b76-4b4f-8588-1c02ff3b25cc
-   ObjHeader      X-Account-Storage-Policy-Standard-Replica-Bytes-Used: 1155343
-   ObjHeader      X-Account-Storage-Policy-Standard-Replica-Container-Count: 4
-   ObjHeader      Content-Type: text/plain; charset=utf-8
-   ObjHeader      X-Account-Bytes-Used: 1155343
-   ObjHeader      X-Trans-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   ObjHeader      X-Openstack-Request-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   ObjHeader      Date: Wed, 15 Nov 2017 00:22:51 GMT
-   Fetch_Body     3 length stream
-   BackendReuse   18 boot.default
-   Timestamp      BerespBody: 1510707932.336467 0.020226 0.000115
-   Length         122
-   BereqAcct      303 0 303 620 122 742
-   End

*   << Request  >> 108
-   Begin          req 107 rxreq
-   Timestamp      Start: 1510707932.316185 0.000000 0.000000
-   Timestamp      Req: 1510707932.316185 0.000000 0.000000
-   ReqStart       127.0.0.1 40464
-   ReqMethod      HEAD
-   ReqURL         /v1/AUTH_ss
-   ReqProtocol    HTTP/1.0
-   ReqHeader      X-Real-IP: 172.28.128.1
-   ReqHeader      X-Forwarded-For: 172.28.128.1
-   ReqHeader      X-Forwarded-Proto: https
-   ReqHeader      X-Forwarded-Port: 443
-   ReqHeader      Host: ss03.ss.org
-   ReqHeader      Connection: close
-   ReqHeader      Accept-Encoding: identity
-   ReqHeader      user-agent: python-swiftclient-3.4.0
-   ReqHeader      x-auth-token: AUTH_tk7b9e4ac141dc4cd3b202189214ca7d05
-   ReqUnset       X-Forwarded-For: 172.28.128.1
-   ReqHeader      X-Forwarded-For: 172.28.128.1, 127.0.0.1
-   VCL_call       RECV
-   VCL_acl        MATCH upstream_proxy "127.0.0.1"
-   ReqUnset       X-Forwarded-For: 172.28.128.1, 127.0.0.1
-   ReqHeader      X-Forwarded-For: 172.28.128.1, 127.0.0.1
-   VCL_return     hash
-   ReqUnset       Accept-Encoding: identity
-   VCL_call       HASH
-   VCL_return     lookup
-   VCL_call       MISS
-   VCL_return     fetch
-   Link           bereq 109 fetch
-   Timestamp      Fetch: 1510707932.336489 0.020304 0.020304
-   RespProtocol   HTTP/1.1
-   RespStatus     200
-   RespReason     OK
-   RespHeader     Content-Length: 122
-   RespHeader     X-Account-Container-Count: 4
-   RespHeader     X-Account-Storage-Policy-Standard-Replica-Object-Count: 11
-   RespHeader     X-Account-Object-Count: 11
-   RespHeader     X-Timestamp: 1510255401.40488
-   RespHeader     X-Account-Meta-Temp-Url-Key: 380a104a-6b76-4b4f-8588-1c02ff3b25cc
-   RespHeader     X-Account-Storage-Policy-Standard-Replica-Bytes-Used: 1155343
-   RespHeader     X-Account-Storage-Policy-Standard-Replica-Container-Count: 4
-   RespHeader     Content-Type: text/plain; charset=utf-8
-   RespHeader     X-Account-Bytes-Used: 1155343
-   RespHeader     X-Trans-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   RespHeader     X-Openstack-Request-Id: tx2b3cce143b4c459c864fe-005a0b88db
-   RespHeader     Date: Wed, 15 Nov 2017 00:22:51 GMT
-   RespHeader     X-Varnish: 108
-   RespHeader     Age: 0
-   RespHeader     Via: 1.1 varnish-v4
-   VCL_call       DELIVER
-   VCL_return     deliver
-   Timestamp      Process: 1510707932.336530 0.020345 0.000041
-   RespHeader     Accept-Ranges: bytes
-   Debug          "RES_MODE 0"
-   RespHeader     Connection: close
-   Timestamp      Resp: 1510707932.336560 0.020375 0.000030
-   ReqAcct        300 0 300 684 0 684
-   End

*   << Session  >> 107
-   Begin          sess 0 HTTP/1
-   SessOpen       127.0.0.1 40464 :80 127.0.0.1 80 1510707932.316122 16
-   Link           req 108 rxreq
-   SessClose      RESP_CLOSE 0.020
-   End
-

可以學習的叫知識,可以練習的叫技巧,但學不來也練不會的,叫做熱情。
You can learn facts and you can train skills, but passion is something that has to be felt by the heart.
搖滾教室 (The School of Rock)

Tuesday, August 15, 2017

Analyzing Object Storage Networking Tool Sets

Object Storage Networking Tools as Analyzers

ping

how to use

ping is a default network tool in most Linux distributions; you shouldn't need to yum or apt install it. Here is a ping CLI example with a packet size close to the MTU (e.g. 1500) to see what happens:
$ ping <host> -s <packet size, e.g. 1480> -c <count, e.g. 3> [-i <interval>]
Here is the example
$ ping 192.168.202.10 -s 1480 -c 10
PING 192.168.202.10 (192.168.202.10): 1480 data bytes
1488 bytes from 192.168.202.10: icmp_seq=0 ttl=63 time=8.118 ms
1488 bytes from 192.168.202.10: icmp_seq=1 ttl=63 time=6.077 ms
1488 bytes from 192.168.202.10: icmp_seq=2 ttl=63 time=3.317 ms
1488 bytes from 192.168.202.10: icmp_seq=3 ttl=63 time=3.430 ms
1488 bytes from 192.168.202.10: icmp_seq=4 ttl=63 time=3.659 ms
1488 bytes from 192.168.202.10: icmp_seq=5 ttl=63 time=3.277 ms
1488 bytes from 192.168.202.10: icmp_seq=6 ttl=63 time=3.220 ms
1488 bytes from 192.168.202.10: icmp_seq=7 ttl=63 time=3.617 ms
1488 bytes from 192.168.202.10: icmp_seq=8 ttl=63 time=3.707 ms
1488 bytes from 192.168.202.10: icmp_seq=9 ttl=63 time=3.024 ms

--- 192.168.202.10 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.024/4.145/8.118/1.558 ms
You can see at the end of the summary how many packets were transmitted, how many packets were received at the remote end, and the total packet loss.
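The 1480-byte payload above is chosen with the MTU in mind: an IPv4 ICMP echo adds 20 bytes of IP header and 8 bytes of ICMP header, so on a 1500-byte MTU link the largest unfragmented payload is 1472 bytes. A quick sketch of the arithmetic (the ping target is the example host from above):

```shell
# Max unfragmented ICMP payload on IPv4:
#   MTU - 20 bytes (IPv4 header) - 8 bytes (ICMP header)
mtu=1500
payload=$((mtu - 20 - 8))
echo "$payload"   # 1472

# To verify on a live link, forbid fragmentation (Linux ping syntax):
# ping -M do -s "$payload" -c 3 192.168.202.10
```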


packet loss

Packet loss is quite important. Here is more info regarding how packet loss affects your network:
Packet loss is almost always bad when it occurs at the final destination. Packet loss happens when a packet doesn't make it there and back again. Anything over 2% packet loss over a period of time is a strong indicator of problems. Most internet protocols can correct for some packet loss, so you really shouldn't expect to see a lot of impact from packet loss until that loss starts to approach 5% and higher. Anything less than this is showing a possible problem, but one that is probably not impacting your experience significantly at present (unless you're an online gamer or something similar that requires 'twitch' reflexes).
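To put that threshold to work, here is a small sketch (my own helper) that extracts the loss percentage from the ping summary line and flags anything above 2%:

```shell
# Extract the packet-loss percentage from the ping summary line and flag
# anything above the 2% threshold mentioned above.
loss_pct() {
  awk -F'[ ,]+' '/packet loss/ { for (i = 1; i <= NF; i++) if ($i ~ /%$/) { sub(/%/, "", $i); print $i } }'
}

loss=$(printf '10 packets transmitted, 10 packets received, 0.0%% packet loss\n' | loss_pct)
echo "packet loss: $loss%"   # packet loss: 0.0%
awk -v l="$loss" 'BEGIN { exit !(l > 2) }' && echo "loss above the 2% threshold" || true
```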

latency

latency estimation from the ping result:
e.g. 1488 bytes from 192.168.202.10: icmp_seq=9 ttl=63 time=20.024 ms
latency: 20.024 ms = 0.020024 seconds

If your network latency is larger than 0.07 seconds (70 ms), it has the potential to hurt your object storage performance.
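A sketch of the same estimate done automatically (my own helper), averaging the RTTs from ping output and comparing against the 70 ms threshold:

```shell
# Average the RTTs from ping output lines like
# "1488 bytes from ...: icmp_seq=0 ttl=63 time=8.118 ms".
avg_rtt_ms() {
  awk -F'time=' '/time=/ { sum += $2 + 0; n++ } END { if (n) printf "%.3f\n", sum / n }'
}

sample='1488 bytes from 192.168.202.10: icmp_seq=0 ttl=63 time=8.118 ms
1488 bytes from 192.168.202.10: icmp_seq=1 ttl=63 time=6.077 ms'
avg=$(printf '%s\n' "$sample" | avg_rtt_ms)
echo "average rtt: $avg ms"

# Flag latency above the 70 ms threshold:
awk -v a="$avg" 'BEGIN { exit !(a > 70) }' && echo "latency may hurt object storage" || true
```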

troubleshooting when you can't run ping

Make sure your firewall doesn't block ICMP.

dstat

dstat is usually not installed by default, so you might need to install it first; the installation is pretty straightforward.

dstat installation

# yum install dstat -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.hmc.edu
 * extras: mirror.sjc02.svwh.net
 * updates: mirror.linuxfix.com
Resolving Dependencies
--> Running transaction check
---> Package dstat.noarch 0:0.7.2-12.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================================================================================================================================================================================================================
 Package                                                         Arch                                                             Version                                                                  Repository                                                      Size
================================================================================================================================================================================================================================================================================
Installing:
 dstat                                                           noarch                                                           0.7.2-12.el7                                                             base                                                           163 k

Transaction Summary
================================================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 163 k
Installed size: 752 k
Downloading packages:
dstat-0.7.2-12.el7.noarch.rpm                                                                                                                                                                                                                            | 163 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : dstat-0.7.2-12.el7.noarch                                                                                                                                                                                                                                    1/1
  Verifying  : dstat-0.7.2-12.el7.noarch                                                                                                                                                                                                                                    1/1

Installed:
  dstat.noarch 0:0.7.2-12.el7

Complete!

how to use dstat in general

I usually don't add too many parameters to dstat.
$ dstat
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
  4   1  95   0   0   0|  31k  138k|   0     0 |5335B   16k| 706  1480
  0   1  99   0   0   0|   0     0 |1774B  998B|   0     0 | 242   594
  1   0  99   0   0   0|   0     0 |1668B 1422B|   0     0 | 241   592
  2   0  98   0   0   0|   0    16k| 827B 1065B|   0     0 | 277   749
  1   1  99   0   0   0|   0     0 | 891B  350B|   0     0 | 242   616
  1   1  98   0   0   0|   0     0 | 666B  358B|   0     0 | 236   602
  1   0  99   0   0   0|   0     0 | 771B  350B|   0     0 | 217   581
  1   1  99   0   0   0|   0     0 |1355B 1320B|   0     0 | 265   612
  2   0  98   0   0   0|   0    12k| 831B 1065B|   0     0 | 267   755
  0   0  99   0   0   1|   0     0 | 666B  350B|   0     0 | 251   625
  1   1  98   0   0   0|   0     0 | 891B  358B|   0     0 | 249   638
  1   1  99   0   0   0|   0     0 |1601B  350B|   0     0 | 261   607
  1   0  99   0   0   0|   0     0 |1563B 1874B|   0     0 | 247   623
  2   0  98   0   0   0|   0    32k| 953B 1167B|   0     0 | 284   756

how to use dstat with more parameters

If you would like to watch specific drives or NICs, you can try the command below. It lists the specified drives and NICs and reports stats every 5 seconds.
# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   40G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 39.5G  0 part
  ├─centos-root 253:0    0 38.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    8G  0 disk /srv/node/d0
sdc               8:32   0    8G  0 disk /srv/node/d1
sdd               8:48   0    8G  0 disk /srv/node/d2

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:0c:4e:dc brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 86285sec preferred_lft 86285sec
    inet6 fe80::a00:27ff:fe0c:4edc/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:5c:d7:41 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.42/24 brd 172.28.128.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe5c:d741/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:be:6e:03 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:febe:6e03/64 scope link
       valid_lft forever preferred_lft forever
5: tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 100
    link/none
    inet 10.123.0.4 peer 10.123.0.1/32 scope global tun0
       valid_lft forever preferred_lft forever

 # dstat -D sda,sdb,sdc,sdd -N enp0s3,enp0s8,enp0s9,tun0 5
 You did not select any stats, using -cdngy by default.
 ----total-cpu-usage---- --dsk/sda-----dsk/sdb-----dsk/sdc-----dsk/sdd-- -net/enp0s3--net/enp0s8--net/enp0s9---net/tun0- ---paging-- ---system--
 usr sys idl wai hiq siq| read  writ: read  writ: read  writ: read  writ| recv  send: recv  send: recv  send: recv  send|  in   out | int   csw
  10   7  82   1   0   0| 618k   64k:  14k   13k:  18k 9707B:  18k 9659B|   0     0 :   0     0 :   0     0 :   0     0 |   0     0 |1584  2081
  11   8  81   0   0   0|   0     0 :   0     0 :   0     0 :   0     0 |  60B  164B:  46B   57B:   0     0 :  10B   22B|   0     0 |1864  2192
   8   8  83   0   0   0|   0     0 :   0     0 :   0     0 :   0     0 |  60B  135B:1099B 4922B:   0     0 : 421B   11k|   0     0 |1836  2326
   5   1  93   0   0   0|   0   819B:  29k   12k:  34k 8192B:  34k 8397B| 129B  200B:  46B   38B:   0     0 :  10B   22B|   0     0 |1174  1822
  17  15  68   0   0   0|   0  9318B:5734B   29k:   0    19k:   0    19k|  96B  182B: 290B  296B:   0     0 : 110B  135B|   0     0 |2681  2983
   3   1  96   0   0   0|   0   289k:   0     0 :   0     0 :   0     0 |  60B  132B:  46B   57B:   0     0 :  10B   22B|   0     0 | 955  1417
   7   5  88   0   0   0|   0  2458B:   0     0 :   0     0 :   0     0 |  78B  140B: 237B  296B:   0     0 : 110B  135B|   0     0 |1491  1920
   9   7  84   0   0   0|   0  1638B:   0     0 :   0     0 :   0     0 |  60B  119B:  58B   69B:   0     0 :  10B   22B|   0     0 |1523  1847
   3   1  96   0   0   0|   0     0 :   0     0 :   0     0 :   0     0 |  60B  124B:1007B 4910B:   0     0 : 421B   11k|   0     0 |1041  1603
  12   7  80   0   0   0|   0     0 :   0    10k:   0  3789B:   0  5530B| 127B  185B:  46B   38B:   0     0 :  10B   22B|   0     0 |1815  2292
  19  13  68   0   0   0|   0    15k:   0    25k:   0    13k:   0    11k|  60B  122B: 283B  308B:   0     0 : 110B  135B|   0     0 |2219  2395
   6   1  93   0   0   0|   0  4915B:   0     0 :   0     0 :   0     0 |  60B  117B:  83B   57B:   0     0 :  10B   22B|   0     0 | 915  1267
   8   5  87   0   0   0|   0     0 :   0     0 :   0     0 :   0     0 |  60B  127B: 310B  370B:   0     0 : 150B  177B|   0     0 |1507  1936
   7   6  87   0   0   0|   0   287k:   0     0 :   0     0 :   0     0 |  60B  120B:  46B   57B:   0     0 :  10B   22B|   0     0 |1434  1801
   2   1  97   0   0   0|   0  3277B:   0     0 :   0     0 :   0     0 |  60B  114B:1019B 4997B:   0     0 : 421B   11k|   0     0 | 907  1489
  10   6  83   0   0   0|   0     0 :   0  9114B:  22k 4710B:   0  4710B|  93B  152B:  46B   38B:   0     0 :  10B   22B|   0     0 |1881  2385
  15  13  72   0   0   0|   0  3686B:  34k   25k:  13k   19k:  34k   17k|  96B  182B: 387B  323B:   0     0 : 131B  146B|   0     0 |2161  2406
   2   0  98   0   0   0|   0  1638B:   0   819B:   0     0 :   0     0 |  78B  138B:  46B   57B:   0     0 :  10B   22B|   0     0 | 753  1272
  10   9  81   0   0   0|   0  2458B:   0     0 :   0     0 :   0     0 |  60B  137B: 303B  335B:   0     0 : 131B  146B|   0     0 |1897  2255
   6   3  91   0   0   0|   0     0 :   0     0 :   0     0 :   0     0 |  60B  117B:  46B   57B:   0     0 :  10B   22B|   0     0 |1148  1533
   3   1  97   0   0   0|   0   288k:   0     0 :   0     0 :   0     0 |  78B  138B: 980B 4841B:   0     0 : 411B   11k|   0     0 | 909  1433
  13  11  75   0   0   0|   0     0 :   0    15k:   0  5222B:   0  8294B| 109B  186B:  95B   50B:   0     0 :  10B   22B|   0     0 |2012  2392

iperf

install iperf

iperf is usually not installed by default on Linux, so you might need to install it before you start to use it. The examples below use classic iperf, which runs on port 5001 (iperf3, the newer rewrite, defaults to port 5201 and is not wire-compatible with classic iperf):
$ yum install iperf

troubleshooting for running iperf

When testing from Swift node to Swift node, iptables sometimes blocks iperf's port 5001 traffic.

Solution 1, disable the firewall temporarily for iperf

For CentOS

=== flush out iptables ===
$ iptables -F

=== bring it back once you finish ===

CentOS 6
$ service iptables restart

CentOS 7
$ service firewalld restart

Solution 2, add a rule for iperf in iptables

$ iptables -I INPUT -p udp -m udp --dport 5001 -j ACCEPT
PS: this covers UDP tests; for TCP tests add the same rule with "-p tcp -m tcp". The rule won't survive a reboot unless you save it (e.g. with iptables-save).

how to use iperf in general

In general, running iperf is very straightforward. The server end listens for packets from iperf clients:
$ iperf -s
The client end generates packets and sends them to the iperf server:
$ iperf -c <server_ip>

how to use iperf example with more detail

The main purpose for which I use iperf is to measure bandwidth. Here is an example testing with the TCP protocol:
$ iperf -c 192.168.201.239
------------------------------------------------------------
Client connecting to 192.168.201.239, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.201.153 port 56476 connected with 192.168.201.239 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   959 MBytes   804 Mbits/sec
You can also test in both directions at once: from server to client, then from client to server.
$ iperf -c 192.168.201.239 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.201.239, TCP port 5001
TCP window size:  162 KByte (default)
------------------------------------------------------------
[  5] local 192.168.201.153 port 56970 connected with 192.168.201.239 port 5001
[  4] local 192.168.201.153 port 5001 connected with 192.168.201.239 port 59806
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   680 MBytes   570 Mbits/sec
[  4]  0.0-10.0 sec  1023 MBytes   856 Mbits/sec
You can test with the UDP protocol as well; here is an example:
$ iperf -c 192.168.201.239 -u
------------------------------------------------------------
Client connecting to 192.168.201.239, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size:  208 KByte (default)
------------------------------------------------------------
[  3] local 192.168.201.153 port 40041 connected with 192.168.201.239 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.23 MBytes  1.03 Mbits/sec
[  3] Sent 893 datagrams
read failed: Connection refused
[  3] WARNING: did not receive ack of last datagram after 5 tries.
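The "Connection refused" warning shows up because the server here was listening on TCP only, so it never acknowledged the UDP datagrams. To run a real UDP test the server must also be started in UDP mode; a sketch using standard iperf 2 flags (-b sets the UDP send rate, and 10M is an arbitrary example):

```shell
# Server end: listen for UDP instead of TCP
iperf -s -u

# Client end: send UDP at 10 Mbits/sec instead of the 1 Mbit/sec default
iperf -c 192.168.201.239 -u -b 10M
```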
Here is the server-end output example.
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56474
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   981 MBytes   822 Mbits/sec
[  5] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56476
[  5]  0.0-10.0 sec   959 MBytes   803 Mbits/sec
[  4] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56970
------------------------------------------------------------
Client connecting to 192.168.201.153, TCP port 5001
TCP window size:  272 KByte (default)
------------------------------------------------------------
[  6] local 192.168.201.239 port 59806 connected with 192.168.201.153 port 5001
[  6]  0.0-10.0 sec  1023 MBytes   858 Mbits/sec
[  4]  0.0-10.0 sec   680 MBytes   568 Mbits/sec

iostat

iostat is mostly used to check drive I/O status. It has nothing to do with the network itself; however, I usually use it when troubleshooting network issues on object storage.

iostat installation

It is usually installed by default on Linux, but sometimes you still need to install it yourself. If that happens, here is how to install it.
$ sudo apt install sysstat
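On CentOS/RHEL boxes (like the ss02 node in example 2 below), the package name is the same; assuming yum is the package manager:

```shell
# sysstat provides iostat, sar, mpstat, and friends
sudo yum install -y sysstat
```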

iostat example 1

If you want to know how to run iostat, just try iostat -h. Here is the most common invocation I am using now.
$ iostat -dmx 2
Linux 4.4.0-89-generic (elk-swift) 08/10/2017 _x86_64_ (2 CPU)

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               1.09    11.63    0.97    7.87     0.03     0.13    38.22     0.01    1.06    1.01    1.07   0.32   0.28
scd0              0.00     0.00    0.00    0.00     0.00     0.00     8.00     0.00    0.50    0.50    0.00   0.50   0.00
dm-0              0.00     0.00    0.75   14.31     0.02     0.12    19.62     0.01    0.73    1.24    0.71   0.18   0.27
dm-1              0.00     0.00    1.30    3.97     0.01     0.02     8.00     0.06   11.55    0.16   15.30   0.02   0.01

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
vda               0.00     0.50    0.00    1.50     0.00     0.01     8.00     0.00    0.00    0.00    0.00   0.00   0.00
scd0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    1.50     0.00     0.01     8.00     0.00    1.33    0.00    1.33   1.33   0.20
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
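When watching this output scroll by, I mostly care about drives whose %util (the last column) is pegged. A quick awk filter can flag them; a sketch with hypothetical sample values, since the real input would be `iostat -dmx 2` piped in:

```shell
# Print device name and %util for any device busier than 80%.
# The sample lines below use hypothetical values for illustration;
# in practice run:  iostat -dmx 2 | awk '$NF+0 > 80 {print $1, $NF}'
# (header rows are skipped automatically because "%util"+0 evaluates to 0)
printf '%s\n' \
  'vda   0.00 0.50 0.00  1.50 0.00 0.01  8.00 0.00 0.00 0.00 0.00 0.00  0.20' \
  'dm-0  0.00 0.00 0.75 14.31 0.02 0.12 19.62 0.01 0.73 1.24 0.71 0.18 95.27' |
awk '$NF+0 > 80 {print $1, $NF}'   # prints: dm-0 95.27
```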

iostat example 2

Or you can try the parameters below and at the same time grep for the drive (e.g. sdd) you want to target.
# iostat -xtc 2 10
Linux 3.10.0-327.el7.x86_64 (ss02.ss.idv)  08/15/2017  _x86_64_ (2 CPU)

08/15/2017 07:28:44 PM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           9.38    0.00    6.35    0.32    0.00   83.94

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.35    0.83    1.88     8.36     9.56    13.19     0.00    1.15    0.72    1.35   0.80   0.22
sdd               0.00     0.32    0.73    0.70    10.33     6.39    23.51     0.00    1.63    0.84    2.45   0.97   0.14
sda               0.00     0.22    8.44    2.33   298.40    52.65    65.15     0.03    3.13    3.36    2.30   0.94   1.02
sdc               0.00     0.32    0.73    0.70    10.33     6.40    23.48     0.00    1.91    0.86    3.01   0.96   0.14
dm-0              0.00     0.00    7.31    1.80   262.74    50.22    68.69     0.03    3.06    2.98    3.37   1.09   0.99
dm-1              0.00     0.00    0.17    0.00     3.15     0.00    36.49     0.00   14.71   14.71    0.00  13.68   0.24

08/15/2017 07:28:46 PM
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.14    0.00    1.31    0.00    0.00   95.55

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sda               0.00     1.50    0.00    5.00     0.00    36.25    14.50     0.02    4.30    0.00    4.30   0.70   0.35
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    6.00     0.00    36.25    12.08     0.03    5.08    0.00    5.08   0.58   0.35
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00


# iostat -xtc 2 10 | grep sdd
sdd               0.00     0.32    0.73    0.70    10.34     6.38    23.37     0.00    1.63    0.84    2.44   0.96   0.14
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
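One drawback of grepping for a single drive is that the column header disappears. Extending the grep pattern keeps it; a sketch against shortened sample lines (in practice you would pipe `iostat -xtc 2 10` instead):

```shell
# Keep the "Device:" header row alongside the sdd rows
printf '%s\n' \
  'Device:  rrqm/s  wrqm/s  r/s   w/s' \
  'sdb      0.00    0.35    0.83  1.88' \
  'sdd      0.00    0.32    0.73  0.70' |
grep -E 'Device|sdd'   # prints the header line followed by the sdd line
```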