This post shows you how to set up a KAIO (Kubernetes All In One) dev box on your VirtualBox Ubuntu 16.04.
1. Prepare the Vagrantfile
$ cat Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.define "ss03" do |ss03|
    ss03.vm.box = "ubuntu/xenial64"
    ss03.vm.box_version = "20171011.0.0"
    ss03.ssh.insert_key = false
    ss03.vm.hostname = "ss03.swiftstack.idv"
    ss03.vm.network "private_network", ip: "172.28.128.43", name: "vboxnet0"
    ss03.vm.provider :virtualbox do |vb|
      vb.memory = 4096
      vb.cpus = 2
    end
  end
end
$ vagrant up
$ vagrant ssh ss03
This article shows you how to leverage caching to speed up your transactions. The solution we picked is Varnish; however, Varnish doesn't support SSL unless you pay for Varnish Plus. Thus we take advantage of a reverse proxy plus stunnel to make client -> SSL -> Varnish -> SSL -> Swift (object storage) happen.
Setup / Configure Nginx Reverse Proxy with SSL ( 443 to 80 )
Setup / Configure Varnish for cache ( 80 to 8080 )
Setup / Configure Stunnel for SSL tunnel ( 8080 to 443 )
PS: if you would like to let Varnish connect to the Swift endpoint with SSL directly, you would need to buy Varnish Plus. The other option is connecting to the Swift endpoint without SSL.
Setup / Configure Nginx Reverse Proxy with SSL ( 443 to 80 )
Here I use the wildcard cert "*.ss.org"; ss02 is the Swift endpoint, and ss03 is the Nginx, Varnish, and stunnel server.
$ apt install nginx -y
Comment out the server block in /etc/nginx/sites-available/default,
since we don't need the default web page running on port 80 (Varnish will listen there instead); the reverse proxy config itself is sketched below.
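The reverse proxy terminates SSL on 443 and forwards everything to Varnish on local port 80. Here is a minimal sketch of the server block; the file name, certificate paths, and server name are assumptions, so adjust them to wherever your wildcard cert actually lives:
# /etc/nginx/sites-available/ssl-proxy (hypothetical file; symlink it into sites-enabled)
server {
    listen 443 ssl;
    server_name ss03.ss.org;

    ssl_certificate     /etc/nginx/ssl/ss.org.crt;    # assumed cert path
    ssl_certificate_key /etc/nginx/ssl/ss.org.key;    # assumed key path

    location / {
        proxy_pass http://127.0.0.1:80;               # Varnish listens here
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;     # the VCL below hashes on this
    }
}
$ service nginx restart
Setting X-Forwarded-Proto here matters, because the Varnish VCL below includes it in the cache hash.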
Setup / Configure Varnish for cache ( 80 to 8080 )
Edit /etc/varnish/default.vcl:
vcl 4.0;

# List of upstream proxies we trust to set X-Forwarded-For correctly.
acl upstream_proxy {
    "127.0.0.1";
}

# Default backend definition. Set this to point to your content server.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Set the X-Forwarded-For header so the backend can see the original
    # IP address. If one is already set by an upstream proxy, we'll just re-use that.
    if (client.ip ~ upstream_proxy && req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For;
    } else {
        set req.http.X-Forwarded-For = regsub(client.ip, ":.*", "");
    }
}

sub vcl_hash {
    # URL and hostname/IP are the default components of the vcl_hash
    # implementation. We add more below.
    hash_data(req.url);
    if (req.http.host) {
        hash_data(req.http.host);
    } else {
        hash_data(server.ip);
    }
    # Include the X-Forwarded-Proto header, since we want to treat HTTPS
    # requests differently, and make sure this header is always passed
    # properly to the backend server.
    if (req.http.X-Forwarded-Proto) {
        hash_data(req.http.X-Forwarded-Proto);
    }
    #return (hash);
}

sub vcl_backend_response {
    # Happens after we have read the response headers from the backend.
    #
    # Here you clean the response headers, removing silly Set-Cookie headers
    # and other mistakes your backend does.
    # Cache everything for 60 seconds.
    set beresp.ttl = 60s;
}
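One caveat before restarting: the Ubuntu Varnish package listens on port 6081 by default, not 80. For the 80 -> 8080 flow described above, change varnishd's -a option to -a :80. On Ubuntu 16.04 with systemd this lives in the ExecStart line of the unit file (the exact line varies by Varnish version), roughly:
$ sudo systemctl edit --full varnish.service
# in ExecStart, change the listen address: -a :6081 -> -a :80
$ sudo systemctl daemon-reload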
Restart the Varnish service:
$ service varnish restart
You can double-check it is working via varnishlog or varnishstat:
$ varnishlog
$ varnishstat
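For a quick sanity check that caching actually happens, watch the hit/miss counters (the field names below are from Varnish 4's MAIN namespace) while repeating the same GET twice; the second request should bump cache_hit:
$ varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss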
Setup / Configure Stunnel for SSL tunnel ( 8080 to 443 )
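stunnel is what lets plain-HTTP Varnish reach the SSL-only Swift endpoint: it listens locally on 8080 in client mode and wraps the traffic in SSL toward ss02 on 443. Here is a minimal sketch; the ss02.ss.org hostname is an assumption based on the wildcard cert above:
$ apt install stunnel4 -y
$ cat /etc/stunnel/swift.conf
[swift]
client = yes
accept = 127.0.0.1:8080
connect = ss02.ss.org:443
On Ubuntu 16.04, also set ENABLED=1 in /etc/default/stunnel4, then:
$ service stunnel4 restart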
ping
ping is a default network tool in most Linux distributions; you shouldn't need to yum or apt install it. Here is a ping CLI example with a packet size around the MTU (e.g. 1500) to see what happens.
$ ping <host> -s <packet size, e.g. 1480> -c <count, e.g. 3> -i <interval between packets>
Here is the example
$ ping 192.168.202.10 -s 1480 -c 10
PING 192.168.202.10 (192.168.202.10): 1480 data bytes
1488 bytes from 192.168.202.10: icmp_seq=0 ttl=63 time=8.118 ms
1488 bytes from 192.168.202.10: icmp_seq=1 ttl=63 time=6.077 ms
1488 bytes from 192.168.202.10: icmp_seq=2 ttl=63 time=3.317 ms
1488 bytes from 192.168.202.10: icmp_seq=3 ttl=63 time=3.430 ms
1488 bytes from 192.168.202.10: icmp_seq=4 ttl=63 time=3.659 ms
1488 bytes from 192.168.202.10: icmp_seq=5 ttl=63 time=3.277 ms
1488 bytes from 192.168.202.10: icmp_seq=6 ttl=63 time=3.220 ms
1488 bytes from 192.168.202.10: icmp_seq=7 ttl=63 time=3.617 ms
1488 bytes from 192.168.202.10: icmp_seq=8 ttl=63 time=3.707 ms
1488 bytes from 192.168.202.10: icmp_seq=9 ttl=63 time=3.024 ms
--- 192.168.202.10 ping statistics ---
10 packets transmitted, 10 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.024/4.145/8.118/1.558 ms
You can see the summary at the end: it shows how many packets were transmitted, how many packets were received at the remote end, and the total packet loss.
packet loss
Packet loss is quite important. Here is more info regarding how packet loss affects your network:
Packet loss is almost always bad when it occurs at the final destination. Packet loss happens when a packet doesn't make it there and back again. Anything over 2% packet loss over a period of time is a strong indicator of problems. Most internet protocols can correct for some packet loss, so you really shouldn't expect to see a lot of impact from packet loss until that loss starts to approach 5% and higher. Anything less than this is showing a possible problem, but one that is probably not impacting your experience significantly at present (unless you're an online gamer or something similar that requires 'twitch' reflexes).
latency
Latency can be estimated from the ping result, e.g.:
e.g. 1488 bytes from 192.168.202.10: icmp_seq=9 ttl=63 time=20.024 ms
latency : 20.024 ms = 0.020024 seconds
If your network latency is larger than 0.07 seconds (70 ms), we think that latency has the potential to hurt your object storage performance.
troubleshooting when you can't run ping
Please make sure your firewall doesn't block ICMP.
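As a sketch for an iptables-based host (rule position is an assumption; adapt it to your existing chains):
$ iptables -L -n | grep -i icmp                                  # check whether ICMP is being filtered
$ iptables -I INPUT -p icmp --icmp-type echo-request -j ACCEPT   # temporarily allow echo requests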
dstat
dstat is usually not installed by default, so you might need to install it first; however, installation is pretty straightforward.
dstat installation
# yum install dstat -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.hmc.edu
* extras: mirror.sjc02.svwh.net
* updates: mirror.linuxfix.com
Resolving Dependencies
--> Running transaction check
---> Package dstat.noarch 0:0.7.2-12.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================
 Package        Arch        Version             Repository  Size
================================================================
Installing:
 dstat          noarch      0.7.2-12.el7        base        163 k

Transaction Summary
================================================================
Install  1 Package
Total download size: 163 k
Installed size: 752 k
Downloading packages:
dstat-0.7.2-12.el7.noarch.rpm | 163 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : dstat-0.7.2-12.el7.noarch 1/1
Verifying : dstat-0.7.2-12.el7.noarch 1/1
Installed:
dstat.noarch 0:0.7.2-12.el7
Complete!
how to use dstat in general
I usually don't add too many parameters to dstat.
If you would like to watch specific drives or NICs, you can try the example below, which lists only the specified drives and NICs and collects info in 20-second batches.
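A sketch matching that description; sda, sdb, and eth0 are placeholder device names, so substitute your own:
$ dstat                                 # plain run: cpu/disk/net/paging/system, 1-second samples
$ dstat -d -D sda,sdb -n -N eth0 20     # only the listed drives and NIC, sampled every 20 seconds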
iperf
iperf is usually not installed by default on Linux; you might need to install it before you can start using it.
$ yum install iperf -y
Note: the examples below use classic iperf (iperf2), which defaults to TCP port 5001; iperf3 is a separate package with a different default port (5201).
troubleshooting when running iperf
From Swift node to Swift node, iptables sometimes blocks the iperf test on port 5001.
Solution 1: temporarily disable the firewall for iperf.
For CentOS
=== flush out iptables ===
$ iptables -F
=== bring it back once you finish ===
CentOS 6
$ service iptables restart
CentOS 7
$ service firewalld restart
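Solution 2 is less destructive: open just the iperf port instead of flushing all rules (a sketch; note that the bidirectional -d test below also needs port 5001 reachable on the client side).
CentOS 6
$ iptables -I INPUT -p tcp --dport 5001 -j ACCEPT
CentOS 7
$ firewall-cmd --add-port=5001/tcp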
In general, running iperf is very straightforward. The server end listens for packets from iperf clients:
$ iperf -s
The client end generates packets toward the iperf server:
$ iperf -c <server_ip>
how to use iperf, an example with more detail
The main purpose I use iperf for is measuring bandwidth; here is an example testing with the TCP protocol.
$ iperf -c 192.168.201.239
------------------------------------------------------------
Client connecting to 192.168.201.239, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.201.153 port 56476 connected with 192.168.201.239 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 959 MBytes 804 Mbits/sec
You can also test bidirectionally, from server to client and from client to server at the same time:
$ iperf -c 192.168.201.239 -d
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 192.168.201.239, TCP port 5001
TCP window size: 162 KByte (default)
------------------------------------------------------------
[ 5] local 192.168.201.153 port 56970 connected with 192.168.201.239 port 5001
[ 4] local 192.168.201.153 port 5001 connected with 192.168.201.239 port 59806
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 680 MBytes 570 Mbits/sec
[ 4] 0.0-10.0 sec 1023 MBytes 856 Mbits/sec
You can test with the UDP protocol as well; here is an example.
$ iperf -c 192.168.201.239 -u
------------------------------------------------------------
Client connecting to 192.168.201.239, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.201.153 port 40041 connected with 192.168.201.239 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 1.23 MBytes 1.03 Mbits/sec
[ 3] Sent 893 datagrams
read failed: Connection refused
[ 3] WARNING: did not receive ack of last datagram after 5 tries.
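The "read failed: Connection refused" and missing-ack warning at the end typically mean the server side was still running in plain TCP mode; for a UDP test the server needs the -u flag as well:
$ iperf -s -u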
Here is the server end output example.
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56474
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 981 MBytes 822 Mbits/sec
[ 5] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56476
[ 5] 0.0-10.0 sec 959 MBytes 803 Mbits/sec
[ 4] local 192.168.201.239 port 5001 connected with 192.168.201.153 port 56970
------------------------------------------------------------
Client connecting to 192.168.201.153, TCP port 5001
TCP window size: 272 KByte (default)
------------------------------------------------------------
[ 6] local 192.168.201.239 port 59806 connected with 192.168.201.153 port 5001
[ 6] 0.0-10.0 sec 1023 MBytes 858 Mbits/sec
[ 4] 0.0-10.0 sec 680 MBytes 568 Mbits/sec
iostat
iostat is mostly used for checking drive I/O status; it has nothing to do with the network itself. However, I usually use it when I troubleshoot network issues on object storage.
iostat installation
It's usually installed by default in Linux; however, sometimes you might still need to install it yourself. In that case, here is how to install it.
$ sudo apt install sysstat
iostat example 1
If you want to know how to run iostat, just try iostat -h. Here is the most common one I am using now.
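For instance, something along these lines (a typical choice of flags; the exact set is a matter of taste) shows extended per-device statistics in MB with a timestamp, refreshing every 2 seconds:
$ iostat -xmt 2      # -x extended stats, -m MB/s, -t timestamp, 2-second interval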