Before you start, you will need an IPv6 address set. A free unique local IPv6 prefix can be obtained from an online ULA generator, for example:
fd52:ed8b:e916::/48
This is a randomly generated 48-bit unique local IPv6 prefix as defined by RFC 4193. It can be used for local IPv6 networking. Every time the generator's regenerate button is pressed or the page is reloaded, a new random prefix is generated.
64-bit subnetting
Unique local addresses have 48-bit prefixes, leaving 16 bits for local subnetting (the remaining 64 bits are the interface identifier). The first and last /64 subnets are shown below.
Prefix      | fd52:ed8b:e916::/48
1st subnet  | fd52:ed8b:e916::/64
Last subnet | fd52:ed8b:e916:ffff::/64
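As a quick cross-check, the first and last /64 subnets can also be enumerated with Python's standard ipaddress module (any /48 prefix you generate can be substituted):
$ python3 -c "import ipaddress; subs = list(ipaddress.ip_network('fd52:ed8b:e916::/48').subnets(new_prefix=64)); print(subs[0], subs[-1])"
fd52:ed8b:e916::/64 fd52:ed8b:e916:ffff::/64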
IPv4 local address equivalent
IPv6 unique local addresses are used much like IPv4 private addresses, e.g. 10.0.0.0/8. Unlike their IPv4 counterparts, IPv6 unique local addresses contain a 40-bit random part. Therefore, if you connect two or more unique local networks, by VPN for example, address collisions are very unlikely.
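To put a number on it: a 40-bit random Global ID gives 2^40 (about 1.1 × 10^12) possible prefixes, so by the birthday approximation even interconnecting 1000 independently generated ULA networks gives a collision probability of roughly 1000 × 999 / (2 × 2^40) ≈ 4.5 × 10^-7, i.e. about 1 in 2.2 million.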
Converting Swift Controller Environment from IPv4 to IPv6
1. On all the nodes, add an IPv6 address to the NICs assigned to:
- Outward-facing Interface
- Cluster-facing Interface
- Data replication Interface
Ubuntu 14.04 Controller:
ss01:~$ cat /etc/network/interfaces.d/eth1.cfg
# The primary network interface
auto eth1
iface eth1 inet6 static
address fd52:ed8b:e916::0001
netmask 64
vagrant@ss01:~$ cat /etc/network/interfaces.d/eth1:0.cfg
# The primary network interface
auto eth1:0
iface eth1:0 inet static
address 172.28.128.23
netmask 255.255.255.0
CentOS 7.2 SwiftNode 01:
ss02 $ cat /etc/sysconfig/network-scripts/ifcfg-enp0s8
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.28.128.24
NETMASK=255.255.255.0
DEVICE=enp0s8
PEERDNS=no
#VAGRANT-END
IPV6INIT=yes
IPV6ADDR=fd52:ed8b:e916::0002
CentOS 6.8 SwiftNode 02:
ss03 $ cat /etc/sysconfig/network-scripts/ifcfg-eth1
#VAGRANT-BEGIN
# The contents below are automatically generated by Vagrant. Do not modify.
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=172.28.128.25
NETMASK=255.255.255.0
IPV6INIT=yes
IPV6ADDR=fd52:ed8b:e916::0003
DEVICE=eth1
PEERDNS=no
#VAGRANT-END
2. Restart the network service on the nodes and confirm that the IPv6 addresses are working properly.
- Ubuntu
- sudo ifdown eth1 && sudo ifup eth1
- sudo service network-manager restart
- CentOS
- sudo service network restart
- sudo systemctl restart network.service
Confirm that the new addresses are up and reachable:
$ ifconfig
$ ping6 fd52:ed8b:e916::0001
$ ping6 fd52:ed8b:e916::0002
$ ping6 fd52:ed8b:e916::0003
3. In the controller web UI, go to the node’s management page and
- click the “Network” button located at the left-hand corner of the page.
- On the “Edit Network Interfaces” page, change the IP addresses assigned to those interfaces to the IPv6 addresses you set up in step (1).
4. Go to the cluster management page and click “Configure” in the left-side menu. If your cluster has the “Cluster API Hostname” set, the only thing you need to do is ensure that there is an IPv6 (AAAA) record for the “Cluster API Hostname.” If, however, your cluster can only be reached using an IP address, you need to modify the “Cluster API IP Address” and change it from the IPv4 address to your new IPv6 address for the cluster.
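For example, assuming a hypothetical hostname swift.example.com, you can confirm that the AAAA record resolves with:
$ dig +short AAAA swift.example.com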
6. Push the new config to the cluster and confirm that the config is pushed successfully. (Ignore this step if you are not using Swift Controller.)
(There is a chance that you will see a “SwiftStack Node Connectivity: NOT OK: IP service location(s): fd57:bd44:c845:e36a::3:58318 fd57:bd44:c845:e36a::4:58318 not reachable” warning. If that’s the case, restart the ssnoded daemon on each node and it should resolve the condition.)
$ sudo systemctl restart ssnoded <CentOS7>
$ sudo restart ssnoded <CentOS6>
7. To verify that everything worked, use a test account to authenticate to the cluster using its auth URL, make a request using the storage URL, and check that the Swift Web Console still works with your new all-IPv6 configured cluster.
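A minimal sketch of that check, assuming a tempauth-style test account (test:tester / testing) and a proxy on port 8080; substitute your own auth URL, credentials, and port. IPv6 literals must be written in brackets, and curl needs -g to accept them:
$ curl -g -i -H "X-Auth-User: test:tester" -H "X-Auth-Key: testing" http://[fd52:ed8b:e916::1]:8080/auth/v1.0
$ curl -g -i -H "X-Auth-Token: <token from the previous response>" "<X-Storage-Url from the previous response>"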
8. SSH to the Swift controller and, once logged in, check the ring builder files to make sure that the rings now contain the IPv6 addresses of the nodes.
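For example (the builder file path below is an assumption; point the command at wherever your rings are built), the device listing should now show the nodes’ IPv6 addresses:
$ swift-ring-builder /etc/swift/object.builder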
9. Optional: On the Swift nodes, remove the IPv4 address(es) from the interface configuration and restart the network interface for the settings to take effect.
10. PS: if your lab is set up in VirtualBox, you might need to configure an IPv6 address set (e.g. fd52:ed8b:e916::10) on the host-only adapter, for example as shown below.
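As a sketch, assuming the default host-only interface name vboxnet0 (the same setting can also be applied in the VirtualBox host-only network preferences):
$ VBoxManage hostonlyif ipconfig vboxnet0 --ipv6 fd52:ed8b:e916::10 --netmasklengthv6 64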
Converting SS OpenVPN from IPv4 to IPv6
Controller (Ubuntu): OpenVPN server
In case customers want to use IPv6 on the VPN tunnels (tun0 & tun1) on the controller, these are the steps needed:
- In /etc/openvpn/server.conf, add the following line to the configuration:
server-ipv6 2001:db8:0:123::/64 (or you can change this to any other IPv6 address block)
- Add the following lines at the end of /etc/openvpn/server.conf:
#additional IPv6 setting
tun-ipv6
push tun-ipv6
ifconfig-ipv6 2001:db8:0:123::1 2001:db8:0:123::2 (please pick two other IPv6 addresses if you are NOT using the block listed above. Note: 2001:db8:0:123::/64 comes from the 2001:db8::/32 range reserved for documentation.)
- Allow packet forwarding for IPv6 by uncommenting (or adding) the “net.ipv6.conf.all.forwarding=1” line in /etc/sysctl.conf:
# Uncomment the next line to enable packet forwarding for IPv6
#  Enabling this option disables Stateless Address Autoconfiguration
#  based on Router Advertisements for this host
net.ipv6.conf.all.forwarding=1
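To apply the change without a reboot, reload sysctl and verify the value:
$ sudo sysctl -p
$ sysctl net.ipv6.conf.all.forwarding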
- Restart the OpenVPN services (Ubuntu server). Note that the restart must be run with sudo:
$ service openvpn restart
 * Stopping virtual private network daemon(s)...
 *   Stopping VPN 'recovery-server'
rm: cannot remove ‘/run/openvpn/recovery-server.pid’: Permission denied
vagrant@ss01:~$ sudo service openvpn restart
 * Stopping virtual private network daemon(s)...
 *   Stopping VPN 'recovery-server'   [ OK ]
 *   Stopping VPN 'server'            [ OK ]
 * Starting virtual private network daemon(s)...
 *   Autostarting VPN 'recovery-server'
 *   Autostarting VPN 'server'
Swift Node (CentOS): OpenVPN client
CentOS 7
Use systemctl -a to find the OpenVPN client daemon:
$ sudo systemctl -a | grep vpn
openvpn@120e8807-d9c4-11e6-95af-0800270c4edc.service   loaded active running   OpenVPN Robust And Highly Flexible Tunneling Application On 120e8807/d9c4/11e6/95af/0800270c4edc
system-openvpn.slice                                   loaded active active    system-openvpn.slice
$ sudo systemctl restart openvpn@120e8807-d9c4-11e6-95af-0800270c4edc.service
CentOS 6
$ service openvpn restart
Shutting down openvpn: [ OK ]
Starting openvpn: [ OK ]
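Once the clients have reconnected, a quick sanity check (assuming the 2001:db8:0:123::/64 block used above; the tunnel interface may be tun0 or tun1 depending on the node):
$ ip -6 addr show tun0
$ ping6 2001:db8:0:123::1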