Networking OpenSolaris zones on Amazon EC2 with IPv6

Running OpenSolaris in the Amazon EC2 cloud was a great way to take a first step into cloud computing.

While OpenSolaris has many advanced features, three particularly shine in arbitrary ephemeral environments:

  • zones – slim and efficient virtual operating system instances
  • ZFS – quick and easy ability to snapshot and stream file system changes, simple and comprehensive management of growth and storage devices
  • dtrace(1) – comprehensive performance and status tracing from the operating system through the application stack, think truss(1)/strace(1)/ktrace(1) on steroids (see the one-liner just below this list)
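
As a taste of dtrace, this one-liner counts system calls by process name across the whole system until you press Ctrl-C:


dtrace -n 'syscall:::entry { @[execname] = count(); }'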

Unfortunately the future of OpenSolaris on EC2 is unclear… the AMIs still exist and can be run, but patches and updates are not forthcoming. Hopefully the OpenIndiana project can pick things up and keep running.

The built-in zones mechanism is an effective compartmentalization approach that minimizes overhead and makes good use of shared memory. That makes it a convenient platform for experimenting with performance and scaling on EC2: if you cut your services or the pieces of your architecture up into separate zones, you gain a great deal of flexibility in how and where each piece runs.

Zones allow you to run many instances of OpenSolaris on the same physical box. On the EC2 platform, you can thus run many OpenSolaris zones on the same EC2 instance, which can greatly reduce costs, as you don’t need to pay for an EC2 instance for each OpenSolaris zone. In addition, since the zones are independent and unique, busy or popular zones can be shuffled around or moved to other EC2 instances as necessary to provide adequate resources.

Have tons of services that don’t need a lot of resources? Run the zones all on one instance. Have some get busy or need more legroom? Move the zone to another instance and everything will be set. Being able to run the zones exclusively from EBS volumes makes it very easy to arbitrarily move zones from instance to instance as necessary. ZFS snapshots provide another layer that can provide granular replication for failover or scaling needs in your architecture.
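
As a concrete illustration of the move, the rough sketch below assumes the zone root lives on a ZFS dataset (the zones/web1 dataset name is hypothetical) and that the target instance already has a matching etherstub and vnic set up; treat it as an outline rather than a drop-in procedure.


# halt and detach the zone, then replicate its root dataset to the other instance
zoneadm -z web1 halt
zoneadm -z web1 detach
zfs snapshot -r zones/web1@move
zfs send -R zones/web1@move | ssh target-instance zfs receive -d zones
# on the target instance, recreate the configuration from the detached zone root and attach it
#   zonecfg -z web1 create -a /zones/web1
#   zoneadm -z web1 attach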

Using tunnels between instances, you can effectively have your own network of zones running on an arbitrary or changing number of actual EC2 instances. Due to the way zones run on OpenSolaris, the actual overhead of running the zones is minimized.
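
One way to wire two instances together is a plain IPv4-in-IPv4 tunnel between their private addresses, with routes for each other's zone networks pointed through it; the same ip.tun mechanism is used for the IPv6 tunnel later in this walkthrough. The sketch below is untested and every address in it is a placeholder.


# on instance A: tunnel to instance B's private address and route B's zone network through it
ifconfig ip.tun1 plumb
ifconfig ip.tun1 192.168.255.1 192.168.255.2 tsrc 10.1.2.3 tdst 10.4.5.6 up
route add 192.168.2.0/24 192.168.255.2
# instance B mirrors this with the tunnel addresses and tsrc/tdst reversed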

Start by launching an OpenSolaris EC2 instance. Once you have that up and running, you can follow this brief demo to spin up a zone and provide direct access to it from the Internet.
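
If you drive EC2 from the command line, the launch looks something like this with the classic ec2-api-tools; the AMI ID, keypair, instance type and security group are placeholders for whatever OpenSolaris AMI and account details you are using.


ec2-run-instances <opensolaris-ami-id> -k my-keypair -t m1.large -g default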

Set up basic services and routing. This assumes you have a Hurricane Electric IPv6 tunnel, since Amazon does not yet provide IPv6 service. If you do not, consider trying one out at tunnelbroker.net. Note that if you want to include IPv6, you will need to update your Amazon firewall policy to permit ICMP from tunnelbroker's server and to permit protocol 41 (the 6in4 tunnel) traffic.
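
One possible way to open those holes with the classic ec2-api-tools is sketched below; the default group name is an assumption, 216.66.22.2 is the tunnel server used later in this example, and the 4000/tcp rule is for sshing to the zone further on. The classic tools may not be able to open an arbitrary protocol number such as 41 on a non-VPC security group, so that rule may need to be added through whatever interface your account provides.


ec2-authorize default -P tcp -p 4000 -s 0.0.0.0/0
ec2-authorize default -P icmp -t -1:-1 -s 216.66.22.2/32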

Change the IPs below as appropriate, but tunnelbroker’s Solaris tunnel generation will work fine if you swap out your public IP for the private one that Amazon issues for the host. In a real world scenario you will want to develop much more restrictive IPF policies. This example also assumes that you have your Amazon firewall policy set to allow port 4000/tcp through, which we will use for sshing to our zone.


# enable forwarding so the instance can route for the zone's private network
routeadm -e ipv6-forwarding
routeadm -e ipv4-forwarding
routeadm -u
# minimal IPF policy plus NAT and a port-4000 redirect to the zone (tighten this for real use)
echo "set intercept_loopback true;" > /etc/ipf/ipf.conf
echo "pass in log on xnf0" >> /etc/ipf/ipf.conf
echo "pass out log on xnf0" >> /etc/ipf/ipf.conf
echo "map xnf0 192.168.0.0/16 -> 0/32 proxy port ftp ftp/tcp" > /etc/ipf/ipnat.conf
echo "map xnf0 192.168.0.0/16 -> 0/32 portmap tcp/udp auto" >> /etc/ipf/ipnat.conf
echo "map xnf0 192.168.0.0/16 -> 0/32" >> /etc/ipf/ipnat.conf
echo "rdr xnf0 0.0.0.0/0 port 4000 -> 192.168.1.11 port 4000" >> /etc/ipf/ipnat.conf
svcadm enable network/ipfilter
# bring up the 6in4 tunnel to tunnelbroker and point default IPv6 traffic at it
ifconfig ip.tun0 inet6 plumb
ifconfig ip.tun0 inet6 tsrc instance.private.ip.address tdst 216.66.22.2 up
ifconfig ip.tun0 inet6 addif 2001:470:x:y::2 2001:470:x:y::1 up
route add -inet6 default 2001:470:x:y::1

Next we’ll create our private network with Crossbow and the interfaces necessary for communication with the zone. This is effectively a completely virtual network that we are spinning up within the instance. In this example we are calling the private network dmz1, and the zone will be web1. You can set up as many private networks as you like, perhaps putting multiple peer zones on the same network without filtering, or isolating various services to different private networks so that IPF can filter and enforce policy between them.

We allocate an IPv6 subnet off of the /48 that tunnelbroker granted us, then set up gateway addresses for both IPv4 and IPv6 on this network, and the zone will end up with full connectivity.


# dmz1_nic0 stays in the global zone as the private network's gateway;
# dmz1_nic1 will be handed to the web1 zone below
dladm create-etherstub -t dmz1
dladm create-vnic -l dmz1 dmz1_nic0
dladm create-vnic -l dmz1 dmz1_nic1
ifconfig dmz1_nic0 plumb
ifconfig dmz1_nic0 inet 192.168.1.1/24
ifconfig dmz1_nic0 up
ifconfig dmz1_nic0 inet6 plumb up
# logical interface carrying the IPv6 gateway address for the private network
ifconfig dmz1_nic0:1 plumb
ifconfig dmz1_nic0:1 inet6 plumb
ifconfig dmz1_nic0:1 inet6 2001:470:a:b::1/60
ifconfig dmz1_nic0:1 inet6 up

Time to create the zone itself.


mkdir /mnt/zones
# everything from "create" through "exit" below is entered at the zonecfg:web1> prompt;
# ip-type=exclusive gives the zone its own IP stack on dmz1_nic1
zonecfg -z web1
create
set zonepath=/mnt/zones/web1
set ip-type=exclusive
add net
set physical=dmz1_nic1
end
verify
commit
exit
zoneadm -z web1 install
zoneadm -z web1 boot

You can attach to the console of the zone with zlogin -C web1. This requires you to step through a few pages of settings, including setting the IP. To go with the IPF examples above, assign the IPv4 address 192.168.1.11/24 (the target of the rdr rule) and an IPv6 address such as 2001:470:a:b::11/60 (changing out a:b for your /48 issued from tunnelbroker). Note that 2001:470:a:b::1 is already the gateway address on dmz1_nic0 in the global zone, so the zone needs a different host address in that subnet.
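
If you would rather set the addresses by hand, or the sysid screens do not cover the static IPv6 address, a minimal sketch from inside the zone using the addresses from this walkthrough looks like this:


# run inside the zone, e.g. from zlogin web1
ifconfig dmz1_nic1 plumb
ifconfig dmz1_nic1 inet 192.168.1.11/24 up
ifconfig dmz1_nic1 inet6 plumb up
ifconfig dmz1_nic1 inet6 addif 2001:470:a:b::11/60 up
route add default 192.168.1.1
route add -inet6 default 2001:470:a:b::1
# for settings that persist across zone reboots, use /etc/hostname.dmz1_nic1,
# /etc/hostname6.dmz1_nic1 and /etc/defaultrouter instead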

If you change the ssh port in /etc/ssh/sshd_config to 4000 and restart sshd with svcadm restart ssh, you should now be able to ssh into your zone directly from the Internet on port 4000. Over IPv6 you connect straight to the zone's own address; over IPv4 you connect to the public IP address of your EC2 instance, and the rdr rule forwards port 4000 through to the zone.
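
From an outside machine the test looks something like this; the user name and public DNS name are placeholders.


ssh -p 4000 admin@ec2-xx-xx-xx-xx.compute-1.amazonaws.com   # IPv4, forwarded to the zone by the rdr rule
ssh -p 4000 admin@2001:470:a:b::11                          # IPv6, straight to the zone over the tunnel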

For longer term deployments, you will want to install zones on an EBS volume, or leave them in ephemeral storage and snapshot to EBS or other storage solutions. You will also want to take a close look at your Amazon firewall rules and your IPF rules and make sure they match up. It is important to place appropriate access controls in your IPF rules for both IPv4 and IPv6, since the IPv6 traffic effectively bypasses the Amazon firewall once the tunnel traffic is permitted.
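
A rough sketch of the EBS approach: build a pool on an attached volume and keep zone roots (or copies of them) there. The c7d2 device name and the dataset names are guesses for illustration; check format(1M) for the real device on your instance.


# create a ZFS pool on the attached EBS volume and a home for zone roots
zpool create zonepool c7d2
zfs create zonepool/zones
# new zones can then be created with zonepath=/zonepool/zones/<name>, or existing
# zone roots replicated onto it with zfs send/receive as sketched earlier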