
AWS VPC and Eucalyptus - Part 2 (Eucalyptus Midonet Integration)

Continuing forward from the 1st blog in this series, we will now look at standing up a Eucalyptus cloud-in-a-box and integrating it with the Midonet setup we prepared earlier.

Note that we will be using the same host as before to set up and configure the Eucalyptus 4.1.1 cloud.

Configuring Static Up-link (Step 1)

Before we begin the installation and configuration of Eucalyptus, we have one important step left: we should ensure that our Midonet setup is configured for a static up-link. Midonet documents this on its official documentation page.

In his blog post here, John Mille discusses another approach to this use case: a BGP up-link. BGP is recommended for production/advanced setups, but for getting-started purposes we will rely on static routes.

In our case we have 11 public IPs, and we will go ahead with the static up-link configuration. Please note that the configuration shown below is not persistent across reboots. We encourage you to persist it via the usual network configuration files or by making sure the steps are executed at boot via /etc/rc.d/rc.local.

First of all, it is time to create a veth pair as depicted in the documentation:

# ip link add type veth
# ip link set dev veth0 up
# ip link set dev veth1 up

Once this is done, we will create a bridge, attach veth0 to it, and assign an IP address to the bridge as shown below:

# brctl addbr uplinkbridge
# brctl addif uplinkbridge veth0
# ip addr add 172.19.0.1/30 dev uplinkbridge
# ip link set dev uplinkbridge up
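
At this point you can quickly verify that the bridge is up, has veth0 attached and carries the expected address (an optional sanity check):

# brctl show uplinkbridge
# ip addr show dev uplinkbridge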

With this done, we now need to enable IP forwarding on the system. Ensure that the following parameter is set to 1 in /etc/sysctl.conf:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1  

Another parameter you will want to enable in /etc/sysctl.conf (the Eucalyptus networking daemon eucanetd complains if it is not set) is shown below:

net.bridge.bridge-nf-call-iptables = 1  

Now, to apply the new/updated values, just run the command below:

# sysctl -p
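
To confirm the values took effect, you can query them directly (an optional check):

# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1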

Once this is done, we will set static routes for all our public (fake) IP addresses with the gateway IP being 172.19.0.2. Note that 172.19.0.2 is the next IP in the veth pair's /30 network and will be associated with veth1 on the overlay side (we will see how that happens later).

NOTE: For simplicity we store all the public IP addresses in a text file and use that file to apply the static routes.
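
Since our public range is 192.168.1.211-192.168.1.221 (the same range we will declare in network.json later), one simple way to generate this file is a loop like the following (a convenience sketch):

# for i in $(seq 211 221); do echo 192.168.1.$i; done > public_ip_list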

# for i in `cat public_ip_list`; do
ip route add $i/32 via 172.19.0.2  
done  

Finally, we add a masquerading rule on the external interface so that connections coming from the overlay (Midonet) with addresses that belong to the external network are NATed. We also make sure these packets can be forwarded:

# for i in `cat public_ip_list`; do
iptables -t nat -I POSTROUTING -o br0 -s $i/32 -j MASQUERADE  
iptables -I FORWARD -s $i/32 -j ACCEPT  
done  

Note that we use br0 as our external interface; remember that we created br0 during Step 1 of the 1st blog post.

So far in this section we have dealt with the 1st virtual interface in the pair, veth0. We will see what happens to the other interface, veth1, later.
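
Since none of the above survives a reboot, one way to persist it is to append the complete Step 1 sequence to /etc/rc.d/rc.local as a single re-runnable block (a sketch; we assume public_ip_list was saved under /root):

# cat >> /etc/rc.d/rc.local << 'EOF'
# Re-create the veth pair and the up-link bridge
ip link add type veth
ip link set dev veth0 up
ip link set dev veth1 up
brctl addbr uplinkbridge
brctl addif uplinkbridge veth0
ip addr add 172.19.0.1/30 dev uplinkbridge
ip link set dev uplinkbridge up
# Re-apply the static routes and NAT rules for the public IPs
for i in `cat /root/public_ip_list`; do
  ip route add $i/32 via 172.19.0.2
  iptables -t nat -I POSTROUTING -o br0 -s $i/32 -j MASQUERADE
  iptables -I FORWARD -s $i/32 -j ACCEPT
done
EOF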

Installation of Eucalyptus (Step 2)

Once the static up-link is in place, we move on to the installation and configuration of Eucalyptus. This part is well documented here, but for simplicity we list the steps below:

  • Install the release RPMs for Eucalyptus and Euca2ools (the EPEL repo was configured in the 1st blog post while installing Midonet and its dependencies):
# yum install http://downloads.eucalyptus.com/software/eucalyptus/4.1/centos/6Server/x86_64/eucalyptus-release-4.1-1.el6.noarch.rpm

# yum install http://downloads.eucalyptus.com/software/euca2ools/3.2/centos/6Server/x86_64/euca2ools-release-3.2-1.el6.noarch.rpm
  • Once the repos are configured we will go ahead and install the software packages as shown below:
# yum install eucalyptus-cloud eucalyptus-walrus eucalyptus-cc eucalyptus-sc eucalyptus-nc euca2ools eucanetd eucalyptus-service-image nginx
  • If you look carefully at the above yum command, you will see that we installed all the Eucalyptus packages as well as nginx. In Eucalyptus 4.1.1, nginx is used on the CLC to provide instance metadata to instances in VPC mode.

Eucalyptus Configuration (Step 3)

In this section we configure the Eucalyptus cloud as we normally would. These steps are usually handled by the eucalyptus cookbook in FastStart deployments:

  • Initialize the Eucalyptus postgres database:
# euca_conf --initialize
  • Modify /etc/eucalyptus/eucalyptus.conf (the Eucalyptus configuration file) to set the VNET_MODE parameter:
VNET_MODE="VPCMIDO"  
  • Configure and start NTPD
# ntpdate -u 0.centos.pool.ntp.org
# service ntpd start
# chkconfig ntpd on
  • Start Eucalyptus services
# service eucalyptus-cloud start
# service eucalyptus-cc start
# service eucalyptus-nc start
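
Before moving on you can confirm the CLC has come up and is listening on its web services port (an optional check; 8773 is the default port, and startup can take a few minutes):

# netstat -tlnp | grep 8773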

Note that there is one important Eucalyptus service we will not start right now: eucanetd. Holding it back will help us better understand how eucanetd actually makes the Eucalyptus<->Midonet integration possible.

Registering and Configuring Eucalyptus Services (Step 4)

In this section we run commands to register the Eucalyptus services and finally configure the Eucalyptus components.

  • Register the UFS, Walrus backend, CC, SC and NC. Note that in our case everything is running on the same machine:
# euca_conf --register-service -T user-api -H 192.168.1.5 -N API_5

# euca_conf --register-walrusbackend --partition walrus --host 192.168.1.5 --component walrus

# euca_conf --register-cluster --partition home-cloud-az01 --host 192.168.1.5 --component cc-01

# euca_conf --register-sc --partition home-cloud-az01 --host 192.168.1.5 --component sc-01

# euca_conf --register-nodes "192.168.1.5"
  • Next, we grab our cloud admin credentials and source them so we can configure the cloud services:
# euca_conf --get-credentials admin.zip
# unzip admin.zip
# source eucarc
  • Configure block storage to use overlay and object storage to use walrus:
# euca-modify-property -p objectstorage.providerclient=walrus

# euca-modify-property -p home-cloud-az01.storage.blockstoragemanager=overlay
  • Increase the access-key and signing-certificate limits, and set the strategy for certificate generation on credential download:
# euca-modify-property -p authentication.access_keys_limit=10

# euca-modify-property -p authentication.signing_certificates_limit=10

# euca-modify-property -p authentication.credential_download_generate_certificate=Limited
  • Prepare the network.json file for the cloud's network configuration. This part is important:
{
  "InstanceDnsServers": [
    "192.168.1.5"
  ],
  "Mode": "VPCMIDO",
  "PublicIps": [
    "192.168.1.211-192.168.1.221"
  ],
  "Mido": {
    "EucanetdHost": "frontend.euca",
    "GatewayHost": "frontend.euca",
    "GatewayIP": "172.19.0.2",
    "GatewayInterface": "veth1",
    "PublicNetworkCidr": "172.19.0.0/30",
    "PublicGatewayIP": "172.19.0.1"
  }
}

If you look at the above JSON document, you will find that we specify the range of public IPs that we will have in our cloud. These are exactly the same IPs we used earlier while setting up the static routes and iptables rules.

Next in the JSON we specify the DNS server IP for our instances. We are going to use the CLC as our DNS server for instances.

Finally, the important part: the Mido section of the JSON document asks for some more details. The parameters it requires are:

  • EucanetdHost - The host that will be running eucanetd. This needs to be the CLC host, as eucanetd running on the CLC is responsible for making the integration happen.
  • GatewayHost - The host that sits at the edge of the network and connects your Eucalyptus cloud (with Midonet integrated) to the external network (e.g. the internet). In our case this is again the CLC host.
  • GatewayInterface - Recall that we created a veth pair and assigned the IP 172.19.0.1 to the first virtual interface of that pair (veth0, via uplinkbridge). We now use the 2nd interface of the same pair, veth1, as our GatewayInterface. This is how veth1 gets placed into the overlay network.
  • GatewayIP - The IP address we want on the GatewayInterface. We choose the next available IP after 172.19.0.1, i.e. 172.19.0.2. This IP will get assigned to veth1.
  • PublicNetworkCidr - The veth pair uses the network 172.19.0.0/30, and this is the view we give Midonet. Midonet assumes this to be our public network (it isn't really a public network but a virtual one).
  • PublicGatewayIP - The IP address of the public gateway that we tell Midonet about. We use the uplinkbridge/veth0 IP as our public gateway IP.

All this might not fully make sense yet, but let's move a little further and see what really happens next.

We will upload the above config to the cloud using the below command:

# euca-modify-property -f cloud.network.network_configuration=network.json
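
To read the property back and confirm the JSON was accepted, you can query it (an optional check, assuming your admin credentials are still sourced):

# euca-describe-properties cloud.network.network_configuration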

Verifying Eucalyptus and Midonet Integration (Step 5)

This section is where all the fun begins. So far we have put all the bits and pieces in place; now we have reached the point where things get tied together and the VPC infrastructure gets established.

We will start the integration process by starting eucanetd, but before that we recommend enabling the DEBUG log level for eucanetd; its log is stored in /var/log/eucalyptus/eucanetd.log. To change the log level, open /etc/eucalyptus/eucalyptus.conf and set the following variable:

LOGLEVEL=DEBUG  

Save and quit the file. Now we will start the eucanetd daemon as shown below:

# service eucanetd start
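
To watch the integration happen in real time, you can follow the eucanetd log we enabled DEBUG for earlier:

# tail -f /var/log/eucalyptus/eucanetd.log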

The log file is going to contain a lot of information that will help you better understand what eucanetd did, but in essence it took the configuration from the JSON we uploaded earlier and implemented it inside Midonet using the Midonet API.

As part of this it created a few important networking objects inside Midonet, namely eucart (the Midonet provider router) and eucabr (a bridge device). Let's check them out from within midonet-cli:

# midonet-cli
midonet> list router  
router router0 name eucart state up  
midonet> list bridge  
bridge bridge0 name eucabr state up  
midonet>  

Now that we know it has done this, let's see what ports we have on the router:

midonet> router router0 list port  
port port0 device router0 state up mac ac:ca:ba:11:c8:8e address 172.19.0.2 net 172.19.0.0/30  
port port1 device router0 state up mac ac:ca:ba:c9:8c:aa address 169.254.0.1 net 169.254.0.0/17 peer bridge0:port0  

As you can see, eucanetd (via the Midonet API) has created 2 ports. port0 is basically the veth1 we specified in the JSON document earlier, with the IP 172.19.0.2. To verify that port0 on router0 is really bound to veth1, we can check the host bindings as shown below:

midonet> host host0 list binding  
host host0 interface veth1 port router0:port0  

If we have a router then we must have routes associated with it. Let's see the routes:

midonet> router router0 list route  
route route0 type normal src 0.0.0.0/0 dst 0.0.0.0/0 gw 172.19.0.1 port router0:port0 weight 0  
route route1 type normal src 0.0.0.0/0 dst 172.19.0.2 port router0:port0 weight 0  
route route2 type normal src 0.0.0.0/0 dst 172.19.0.0/30 port router0:port0 weight 0  
route route3 type normal src 0.0.0.0/0 dst 169.254.0.1 port router0:port1 weight 0  
route route4 type normal src 0.0.0.0/0 dst 169.254.0.0/17 port router0:port1 weight 0  

There you can see the default gateway is set to 172.19.0.1. This was done by eucanetd, which read the JSON document and created a default route to the public gateway using the PublicGatewayIP specified in it.

The bridge eucabr created by eucanetd has one port peered to the router eucart, as shown below:

midonet> bridge bridge0 list port  
port port0 device bridge0 state up peer router0:port1  
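
As a quick sanity check of the up-link path, the gateway port we saw on eucart (172.19.0.2) should now answer pings from the host through the uplinkbridge (an optional check):

# ping -c 3 172.19.0.2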

This is it. The integration of Eucalyptus and Midonet is finished. It is time now for us to start using this cloud with our VPCs.

Troubleshooting/Debugging issues (Step 6)

In case you have issues with your setup, we recommend the following steps to start everything from scratch (you might be doing this quite frequently, considering that VPC is currently a tech preview; a consolidated reset script sketch follows the list):

  • Stop eucanetd
# service eucanetd stop
  • Flush eucanetd - this triggers the deletion of all the networking artifacts eucanetd created inside Midonet:
# eucanetd -F
  • Stop midolman, tomcat, cassandra and zookeeper services
# service midolman stop
# service tomcat stop
# service cassandra stop
# service zookeeper stop
  • Delete files for zookeeper and cassandra as shown below:
# rm -rf /var/lib/cassandra/*
# rm -rf /var/lib/zookeeper/*
  • Re-create the zookeeper data directory as we did in the 1st blog post:
# mkdir /var/lib/zookeeper/data
# chmod 777 /var/lib/zookeeper/data
# echo 1 > /var/lib/zookeeper/data/myid
  • Start zookeeper, cassandra, tomcat and midolman
# service zookeeper start
# service cassandra start
# service tomcat start
# service midolman start
  • Re-create the tunnel-zone euca-mido and add the host as a member to this tunnel-zone as we did in the 1st blog post
# midonet-cli
midonet> tunnel-zone create name euca-mido type gre  
tzone0  
midonet> tunnel-zone list  
tzone tzone0 name euca-mido type gre  
midonet> list host  
host host0 name frontend.euca alive true  
midonet> tunnel-zone tzone0 add member host host0 address 192.168.1.5  
zone tzone0 host host0 address 192.168.1.5  
  • Finally start eucanetd again
# service eucanetd start
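
If you find yourself repeating this cycle often, the non-interactive parts can be wrapped into a small reset script (a convenience sketch with a hypothetical name reset-mido.sh; the tunnel-zone re-creation still happens interactively in midonet-cli as shown above, before eucanetd is started again):

# cat > reset-mido.sh << 'EOF'
#!/bin/bash
# Stop eucanetd and flush the networking artifacts it created inside Midonet
service eucanetd stop
eucanetd -F
# Stop the Midonet stack and its dependencies
service midolman stop
service tomcat stop
service cassandra stop
service zookeeper stop
# Wipe and re-create the zookeeper/cassandra state
rm -rf /var/lib/cassandra/* /var/lib/zookeeper/*
mkdir /var/lib/zookeeper/data
chmod 777 /var/lib/zookeeper/data
echo 1 > /var/lib/zookeeper/data/myid
# Bring everything back up
service zookeeper start
service cassandra start
service tomcat start
service midolman start
EOF
# chmod +x reset-mido.sh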

Creating your VPC and starting an instance inside it (Step 7)

Now that everything is in place, we kick the tyres and create a VPC. We are going to create a default VPC for the eucalyptus account and run an instance inside it. To create a default VPC, issue the following commands:

# euare-accountlist | grep "^eucalyptus"
# euca-create-vpc 005359918066
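
If you script this, the two steps can be combined into one (a convenience sketch, assuming euare-accountlist prints the account name and ID per line, as it does above):

# euca-create-vpc $(euare-accountlist | awk '$1 == "eucalyptus" {print $2}')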

Note that above we first find the account ID for the eucalyptus account, then use this ID to create a default VPC. A subnet gets created automatically along with the default VPC. Next we see how these artifacts are mapped on the Midonet side:

# euca-describe-vpcs
VPC    vpc-56403811    available   172.31.0.0/16   dopt-b6c8061b   default true

# euca-describe-subnets 
SUBNET    subnet-8e279423 available   vpc-56403811    172.31.0.0/20   4091    home-cloud-az01 true    true  

The Midonet side is shown below:

midonet> list router  
router router0 name vr_vpc-56403811_2 state up infilter chain0 outfilter chain1  
router router1 name eucart state up  
midonet> list bridge  
bridge bridge0 name vb_vpc-56403811_subnet-8e279423 state up  
bridge bridge1 name eucabr state up  

We will now allow port 22/SSH in the default VPC security group as shown below (note that you will have to source your credentials before executing this command):

# euca-authorize -P tcp -p 22 -s 0.0.0.0/0 default

Time to launch the instance into default VPC:

# euca-run-instances -k sshlogin -t m1.xlarge emi-14d5a75a

Once the instance is up, you can check the console output or SSH to it using the public IP assigned to it, as shown below:

# euca-describe-instances i-ff4366df
RESERVATION    r-0ba5d5db  005359918066    default  
INSTANCE    i-ff4366df  emi-14d5a75a    192.168.1.212   172.31.0.240    running sshlogin    0       m1.xlarge   2015-05-13T11:47:06.778Z    home-cloud-az01             monitoring-disabled 192.168.1.212   172.31.0.240    vpc-56403811    subnet-8e279423 instance-store                  hvm         sg-4a4b5f0e x86_64  
NETWORKINTERFACE    eni-d03e5579    subnet-8e279423 vpc-56403811    005359918066    in-use  172.31.0.240    euca-172-31-0-240.eucalyptus.internal   true  
ATTACHMENT        0   attached    2015-05-13T11:47:06.789Z    true  
ASSOCIATION    192.168.1.212   eucalyptus  172.31.0.240  
PRIVATEIPADDRESS    172.31.0.240    euca-172-31-0-240.eucalyptus.internal   primary  
TAG    instance    i-ff4366df  euca:node   192.168.1.5  
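
For scripting, the public IP can be pulled straight out of the ASSOCIATION line (a sketch based on the output format above):

# euca-describe-instances i-ff4366df | awk '/^ASSOCIATION/ {print $2}'
192.168.1.212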

To SSH, just use the private key and the public IP as shown below:

# ssh -i .euca/sshlogin cloud-user@192.168.1.212
The authenticity of host '192.168.1.212 (192.168.1.212)' can't be established.  
RSA key fingerprint is a0:84:18:28:63:98:ef:f4:65:3d:1e:24:0c:4b:00:b3.  
Are you sure you want to continue connecting (yes/no)? yes  
Warning: Permanently added '192.168.1.212' (RSA) to the list of known hosts.

[cloud-user@ip-172-31-0-240 ~]$ cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.6 (Santiago)

[cloud-user@ip-172-31-0-240 ~]$ curl jeevanullas.in
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">  
    <head>
        <title>Deependra Singh Shekhawat</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    </head>

    <body>
        <h1>We are glad you found this!</h1>
    </body>
</html>  

As you can see, we were able to successfully connect to our RHEL 6.6 instance and access the internet from within it. This was all possible thanks to the static up-link configuration we did in this blog post; the CLC host acts as a gateway for the instances to reach external networks.

Conclusion

Finally, we have come to the end of this 2-part blog series. We hope these 2 posts help you get started with the VPC feature in Eucalyptus. Our goal was to make the setup as easy as it can be by running everything on a single box (or a single VM). This should help you test the VPC API and backend support we have built, and provide useful feedback on the implementation and the things you would like to see improved in a future release, when VPC becomes a production-ready feature.

In case you are wondering about the VPC features missing from the current implementation, please feel free to look at the following wiki page.