This blog post series is going to help new users get started with VPC support in Eucalyptus. This is a series of 2 blog posts: in part 1 we discuss the installation and configuration of MidoNet, the core piece of software used by Eucalyptus to provide VPC support. You can read more about MidoNet here.
In the 2nd part of the series we will discuss the installation and configuration of Eucalyptus 4.1.1, with a detailed step-by-step approach to configuring the cloud for VPC support.
One important difference between this series and the one written by John Mille here is that we specifically look at deploying a VPC-based cloud on a single host, meaning all cloud components, including MidoNet, run on one machine. This kind of deployment is well suited for getting-started scenarios, and Eucalyptus FastStart exists for exactly the same purpose (though without the VPC feature).
The end objective could be to have Eucalyptus FastStart deploy AWS-like clouds with VPC installed and configured.
If you have used Eucalyptus in the past to set up AWS-like clouds on-premise you must be aware of networking modes. Traditionally Eucalyptus had the following network modes:
With support for the STATIC and SYSTEM networking modes going away, we are left with only 3 network modes. EDGE is the newest network mode we introduced, and it is very useful because it removes the Cluster Controller (CC) from the data path. You can read more about EDGE and its benefits in this excellent wiki page.
Customers and users have been asking for AWS VPC support in on-premise Eucalyptus clouds for a while. The prime reason, as John Mille highlighted in his blog post, is that AWS defaults to VPC for all new accounts created on or after 2013-12-04.
To implement the solution (AWS VPC on-premise) we decided to partner with Midokura. It is a great choice for several reasons; one of the benefits is that MidoNet, developed by the folks at Midokura, is open source.
The implementation of AWS VPC introduces a new network mode in Eucalyptus called VPCMIDO. Currently AWS VPC support is a tech preview, which means customers do not get official support from the Eucalyptus support team for MidoNet+Eucalyptus setups, but that should not stop you from giving it a try, and exactly that is the motivation for these 2 blog posts.
Preparation (Step 1)
Please note again that the objective of these 2 blog posts is to help you install a Eucalyptus 4.1.1 cloud with the VPCMIDO network mode. In this 1st blog post of the series we will see how to install and configure MidoNet on the system.
First of all, please install CentOS 6.6 x86_64 minimal on a fresh system. The minimal configuration we would recommend is 8GB RAM, 200GB hard disk, 1 network interface and 4 CPUs. A physical machine is preferred, but a similarly configured VM on a processor with nested virtualization support also works (yes, we have tested these steps on both physical and virtual machines).
After the installation of CentOS is finished we recommend you immediately update all the installed packages with the latest releases:
yum -y update
After you have updated all the installed packages, we go ahead and disable SELinux (Eucalyptus currently does not ship SELinux policies) and iptables (for simplicity of configuration) on the system. Note that disabling iptables is not recommended for a production installation of Eucalyptus, but we are setting up a test (tech-preview) cloud for personal use only.
To disable SELinux, edit /etc/sysconfig/selinux and change the value of SELINUX to disabled:

SELINUX=disabled
To disable the firewall, execute the following commands:
# iptables -F
# iptables -t nat -F
# service iptables save
Now we configure a network bridge on the default network interface. In our case it is eth0; if it is something else on your system, identify it by running route -n. The interface holding the default gateway is the one to use for bridge creation.
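If you want to script this step, the default interface can be extracted from the routing table. Below is a minimal sketch that parses a hypothetical route -n output (inlined as a sample string so it is self-contained; on a real system you would pipe route -n into the awk):

```shell
# Sample `route -n` output (hypothetical); the interface on the 0.0.0.0
# route with the G (gateway) flag carries the default route.
route_out='Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0'

# Print the interface name from the default-route line.
default_iface=$(printf '%s\n' "$route_out" | awk '$1 == "0.0.0.0" && $4 ~ /G/ {print $NF; exit}')
echo "$default_iface"
```

On a live system the same extraction would be `route -n | awk '$1 == "0.0.0.0" {print $NF; exit}'`.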
We modified /etc/sysconfig/network-scripts/ifcfg-eth0 and created /etc/sysconfig/network-scripts/ifcfg-br0, both shown below:
# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=EC:A8:6B:F8:73:85
TYPE=Ethernet
UUID=259897cc-2608-4fb9-8f50-2da23e85b4fb
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=br0

# cat /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=192.168.1.5
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
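A common mistake at this point is a mismatch between the BRIDGE= value in the interface file and the DEVICE= value in the bridge file. A small self-contained sanity-check sketch (the file contents are inlined here as sample strings; on a real system you would read the actual files):

```shell
# Relevant lines from the two ifcfg files (sample strings).
eth_cfg='DEVICE=eth0
BRIDGE=br0'
br_cfg='DEVICE=br0
TYPE=Bridge'

# Extract the BRIDGE= value from the interface and DEVICE= from the bridge.
bridge=$(printf '%s\n' "$eth_cfg" | awk -F= '$1 == "BRIDGE" {print $2}')
device=$(printf '%s\n' "$br_cfg"  | awk -F= '$1 == "DEVICE" {print $2}')

# The two must match for the bridge to come up on the right interface.
[ "$bridge" = "$device" ] && echo "bridge config consistent"
```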
- Next we disabled ZEROCONF on the system and set a hostname. Note that MidoNet requires a resolvable hostname (we used /etc/hosts instead of a DNS server).
# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=frontend.euca
NOZEROCONF=yes

# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.5 frontend.euca frontend.euca
- Finally we rebooted the system for the changes to take effect.
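Before moving on, it is worth verifying that the hostname resolves to the host's real IP and not to 127.0.0.1 (a loopback resolution can cause the agent to register with the wrong address). A self-contained sketch, parsing the /etc/hosts content shown above inlined as a sample string:

```shell
# /etc/hosts content (sample string; on a real system use getent hosts).
hosts='127.0.0.1 localhost localhost.localdomain
192.168.1.5 frontend.euca frontend.euca'

# Find the IP that our hostname resolves to.
resolved=$(printf '%s\n' "$hosts" | awk '$2 == "frontend.euca" {print $1; exit}')
echo "$resolved"

# It must not be the loopback address.
[ "$resolved" != "127.0.0.1" ] && echo "hostname resolves to a real IP"
```

On the live system, `getent hosts frontend.euca` performs the equivalent lookup.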
Configuring RPM repositories (Step 2)
The MidoNet software needs to be installed on the system now. Before installing it we need to configure the correct repositories. MidoNet depends on Cassandra and ZooKeeper to run correctly; you can read more about why here. Below we configure repositories for MidoNet (core and misc), Cassandra and EPEL.
# cat /etc/yum.repos.d/datastax.repo
#DataStax (Apache Cassandra)
[datastax]
name = DataStax Repo for Apache Cassandra
baseurl = http://rpm.datastax.com/community
enabled = 1
gpgcheck = 0
gpgkey = https://rpm.datastax.com/rpm/repo_key

# cat /etc/yum.repos.d/midonet.repo
[midonet]
name=MidoNet
baseurl=http://repo.midonet.org/midonet/v2015.01/RHEL/6/stable/
enabled=1
gpgcheck=1
gpgkey=http://repo.midonet.org/RPM-GPG-KEY-midokura

[midonet-misc]
name=MidoNet 3rd Party Tools and Libraries
baseurl=http://repo.midonet.org/misc/RHEL/6/misc/
enabled=1
gpgcheck=1
gpgkey=http://repo.midonet.org/RPM-GPG-KEY-midokura

# rpm -ivh http://mirrors.ustc.edu.cn/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
Package Installation (Step 3)
In this section we will go ahead and install all the necessary software packages on the system that runs Midonet.
# yum install tomcat midolman midonet-api python-midonetclient zookeeper zkdump cassandra21 java-1.7.0-openjdk kmod-openvswitch
# yum install https://repos.fedorapeople.org/repos/openstack/openstack-icehouse/epel-6/iproute-2.6.32-130.el6ost.netns.2.x86_64.rpm
As you may have noticed, in addition to MidoNet itself we installed a number of other packages:
- Tomcat is used to host the Midonet-server API (it's a stateless REST API over HTTP).
- Midolman (the MidoNet Agent) is the daemon that runs on all hosts where traffic enters and leaves MidoNet.
- Midolman requires a Linux kernel that has the Open vSwitch kernel module installed and a Java 7 runtime (JRE) in userspace. It instructs the Open vSwitch kernel module on how to handle network traffic (what modifications to apply to packets and where to tunnel them to).
- The MidoNet CLI is provided by python-midonetclient. We will primarily be using this instead of the Midokura control panel, which is only available to enterprise customers.
- MidoNet uses Apache Cassandra version 2.0 to store flow state information, for example NAT bindings, connection tracking information, and to support VM migration.
- MidoNet uses Apache ZooKeeper 3.4.5 to store critical path data about the virtual and physical network topology.
- MidoNet uses network namespaces; to better understand them we recommend you read this blog post. The stock iproute tooling in CentOS 6.x does not support network namespaces, hence we install iproute-2.6.32-130.el6ost.netns.2.x86_64.rpm from the RDO OpenStack repositories.
Configuration (Step 4)
In this section we detail the steps taken to configure each dependency of MidoNet, and finally the configuration of Midolman itself. Note that we do this manually for now, before automating it later.
The ZooKeeper configuration file is /etc/zookeeper/zoo.cfg; we modify it to add an entry for our server.
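The entry we add looks like the following (a single-node sketch; ports 2888 and 3888 are ZooKeeper's defaults for peer connections and leader election, and the IP should be adjusted to your host):

```ini
# /etc/zookeeper/zoo.cfg -- added entry; the "1" in server.1 matches the
# number we later write into the myid file
server.1=192.168.1.5:2888:3888
```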
Once this is done we create a symbolic link that points at our java binary (NOTE: ZooKeeper's startup scripts expect java at this path):
# mkdir -p /usr/java/default/bin/
# ln -s /usr/lib/jvm/java-1.7.0-openjdk-188.8.131.52.x86_64/jre/bin/java /usr/java/default/bin/java
Note that the path could be different on your system, so we recommend you check it using the following command:
# alternatives --display java | grep best
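Rather than hard-coding a JDK version in the ln command, the path can be derived from the alternatives output. A sketch using a hypothetical "best" line and a scratch directory standing in for /usr/java/default/bin:

```shell
# Hypothetical line from `alternatives --display java | grep best`.
best='/usr/lib/jvm/java-1.7.0-openjdk.x86_64/jre/bin/java - priority 170079'

# First field is the java binary path.
java_bin=$(printf '%s\n' "$best" | awk '{print $1}')

# Scratch directory stands in for /usr/java/default/bin in this sketch.
link_dir=$(mktemp -d)
ln -sf "$java_bin" "$link_dir/java"
readlink "$link_dir/java"
```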
When the ZooKeeper server starts up, it determines which server it is by looking for the file myid in the data directory. That file contains the server number, in ASCII, and it should match the x in server.x on the left-hand side of the setting in zoo.cfg. In our case it is 1, so we follow the steps shown below:
# mkdir /var/lib/zookeeper/data
# chmod 777 /var/lib/zookeeper/data
# echo 1 > /var/lib/zookeeper/data/myid
Once this is done it is time to start the ZooKeeper service as shown below. Ensure that it is running and that it also starts automatically on the next system reboot:
# service zookeeper start
# service zookeeper status
dead but pid file exists
# chkconfig zookeeper on
# ps aux | grep zookeeper
As seen above, the status command returned an error even though the process was running. We found that the problem was with the init script for ZooKeeper, located at /etc/init.d/zookeeper: it refers to a variable named ZOOPIDFILE that we never found set anywhere, so we defined ZOOPIDFILE ourselves near the top of the script, before the line that uses it.
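The fix we applied looked roughly like this (the exact pid-file path is an assumption based on the package's /var/run layout; check where the start() function in your init script actually writes the pid):

```shell
# /etc/init.d/zookeeper (excerpt) -- assumed fix: define ZOOPIDFILE before
# the status/stop logic that references it. Path is an assumption.
ZOOPIDFILE="/var/run/zookeeper/zookeeper-server.pid"
```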
This did the trick, and we could confirm that status now shows the server running. Another way to confirm ZooKeeper is working properly is to issue the command below (if everything is working fine you should see the output iamok):
# echo ruok | nc 192.168.1.5 2181
iamok
Similarly, one can issue the stat command to get more detailed information:
# echo stat | nc 192.168.1.5 2181
Zookeeper version: 3.4.5--1, built on 02/08/2013 12:25 GMT
Clients:
 /192.168.1.5:51368(queued=0,recved=10924,sent=11945)
 /192.168.1.5:51371(queued=0,recved=6117,sent=6117)
 /192.168.1.5:57671(queued=0,recved=1,sent=0)
 /192.168.1.5:51370(queued=0,recved=16185,sent=16185)

Latency min/avg/max: 0/0/200
Received: 33229
Sent: 34249
Connections: 4
Outstanding: 0
Zxid: 0x828
Mode: standalone
Node count: 2357
Important messages related to ZooKeeper get written to its log file, which helps when troubleshooting problems.
Configuring Cassandra was straightforward. The config file is stored at /etc/cassandra/conf/cassandra.yaml, and we modified the following parameters in it:
cluster_name: 'midonet'
rpc_address: 192.168.1.5
seeds: 192.168.1.5
listen_address: 192.168.1.5
Note that our ZooKeeper and Cassandra installations are running on a single system. This is not a recommended configuration for production, but for the tech preview it is OK.
Once we have finished configuring Cassandra it's time to start the service and ensure that it is running. We also ensure that it starts automatically after a future system reboot:
# service cassandra start
# service cassandra status
# chkconfig cassandra on
# nodetool --host 127.0.0.1 status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens  Owns  Host ID                               Rack
UN  192.168.1.5  213.96 KB  256     ?     6b648b8b-8fda-4158-9e87-686c71d78ed0  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
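The important part of the nodetool output is the status column: "UN" means Up/Normal. A small self-contained sketch that extracts it from the node line (inlined as a sample string; on a live system you would pipe nodetool status into the awk):

```shell
# Node line from `nodetool status` (sample string).
status_out='UN  192.168.1.5  213.96 KB  256  ?  6b648b8b-8fda-4158-9e87-686c71d78ed0  rack1'

# First field on the line for our address is the Status/State pair.
state=$(printf '%s\n' "$status_out" | awk '$2 == "192.168.1.5" {print $1}')
echo "$state"
```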
Again, in case of troubleshooting, the Cassandra log files are the place to look.
Tomcat is required to host the MidoNet API; it provides the servlet container (we use Tomcat 7). As a servlet container it needs to know the Context for the application we are deploying on it, which in our case is the MidoNet API. To do that we created a new file as shown below and set the context attributes:
# cat /etc/tomcat/Catalina/localhost/midonet-api.xml
<Context path="/midonet-api"
         docBase="/usr/share/midonet-api"
         antiResourceLocking="false"
         privileged="true" />
The MidoNet API server is a web app and its configuration file is located at /usr/share/midonet-api/WEB-INF/web.xml. This configuration file requires 3 modifications as shown below:
<param-name>rest_api-base_uri</param-name>
<param-value>http://192.168.1.5:8080/midonet-api</param-value>

<param-name>auth-auth_provider</param-name>
<param-value>org.midonet.api.auth.MockAuthService</param-value>

<param-name>zookeeper-zookeeper_hosts</param-name>
<param-value>192.168.1.5:2181</param-value>
As shown above, we changed the rest_api-base_uri parameter to our system's IP address. Note that Tomcat runs on port 8080.
The MidoNet API today only supports authentication with Keystone. As Eucalyptus does not integrate with OpenStack Keystone, we use MockAuthService, a mock authentication service that effectively disables authentication.
Finally, the MidoNet API needs to know the ZooKeeper hosts so that it can send requests to modify the MidoNet network state database.
Now it's time to start Tomcat. Ensure that it is running and that it gets started automatically on system reboot:
# service tomcat start
# service tomcat status
# chkconfig tomcat on
In case of troubleshooting, the Tomcat log directory is the place to look.
In this section we discuss the configuration of the Midolman agent that runs on the host. The configuration file for Midolman is /etc/midolman/midolman.conf, where we tell it the ZooKeeper and Cassandra server IPs and ports:
[zookeeper]
zookeeper_hosts = 192.168.1.5:2181

[cassandra]
servers = 192.168.1.5:9042
Once this is done we start the agent and make sure it is running. We also want Midolman to run after a system reboot; this behaviour is currently not controlled by chkconfig, but it appears to start automatically on boot:
# service midolman start
# service midolman status
If everything goes right, log files should appear under /var/log/midolman/, and one can look at them when troubleshooting issues down the line.
Finally it's time to start interacting with MidoNet. There are currently 2 ways to do so. One is the CLI, which is open source and freely available, and is what we are going to use today. The other is the CP (Control Panel), but that feature is only available to enterprise Midokura users at this point.
In order to start using midonet-cli we first create a config file for it as shown below. Note that the authentication values are not important, as we use the mock authentication service:
# cat ~/.midonetrc
[cli]
api_url = http://192.168.1.5:8080/midonet-api
username = admin
password = admin
project_id = admin
midonet-cli is easy to use, but in our experience the tool is a bit tricky: to manipulate a certain object one first has to load that object into memory by listing it within the CLI.
In the snippet below we create a MidoNet tunnel zone of type GRE. This piece is important before beginning integration with Eucalyptus, which currently only supports the GRE tunnel zone type. Once the tunnel zone is created we add our system to it. Note that before adding our system we list it with the list command; this is necessary because otherwise we would not be able to run the add member command. Apparently the CLI needs to have the host object in memory before we can use it for other purposes:
# midonet-cli
midonet> tunnel-zone create name euca-mido type gre
tzone0
midonet> tunnel-zone list
tzone tzone0 name euca-mido type gre
midonet> list host
host host0 name frontend.euca alive true
midonet> tunnel-zone tzone0 add member host host0 address 192.168.1.5
zone tzone0 host host0 address 192.168.1.5
This is the end of the first blog post in this series. We hope it gives you deep enough insight into the MidoNet installation and configuration process. MidoNet is an important piece of software for the integration with Eucalyptus to achieve an AWS-VPC-like cloud, so understanding it well matters; here are some URLs we used to learn more:
- Midonet reference architecture - http://docs.midonet.org/docs/latest/reference-architecture/content/_preface.html
- Midonet quick start guide (RHEL7 Openstack)
- Midonet operations guide - http://docs.midonet.org/docs/latest/operation-guide/content/preface.html
- Midostack , quickest way to get started with Midonet - https://github.com/midonet/midostack
- Midonet docs on github - https://github.com/midonet/midonet/tree/master/docs