Visual representation of Eucalyptus database

Thought this might be useful for folks out there who would like a visual representation of the Eucalyptus database schema. Eucalyptus currently uses PostgreSQL as its database. Starting with 4.1.0 it has a single database that in turn contains many schemas. One can use a popular tool like SchemaSpy to get going with a visual representation.

On the box running the Eucalyptus database server, we need to make sure the graphviz package is installed; SchemaSpy uses Graphviz to generate the visual representation.
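On CentOS/RHEL (which is what faststart installs run on) that is a one-liner; the package name below is the usual one, but verify it for your distro:

```shell
# Graphviz provides the `dot` binary that SchemaSpy shells out to
yum install -y graphviz
```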

Once that is done, we need to download the JDBC driver for PostgreSQL; I downloaded the latest release from the PostgreSQL website.

After that, you need to download the latest SchemaSpy release from their website. The only other thing you need is the database password for your Eucalyptus cloud. The password is available in the following file on the box running the database:


Once you have all this information, you can run the command below from wherever you have the JARs copied (I had them in my home directory):

# java -jar schemaSpy_5.0.0.jar -t pgsql -db eucalyptus_shared -host -u eucalyptus -p a30975d9cae09908c5ca3b8b7332ed6876b030d3a20a70cc8fd21602994597b7 -connprops "ssl\=true;sslfactory\=org.postgresql.ssl.NonValidatingFactory" -all -o ./schemaspy -dp postgresql-9.4-1201.jdbc4.jar -s public -noads

Note that we passed some options to SchemaSpy: mainly the type of database (PostgreSQL here), the database name, the host and port where the DB server is listening, the username, the DB password, connection parameters (I don't think we really need them) and, very importantly, "-all". This parameter tells SchemaSpy to (from the website):

“Evaluate all schemas in a database. Generates a high-level index of the schemas evaluated and allows for traversal of cross-schema foreign key relationships”

The outcome was https://gist.github.com/jeevanullas/1b689a5f00adf39ec2eb

You can then copy this directory onto a webserver and browse through the schema representation in a web browser. Some sample screenshots are shown below:

[Screenshots: schemaspy-1, schemaspy-2]

Hope this is useful for folks who want to understand how the database schema looks for a Eucalyptus cloud and would like to manipulate it directly for certain use cases they might have.


Cloudformation and Eucalyptus

Eucalyptus officially released support for AWS CloudFormation back in 4.0.0, and with the latest release, 4.1.0, this support is now out of tech-preview mode. What this means for cloud users is that they can use CloudFormation just like they do on AWS and get official support from Eucalyptus for it. And our support is not just paid support: we have a very active community to help you get started or solve your problems.

When it comes to CloudFormation support in Eucalyptus, the first thing that came to my mind was: can I take one of the sample templates from the AWS website and create a stack from it on my Eucalyptus cloud? I don't have to worry about the disclaimer on the AWS templates that says:

**WARNING** This template creates one or more Amazon 
EC2 instances and an Elastic Load Balancer. You
will be billed for the AWS resources used if you 
create a stack from this template.

Because I run my own AWS-compatible cloud in a virtual machine on my laptop.

As soon as I started looking at the templates on AWS, I found that almost all of them were using the CloudFormation helper scripts in one way or another. Obviously this was something I wanted to make sure works with my Eucalyptus cloud.

To my surprise everything works, but unfortunately, due to one silly bug, I cannot claim that it works out of the box. For now I just worked around that part.

To give you an example, here are some command line snippets from the helper scripts:

Output from cfn-get-metadata against a Cloudformation endpoint on Eucalyptus:

/opt/aws/bin/cfn-get-metadata --stack WordPress --url \
  https://cloudformation.cloud.jeevanullas.in --role adminrole \
  --region eucalyptus --resource WebServer

Can be seen here https://gist.github.com/jeevanullas/87524e26527c81957aae

Similarly output from cfn-signal against a Cloudformation endpoint on Eucalyptus can be seen here


In the above output I used cfn-signal with a WaitCondition that had a WaitConditionHandle to signal creation of a particular resource from the stack.

Of course you cannot escape cfn-init, the most powerful of all the helper scripts from AWS, as you can perhaps gather from the AWS documentation link.

In my case I used cfn-init with the AWS WordPress template, which sets up the WordPress stack on Eucalyptus without any changes.

To get going, I would highly encourage you to make sure DNS names for services are enabled on your Eucalyptus cloud. By default the Eucalyptus installation uses IP addresses for the webservice endpoints, but to work with the helper scripts you need DNS names for the services (EC2/CloudFormation/S3/IAM etc.).

Also please ensure you have a redirect rule on your host (running Eucalyptus) that sends all traffic coming in on port 443 to port 8773. This is mandatory because by default Eucalyptus webservices only listen on 8773, whether it's HTTP or HTTPS. The CloudFormation helper scripts use HTTPS only with DNS names, so you need this in place before proceeding.

Finally, you need to have the region set correctly for Eucalyptus CloudFormation, because this is what the AWS CloudFormation template will eventually use.

I covered all three of these points in my last blog post, available here: http://jeevanullas.in/blog/2015/03/things-i-do-after-installing-eucalyptus/

Now coming back to the issues you would have to work around if you plan to run Cloudformation helper scripts from AWS against Eucalyptus.

a) The CloudFormation helper scripts assume that the webservice endpoint is HTTPS. That is not a bad assumption, but if Eucalyptus is running in a VM for your own development purposes, you might not have an SSL cert signed by a known CA and may still be using the self-signed certificate from the default Eucalyptus installation. If that is the case, you will have to disable SSL verification in the CloudFormation helper scripts (unfortunately it's not as simple as changing a config file):


def req_opts(kwargs):
    kwargs = dict(kwargs) if kwargs else {}
    kwargs['verify'] = False
    kwargs['hooks'] = get_hooks()
    kwargs['stream'] = True

As you have guessed already, it's 'verify' that you have to set to False.

If you are running a production cloud on Eucalyptus, you should use a real SSL cert. I covered that in my previous blog post: http://jeevanullas.in/blog/2015/03/things-i-do-after-installing-eucalyptus/

b) The CloudFormation helper scripts grab details about a particular stack resource. The output from the webservice is JSON that the script parses to build important structures. One of the parameters returned is 'LastUpdatedTimestamp'. Unfortunately the value for this parameter returned by Eucalyptus isn't the same as the one returned by AWS CloudFormation. Clearly it's a BUG in Eucalyptus which we are going to have to fix. More detail here:


The code used by cloudformation helper script is at


214 class StackResourceDetail(object):
215     """Detailed information about a stack resource"""
217     def __init__(self, resp):
218         detail = resp.json()['DescribeStackResourceResponse']['DescribeStackResourceResult']['StackResourceDetail']
220         self._description = detail.get('Description')
221         self._lastUpdated = datetime.datetime.utcfromtimestamp(detail['LastUpdatedTimestamp'])

As can be seen above, the 'LastUpdatedTimestamp' coming from the webservice is passed as an argument to datetime. Because the value returned by Eucalyptus is out of range, an error is thrown, while for AWS it works.

Without spending much time on a proper fix, I decided to comment out this line (Line 221) and get away with it. So far it has not done much damage, but I fully expect it to trigger some misbehaviour if you rely on the last updated timestamp of a resource in the stack to make a business decision.
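A slightly more defensive workaround than deleting the line would be to fall back to None when the timestamp cannot be parsed. This is my own sketch, not code from the helper scripts, and the function name is hypothetical:

```python
import datetime

def parse_last_updated(detail):
    """Parse LastUpdatedTimestamp defensively instead of letting
    datetime blow up on an out-of-range value."""
    try:
        # AWS returns an epoch timestamp in seconds; Eucalyptus 4.1.0
        # returns a value datetime cannot handle, so fall back to None
        return datetime.datetime.utcfromtimestamp(detail['LastUpdatedTimestamp'])
    except (KeyError, TypeError, ValueError, OverflowError, OSError):
        return None
```

With this in place the rest of the script keeps working against both clouds, at the cost of a None last-updated time on Eucalyptus.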

Once I had worked around this issue, I built an EMI with these changes and made sure all my templates use it going forward. As you may have guessed, I had to add one more entry to the official AWS templates for this to work:

"eucalyptus"     : { "PV64" : "emi-1da38bb5", "HVM64" : "emi-1da38bb5", "HVMG2" : "NOT_SUPPORTED" },

You need to add the above in your AWSRegionArch2AMI section of the template.

That's pretty much it. If you run into any discrepancies other than these, please do file a bug report in the system.

Happy Cloudformation on Eucalyptus without worrying about your AWS bills.



Things I do after installing Eucalyptus

I have been asked many times what configuration-related things I do after installing Eucalyptus (note that I am not testing anything, but I have learned that these configuration changes almost always lead me to test one or two things in Eucalyptus). I thought I would make a list of the things I normally do to get started with my cloud deployment. By no means is this list exhaustive, and please feel free to pitch in with feedback/comments as you see fit. The cloud deployment discussed here is based on Eucalyptus faststart. I haven't automated this yet, but that is the natural next step for me.

Once the faststart install is finished I would do following things:

a) Add my SSL cert to the CLC. Can't trust HTTP, right? This URL helps:


By default Eucalyptus comes with SSL enabled and a self-signed certificate. But most of the clients I use (for example the AWS SDKs) have SSL verification switched on by default, which causes problems when verification of the self-signed certificate fails. So for my cloud sub-domain (cloud.jeevanullas.in) I spent money on a wildcard SSL cert.

b) Redirect port 443 to 8773. All Eucalyptus webservices listen on port 8773, a non-standard port for HTTPS communication (via SDKs/tools); I'd rather use 443 than a custom port. So, as per the earlier URL, I make sure the system has the iptables redirect rule in place. Make sure this is also saved in /etc/sysconfig/iptables so you don't have to apply it again after an OS restart.
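The redirect itself is a single nat-table rule; a sketch, assuming CentOS/RHEL and the stock iptables service:

```shell
# Send incoming HTTPS traffic to the port the Eucalyptus webservices
# actually listen on, then persist the rule across reboots
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8773
service iptables save   # writes /etc/sysconfig/iptables on CentOS/RHEL
```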

c) Install and configure the AWS CLI. By no means do I dislike euca2ools, but seriously, euca2ools has a lot of catching up to do, especially around IAM roles, which I use almost all the time along with instance profiles. All my AMIs (or EMIs, if you will) have the AWS CLI installed, or it gets installed on the fly. If you still want to use euca2ools, I highly recommend using the euca2ools config file to set up your accounts, users and clouds.

d) Enable DNS names for the Eucalyptus webservices and instances. I do not understand why we don't do this by default. Perhaps our DNS implementation has some serious catching up to do before it can go mainstream?
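For reference, these are the properties I believe control it; treat the names as assumptions and verify them against your version's documentation:

```shell
# Assumed property names -- verify with:
#   euca-describe-properties | grep -i dns
euca-modify-property -p bootstrap.webservices.use_dns_delegation=true
euca-modify-property -p bootstrap.webservices.use_instance_dns=true
```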

e) You must have heard of the imaging worker and the ELB service in Eucalyptus. Not sure about your experience, but I have never had success using them on the first try; I always ended up troubleshooting something or other. Once it was NTP, then the firewall (security group), and sometimes DNS. All those troubleshooting hours have taught me to set the following properties:

PROPERTY     services.imaging.worker.ntp_server 
PROPERTY     services.loadbalancing.worker.ntp_server

You will want to set the above two properties if your cloud is running on a restricted network. When the ELB or imaging worker instance boots, it tries to sync its clock with an NTP server on the internet. If your instances don't have internet access, please point these two properties at your organization's NTP server IP/hostname.
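Setting them is one euca-modify-property call each; the hostname below is a placeholder for your own NTP server:

```shell
# Point both worker types at an internal NTP server
# (ntp.example.org is a placeholder)
euca-modify-property -p services.imaging.worker.ntp_server=ntp.example.org
euca-modify-property -p services.loadbalancing.worker.ntp_server=ntp.example.org
```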

PROPERTY     services.imaging.worker.keyname
PROPERTY     services.loadbalancing.worker.keyname

If your imaging worker or ELB isn't doing its job correctly you will have to troubleshoot further, but by default you aren't allowed to log in to the imaging worker or ELB instance. Make sure a proper keypair is set as the value of the above two properties before you trigger creation of an imaging worker or ELB, and allow SSH in the security group the instance belongs to. You will realise how useful SSH access is when, for example, you are stuck polling for a status update via euca-describe-conversion-tasks.

PROPERTY     services.imaging.worker.log_server      
PROPERTY     services.imaging.worker.log_server_port

In scenarios where I am asked to set up a cloud for our customers, I usually point the above two properties at an ELK stack. That way all logs from the imaging worker are available in one central place, with all the goodness of search, filtering etc.

PROPERTY     services.imaging.worker.instance_type

If you are dealing with raw files or images bigger than the disk size of the m1.small instance type, you might want to change the above property as well. More on this on our documentation page.

Oh, and if you are an old Eucalyptus user you might have noticed the difference: yes, the cloud properties have changed. They are now all prefixed with "services"; in case you didn't get the memo, please spare a minute or two to read the documentation on it.

f) I can't stress the question of security enough. If you haven't done so already, please review the ports used by Eucalyptus and ensure you have a restrictive firewall via iptables on your host machine that only allows connections needed for normal operation of the cloud.

Next, on the same topic of security, create a new account on your cloud with a group and a user in that group. Don't use the account or cloud administrator API keys. This helps you plan your deployment and prepare your app for AWS. Follow the best practices.

Ensure your IAM policies are restrictive enough to allow only the particular task at hand to be performed and nothing else on the cloud. This helps when deploying code to production, i.e. AWS (where you save in terms of $$$ and face real security threats coming from a public network).

g) Set the CloudFormation region. I have to do this almost always because most of my templates are the same ones I use on AWS, and they are provided here:

euca-modify-property -p cloudformation.region=eucalyptus

This is by no means an exhaustive list of things someone would do on a new install. Some would like to point MicroQA at their Eucalyptus cloud to run functional tests against it and ensure things are working fine. But if you are someone who likes tweaking cloud configuration, please feel free to comment on this blog post and I will add your post-setup task to the list.




The Story of AWS JAVA SDK and Eucalyptus

Many people have asked us: how can I use the AWS Java SDK with Eucalyptus? In the past we had a lot of trouble configuring this, but not anymore.

As you may or may not know, Eucalyptus 3.3.0 is bringing better support for the AWS Java SDK, and the developers are working hard to cover every aspect in one way or another.

If you want to read more on that, I would recommend the following:


Those two links will give you a decent idea of what is coming in 3.3.0 with regard to AWS Java SDK support.

Now on to this blog post. 3.3.0 is currently under development; the brave-hearted can go and grab the source from https://github.com/eucalyptus/eucalyptus (the bits live in the testing branch).

If you are not so brave and are running Eucalyptus 3.1.2 or 3.2.0/3.2.1, this blog post is for you, because yes, you can use AWS Java SDK 1.3.14 against your Eucalyptus private cloud.

I spent considerable time back in July 2012 figuring out what was going on with this, and I filed two tickets in our bug tracking system, available here:


At that time I wrote some ugly patches, basically removing new stuff and adding back what was there earlier in the AWS Java SDK, to make it work with Eucalyptus.

Then recently I saw two commits to the AWS Java SDK from Steve Jones, one of the CLC developers at Eucalyptus:


I decided to take these two commits and apply them to AWS Java SDK version 1.3.14, the version I made work in July 2012. So I went ahead, reverted the patches I had made, and applied the above two commits to version 1.3.14.

The code is available here, on my local GitHub fork:


And as usual the patch is here


That's about it. People who would like to build from source can clone my fork, build it, get the JAR file, and start playing right away.

For those who do not have much time, I have built the JAR and uploaded it here:


Feel free to grab and use it.

I did try the latest version of the AWS Java SDK but could not make it work with the patch on Eucalyptus 3.2.0/3.2.1/3.1.2, so 1.3.14 is the latest version of the AWS Java SDK that I know works.

Again, this isn't the end of the road: Eucalyptus 3.3.0 is bringing in better support for the AWS Java SDK, and I have tried that personally; the last SDK version I tried was 1.3.26 and it works like a charm. Plus 3.3.0 has other cool stuff like ELB/Autoscaling/CloudWatch, so it is definitely something to look forward to.

In a follow-up blog post I will try to give a few examples of how you can use the AWS Java SDK to write code that talks to Eucalyptus, because then we can see how many cool things are in store. So stay tuned and stay healthy!


openbsd on Eucalyptus

People love BSD, and it bothers me that they can't run it on a Eucalyptus private cloud inside their organisation. So I took on the challenge of building an OpenBSD image (EMI) that we could then run on Eucalyptus.

The version of OpenBSD used is 5.2 amd64, on Eucalyptus 3.2.0 on CentOS 6.3. Note that there were problems running an instance-store-backed EMI, so I ended up running an instance from a boot-from-EBS EMI.

For the sake of simplicity, this post is divided into 5 parts:

Part 1 – Build the base image

First of all, download the OpenBSD 5.2 install ISO from the following link:


Next, on your machine (with virt-manager and KVM), run virt-manager to create a new virtual machine. virt-manager provides an easy-to-use GUI for creating a VM, and the wizard-based process is pretty slick; the screenshots below show the details that were given:

After this the VM will boot, and we just need to follow the installation process as we normally do for OpenBSD; I followed this link:


Make sure the SSH and NTP services are installed and configured to start automatically at boot; this is really helpful. The only tricky part is disk partitioning: I used a custom layout, created with disklabel. The following link will help with that:


The need for a custom layout and a disk as large as 10 GB (check the wizard screenshot above) was that I planned on using OpenBSD ports to install the necessary software in my image.

Once the install is finished, just reboot the VM. Next we will do some configuration.

Part 2 – Modify the base image to include necessary tools and configuration

In my tests I found that you need VIRTIO enabled on the (Eucalyptus) Node Controller to make this OpenBSD EMI work, and there are a couple of things to do to make sure the EMI works that way.

The VIRTIO network driver, if used, shows up in the OpenBSD instance as the device vio0. We need to make sure the network configuration is in place so this device gets a DHCP-provided IP address during boot; this is how we do it on OpenBSD:

echo 'dhcp' > /etc/hostname.vio0

Next I followed the link below to get ports installed and configured on this VM:


I basically put wget and curl inside the VM, because those help with fetching the meta-data from within the instance when it is running on Eucalyptus.

Finally I put a custom version of rc.local in /etc to get the SSH keys working; for the sake of completeness I have uploaded it to GitHub at the following URL:


NOTE: The above script is copied and hacked together (to make it work on openbsd) from the original script available here https://github.com/eucalyptus/Eucalyptus-Scripts/blob/master/rc.local

I also made sure that the SSH configuration within OpenBSD strictly allows only SSH key-based authentication and not passwords, by modifying the necessary configuration in /etc/ssh/sshd_config.

Now shut down the VM and copy its virtual disk file from /var/lib/libvirt/images/ to the Eucalyptus Storage Controller (SC).

Part 3 – Upload the base image to Eucalyptus and get an EMI

On the Eucalyptus side, the first requirement is to make sure we use VIRTIO for everything on the NC. So on the NC, please make sure that in /etc/eucalyptus/eucalyptus.conf all the USE_VIRTIO_* options are set to 1.
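Concretely, the three options in eucalyptus.conf should look like the following (restart the NC service after changing them):

```shell
# /etc/eucalyptus/eucalyptus.conf on the Node Controller
USE_VIRTIO_ROOT="1"
USE_VIRTIO_DISK="1"
USE_VIRTIO_NET="1"
```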

The OpenBSD EMI will be a boot-from-EBS (bfEBS) EMI because that is what works; the instance-store-backed EMI does not seem to work, for the natural reason of kernel/ramdisk issues.

Next, create a 10G volume on the Eucalyptus cloud and attach it to any running instance (from any image); we basically need to dump the OpenBSD image file from Part 2 into the volume:

euca-create-volume -s 10 -z cloud3
euca-attach-volume -d /dev/vdb -i i-05CB3829 vol-9B4F3F76

Now go to the SC and dd the OpenBSD image to the LV device corresponding to the volume you attached above. (You need to figure out the LV for your volume; it's pretty easy if you have only one volume attached to an instance in the whole cloud, otherwise some manual work is involved, IIRC.)

dd if=openbsd-blog.img of=/dev/vg-itL1qZZjVsCuLg../lv-D0fGsw..

Once the above finishes, detach the volume from the running instance and snapshot it:

euca-detach-volume vol-9B4F3F76
euca-create-snapshot vol-9B4F3F76

Once the volume is snapshotted, we can register an EMI from it using:

euca-register -n "openbsd" --root-device-name /dev/sda1 -b /dev/sda1=snap-60FA3B94

This should give you an EMI ID from which you can run instances.

Part 4 – Run an instance

Now we run an instance from this EMI and check whether we can access it over SSH:

euca-run-instances -k sshlogin -t m1.large emi-2D8A446A
ssh -i sshlogin root@

There you go! Your own openbsd instance on your own Eucalyptus private cloud, with root access, start playing!

Part 5 – Run eutester instance checks on the EMI

This last part was a little tricky to crack and there are some loose ends I would like help with from the OpenBSD users/developers out there.

Basically, we run a set of tests on the EMI via the eutester test suite to verify it; you can read more about the test here:


The results for the test are stored here:


For OpenBSD there were three things that failed.

The results of the test are available at the following link:


I hope this blog post is useful to the OpenBSD lovers out there, and that they enjoy reading it and running their favourite OpenBSD instances on top of a Eucalyptus cloud without much hiccup.


Day 2 for Eucalyptus at gnuNify 2012, Pune

Day 2 started with a really good breakfast; thanks again to the organizing team for arranging it. I should say I really loved the weather in Pune, especially the cold, after staying in Jaipur for the past month.

My prime goal for day 2 was to have the lab ready for the Eucalyptus workshop at 2:30 PM. I would really like to thank every one of the volunteers who were part of the lab setup for helping me get ready for the workshop. We took around 5 machines and set up the cluster controller, cloud controller, storage controller and Walrus on one machine; the rest were our node controllers.

The volunteers were pretty excited about the preparation of the lab. They helped me with small things like getting an extra network interface card for the server, a local switch, and my MacBook DisplayPort-to-VGA converter so I could share my screen on an HDTV. They also recorded a complete video of the workshop. I am not sure whether I will be able to get it, but if I do it could be really helpful for me as well as the whole community, I hope.



Day 1 for Eucalyptus at gnuNify 2012, Pune

I got the opportunity to attend as well as speak at gnuNify 2012, Pune. It was my first FOSS conference as a speaker and it went really well. I was there for two days, 10th and 11th Feb, accompanied by Atul Jha, one of our Eucalyptus community members in India. Thanks to Atul for joining me from Chennai and spreading the word about our community. This was the 10th edition of gnuNify, held at SICSR (Symbiosis Institute of Computer Studies and Research), Pune.

This year the main themes of the conference were Cloud Computing and Mobile Computing, and I was happy to see a lot of good talks and workshops on both.



testing eucalyptus cloud now made easy

So last weekend I thought of trying out the Eutester project, which has been up on projects.eucalyptus.com for a while; the code has now moved to GitHub, with some serious development happening.

For those who are new to Eutester: it is a framework written in Python that helps you test your Eucalyptus private cloud setup. Remember, testing can sometimes be a hard job, so it is best to automate it as much as possible with all your test cases. Besides that, there is a lot more to it: using the existing code in the test framework, one can develop one's own test cases for varied scenarios/bugs and contribute them back to the Eutester code on GitHub.

For me personally it is a great tool to have in my ninja pockets; it makes my job much easier. Imagine you have a bug on Launchpad that you want to replicate in your environment. How long would it take to execute those "steps to follow to re-create the problem"? Now imagine one single program, a Eutester test case, doing all that for you in one shot. That's what I call smart.



jeevanullas is back with clouds

Well, it's been really, really long since I updated this space, but it looks like the right time to change a few things around, starting with a new post in a new year!

This post basically summarizes 2011 for me and ends with the latest news that comes with the new year. For a better understanding of things, I am categorizing it into sections.



Connecting to Amazon Virtual Private Cloud using Linux

Hello internet,

I am trying to connect my Linux machine to Amazon VPC using an end-to-end IPsec tunnel. I have set up all the required VPC objects on the Amazon side and now plan to set up my Linux laptop as a VPN gateway. The only doubt I have is that my laptop is behind NAT. Though I have opened and redirected the necessary ports on my NAT device, I am not sure whether this is going to work.

Please let me know if this setup can work. I am trying to follow this guide:


From what I understand so far, in order to make this guide work for my setup I need to do some extra configuration. I have also found that IPsec supports tunnels behind NAT devices, but I am not sure whether Amazon VPC supports such a configuration.

Any help in this matter is highly appreciated.


Using Boxgrinder to build your own AMI for EC2

In my last article I showed how to create your own AMI for EC2; it demonstrated the whole process being done manually by executing commands. In this article I would like to cover Boxgrinder, which removes the manual effort completely and gets your own AMI registered on EC2 in a few minutes.

First, note that we need to run Boxgrinder on CentOS if we want to build a CentOS AMI, and on Fedora for a Fedora AMI. The good thing about Boxgrinder is that it uses the latest pvgrub kernel images provided by Amazon, which let you boot into your own kernel. So gone are the days when we had to use an Amazon EC2 kernel. Thanks to Marek Goldmann for making this possible in Boxgrinder 0.5.



Creating your own AMI for Amazon EC2

It's been long since I posted on this blog. This new post takes you through how to go about creating your own Amazon Machine Image (AMI) for Amazon EC2. Note that there are several publicly available AMIs on Amazon which one can use for various purposes, but sometimes we need an AMI of our own with all the required software/configuration to meet our daily requirements. That is when we need to know how to create our own AMI.

I would like to thank Phil Chen for his excellent post here, http://www.philchen.com/2009/02/14/how-to-create-an-amazon-elastic-compute-cloud-ec2-machine-image-ami, which I followed and have reproduced below with some additions and modifications.



Fedora 13 release party in Bangalore

I am a little late writing about the release party we had last Saturday; I was occupied with $work. Well, it all started with Rangeen's mail to the FSUG mailing list. I had just come to Bangalore at the time and thought it would be great to meet all the Fedora folks in Bangalore at such a party. The venue was a big issue initially, as almost all the colleges in Bangalore were having exams, but I knew my friend Saket, who has been pretty active in FSMK Bangalore. I contacted him and asked whether we could organize the release party at the FSMK Bangalore office. I got a positive reply, and after confirmation from the FSMK folks we finalized having it at the FSMK Bangalore office on June 5th.

I personally never expected many folks to turn up, for two reasons: one being that many colleges in Bangalore were having end-semester exams, plus we don't have many Fedora folks here in Bangalore. Initially we (me, Ankur, Hiemanshu, Dipjyoti and Rangeen) had a hard time finding the office. Point to remember: GPS in India will not be accurate enough with all these narrow lanes everywhere.



Hackers dom in Hyderabad

Well, we hackers can't live without our machines, and that's so true. Recently I have been playing with EBS volumes in Eucalyptus but was not able to make them work on CentOS 5.4 with Xen; more information can be found here. Then I thought to give it a try on KVM + Fedora 12. But wait a minute: we need more than one machine to try EBS volumes. Hmm. I gave it a thought and remembered that I have another laptop at home, my brother's new Dell laptop, which has only Windows 7 and nothing else. He doesn't like anyone installing another OS on it, so I thought of putting Fedora 12 inside VMware, and after doing that I started my hacking experiment.

I had been running my Eucalyptus private cloud on a single laptop till then, and knew that I couldn't use my brother's laptop as a node because it has Windows 7 and Fedora 12 was running inside a VM. So I installed Eucalyptus 1.6.2 from source on that VM and finished the Eucalyptus as well as euca2ools installation, making that VM the head node running the cloud controller, cluster controller, storage controller and Walrus. Next I switched to my laptop, installed Eucalyptus from scratch, and configured it to run as a node. My laptop is super cool: it's been 4 years since I got this machine, but the good thing is it has the special processor flags for VT support, so I can run fully virtualized VMs in KVM (kvm_intel).



Cloud Computing is the future

Well, for those who don't know, since Jan 2010 I have been working on Eucalyptus, an open source software package for setting up a private cloud inside an organization's premises.

I have seen a lot of people blogging about Eucalyptus, specifically on Ubuntu Server edition. To be frank, Eucalyptus is a great piece of software and it works with almost all the latest Linux distributions. Though I haven't found time to test all the available distributions, as I am stuck with work and Fedora, I have tested it on CentOS 5.4 and Fedora 12. Works great!

I have always faced a few problems, but the IRC channel for Eucalyptus on freenode as well as the online forums have been really helpful in clearing my doubts.

I plan to write about my experiences with Eucalyptus on this blog in times to come. Besides Eucalyptus, creating virtual appliances in an automated way is also one of the areas I have worked on in the past few weeks. This is all specific to Fedora right now, using BoxGrinder. It is an alternative to vm-builder, which the Ubuntu folks have got.

I am a strong supporter of the Fedora project and love the way the Fedora community is structured and functions. I have been associated with it since the beginning (Fedora Core 1).

In the end, for now, I would just like to say, cloud computing is the future and open source is the best medium we have all got to implement it.


Random musings from city of nizams

These days I am living in Hyderabad, which is famous for biryani and haleem. The city is very lively, with crowds everywhere I go. There are a few things I have found out about this great city which I would like to talk about.

1. If you are a hardcore non-vegetarian, this is the place to be. The city has a variety of chicken and mutton to offer, and it is not costly either.

2. Biryani lovers, please go to Paradise hotel. It’s like the Mecca and Medina of biryani. I have been there only once with my friends but would love to go again.

3. The place where I live is close to the famous Hi-Tech City. This is the area where you will see huge buildings, some of them with nice architecture too. It is full of dynamic youth driven by passion and enthusiasm, but also of auto rickshaw drivers who run their autos in sharing mode. Even I am a regular customer of these sharing autos, where you have to sit in front with the driver if there are ladies.

4. A description of this city would be incomplete without discussing my workplace. The best place to be. It’s even better than my room, because I don’t get power cuts there. It has a green landscape with small waterfalls, awesome food courts, and big, nice buildings. The employee care center has everything: a bank, a gym, a More store, an Indigo Nation showroom, table tennis, carrom, snooker, a dance class, a tennis court, a basketball ground, a cricket ground, a laundromat, an Apollo clinic and, most important of all, if you would like to take rest or sleep, dormitories, separate for men and women. The dormitory is special because it has got clean beds in an air-conditioned dark room where one can sleep for hours. Also in the employee care center is a Strand book stall which has some nice books for me to read at a discounted price.

What else? I am still exploring this city, though not that much, as and when I get time. The places I have visited so far are near to where I live. I plan to go to the old city sometime and visit the original places where, as I have heard, there are still some nizams or at least their property.

Right now I would like to advise you not to go to the old city just like that, due to the Telangana issue. It’s a big issue and sometimes results in curfew in the whole city. At such times it’s hard to find good food and an easy way to commute. I have seen bike number plates having TG (Telangana) instead of AP (Andhra Pradesh).

I was able to write this post because of my E72 and a pretty good Airtel GPRS connection. My laptop battery died a few minutes ago and we still don’t have lights here. What a life in the city they call Cyberabad.


Hello Internet

Dear Internet,

I am glad to be part of you. Here I am, back, finally with my own space. I hope this journey is long and becomes a learning experience for both of us. Looking forward to having a great time with you.




End of a long silence

Hey all, I am back :)

Well, at least I will try to be regular on this blog now and write a few things about what I have been engaged with these days.

So stay tuned!


I voted

To cast your vote, go to: https://admin.fedoraproject.org/voting


Fedora 10 brings happiness to linuxguru's Life


Cambridge (Fedora 10) was out yesterday night (IST).

And just when the download was about to finish for me, I got my Directory Services exam result and was told that I had passed. w000t. That was the last hurdle on the way to becoming RHCSS.

Finally I can call myself a Red Hat Certified Security Specialist.

How awesome all this looks. A brand new Fedora on my laptop and desktop machines, and RHCSS!!!

For verification:-



Introducing myself to Fedora Planet

Hi Fedora Planet,

Thanks for accepting me ;-). Those who would like to know more about me just have a look here.

Hope to get some meaningful posts in the future. ;-)

Till then Enjoy!


Securing Log Server in RHEL

A few weeks back I took up the task of replacing the syslog server on my RHEL5.2 machine with the new rsyslog package. Red Hat packaged rsyslog in RHEL 5 starting with update 2. So I thought of testing it out, with stunnel providing encryption over the communication channel.

The setup goes something like this:-

I have two RHEL5.2 machines, one called station1 and the other server1. The station1 machine sends the logs for the local6 facility, of any priority, to server1. But the logs sent over to server1 are going to be encrypted via the stunnel package. Let’s see how:-

Setup at server1

This is going to be our central log server for the local6 log facility. First of all we will install the rsyslog package, which comes with RHEL5.2 but is not the default:-

#yum install rsyslog
#service syslog stop
#yum remove sysklogd
#service rsyslog start
#chkconfig rsyslog on

Next we configure rsyslog so that it listens for connections on tcp/61514.

#vi /etc/sysconfig/rsyslog

Edit it such that at line 6 it shows:-

SYSLOGD_OPTIONS="-m 0 -t 61514"

Now we need to add port 61514/tcp to our SELinux-managed ports. This is done via the following command:-

#semanage port -a -t syslogd_port_t -p tcp 61514

Later we can check whether the above command has worked successfully by issuing the following command:-

#semanage port -l | grep syslogd_port_t

The output of the above command will be something like this on a default installation of RHEL5.2

syslogd_port_t tcp 61514
syslogd_port_t udp 514

This shows that port 514/udp and port 61514/tcp are SELinux-managed for the type syslogd_port_t. Okay, that’s what we wanted. For securing the log server we want it to run on a TCP port, and that’s why we did all this, from editing /etc/sysconfig/rsyslog through to semanage. Note that all our setups have SELinux in enforcing mode, so it’s necessary that we take proper care of SELinux.

Next we restarted the rsyslog service.

#service rsyslog restart

Now we need to configure stunnel on server1 so that it accepts connections from the client on some fixed port and forwards them to port 61514/tcp on server1. We will ensure via iptables that neither port 61514/tcp nor port 514/udp is directly exposed to the network.

#iptables -A MYCHAIN -p tcp --dport 60514 -j ACCEPT
#service iptables save

This rule opens up port 60514/tcp on server1. This is the port where stunnel on server1 will listen for client connections and then forward them to the locally running rsyslog service at 61514/tcp.

The stunnel package was installed by default in a base installation of RHEL5.2, so that was not a big deal, but if it’s not there in your setup make sure you have stunnel installed.

After the installation is done we need to configure stunnel. The configuration directory for stunnel is empty, but its package provides a sample conf file which can be used. To use the provided sample, just follow the commands below:-

#cd /etc/stunnel
#cp /usr/share/doc/stunnel-4.15/stunnel.conf-sample stunnel.conf

Next we edited the stunnel.conf file according to our requirements; when fully edited, this is how it looked:-

; Certificate/key is needed in server mode and optional in client mode
cert = /etc/stunnel/stunnel.pem
key = /etc/stunnel/stunnel.key

; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/run/stunnel/
setuid = nobody
setgid = nobody
; PID is created inside chroot jail
pid = /stunnel.pid

; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1

; Authentication stuff
verify = 2
; It's often easier to use CAfile
CAfile = /etc/stunnel/cacert.pem

; Service-level configuration
[ssyslog]
accept = 60514
connect = 61514

The [ssyslog] section specifies which port stunnel will listen on and which port it will forward the connection to. The destination port is on the local interface (as far as I know; I haven’t dug much into it, so I am not sure. Please feel free to comment on it).

There are a number of other variables on top that configure a lot of stuff. First is the file path for the stunnel security certificate, then the file path for its key. Next comes the directory under which stunnel will run; this makes stunnel run in a chroot jail, which is good for security (it is a UNIX feature and is not available on Windows hosts). After that come the user and group the application will run as, and the PID file path for stunnel; it is actually /stunnel.pid, but that is relative to /var/run/stunnel now. Then we have some performance tuning options, which came enabled by default in the sample conf file, so I kept them. Finally, verify = 2 is used to verify the other end of the tunnel: the verification is done by checking the security certificate of the other end up to depth level 2, which checks whether the certificate of the other end (the client end, in our case station1) is actually signed by the same Certificate Authority (CA) as the one specified by the next option, CAfile.

Now we need to create the directory in which stunnel will store its PID file and which will also serve as its chroot jail. The owner/group of that directory is also important (to match stunnel.conf):-

#mkdir /var/run/stunnel
#chown nobody:nobody /var/run/stunnel

Now we need to work on the security certificates. Stunnel can use either self-signed or third-party-signed certificates. We went with a trusted third-party-signed certificate: we already had a private Certificate Authority running in our network, which was used to sign/revoke the security certificates of clients in the network.

So first of all we created the key to be used for the certificate, then generated a certificate signing request for the stunnel certificate and sent it to the certificate authority to sign and return to us. The certificate authority also sent us a copy of their own certificate, which was also kept in /etc/stunnel for configuration purposes. The following commands helped with the above task:-

#cd /etc/stunnel
#openssl genrsa -out stunnel.key 2048
#openssl req -new -key stunnel.key -out stunnel.csr
#scp stunnel.csr root@certificate.example.com:/etc/pki/CA

At certificate.example.com we issued the following commands:-

#cd /etc/pki/CA
#openssl ca -in stunnel.csr -out stunnel.pem
#scp stunnel.pem cacert.pem root@server1.example.com:/etc/stunnel/
#rm -f stunnel.*
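The sign-and-verify flow above can be exercised end-to-end with throwaway files. A minimal sketch using only openssl (temporary directory; `openssl x509 -req` stands in for the `openssl ca` step to avoid the CA directory layout, and the subject names are just illustrative):

```shell
# Throwaway demo of the CA flow: create a CA, sign a CSR, then confirm
# the issued certificate chains to the CA (this is what stunnel's
# verify = 2 + CAfile settings rely on).
cd "$(mktemp -d)"

# 1. CA key + self-signed CA certificate (plays the role of cacert.pem)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out cacert.pem \
    -days 365 -subj "/CN=Example CA"

# 2. Tunnel key + CSR (stand-ins for stunnel.key / stunnel.csr)
openssl req -newkey rsa:2048 -nodes -keyout stunnel.key -out stunnel.csr \
    -subj "/CN=server1.example.com"

# 3. The CA signs the CSR, producing the tunnel certificate
openssl x509 -req -in stunnel.csr -CA cacert.pem -CAkey ca.key \
    -CAcreateserial -out stunnel.pem -days 365

# 4. Verify the chain; prints "stunnel.pem: OK"
openssl verify -CAfile cacert.pem stunnel.pem
```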

Note that after we received the signed certificate and CA certificate, the first thing we did was secure them by strictly changing their file permissions as shown below:-

#chown root:root /etc/stunnel/*
#chmod 600 /etc/stunnel/*

That was sufficient. If you are not running a local CA, I would suggest you either run one or have a commercial third-party trusted authority sign your certificate. For a small setup a self-signed certificate will do the job, so there is no need for a Certificate Authority. Also note that the steps I mentioned above are completely custom to my setup; if your setup is different you may have to use different commands and options.
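If you do go the self-signed route, the key and certificate can be produced in one step; a minimal sketch (hypothetical subject, writing into a scratch directory; copy the results to /etc/stunnel in a real setup):

```shell
# One-shot self-signed certificate: -x509 skips the CSR/CA round trip,
# -nodes leaves the key unencrypted so stunnel can start unattended.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout stunnel.key -out stunnel.pem \
    -days 3650 -subj "/CN=server1.example.com"
```

One common approach with self-signed certificates is to point the peer’s CAfile at this same stunnel.pem, since a self-signed certificate acts as its own CA.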

That’s all about stunnel on the server side. Now it was time to start the tunnel, which is done just by running the command stunnel:-

#stunnel
#ps aux | grep stunnel

The first command runs the tunnel and the second makes sure that stunnel is running successfully in the background. The output should be something like this:

nobody 4476 0.0 0.3 5060 984 ? Ss 16:46 0:00 stunnel

If you want stunnel to run automatically on every boot, just put this line at the bottom of /etc/rc.d/rc.local on your system:-

/usr/sbin/stunnel
That’s it; the job at server1 is done and now it’s time to proceed to the client side.

Setup at station1

First, the same steps as performed on server1: installing the rsyslog package and removing the stock sysklogd package via the following commands:-

#yum install rsyslog
#service syslog stop
#yum remove sysklogd
#service rsyslog start
#chkconfig rsyslog on

Now we need to configure the stunnel package on the client side too. As mentioned earlier, stunnel is part of the official RHEL5.2 distribution and comes with a default installation, but if it’s not installed just make sure you install it. Next, as done earlier, copy the sample configuration file provided by the stunnel package to the stunnel configuration directory.

#cd /etc/stunnel
#cp /usr/share/doc/stunnel-4.15/stunnel.conf-sample stunnel.conf

Edit the file such that it looks as shown below:-

; Certificate/key is needed in server mode and optional in client mode
cert = /etc/stunnel/stunnel.pem
key = /etc/stunnel/stunnel.key

; Protocol version (all, SSLv2, SSLv3, TLSv1)
sslVersion = SSLv3

; Some security enhancements for UNIX systems - comment them out on Win32
chroot = /var/run/stunnel/
setuid = nobody
setgid = nobody
; PID is created inside chroot jail
pid = /stunnel.pid

; Some performance tunings
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1

; Authentication stuff
verify = 2
; It's often easier to use CAfile
CAfile = /etc/stunnel/cacert.pem

; Use it for client mode
client = yes

; Service-level configuration
[ssyslog]
accept =
connect =

The major difference between the stunnel.conf of station1 and that of server1 is that station1’s contains the variable client = yes, which differentiates the client end of a tunnel from the server end.

First we will create the chroot directory in which stunnel will run, as done in the server configuration above:-

#mkdir /var/run/stunnel
#chown nobody:nobody /var/run/stunnel

Now it’s time to make the security certificate for this end of the tunnel. We will proceed the same way as we did while setting up the server end. First we will generate a 2048-bit key. One particular thing which I forgot to mention about this key is that it is not password-protected: if it’s compromised, that end of the tunnel is compromised. We could have protected the key with a password by passing an option like -des3 to the genrsa command, but then we would have to give the password for the key whenever stunnel starts, which adds a lot of overhead given that we want the tunnel to start automatically on boot. In that case we would have to manually feed in the password for the tunnel to get started.
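For comparison, this is roughly what a password-protected key would look like (a sketch with a hypothetical passphrase, supplied via -passout so it runs non-interactively; run interactively, genrsa would prompt for the passphrase, and so would stunnel at every startup):

```shell
# Work in a scratch directory rather than /etc/stunnel for this demo
cd "$(mktemp -d)"
# -des3 encrypts the private key under a passphrase; stunnel would then
# need that passphrase every time it starts.
openssl genrsa -des3 -passout pass:changeme -out stunnel.key 2048
# The key is unusable without the passphrase:
openssl rsa -in stunnel.key -passin pass:changeme -noout && echo "key OK"
```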

#cd /etc/stunnel
#openssl genrsa -out stunnel.key 2048
#openssl req -new -key stunnel.key -out stunnel.csr
#scp stunnel.csr root@certificate.example.com:/etc/pki/CA

At certificate.example.com the following commands were issued:-

#cd /etc/pki/CA
#openssl ca -in stunnel.csr -out stunnel.pem
#scp stunnel.pem cacert.pem root@station1.example.com:/etc/stunnel
#rm -f stunnel.*

Now, as we did when setting up the server end, we secure the configuration file and the certificate files at the client end by modifying the file permissions accordingly:-

#chown root:root /etc/stunnel/*
#chmod 600 /etc/stunnel/*

Now we can start stunnel at the client end too, via the simple command stunnel. If we want to start the tunnel automatically on every boot, it’s simple: just add the line /usr/sbin/stunnel at the end of /etc/rc.d/rc.local. To verify that stunnel is running properly, just issue the old command ps aux | grep stunnel and see if there is a process named stunnel owned by the user nobody.

Now we will configure the rsyslog service on the client so that it redirects all logs for the local6 facility to the port where the local stunnel will pick them up and forward them to the server. Note that the destination is actually server1, but instead of specifying the name I preferred the IP address, as DNS can be unavailable in my setup.

I added the below line to /etc/rsyslog.conf:-

local6.* @@
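For illustration, with a hypothetical address (the client-side stunnel accepting on local port 61514, say, and forwarding on to server1), the forwarding line would take this shape; @@ selects TCP, while a single @ would mean UDP:

```
# Hypothetical local stunnel listen address -- substitute your own
local6.*    @@127.0.0.1:61514
```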

Save and exit and then restart rsyslog:-

#service rsyslog restart

Now, to test the setup, we issued the following command at station1, and while it was running we sniffed the packets via Wireshark (available in RHEL5.2) to intercept what was being transferred between the two tunnel ends:-

logger -i -p local6.info -t deependra "This is a test log message sent over stunnel"

The output was clearly seen at server1 in /var/log/messages as

Jul 28 18:24:30 station1 deependra[3460]: This is a test log message sent over stunnel

While the communication was happening between the two ends of the tunnel, I sniffed the packets transferred between them, and from what I saw it was all encrypted.

That’s how I was able to secure my log server’s communication with clients. There are better, built-in ways to secure a log server that come with rsyslog; you can check them out on the rsyslog website.


NTP Server

It’s been a long time since I started using an NTP server in my installations here. So I thought to document my setup a bit, in order to explain to myself what’s going on and to help others worldwide have a secure time server setup in Linux.

A time server is an important part of a network, as everybody might know. It is a must if we want a network setup which will later include Kerberos or DNSSEC. It is also needed in a Windows environment, but there the configuration need not be done in the default case.

My test server runs the latest updated version of Fedora 9. First of all I made sure that my setup has the ntp package. Actually, ntp comes by default with the Fedora distribution, so I had no problem getting the package.

The next step was to make sure I have the correct configuration file. So I first took a backup of the original file, /etc/ntp.conf.

mv /etc/ntp.conf /etc/ntp.conf.bak

Next I wrote the following in a new /etc/ntp.conf

server 127.127.1.0
fudge 127.127.1.0 stratum 1
crypto pw redhat randfile /dev/urandom
keysdir /etc/ntp
restrict default ignore
restrict 127.0.0.1
restrict mask nomodify noquery
driftfile /var/lib/ntp/drift

I know the above options are not best of breed, but I will explain. First of all, we used our local hardware clock as the time source and then declared it to be at stratum 1 via the fudge line. That may sound like madness to everybody; it was done just for testing purposes. Don’t do this on your production servers. Please use a reliable time source, which can be found at http://support.ntp.org/bin/view/Servers/StratumTwoTimeServers

The next line, the crypto line, says that my ntpkey files are protected with the password redhat and that the source used for generating random seed data is /dev/urandom. Note that the password attribute is an important one, so this file, /etc/ntp.conf, should have strict permissions.

#chmod 640 /etc/ntp.conf
#chown root.ntp /etc/ntp.conf

The next line specifies the directory where all the ntpkey_* files are stored. In Fedora 9 it defaults to /etc/ntp/crypto, but I used /etc/ntp, which is the default in RHEL 5.

The next three lines control access to the NTP server. The first of them restricts everybody from using the time server or remotely configuring it. The next restrict line lifts restrictions for the local interface, 127.0.0.1; this address can do anything, no restrictions apply to it. The last restrict line opens the network up to use the time service but not to modify or query the server itself (status queries included). That means any client in that network can configure this server as its reliable time source but can’t connect to it via the ntpq or ntpdc utilities.
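Spelled out with a hypothetical client network (192.168.1.0/255.255.255.0 here, purely illustrative; substitute your own), the three restrict lines look like:

```
restrict default ignore
restrict 127.0.0.1
# Hypothetical LAN -- clients here may sync time but not modify or query
restrict 192.168.1.0 mask 255.255.255.0 nomodify noquery
```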

The last line specifies the file which contains the latest estimate of the clock frequency error. This file is owned by the ntp user.

In the next step we switch to the directory /etc/ntp and generate the host keys and IFF parameters, as we are going to use the IFF identity scheme in this setup.

#cd /etc/ntp
#ntp-keygen -T -I -p redhat

The above command generates the key files and the IFF parameters file. The host key file is protected with the password redhat that we also mentioned in /etc/ntp.conf. The list of files generated in my case is below:


In the above list some are key files and some are symbolic links to them. Next we need to extract the IFF key so that it can be transferred to every NTP client of this server. We can also protect this key with a password that only we and the NTP client know.

#ntp-keygen -e -q redhat -p linux > ntpkey_IFFkey_station1.example.com.3426211635
#scp ntpkey_IFFkey_station1.example.com.3426211635 root@server1.example.com:/etc/ntp

The above command generates the IFF key file. The IFF parameter file itself is protected by the password we specified in the first ntp-keygen command, so with -q we specified that password, and with -p we specified the password with which the exported IFF key file will be protected (the client needs to know this password). The -e option is used to export the IFF key.

Now I started the ntpd service and configured it to start automatically at the next boot. I also had a custom chain in my iptables-based firewall, in which I opened the udp/123 port that ntpd listens on.

#service ntpd start
#chkconfig ntpd on
#iptables -A MYCHAIN -p udp --dport 123 -j ACCEPT
#service iptables save

Next was the setup on the client side, which was pretty easy. First of all I configured, as usual, the main configuration file /etc/ntp.conf:

#chmod 640 /etc/ntp.conf
#chown root.ntp /etc/ntp.conf
#vi /etc/ntp.conf

The client side ntp.conf contained the following:

server station1.example.com iburst autokey
crypto pw linux randfile /dev/urandom
keysdir /etc/ntp

The above lines specify station1.example.com as the preferred time server. The option autokey enables the use of public key cryptography. The next line specifies the crypto password with which the client’s ntpkey_* files will be protected, and also specifies the random seed source to be used. The next line specifies where to find the key data.

Next we generated the client-side parameters with the following commands:

#cd /etc/ntp/
#ntp-keygen -H -p linux
#ln -s ntpkey_IFFkey_station1.example.com.3426211635 ntpkey_iff_station1.example.com
#ln -s ntpkey_host_server1.example.com ntpkey_iff_server1.example.com

The above generates the host parameters on the client side, protected by the password linux, and then creates some symlinks which configure the IFF keys on the client side. Note here that the file ntpkey_IFFkey_station1.example.com.3426211635 was sent by the time server and is protected by the password linux.

The list of files with the prefix ntpkey_ in their names on the client side in /etc/ntp was finally:-


Now we started the time service on the client, configured it to start automatically at boot, and opened the udp/123 port.

#ntpdate -b station1.example.com
#service ntpd start
#chkconfig ntpd on
#iptables -A MYCHAIN -p udp --dport 123 -j ACCEPT
#service iptables save

The first command in the above code was issued to first synchronize the clock of the client with that of the server, before starting the time service to keep the new time in synchronization with the server. It took approx. 5 minutes to get synchronized, and after that, when I issued the following commands, the output was:

#ntpq -cas

ind assID status conf reach auth condition last_event cnt
1 28241 f624 yes yes ok sys.peer reachable 2

#ntpq -c"rv 0 cert"
assID=0 status=0664 leap_none, sync_ntp, 6 events, event_peer/strat_chg,
cert="server1.example.com station1.example.com 0x6",
cert="station1.example.com station1.example.com 0x7",
expire=200907280527, cert="server1.example.com server1.example.com 0x2",

#ntpq -c"rv 28241 flags"
assID=28241 status=f624 reach, conf, auth, sel_sys.peer, 2 events, event_reach,

The last command returned the flags as 0x83f21, which signifies that the communication with the time server was successful and that the IFF identity scheme with cryptography enabled was used.

Client-side utilities to check the time configuration are ntpq, ntptrace, ntpdate, ntpdc, ntpstat, etc.


Using svn with Eclipse

Using Eclipse as the IDE for my Java programming has become very common these days. It’s so common that I have it installed under Windows XP, Fedora 8 and OS X Leopard. A few days back I asked charlie for an svn account on unixpod and was able to get one by the name java-jeevan.
I thought of configuring this svn repository in my Eclipse Europa so that it would be easier for me to manage my code. I found, to my surprise, that there is no svn support in Eclipse by default, but there is a good project that provides svn functionality for the Eclipse IDE: Subclipse. I just downloaded the necessary site-1.0.6.zip and extracted it, then pasted the regular files inside that directory into the corresponding Eclipse subdirectories. I started Eclipse, and there I saw a new view under the SVN group, called the SVN repository view. Inside this view I added a new repository location (right-clicking on the SVN repository view and selecting New). It only asked me for the URL of my repository, which was svn://svn.unixpod.com/java-jeevan, and within a few seconds I saw the repository shown inside the view, with my code tree inside it. Managing SVN repos is so easy from within Eclipse; I can even import the svn repository code into one of my projects, either new or pre-existing. I went to create a new project from the new project wizard, and during the initial selection process I chose to create a new project from svn checkout. The wizard showed me my repository and let me select the project directory I was interested in checking out into a new project in the Eclipse workspace. It was pretty easy.
Once I started editing the code it was again very easy to commit my changes to the source code: just right-click on the file I edited and then Team -> Commit… It asked me for my SVN password, which I gave once and saved for future automation of this process.

Pretty cool.

That was it. Quick and dirty post :)


/me back to blogging

Well, again it’s been quite a while since I blogged. Truly speaking, it was me who never felt the interest to write again, but now I think that I should (silly me :-P).
A lot of things have happened since my last post. In the month of October I visited my native place (my village), spending almost 10 days at places like Mukungarh (Dadosa’s home), Sainswas (my village) and Pilani (visiting a friend’s place). That was a nice holiday. I got to know a lot of things about my ancestors and history.
In November I was mostly at home (Jaipur) with family for the Diwali celebrations. That was also a nice time. But I had to come back to the so-called work place (Jodhpur) to do some stuff. In the last week I started, again, teaching some of my friends GNU/Linux, using CentOS 5.1 for the job.
December was party time. Had a lot of parties and picnics and all. I learned quite a lot of things in GNU/Linux, mostly not regarding programming but from the point of view of system administration. It’s been a while since I programmed stuff. The new year party was great. Had my favorite stuff (non-veg).
January 2008 has come. The start was again with parties and enjoyment. Nothing serious about studies (as usual). Got the second semester result; that was fine. Now the exam season is ringing its bells again. Seriously speaking, I hate these bells.
Well, a little update on what I wrote in the last post. I was working with a team of 3 people on the IBM Academic project, but due to my laziness that never got completed; the start was perfect but we couldn’t cope with things and it died. We had submitted a synopsis too, but we were never able to complete the code. I learned a lot of things from that project, most importantly stuff in advanced Java.
Recently I have again started doing some programming, and seriously speaking that is what has inspired me to write again. Really, programming is something that gives me internal pleasure. It is the ultimate thing. I wish I could keep myself always motivated for programming. But yeah, let’s see how it goes.
I will give an update on the recent stuff I am working on in the next post. Stay tuned!!


Random Stuff

This morning I was reading #fedora on freenode a little carefully and found this (nice to read it once, hahaha):

04:30 < R0CK> Hello guys
04:31 < R0CK> any news about the add/remove software ?
04:31 < opsec> R0CK: ask a real question or no one will respond to you
04:32 < R0CK> I'm having problem on pirut, It's hanging when i run it,.
04:32 * opsec puts on his mid reading hat
04:32 < opsec> R0CK: and?
04:32 < BULLE> R0CK: that seems to be a common problem lately
04:32 < opsec> run it from the command line ..
04:32 < R0CK> opsec, I know very well how to ask my questions.
04:32 < opsec> if you get an error --> dpaste.com
04:33 < opsec> R0CK: no, you don't
04:33 < R0CK> opsec, well, my question was any news about the issue?
04:33 < opsec> there is no issue that i know of
04:33 < R0CK> opsec, don't try to be intelligent baby, you`re here since yesterday.
04:33 < opsec> either run it from the command like and paste the error to dpaste.com or use yumex instead
04:34 < opsec> R0CK: i'm done with you moron.
04:34 < R0CK> opsec, then stop supporting me,

It happens a lot at #fedora.

The other thing I did this week was to finish the synopsis document for my project. It was a great experience learning how to make Data Flow Diagrams, use case models, etc. Learned a lot. Now the team has started writing the nasty code thingie.

The captcha code for the Java project is finished, and now I have to figure out how I can insert the captcha module into the project and use it whenever I need it. I also had to learn packages/interfaces in Java. The other thing on my agenda is XML and handling XML via Java, because I can sense that it will be needed at some point of the project’s life cycle (plus my personal attraction to XML).

Progress on my secret Python project is going well. I have been doing the testing stuff for a long time now, with a different nick on freenode, and it seems to perform well for the moment. I have to find a better alternative to CGI, supported by my shell provider, so I can get a better stats page.

I downloaded Oracle 11g for Linux and will give it a try soon; I need to learn how to configure an Oracle database server and configure/install clients so that they can utilize the server’s database instance.

It has been a long time and there are a lot of articles piling up in my docs stack. I’ve got to find some time to finish them up and upload them to my server (the most important of them being the Samba plus LDAP guide for the fedora-docs team).

I was happy to get the Tata Indicom internet connection working on Fedora. Now I can hope that some more people get involved in learning Linux (more closely).

From the above it seems like a busy yet exciting weekend ahead. Hmm..


/me Status Update

After a long time I have got some time to write at this place. Well, for quite some months I have been working in Python and stuff related to it. The important things I worked on in Python were XML stuff, the IRC/Google Talk APIs available in Python, writing bots (IRC, GTalk) and learning XML parsing.

Then I got busy with my 2nd semester examinations, which were total hell (took the hell out of me).

Now that my exams are over and I have got some time, I decided to take up some new stuff. First is a secret project I have started working on (in Python), and second is Java. Truly speaking, I have never done Java sincerely, though I have had a lot of time to do it. But now I have decided to look around, and I started working on the core stuff (as Java is in my current semester).

The other important thing these days is the IBM Challenge program, in which I have enrolled as a participant with 3 more members in my team. So while I am learning the core JAVA stuff I am also looking into the Advanced JAVA stuff, the web thingie. We have got our team blog now, thanks to /me.

So I guess for some more time I will probably not be updating this blog (my first and only blog) because of some small works I am indulged in. Keep in touch with the team blog.

See you soon.


Compiz-Fusion now arrives for Fedora 7

Thanks to KageSenshi's efforts, we now have Compiz-Fusion for Fedora 7. For more information on how to install it, look into KageSenshi's blog.

After installing all the stuff I did the following things (note: this is for Intel video card users and GNOME; no KDE support yet :( ):

Step 1: vi /home/deepsa/compiz-fusion-run
LIBGL_ALWAYS_INDIRECT=1 INTEL_BATCH=1 compiz --replace --sm-disable ccp &

Save and exit

Step 2: chmod a+x /home/deepsa/compiz-fusion-run

Step 3: gnome-session-properties

Startup Programs -> New
Name : Compiz-Fusion
Command: /home/deepsa/compiz-fusion-run

Step 4: System -> Preferences -> CompizConfig Settings Manager -> Window Decorations -> Command -> emerald --replace

Step 5: Log out and log back in.
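For reference, Steps 1 and 2 can be done in one go from a terminal. This is just a sketch of the same launcher script; I've swapped the hard-coded /home/deepsa path for $HOME so it works for any user:

```shell
# Create the Compiz-Fusion launcher script (Step 1) and make it
# executable (Step 2). $HOME replaces the hard-coded /home/deepsa.
cat > "$HOME/compiz-fusion-run" <<'EOF'
LIBGL_ALWAYS_INDIRECT=1 INTEL_BATCH=1 compiz --replace --sm-disable ccp &
EOF
chmod a+x "$HOME/compiz-fusion-run"
```

Then point the gnome-session-properties startup entry (Step 3) at this file as before.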

Some screenshots of new features

Expo Plugin

Cube with reflection


Configuring Pidgin 2.0.0 final for Google Talk in Linux

Well guys, the Pidgin (earlier Gaim) developers have finally released the 2.0.0 stable release. It was after a long time that we saw the final release come out. Now with the new name, new website, new graphics and lots of new features, Pidgin is rocking.

I thought I would give it a try on my Linux-loaded laptop. My howto is inspired by the step-by-step instructions already available here.

First of all you need to download Pidgin. If it gives an error regarding any dependency during the installation, check out the project's SourceForge page.

After the installation is over we need to start Pidgin (Applications -> Internet -> Pidgin Internet Messenger). If it's your first run of Pidgin you will see something like this

Click on the Add button at the bottom and you will be shown something like this

I have filled in almost all the values for my test, but you need to change them according to your needs.

Protocol: XMPP (earlier it used to be Jabber)
Screen Name: Your Gmail ID, i.e. the part before @gmail.com.
Server: Should be gmail.com.
Resource: Can be anything; the default "Home" will also work.
Password: Your Gmail password goes here.

You can select the other options according to your needs. It's all a matter of choice. I have selected what I feel is necessary for me.

After you are done with this tab, select the Advanced tab. It will look something like this:

Note that the important thing here is the Connect Server: it should be talk.google.com. Well, everything is done for now; just save the details of the account and Pidgin will automatically connect to the network. After a few moments you will see that you are connected. Something like this (:( I have very few online contacts).

That's it. Hope you get your G-Talk account working with Pidgin 2.0.0 in GNU/Linux.


Laptop RAM Upgraded

Today I got my laptop's RAM upgraded to 1GB. Now I can try researching more on virtualization, especially full virtualization using KVM (Kernel-based Virtual Machine). And yeah, now I can run more apps simultaneously in my RedHat 5. I have almost finished writing my next article on how to configure RedHat Enterprise 5 to use the full virtualization provided by KVM.
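One quick check worth doing before diving into KVM (my addition, not from the original post): full virtualization with KVM needs hardware support, which shows up as CPU flags in /proc/cpuinfo.

```shell
# Count CPU entries advertising hardware virtualization:
# vmx = Intel VT, svm = AMD-V. Zero means no (or BIOS-disabled) support.
count=$(grep -c -E '(vmx|svm)' /proc/cpuinfo || true)
echo "CPUs with virtualization flags: $count"
```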

This article helped me learn how I can build my own RPMs using the source RPMs provided by RedHat. That was really nice. I got my whole virtualization stack in RedHat 5 upgraded to the latest version.

And yeah, I am working on my website too. Don't know when I will get satisfied with the design and the stuff I am hosting.


New Ubuntu

Hello everybody. Last night Ubuntu Feisty Fawn was released. I was eagerly waiting for this distro, and the major reason is that I find Ubuntu the only Linux distro that can be installed and configured according to user needs in very little time. Most Linux distros are known for their difficult software installation and configuration right after installing the base operating system, and detecting even common hardware is also a pain. I have kind of liked that for the past few years, but still we need a distro that can give Windows users a reason to use Linux. I prefer to install Ubuntu on my friends' systems (who have never worked in Linux before) and they all like it.

With Beryl or Compiz on an Ubuntu installation you get a dream desktop that Windows users (even Vista) can only dream of. Feisty comes with desktop effects by default; they are based on Compiz, and Ubuntu warns users who enable those effects that the software is unsupported and is there just as a technology preview. Well, somewhat the same thing was said by RedHat when they released almost the same thing with their Enterprise 5 (Client) product last month. But users still have the choice to install Beryl on their Ubuntu and enjoy more effects (most of the nvidia/ati guys will go for it).

The sad part of the story is that Beryl is going to be merged with Compiz, and then there will be no Beryl anymore: there will be only Compiz, and what we see in Beryl will be ported to compiz-extras. This merge is under way, and that's the reason we don't see any changes in the Beryl SVN these days. I heard on beryl@irc.freenode.net that after the merge there will be no more SVN; they will host the source on git and Beryl will move to compiz-extras. I am just waiting and watching the progress for the moment.

For the time being I have installed Feisty and am enjoying it! I have also tried some server configuration on it (just like we have in RedHat). The most notable so far are Bind (DNS) and Apache (httpd).


Provisioning Linux Simplified with Cobbler

Well, RedHat Enterprise Linux 5 was released on the 14th of last month. I hope a lot of new changes will be in it. But I still have RHEL5 Beta2, and this time I thought of trying some new technology on my Beta2. I have heard a lot about PXE (Preboot Execution Environment) but was never able to find time to try it. But recently I saw the RedHat Emerging Technologies webpage. This is a division in RedHat that is working on new technologies, and one of them is Cobbler.

I studied it and found that it is related to PXE in one way or another, but is more simplified and yet more powerful. So I thought I would give it a try. Earlier I was not getting what I was doing, but with a few posts on the et-mgmt mailing list I found my way. So I am summarizing here what the purpose of Cobbler is.

If we leave Cobbler aside for a moment and concentrate on provisioning in Linux, what exactly does it mean? It means how we create and manage Linux machines. In simple terms, we can install a lot of Linux machines (server or client) unattended (we need not be near the machine). We can have template-based installation, where we assign one template to a given set of machines and another template to another set, and this way we can get all of them installed and running according to our configuration in very little time.

In earlier days (lol, still today) PXE installations were used to provision Linux machines, so that we could get unattended installations easily for a large number of machines. But RedHat is working on a technology called Cobbler that helps simplify PXE configuration and adds a lot of powerful features, to mention some: provisioning Xen virtual machines, kickstart templating and enchant.

I thought of trying Cobbler on my RedHat Beta2. Well, to get started we need to install some dependencies: a TFTP server, a DHCP server, an NFS server, portmap, an HTTPD server and the Cheetah Python template engine (required for kickstart templating). All of the above packages can be found in the RedHat installation media; only Cheetah needs to be downloaded from here. Once all of the dependencies are installed we proceed with the installation of Cobbler. Cobbler can be downloaded from here. To install Cobbler just extract the tar you downloaded and, from the source directory, give the command python setup.py install. This will install it. After installing Cobbler we need to do some pre-configuration steps before we start using it.

First of all we need to enable the TFTP server. For that we edit /etc/xinetd.d/tftp, change disable = yes to disable = no, and then run service xinetd restart and chkconfig xinetd on.
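That edit can also be scripted. Here is a sketch of the one-line change, demonstrated on a scratch copy so it is safe to run anywhere; on the real box you would point the same sed at /etc/xinetd.d/tftp:

```shell
# Flip "disable = yes" to "disable = no", shown on a temp copy of the file
conf=$(mktemp)
printf 'service tftp\n{\n\tdisable\t= yes\n}\n' > "$conf"
sed -i 's/disable\([[:space:]]*=[[:space:]]*\)yes/disable\1no/' "$conf"
grep 'disable' "$conf"
# afterwards, on the real config: service xinetd restart; chkconfig xinetd on
```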

Second, we need to configure our installation tree. I had dumped my whole Beta2 DVD in /rhel5/Dump. Then configure either HTTPD or NFS so that we can later access the dump during installation. I preferred HTTPD as I had faced problems with NFS earlier. To do so I edited the /etc/httpd/conf/httpd.conf file. At the end, write this:

<VirtualHost "">
ServerAdmin root@server.example.com
DocumentRoot /rhel5
<Directory "/rhel5">
Options Indexes Includes
</Directory>
ServerName server.example.com
ErrorLog logs/server.example.com-error_log
CustomLog logs/server.example.com-access_log common
</VirtualHost>

We need not do anything with the DHCP server configuration or the PXE configuration, as it will all be taken care of by Cobbler. Now comes the crucial part of this configuration, and that is creating a kickstart file to install clients. Creating a kickstart file is a tricky thing. I prefer the system-config-kickstart tool for this job, as it has a GUI and is easy to use. It can give us a sample kickstart file which we can edit according to our needs. I did exactly that: created the file via system-config-kickstart and edited it according to my client machine (I had only one client). I am posting my ks.cfg here:

For Physical Machines:

url --url=
key 2515dd4e215225dd
lang en_US.UTF-8
keyboard us
xconfig --startxonboot
network --device eth0 --bootproto static --ip --netmask --gateway --hostname server1.example.com
rootpw --iscrypted $1$QGYhCela$pNOZoWf4XoONvUdND/nS01
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --disabled
timezone Asia/Calcutta
bootloader --location=mbr --driveorder=hda --append="rhgb quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --linux
#part / --fstype ext3 --onpart sda3
#part swap --onpart sda6
#part /home --fstype ext3 --onpart sda7


In the above kickstart I have commented out the partition scheme as I wanted it done manually (didn't want to lose my data). I saved this file at /rhel5/Dump/ks.cfg.

After we have configured all the required servers we should edit the file /var/lib/cobbler/settings. In this file we edit the lines so that they look like this:

manage_dhcp: 1
next_server: ''
server: ''

manage_dhcp: 1 tells Cobbler to take care of /etc/dhcpd.conf for us. For this, Cobbler uses the template /etc/cobbler/dhcp.template. The next_server and server values point to my Cobbler server system; they will be used in /etc/dhcpd.conf as next-server. As I have only one DHCP server, there is nowhere else to look for DHCP information, which is why my next-server is the same as my server.
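For reference, the kind of stanza the dhcp.template expands to in /etc/dhcpd.conf looks roughly like this. The addresses below are illustrative placeholders only; Cobbler fills in next-server from the settings file:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    range dynamic-bootp 192.168.1.100 192.168.1.254;
    filename "/pxelinux.0";
    next-server 192.168.1.1;
}
```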
After this step we run cobbler check. This command checks that all things are in place and everything is fine. If the command reports the following:

No setup problems found.
Manual review and editing of /var/lib/cobbler/settings is recommended to tailor cobbler to your particular configuration.
Good luck.

Then we are done with our Cobbler pre-configuration steps. Now is the time to proceed and configure Cobbler.
Before starting the configuration of Cobbler I would like to mention some of its terminology. In Cobbler we have Distros, Profiles and Systems.
They can be viewed as a hierarchy:
Distro -> Profile -> Systems.
Like for example:
Fedora Core 6 -> WebServer -> System A, System B
Fedora Core 6 -> MailServer -> System C, System D
Redhat 5 -> DNSServer -> System E
So we have one distro, within which we can have one or more profiles, and within each profile we can have one, more than one, or even zero systems. I hope you get my point.
So in our case I first of all created a distro entry for Cobbler with the cobbler distro add command.

cobbler distro add --name=rhel5-dvd --kernel=/rhel5/Dump/images/pxeboot/vmlinuz --initrd=/rhel5/Dump/images/pxeboot/initrd.img --arch=x86

This created a distro inside cobbler’s configuration (which is stored in /var/www/cobbler).

After adding a distro we add a profile inside that distro. I created a profile for the new machines I am going to install later. To create the profile I gave this command:

cobbler profile add --name=redhat5y --distro=rhel5-dvd --kickstart=/rhel5/Dump/ks.cfg

The above command is simple to understand. It says the profile name is redhat5y and it's a profile for the distro rhel5-dvd I created earlier. The --kickstart option gives the path of the ks.cfg I created earlier for the new physical machines to be installed later. After creating the profile I can proceed by creating systems within the profile.

For example, if I want to add a systemA in the profile redhat5y, I can give the following command:

cobbler system add --name= --profile=redhat5y

Here name can be an IP address, a MAC address or a DNS-resolvable hostname. I didn't try the above command as it was a little confusing, and the other thing was that I was going to have only one more system to install, so I didn't need a system within the redhat5y profile; I can use the profile itself to install the new system. Sounds a little confusing, right? Well, let me explain a little more. We created a distro, and within that distro we created a profile. What is actually going on is that a database is built inside Cobbler in a hierarchical manner: on top is the distro, and within it is a profile. For further customization I can add a system's data within that profile. But even if I don't add any system within the profile I can continue: I can very well use the profile to boot systems. That way new systems inherit the profile directly, and there is no need to be specific about a particular system; but in case we want customization, we can add system data within a Cobbler profile.

After creating the profile we are almost done with the configuration of Cobbler, and now we proceed to start it.

Starting Cobbler is simple, with the command cobbler sync. This command reads the database of distros, profiles and systems (if any), writes /etc/dhcpd.conf and starts the DHCP server service. After it's done we can see a report with the command cobbler report. This command lists the distros, the profiles within those distros, and systems if any.

After all this we switch to the client side. On the client side we need a PXE-boot-enabled LAN card; I got one from my friend, and most LAN cards today come with PXE support. I selected PXE boot as the first boot device in the BIOS, and it booted from PXE, got an IP address from my Cobbler-managed DHCP server, and then showed the boot: prompt. Here you can type in a profile name and press enter and it will boot into that profile automatically; or, if you have a system within a particular profile, you can just enter the system name here and it will boot the configuration for that particular system. As I hadn't created any systems within my redhat5y profile, I typed in my profile name, redhat5y, and pressed enter. If you want to see the list of all the available profiles and the systems within them, you can type menu at the boot: prompt.

When I gave the profile name at the boot: prompt, what it actually did was read the /tftpboot/pxelinux.cfg/default file, inside which there was an entry for the redhat5y profile telling it what kernel to boot and which initrd image to use. All of this was specified when I added the new profile with the cobbler profile add command earlier, and when I ran cobbler sync it was written to /tftpboot/pxelinux.cfg/default. After the initial boot it switched to the ks.cfg file to get the install information. The only information it asked me for was the partitioning, which I had left commented out in ks.cfg (for the sake of my data; you can very well specify this too). After that it installed the client machine. It took very little time and only small user intervention (which can also be eliminated).

So I got a new client installation from provisioning. It all sounds a little complex, but once we do it practically things become much clearer. So as the next step I tried to install a Xen virtual machine from PXE boot. For that, all I did was create a new distro named rhel5-xen and, within that distro, a new profile named redhat5x:

cobbler distro add --name=rhel5-xen --kernel=/rhel5/Dump/images/xen/vmlinuz --initrd=/rhel5/Dump/images/xen/initrd.img --arch=x86

See the above command: the only major difference between the earlier distro I created and this one is the kernel and initrd images. These are for Xen (see the pathnames).
Then I created a profile redhat5x within this distro:

cobbler profile add --name=redhat5x --distro=rhel5-xen --kickstart=/rhel5/Dump/ks1.cfg --virt-file-size=2 --virt-ram=256

For the sake of convenience I am posting the ks1.cfg file I used to install the virtual machine. In this kickstart file I specified the partitioning information, and in this one I used NFS as my installation method (that can be configured very easily).

nfs --server= --dir=/rhel5/Dump
key 2515dd4e215225dd
lang en_US.UTF-8
keyboard us
network --bootproto=bootp --device=eth0 --onboot=on
rootpw --iscrypted $1$VwD9nalr$06K0bUawzanX72gNk0es91
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --disabled
timezone --utc Asia/Calcutta
bootloader --location=mbr --driveorder=xvda --append="console=xvc0"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --all --drives=xvda
part /boot --fstype ext3 --size=100 --ondisk=xvda
part pv.2 --size=0 --grow --ondisk=xvda
volgroup VolGroup00 --pesize=32768 pv.2
logvol / --fstype ext3 --name=LogVol00 --vgname=VolGroup00 --size=1024 --grow
logvol swap --fstype swap --name=LogVol01 --vgname=VolGroup00 --size=144 --grow --maxsize=288


The above profile add command created a profile within the distro rhel5-xen, with a new kickstart written specially for a Xen virtual machine. Now for the new arguments in the above command: the first says the image file size is 2GB. This image file is used to store the virtual machine on the hard disk, just like VMware uses files to emulate hard disks: within the virtual machine we see partitions, but on the hard disk they are actually files. In the case of Xen they are .img files and, if not specified otherwise, are stored in /var/lib/xen/images/ (Cobbler stores them here). The second argument gives the amount of RAM for the virtual machine. I have 512 MB of physical RAM on my system, out of which I gave 256 MB with the above argument. I tried to give 128 MB RAM, but then it failed during the very first boot of the virtual machine, with some Xen error reporting a balloon error.

After I added the profile it was time to start the virtual machine installation. But wait, it is not the same as what we did earlier for physical machines; it's different. We use a new technology from RedHat named 'koan'. Koan helps start a virtual machine from a Cobbler profile. I installed the software from here. The installation is the same as for Cobbler: just extract the file and, from within the source directory, run python setup.py install. After it's installed just run the following command:

koan --virt --server= --profile=redhat5x

The above command tells koan that we are going to install a virtual machine (--virt). The next argument gives the Cobbler server's IP address and the last one the profile name on the Cobbler server. This thing was really amazing. Koan communicated with the Cobbler server, checked for the profile redhat5x (which was there) and started the installation of the virtual machine. Actually it starts the installation and then exits. What it gives us is an alphanumeric identifier which we need to use in the xm console command to get a console on the virtual machine (so that we can see what's going on during the installation). The identifier looks like 00_16_3E_6B_D5_39. I used this and gave the below command after koan returned me to the shell:

xm console 00_16_3E_6B_D5_39

This command connected me to the virtual machine 00_16_3E_6B_D5_39. This name is given by koan so that it maintains the uniqueness of the virtual machine; later we use this identifier as our virtual machine name. I haven't dug into this identifier, but it's a sort of MAC-address type thing which I will look into in more detail later. For the time being, my installation of the virtual machine started and ended very soon. It was fast.

After the installation finished the virtual machine rebooted, and there it was: a working Xen virtual machine installed using Cobbler and koan on RedHat Enterprise 5 Beta2.

Later we can use libvirt to manage the virtual machine just as we do for other virtual machines; that part is the same.

So in this article I wrote about provisioning, which is simplified and given more power by emerging technologies like Cobbler and koan. I will be working on some more things in the coming days, especially kickstart templating and enchant. There are a lot of things in Cobbler and koan we can use according to our needs; I haven't mentioned them all, but I hope that once you get started with this technology you will automatically start reading about them. Well, there is no good documentation on Cobbler except the man pages and, of course, the mailing list.

Thanks for your time. See you soon!


Configuring Xen para virtualization in Redhat Enterprise 5

Hello everybody. In one of my last posts I discussed Xen and virtualization in RedHat Enterprise 5. Well, at that time I was not able to configure a virtual machine via the virtualization tool, but after that I researched this topic a lot and was finally able to configure Xen on my RedHat Enterprise 5.

Well, I had a lot of problems with Xen earlier and I would like to discuss them here. My first problem was during virt-install, and it was:

libvir: Xen Daemon error : POST operation failed: (xend.err 'Error creating domain: I need 262144 KiB, but dom0_min_mem is 262144 and shrinking to 262144 KiB would leave only 235124 KiB free.')

I eradicated this problem by editing /etc/xen/xend-config.sxp. In this file I edited dom0-min-mem and changed it to (dom0-min-mem 128).
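The same change can be made non-interactively; here is a small sketch, done on a scratch copy so you can try it without touching the real /etc/xen/xend-config.sxp:

```shell
# Change dom0-min-mem to 128 on a temp copy of xend-config.sxp
cfg=$(mktemp)
echo '(dom0-min-mem 256)' > "$cfg"
sed -i 's/(dom0-min-mem [0-9]*)/(dom0-min-mem 128)/' "$cfg"
cat "$cfg"
```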

Now I rebooted and again tried to create a new virtual machine but this time I got this error:

libvir: Xen Daemon error : POST operation failed: (xend.err 'Device 0 (vif) could not be connected. Hotplug scripts not working.')

This I figured out to be a problem caused by my wireless device (eth1). The problem was that Xen had created a virtual network bridge xenbr1, bridged to eth1, but virt-install was looking for xenbr0, which was not present. Soon I found that xenbr0 is the default virtual network device virt-install looks for. We could change the default, but I preferred to keep it, so I disabled my wireless (eth1) and enabled my ethernet (eth0). This time Xen created the xenbr0 bridged virtual network device and virt-install detected it.

Well, virt-install is a command-line tool to create a new virtual machine; I used it in the CUI to create a virtual machine, but here I am writing up the GUI way, which is easy and user friendly. But before that, some configuration steps. I saw on the internet that Xen para-virtualized guests can only be installed from an NFS, HTTP or FTP install source. I preferred the NFS way as it's very easy. I configured /etc/exports to export /dvd/actual, which contained my RHEL5-Client DVD data. But I was never able to install via the NFS source. My installation always hung, saying "Starting Install process. It will take several minutes to start..". This message comes when the final installation is about to begin (after all options are specified, like partitions, packages, passwords etc.). I never figured out a way to debug this, so I changed my installation source.

This time I used an HTTP source, so I configured the Apache webserver as my installation source. These were the VirtualHost configuration lines:

<VirtualHost "">
ServerAdmin root@server.example.com
DocumentRoot "/dvd"
ServerName server.example.com
ErrorLog logs/server.example.com-error_log
CustomLog logs/server.example.com-access_log common
</VirtualHost>

So my HTTP source now was http://server.example.com/actual. Great so let’s get started.

First of all I booted the Xen-enabled kernel. Here are the Xen-enabled kernel lines of my grub.conf:

title Red Hat Enterprise Linux Client (2.6.18-1.2747.el5xen)
root (hd0,5)
kernel /boot/xen.gz-2.6.18-1.2747.el5
module /boot/vmlinuz-2.6.18-1.2747.el5xen ro root=LABEL=/ rhgb quiet
module /boot/initrd-2.6.18-1.2747.el5xen.img

Note that I specified no dom0_mem= variable on the kernel line. It was all handled via dom0-min-mem in /etc/xen/xend-config.sxp.

See, I understand what all this means, but I am hiding the details so that this stays easy.

When my Xen-enabled kernel booted, the xend service started automatically. We need the xend service to start successfully if we want to create a new virtual machine via Xen.

Now I started the Virtual Machine Manager, present in Applications -> System Tools.

In this window select Local Xen Host and click Connect. Note that this window also lets us connect to a remote host that is running Xen. Very useful!!

So here's how the Virtual Machine Manager looks. Note that right now I am running only the host domain0, so only that is shown, but we will create a new domain (domainU) soon.

After clicking the New button, the Create Virtual Machine wizard starts. This wizard asks for some input regarding the new virtual machine and finally creates it.

First of all it asks for the name of the virtual machine. I gave rhel5b2-pv1 (RedHat Enterprise Linux 5 Beta2, para-virtualized, 1).

Next the wizard asks about the virtualization method. We select para-virtualization (the unique point about Xen). Para-virtualization is what I have configured so far, and it's working; I have not tried full virtualization (will try it soon with a Windows XP guest).

In this step the wizard asks about the installation source. Two options are provided; I chose the first one and specified my installation source http://server.example.com/actual (I have DNS and Apache configured on my system).

In this step the wizard asks about storage: how the virtual machine is going to be stored on disk. I chose the file way and specified the file as /opt/rhel5b2-pv1.img and the size as 4GB.

In this step the wizard asked how much memory I want to give the virtual machine. I gave 256MB to my RHEL5 virtual machine. Note the GUI was a little buggy, showing my system memory as 502 GB instead of 502 MB (LOL). Also in this step the wizard asked about the VCPUs, for which I specified 1 (I have a Centrino Duo).

This was a summary of all the options I specified earlier. Finally I clicked the Finish button.

This little window showed a progress bar while, behind the scenes, it was creating the virtual machine. It was also establishing a VNC connection so that I could see the installer in a window in my host OS.

So this was the window I was shown, and this is the first screen it showed me. After that the anaconda installer started and asked me for the partition layout, root password, time zone, network configuration and package selection.

Finally, when all the configuration was done, the install process started. This is just before it started (I captured this screen because my NFS installation hung during this process).

The installation finished in about 20 minutes as it was a very minimal installation, around 940MB. No X Window, no GNOME, just pure console. When the installation finished it rebooted and showed nothing; I had to select Serial Console from the View menu (Virtual Machine Console) and it showed me the login prompt.

Using this console I logged in to my virtual machine and used it. LOL.

When I was finished with all this I shut down the virtual machine and did a few things. First of all I saw that my dom0 (the default domain), which had been compressed to use 212 MB RAM while my virtual machine was running, still used 212 MB. Performance was slow. I needed to give it the whole RAM back, but how do I do that? Well, I figured out a way (searching the internet). On the command line, start virsh, and then inside the virsh prompt:

virsh # connect
virsh # setmem Domain-0 500000

The first command connects virsh to the local hypervisor and the second sets the memory of Domain-0 (the domain name) to 500MB. To see the name of your domain, issue the list command inside the virsh prompt.

Now I wanted my virtual machine to start automatically at the next boot. This is how I did it:

cd /etc/xen/auto
ln -s ../rhel5b2-pv1 .

What I did here was go to /etc/xen/auto and create a symlink to my /etc/xen/rhel5b2-pv1 file in this directory. /etc/xen/rhel5b2-pv1 is the Xen configuration file for my virtual machine, and if I want the virtual machine to start on every boot I need to place a symlink to its config file in /etc/xen/auto. This symlink is read by the xendomains service. So I ran

chkconfig xendomains on

so that the service gets started at every bootup.

When you don't want to use the virtual machine, just do this:

service xendomains stop

But if you then want to assign all the physical RAM back to the default domain, you have to call virsh again and do what I described earlier. I need to figure out an easier way to do this (hack the /etc/init.d/xendomains script, maybe).

Well, this was it, and I hope you also get Xen installed and configured on your Linux. I was specific about the OS, that is RedHat Enterprise Linux 5. Virtualization is a lot easier in RHEL5, and I hope to see more in the final release.

What the RedHat guys have done is use an API called libvirt that talks to Xen. libvirt is simply great: written in C, with bindings for Python and Perl. This API helped them write the simple yet powerful GUI 'virt-manager' for configuring and managing virtual machines. They are doing great work at RedHat Emerging Technologies. I have also tried the new Virtual Machine Manager (GUI) and they have included a lot more options in it (the latest release is 0.3.1). Hope to see the new Virtual Machine Manager in the final release of RHEL.

Configuring Yum in RHEL5 for DVD source

In my last article I explained the problem I faced with the installation of software on RHEL5 Beta2. I tried system-config-packages and the good old "rpm" command, but nothing worked as it used to in earlier days. So I thought I would dig into this and try to find the possible cause of, and solution to, this problem.

So I went to the GNU/Linux community and put up this question. Okay, I got some inputs, some directions, and finally I got what I wanted. First of all, let me tell you the scenario once more so that you can better understand what I want to say.

Suppose you have installed a RHEL5 system and now, after the installation is complete, you want to install a package (which is not installed). You put in the DVD and mount it, go to the said directory and try to install the package via the good old "rpm" command. But to your surprise you find that it fails due to dependency problems. Okay, no problem; we all know how to deal with that. We use the "--aid" switch with our "rpm" command, which will automatically install the dependency RPMs first and then the said RPM. Well, we try that, but it again fails with the same error message. That means it's not finding the dependency RPM. But wait: the dependency RPM and the RPM we want to install are both in the same directory, so why is the "rpm" command failing?

Well, that's because in RHEL5 (as in Fedora Core 6) everything is controlled by "yum". I read some things about "yum" and quickly found that it had problems with DVD sources. But I didn't find anything on how to disable "yum" completely and go back to the good old command-line way of installing packages. I did find a way by which "yum" can access DVD sources, and once that happens we can install/uninstall packages easily, either via the graphical tool (system-config-packages) or at the command line via the "yum" command.

Okay, so let's start. I inserted the RHEL5 Client DVD and mounted it on /media/dvd/:

mkdir -p /media/dvd
mount /dev/dvd /media/dvd

Then I created an ISO file for this DVD using the "mkisofs" command.

mkisofs -o /opt/RHEL5.iso -r /media/dvd/

The above command took some time, as I was creating an image file of my DVD (approx 3.6 GB). After a while it finished. Now it was time to do the real job. There was no further use for the DVD, so I unmounted and ejected it.

umount /media/dvd/

Next I created a directory to act as the mount point for the ISO file I created earlier.

mkdir -p /dvd/actual

Now I mounted the ISO file onto the above mount point. Note that mounting an ISO file needs some special options, so here is the command:

mount -r -o loop -t iso9660 /opt/RHEL5.iso /dvd/actual

The above command mounted the RHEL5 ISO on /dvd/actual. Then I went to /dvd and installed an rpm called "createrepo".

cd /dvd
rpm -Uvh actual/Client/createrepo*

The need for this RPM arises because the RHEL5 DVD (like the FC6 one) has "media:" URLs written in its metadata, which causes problems with "yum". Using createrepo, I will create my own copy of the repodata without the "media:" entries, which will let me use the repodata with "yum" and hence with the software built on "yum", like "system-config-packages" or "pirut".

Now it was time to create the repodata. This is how I did it (note: I didn't change my current directory; I was still in /dvd).

createrepo .

The above command indexed around 2239 packages and created a repodata/ directory of around 8.1 MB in the current directory. This directory holds repomd.xml and the other metadata files. What createrepo actually did was index all the RPMs present under the current directory, which was "/dvd". So the RPMs in

/dvd/actual/Client, /dvd/actual/VT, /dvd/actual/Workstation

All got indexed and the metadata was created.

I also copied the GPG key files to my hard disk (to tell yum to use them later).

cp /dvd/actual/*GPG* /opt

There were around 4-5 GPG key files, and they got copied to /opt. Later we will see that we can make "yum" read these GPG key files and verify a package before installing it.

Now finally came the time to tell yum to use this repo for my installations. That was done by creating a repo file in /etc/yum.repos.d/. This is how it was done:

cd /etc/yum.repos.d/
vi dvd.repo

Inside this file I wrote the following:

gpgkey=file:///opt/RPM-GPG-KEY file:///opt/RPM-GPG-KEY-beta file:///opt/RPM-GPG-KEY-fedora file:///opt/RPM-GPG-KEY-fedora-test

Saved the /etc/yum.repos.d/dvd.repo file. Then I thought of disabling the plugins for RHN and "installonlyn". So I went to /etc/yum/pluginconf.d/, opened the configuration file for each plugin, and changed "enabled=1" to "enabled=0".
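For the record, the gpgkey line is only one part of the file. A complete dvd.repo would look something like the following; the [dvd] section name, name= line and the two flags are standard repo-file boilerplate I am filling in here, and the baseurl points at /dvd, where createrepo was run:

```ini
[dvd]
name=RHEL5 DVD
baseurl=file:///dvd/
enabled=1
gpgcheck=1
gpgkey=file:///opt/RPM-GPG-KEY file:///opt/RPM-GPG-KEY-beta file:///opt/RPM-GPG-KEY-fedora file:///opt/RPM-GPG-KEY-fedora-test
```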

Now, finally, I updated "yum" so that it re-reads the new repo and other settings. For that I did:

yum clean all
yum update

Voila! It's finally done. But hey, wait: when I install a package, where will it get the package rpm from? From the mounted ISO. That means I need to mount the ISO every time.

Well, I can use fstab for that. So I created an entry in /etc/fstab so that my ISO gets mounted automatically on boot (a quick "mount -a" also picks up the new entry immediately, without a reboot).

Here was the entry I made in /etc/fstab:

/opt/RHEL5.iso /dvd/actual iso9660 defaults,ro,loop 0 0

Now I ran system-config-packages and could search, browse, install and uninstall RPMs easily. The GUI package manager can now search for installed as well as not-installed rpms. That's great. But the most important thing is that if I install an RPM which needs a dependency RPM (which is not installed), the package manager will tell us there is a dependency and will install it automatically. Great! The same goes for uninstalling packages: if some package is acting as a dependency for some other package and we try to remove it, it will show a message and ask us what to do.

For command-line lovers, the "yum" command will work. They can search for a package via "yum search", or if they don't remember the name they can see the full list with "yum list". For installing there is "yum install", which handles the dependencies too. Okay, so I finally managed to find a way out, but it was a real pain. As they say, "No Pain No Gain". Meanwhile I have not formatted my RHEL5; instead, in my Vista partition I have installed Ubuntu 7.04 Herd 4. So now I have two Linux installs: RHEL5 Beta2 (Client) and Ubuntu 7.04 Herd 4. Well, I kept RHEL5 so that I can learn some more new things.
All in all, package management in RHEL/Fedora needs great improvement. Today I call upon developers to come together and help the Red Hat guys improve "yum", "pirut" and "system-config-packages".


Linux Distros

Well, it has been a long time since I wrote here. I have been busy with my exams, and then got busy trying the new Linux distro versions. In these days I tried my hands on Red Hat Enterprise Linux 5 Beta2 (RHEL). It was a great experience trying an enterprise product. I downloaded the huge 3.6 GB Client DVD. So let's see how the drive with RHEL5 went.

The installation procedure (anaconda) was an exact copy of Fedora Core 6. I had tried FC6 earlier, but only once, and then never used it. The same went for RHEL5. During installation it asked me to enter a key; that was the only difference I saw between the FC6 and RHEL5 installations. I got the key from the internet (I don't remember from where) and now I don't have the key. LOL.

The few changes I noted between the previous release of RHEL and this one were that this release has a 3D desktop, all thanks to AIGLX-enabled Xorg 7.x, with Compiz providing the desktop effects. I also saw a Xen-enabled 2.6.18 kernel in this release. Oh wait, let me show you the `uname -a` of RHEL5 Beta2:

Linux deepsa.lenovo 2.6.18-1.2747.el5 #1 SMP Thu Nov 9 18:55:30 EST 2006 i686 i686 i386 GNU/Linux

Well, the above one is not the Xen-enabled kernel, but if you choose virtualization as an installation option you surely get Xen. Other changes were the Eclipse, Java etc. development tools available in this release. I selected all of these, and the servers too. The installation took around 35 minutes. Wow, that was fast.
Okay. Then I booted the kernel and, as it should, Xen started. Xend (the Xen daemon) also started. Most notably I saw the Avahi daemon. It was great to see it in RHEL (now we will surely get to see some great desktop features).

Okay, so here came the login screen. But hey, what's that? My resolution was not right for my laptop: it was 1024×768 when it should be 1280×800. No problem. I logged in (via root). As I had guessed, it was just like Fedora 6. No change. The following hardware, which I need as soon as I install an operating system, didn't work in RHEL5:

a) Wireless Internet
b) Bluetooth
c) Proper Screen Resolution.

Okay. So I went searching for drivers for ipw3945. I got to know that Red Hat had ipw3945 in Beta1 but removed it due to some license problem. What the fuck. They say that if they include proprietary modules, they get problems later. Here's what they say, exactly (I have converted it into a story).

Suppose Red Hat includes ipw3945 in RHEL5, for example. Client A purchases RHEL5, installs it and uses it. After some time Client A hits a problem. Client A calls the Red Hat customer support guy and asks him to fix it. The customer support department detects that the problem is due to a proprietary module named ipw3945. The guy tells Client A to please not install the module. Now the client tells the guy that if you shipped the module, you had better fix it; not installing the module is not a solution. Note that in this case the client didn't even need ipw3945, but ipw3945 was conflicting with some other crucial module somewhere. So Red Hat decided not to ship proprietary modules.

Now guys who need ipw3945 go to atrpms and install it from there. Note that installing these things from the internet is a real pain, all because of the dependency problem. I am of the view that if Linux somehow solves the dependency problem, it really has an easy path ahead (Ubuntu has done this very well).

Okay, so I went to the above-mentioned website, downloaded the daemon rpm and the kernel module rpm, and installed them. Then via system-config-network I configured my wireless with a 128-bit WEP key. To my surprise, I saw that my wireless does not start at boot (even after telling it to start automatically on boot). I soon figured out a workaround: I have to switch off my wireless (on my laptop) and switch it on again after the boot process completes. The problem is that it is not associating with the access point. I don't know if there is some other solution to this problem, but I am going with the above workaround right now.

Bluetooth was working without any problem, so that was nice. Now comes the resolution problem. I have been facing this problem with many Linux distros; only OpenSuse 10.1 gave me the proper resolution, for the others I have to use 915resolution. So I did the same for RHEL5: I went to the website (mentioned above), downloaded 915resolution, configured it for 1280×800, and set its service to start automatically during boot. Okay, what am I left with now? Xen? Yeah. I had tried this tool earlier with RHEL4 and never got it working, and to my surprise it was way too easy in RHEL5 Beta2, with the new virtualization manager (GUI) tool, to create, configure and modify new Xen-based virtual machines. But I never succeeded in configuring one for myself, and the major reason I find is my RAM: I need at least 1 GB of RAM, otherwise there is no point in having Xen.

So I installed my old friend VMware Workstation, but this time it was a new version: the 6.0 Beta. It was easy to install and configure, as this time it has the code to compile the vmnet and vmmon drivers for the 2.6.18 kernel with GCC 4.1.1. Okay, so I got through with virtualization, but hey, wait: when will we see a user-friendly open source virtualization tool with capabilities just like VMware? Xen is promising, but it's not for beginners.

Okay, so what am I left with now? Oh yes, the 3D desktop. Well, it was easy: just go to System > Preferences > 3D Desktop effects, enable it, and BOOM, it's on. That was Compiz utilizing AIGLX power. But as I had tried Beryl and fallen in love with it a long time ago, I went on to install Beryl from SVN on RHEL5. It was easy: I got beryl 0.2.0rc3 installed in a few minutes and up and running with most of the effects, and the 3D desktop is on. According to Red Hat, they have included the 3D desktop just as a technology preview in RHEL5; they don't mean to have this kind of software in an enterprise product. That's fine. Sometimes we get bored, and at those times we can play with the desktop cubes. LOL.

Well, now comes the most important part, and in an enterprise product like RHEL5 that's the servers. I tried DNS (BIND 9), Apache and Squid. I found not much difference between the earlier versions (RHEL4) and these ones (RHEL5). The important difference I saw was in DNS: earlier they used to ship a caching-nameserver RPM only in the AS and ES versions, not the WS version, but this time they had that rpm on the Client RHEL5 DVD. Well, it's a DVD, so it needs to have more software. I didn't download the Server DVD, which was smaller in size, but I think the only difference between the Server and Client DVDs is that the Server DVD also has the Cluster Suite (and GFS).

Installing software from the DVD after a base installation should be easy through system-config-packages. It tries to find a connection with RHN (Red Hat Network, for updates), but as my system is not registered with Red Hat it couldn't connect to the RHN servers. But can I install RPMs from the DVD via this GUI tool? Let's see.

I started system-config-packages and searched for a package named zsh, which was on the DVD but not installed; the search result said no packages were found. But what's that? I have the package on the DVD. Okay, I figured out the problem: each time this Add/Remove Software program starts, it loads a plugin (you see the message: Loading "installonlyn" plugin). I don't know how to remove this plugin, or how to get a plugin that can search for software present on the DVD but not installed on my system. Next I tried installing an RPM from the DVD that requires other dependency RPMs via the command line (mostly the preferred way among Linux administrators). I wanted to test --aid. I tried installing the xfig rpm, which depends on the transfig rpm. Both RPMs were in the /mnt/Client/ directory. I gave the command:

[root@deepsa Client]# rpm -ivh xfig-3.2.4-21.1.i386.rpm --aid

I got the result

error: Failed dependencies:
transfig >= 1:3.2.4-12 is needed by xfig-3.2.4-21.1.i386

But why so? It should have installed the transfig RPM automatically, since I gave "--aid". So I installed transfig first and then xfig; now it installed. Man, that's not what I wanted. It's the same as in Fedora Core 6.

So the "--aid" switch doesn't work in RHEL? It's a very important thing. God knows what will happen to Red Hat; I am especially worried after Oracle released their own Linux, which is an exact copy of RHEL4 Update 4. I mean, "--aid" is something I used to use many times; I taught my students about this switch during their RHCE course, but now it's not functioning as it should. Come on, Red Hat!

This is a screenshot of the GUI based Package Manager in RHEL5 Beta2.

The other problem I faced on my laptop was that CDs/DVDs were not getting detected and mounted automatically when I inserted them; I had to use the mount command every time. I guess the problem is with gnome-volume-manager. The guys need to fix it.

All in all, I am going to format my RHEL5. Why? I am a desktop user, not a server administrator. Sometimes I do some programming with C/C++/GTK+, but I think RHEL5 is better for the enterprise, not for a home user. A home user requires much more user-friendly desktops and applications. We don't have a CHM reader in RHEL5. We don't have MP3 support (patent problems). We don't have MPEG or AVI players in RHEL5. And lastly, the most important issue is the software installation procedure. And these are exactly the things an average desktop user wants.

Well, if you have a laptop, configuring RHEL5 for it is not much of a problem now. But you still need to configure some things before you can say it's ready for use.

I am now downloading Herd 4 of Ubuntu 7.04, the only Linux distro I have tried that is best for a laptop user like me. Wireless: no problem. Bluetooth: no problem. Resolution: no problem. 3D desktop: no problem. MP3, AVI, MPEG: way too easy to configure and use. I am eagerly waiting for the final release of 7.04 in April. Meanwhile I will try Herd 4 with a 2.6.20 kernel having EXT4 support (experimental). LOL.

In recent months I have tried a lot of distros:

a) BackTrack Beta 2.0
b) OpenSUSE 10.2
c) Gentoo 2006.1
d) RHEL5 Beta2
e) Ubuntu Edgy
f) Ubuntu Feisty Herd 2


Exam fever, the worst fever for a human being like me.

Today I got my first semester exam dates. They start from 27th Feb 2007. After a long time I am giving exams, so this exam fever is killing me. I have not been used to this fever for the past year or so.
The fever has increased due to the boring subjects I have in my course, and the most horrible of them is Discrete Mathematical Structures, the worst subject I have ever read. I haven't studied the whole semester, attended only 3 classes in college for this subject, and now when I open the book nothing goes into my head. Predicate logic, its formulas, normal forms, etc. etc., so fucking boring.
God knows what will happen in these exams. Well, I am not preparing that hard, so I don't expect much, but I need to pass.
I will come back soon with a lot of new discoveries I have made in the field of computer science (lol).

Installing OSX86

So finally today I got time to give OSX86 a try. I have heard a lot about it. Someone told me it can only be installed on a blank hard drive (you have to wipe the drive, or else lose data). Someone else said it doesn't support most of the hardware on the market. But I thought I'd give it a try.
My first and last concern was that I didn't want to wipe my hard drive, because it was already housing Windows XP and SuSE Linux. So I went to Google searching for possible paths, and found one after researching quite a lot.
So here I am, writing down what I did to get OSX86 on my laptop without wiping Windows XP, SuSE Linux, or my other data partitions.

First of all I downloaded tiger-x86-flat.img from a torrent. It's a huge file, around 6 GB, so most of the trackers have it in bzip2-compressed format, which is only 1.28 GB. After downloading the bzip2 archive, uncompress it via WinRAR onto some NTFS partition, as FAT32 doesn't support a single file larger than 4 GB. So you need more than 6 GB free on an NTFS or EXT3 partition (that will work too).

Secondly, download Acronis Disk Director Suite. It's not free software; you need to purchase it if you want to use it. From within this software I resized my C drive housing Windows XP. It was a 20 GB partition with 10.8 GB free. I resized it so that I got 7 GB of unallocated space at the start of the disk, and my C drive shrank to 13 GB.

After that, from within Acronis Disk Director Suite, I formatted the unallocated space with partition type "af" (Apple HFS).

Then I booted my system from the hard drive and got GRUB Error 17. Lol, I lost my GRUB. How stupid of me, I should have taken a backup of my MBR, but okay, I will manage somehow.

Next I booted my laptop via a Ubuntu 5.10 Live CD, and from a shell mounted the NTFS drive holding my tiger-x86-flat.img file; it was /dev/sda5.

mkdir /mnt/C
mount /dev/sda5 /mnt/C
cd /mnt/C
dd if=tiger-x86-flat.img of=/dev/sda1 bs=512 skip=63

The last command took around 1 hour. I found a faster alternative, but I haven't tried it, so I don't know whether it works; with that method the command reportedly takes around 5 minutes.
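Incidentally, the skip=63 in the dd command above skips the 63-sector (63 × 512 = 32256 byte) boot-track offset that flat images carry before the first partition. I would guess the faster method simply uses a larger block size; here is a scratch-file demo (my own illustration, not part of the original procedure) showing that one 32256-byte block covers the same offset as 63 blocks of 512 bytes:

```shell
# Build a small scratch image: two 32256-byte blocks of zeros, then a payload.
dd if=/dev/zero of=scratch.img bs=32256 count=2 2>/dev/null
echo "payload" >> scratch.img

# Same offset, two block sizes: 63*512 and 1*32256 both skip 32256 bytes,
# but on a multi-GB image the single large block is far fewer syscalls.
dd if=scratch.img of=out1 bs=512   skip=63 2>/dev/null
dd if=scratch.img of=out2 bs=32256 skip=1  2>/dev/null

cmp -s out1 out2 && result=identical || result=different
echo "$result"
rm -f scratch.img out1 out2
```

The two output files come out byte-for-byte the same, so the offset math holds either way.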

When I booted my system from the hard drive I again got the GRUB Error 17, which means the boot loader needed to be reinstalled. OK, now comes the SuSE 10.1 bootable DVD.
I booted from the DVD and went to the rescue shell. From within this shell I issued the following commands to recover my GRUB.
First of all I mounted my Linux "/" partition.

mkdir /mnt/mm
mount /dev/sda6 /mnt/mm

Now I edited grub's menu.lst configuration file so that I could boot OSX86.

cd /mnt/mm/boot/grub
vi menu.lst

title Mac OSX86
rootnoverify (hd0,0)
chainloader +1

I also edited my Windows XP and SuSE Linux entries, because the partition numbers had changed after adding the HFS partition at the beginning of the disk.

title Windows XP
rootnoverify (hd0,1)
chainloader +1
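Hand-editing is fine for two or three entries; with many entries, the (hd0,N) renumbering could be scripted. A scratch demo of that idea (my own sketch, not something from the original procedure; the sample lines are hypothetical):

```shell
# Sample menu.lst lines from before the new partition was added.
printf 'rootnoverify (hd0,0)\nroot (hd0,5)\n' > menu.sample

# Bump every (hd0,N) partition index by one.
shifted=$(awk '{
  out = ""
  while (match($0, /\(hd0,[0-9]+\)/)) {
    n = substr($0, RSTART + 5, RLENGTH - 6) + 1
    out = out substr($0, 1, RSTART - 1) "(hd0," n ")"
    $0 = substr($0, RSTART + RLENGTH)
  }
  print out $0
}' menu.sample)

echo "$shifted"
rm -f menu.sample
```

This prints the two lines with (hd0,1) and (hd0,6), ready to paste back into menu.lst.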

After that I saved the file and quit. Then I issued the command:

grub-install --root-directory='/mnt/mm' /dev/sda

This command installed my grub on the MBR.

After this I did some basic steps so that my Linux and Windows could boot properly. I edited my Linux "/etc/fstab" file accordingly (as my partition table had now changed).
I also needed to edit the BOOT.INI on the C: drive, otherwise I would get a hal.dll error while booting Windows XP. So I mounted the Windows XP partition and edited the BOOT.INI file like this:

mkdir /mnt/C
mount /dev/sda2 /mnt/C
cp /mnt/C/BOOT.INI /root
chmod a+w /root/BOOT.INI
vi /root/BOOT.INI

Edit partition(1) to partition(2) on the two lines, then save and exit.

umount /mnt/C
chmod a-w /root/BOOT.INI
ntfscp /dev/sda2 /root/BOOT.INI /mnt/C/BOOT.INI

ntfscp comes with the ntfstools package in SuSE 10.1; otherwise it would have been difficult to write changes to an NTFS partition from within Linux.
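The same partition(1) to partition(2) edit can also be done non-interactively with sed instead of vi; a quick demo on a typical ARC path (the sample BOOT.INI line here is my own illustration, not copied from my laptop):

```shell
# A typical BOOT.INI boot entry (sample for illustration only).
line='multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home"'

# Bump the partition number, as in the manual vi edit described above.
updated=$(printf '%s\n' "$line" | sed 's/partition(1)/partition(2)/g')
echo "$updated"
```

Run against the copied /root/BOOT.INI, this would rewrite both boot lines in one go.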

Finally I rebooted my laptop from the hard drive and tested all 3 operating systems; all 3 booted well.
So now I have 3 operating systems on my hard drive, Tiger OSX86, Windows XP Home and SuSE Linux 10.1, and that too without losing a single bit of data.

Now I am researching OSX86 and will try to improve the graphics and sound in it. Internet and 1024×768 resolution are working fine right now, and it's a really nice operating system.

Wishing you all a happy Diwali; enjoy OSX86 (the power of Mac on Intel).

Lenovo Laptop 3000N100 (07684KA)
512 MB DDR2 SDRAM (667 MHz)
Intel Centrino Dual Core 1.66 GHz
80 GB SATA Hard Drive
RealTek Ethernet
Intel Pro 10/100 Wireless
Intel 945GM Mobile Graphics
Intel High Definition Audio (SoundMAX)
Integrated Camera
Card Reader
External Monitor Port
Firewire Device
15.4" Wide Screen
Finger Print Scanner
Bluetooth Enabled


Installing Windows XP virtual Machine

In the previous post we discussed how to install and configure VMware Workstation on a Fedora Core 5 machine. In this post I am going to discuss how to install a Windows XP virtual machine inside the VMware Workstation we installed in the previous post.

From inside the VMware application select File -> New -> Virtual Machine.
A "New Virtual Machine Wizard" opens up. Inside the wizard you are asked various questions; follow these simple steps in order to install a simple Windows XP Professional virtual machine.

Step 1) Just Press Next

Step 2) Virtual Machine Configuration -> Select Custom and Press Next

Step 3) New Virtual Machine Format -> Select New-Workstation 5

Step 4) Guest Operating System -> Select (1) Microsoft Windows and inside Version Select Windows XP Professional

Step 5) Select a name for your virtual machine and the path where you want to save it.

NOTE: Whichever path you give, mind that it should have at least 4 GB of free hard disk space.

Step 6) Number of Processors: One

Step 7) Memory Size: It depends on how much RAM you have. As I have only 256 MB RAM, I gave 128 MB; if you have 512 MB RAM I recommend you go for 256 MB here. Just slide the slider left or right to change the memory size.

Step 8) Network Type: If you have a broadband connection and want it to be accessible inside the Windows virtual machine, select "Bridged Networking"; if you only want to communicate with the Fedora Core 5 host machine, select "Host Only Networking"; and if you don't want any network connection, select "No Network Connection".

Step 9) SCSI Adapter: Select BusLogic and press Next (LSI Logic seems to have problems with the Windows XP driver set).

Step 10) Disk: Select “Create New Virtual Disk” and Press Next

Step 11) Virtual Disk Type: IDE (Recommended) Select this and Press Next

Step 12) Disk Size: I gave 4 GB of space for my Windows XP virtual machine here and selected "Split disk into 2 GB files". It depends on your needs; if you want more disk space, give more.

NOTE: The "Allocate all disk space now" option will allocate all of the specified disk space right away, in which case you must have that much free space wherever your virtual machine is going to be saved.

Step 13) Disk File Name: Choose the name and the location. I preferred the default. Press Next.

After these steps the wizard finishes, the new virtual machine gets created, and it's time to boot into it.

Now it’s time to install Windows XP Professional:

Step 1) Just put the Windows XP setup CD inside the CD drive and "Power On" the virtual machine.

Step 2) During boot-up, click inside the "black screen" (to come back out, press Ctrl+Alt) and press the "Esc" key to get into the BIOS setup. There, select CD-ROM as the first boot option, save the settings and exit.

Step 3) The Windows XP installation starts, and everything goes as it normally does when you install Windows XP.

Step 4) At partitioning time you will be shown a single partition as free space. Create a FAT/NTFS partition out of it and install Windows XP on that partition.

Voila! You now have a Fedora Core 5 machine with a running Windows XP inside it.

Here is a screenshot I took while installing Windows XP inside my Fedora Core 5 (using VMware).


VMware Installation and Configuration on Fedora Core 5

In this post I am going to share my experience of how I was able to install VMware Workstation 5.5.1 on Linux (Fedora Core 5), configure it, and finally install Windows XP through it.
Following the step-by-step procedure, we start right away (follow all the steps below logged in as the root user only):

Step 1) VMware needs the kernel and its matching sources installed properly on your system. I recommend you install the latest kernel for Fedora Core 5. You can do this by issuing the following simple commands (note that you need to be connected to the internet as the root user in order for these commands to run successfully):
a) yum install kernel
b) yum install kernel-devel
This will install the latest kernel and its build headers. The version I have is 2.6.16-1.2080_FC5. After this step you need to reboot into the newly installed kernel.
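Before going further, it is worth checking that the running kernel and the installed headers actually match, since vmware-config.pl will later ask for the header directory under /lib/modules/<version>/build. A quick check I would suggest (my own addition; versions on your box will differ):

```shell
# The kernel you booted must match the kernel-devel package you installed,
# or the vmmon/vmnet module build will fail later.
running=$(uname -r)
echo "running kernel: $running"

if [ -d "/lib/modules/$running/build" ]; then
    echo "matching kernel headers found"
else
    echo "no headers for $running, install kernel-devel and reboot"
fi
```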

Step 2) We need the VMware Software for Fedora Core 5. For this we can go here and follow the steps described there. Download the RPM file for VMware 5.5.1-19175 version.

Step 3) As the above software is shareware, you get a 30-day trial key at the e-mail address you provided during registration on the link in step (2). Note down the key and save it somewhere in a text file; we will need it after installation and configuration finish.

Step 4) As VMware 5.5.1 has some reported problems during configuration on Fedora Core 5, we need to download a specially created perl script from here.

Step 5) Install VMware from the rpm by issuing the following command:

rpm -ivh VMware-workstation-5.5.1-19175.i386.rpm

Step 6) After installing the above rpm, we need to extract the script downloaded in step (4) and follow these steps:

a) gzip -d vmware-any-any-update101.tar.gz
b) tar -xf vmware-any-any-update101.tar
c) cd vmware-any-any-update101
d) ./runme.pl

After you run the runme.pl script it will ask you many questions. These should be your answers:

Question 1) Before running VMware for the first time after update, you need to configure it for your running kernel by invoking the following command:
“/usr/bin/vmware-config.pl”. Do you want this script to invoke the command for
you now? [yes]

Answer 1) Press Enter

Then read the License Agreement (press the space bar 7-8 times), type yes at the end, and press Enter.

Question 2) In which directory do you want to install the mime type icons?

Answer 2) Press Enter

Question 3) What directory contains your desktop menu entry files? These files have a
.desktop file extension. [/usr/share/applications]

Answer 3) Press Enter

Question 4) In which directory do you want to install the application’s icon?

Answer 4) Press Enter

Question 5) None of the pre-built vmmon modules for VMware Workstation is suitable for your running kernel. Do you want this program to try to build the vmmon module for your system (you need to have a C compiler installed on your system)? [yes]

Answer 5) Press Enter

Question 6) What is the location of the directory of C header files that match your running kernel? [/lib/modules/2.6.16-1.2080_FC5/build/include]

Answer 6) Press Enter to accept the default (note: your directory may differ from the one shown here!)

Question 7) Do you want networking for your virtual machines? (yes/no/help) [yes]

Answer 7) Press Enter (if you don't want networking, type no and press Enter). If you plan to use the internet from your virtual machine in the future and you use a broadband connection, select yes and press Enter.

Question 8) Do you want to be able to use NAT networking in your virtual machines? (yes/no) [yes]

Answer 8) This depends. For my setup I typed no and pressed Enter.

Question 9) Do you want to be able to use host-only networking in your virtual machines?[no]

Answer 9) Answer yes or no depending on whether you want to communicate with your Fedora Core 5 host machine from inside the virtual machine you install later. I chose no.

Step 7) After these simple questions the configuration of VMware 5.5.1 finishes off, and we can now run the VMware software by issuing the following command at the command line:

vmware
Step 8) Once the software starts up, go to Help -> Enter Serial Number … and paste the serial number you got via email during step (3).

And voila! VMware is now installed and configured on your Fedora Core 5 machine.
In the next post I will explain how I configured a Windows XP Professional virtual machine inside VMware, again as a step-by-step guide. For the time being, just play around with the above guide and try your hand at the different options given in the software.

Happy Linux’ing !!


BSNL Broadband configuration Linux

I kept seeing constant threads on orkut regarding configuring the internet on Linux. I hope this guide will help you establish a BSNL broadband connection on your Linux box. I assume you have already configured your network interface (if not, I am going to post a similar howto on configuring it).

This document is a copy of a document which I saw in late 2004 when I was searching for a howto to configure internet in linux.

This document describes how to set up the BSNL broadband service on your GNU/Linux server/desktop. It assumes that you are neither a Linux beginner nor an expert, and that you are aware of basic GNU/Linux commands.

I have installed it on Red Hat 9, and I don't find any reason why it would not work on the following distributions:

An unofficial FAQ on Dataone can be found here


Before we start, please ensure that you have the following details on hand:

Using the builtin dialer

A HOWTO with instructions on using the PPPoE builtin dialer of the Huawei SmartAX MT800 ADSL router is available.
The procedure to configure the dialer in UTStarCom UT300R modem is given here

Using RP-PPPOE as dialer


rpm package command line
rpm package GUI
source code

If you are downloading the tar.gz version, you have to build the rpm first. So, please walk through the following procedure to build the rpm:

Procedure to build RPM from tar.gz.

 1. Login as root
 2. Download the tar.gz version & execute the following commands
# tar -zxvf rp-pppoe-3.5.tar.gz (untar this package)
* This command untars the package in the current directory.
# cp rp-pppoe-3.5.tar.gz /usr/src/redhat/SOURCES/
# cd rp-pppoe-3.5
* The rpm spec file, rp-pppoe.spec, is available in this directory.
# cp rp-pppoe.spec /usr/src/redhat/SPECS/
# cd /usr/src/redhat/SPECS
# rpm -ba rp-pppoe.spec
* The above command untars the tar.gz package in the ../BUILD directory, compiles the package, and finally creates the necessary rpms.

If everything goes fine, you will see the following or a similar message:

 Wrote: /usr/src/redhat/SRPMS/rp-pppoe-3.5-1.src.rpm
Wrote: /usr/src/redhat/RPMS/i386/rp-pppoe-3.5-1.i386.rpm
Wrote: /usr/src/redhat/RPMS/i386/rp-pppoe-gui-3.5-1.i386.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.97349
+ umask 022
+ cd /usr/src/redhat/BUILD
+ cd rp-pppoe-3.5
+ rm -rf /tmp/pppoe-build
+ exit 0

If this is the case, the necessary rpms have been successfully created and are available in the appropriate directories:

 Source RPMS - /usr/src/redhat/SRPMS/rp-pppoe-3.5-1.src.rpm
Binary RPMS - /usr/src/redhat/RPMS/i386/rp-pppoe-3.5-1.i386.rpm
GUI RPMS - /usr/src/redhat/RPMS/i386/rp-pppoe-gui-3.5-1.i386.rpm

Now, at this stage, you have the necessary rpms with you: either you downloaded them from the above website, or you built them on your own.

Command Line Version

 # rpm -vih rp-pppoe-3.5-1.i386.rpm

This installs the command-line version of the rp-pppoe package.

GUI Version

 # rpm -vih rp-pppoe-gui-3.5-1.i386.rpm

This installs the GUI version of the rp-pppoe package. Once it is installed successfully, you can proceed to the next section.


Once you have installed the rp-pppoe package, all the necessary commands are available:

 adsl-setup  - to configure the connection parameters
adsl-start - to start the connection
adsl-status - to check status of the connection
adsl-stop - to stop the connection

Now, we are ready to configure the BSNL broadband internet connection.

Here you have to configure the following parameters:

 * userid
 * network interface (eth0 or ..)
 * Demand value (normally you can leave it blank)
 * DNS information (primary & secondary)
 * Password
 * Firewall (set it to 2, MASQUERADE)
 * Confirm

Once the link is brought up (with adsl-start, described later), the authentication and DNS discovery can be watched in the system log:

 # tail -f /var/log/messages

Mar 15 15:29:48 Gateway pppd[3564]: Remote message: Authentication success,Welcome!
Mar 15 15:29:48 Gateway pppd[3564]: local IP address
Mar 15 15:29:48 Gateway pppd[3564]: remote IP address
Mar 15 15:29:48 Gateway pppd[3564]: primary DNS address
Mar 15 15:29:48 Gateway pppd[3564]: secondary DNS address

There it goes! The primary & secondary DNS details are found and are automatically updated in /etc/resolv.conf.

A sample adsl-setup session:
 # adsl-setup
Welcome to the Roaring Penguin ADSL client setup. First, I will run
some checks on your system to make sure the PPPoE client is installed

Looks good! Now, please enter some information:


>>> Enter your PPPoE user name (default test@dataone):


>>> Enter the Ethernet interface connected to the ADSL modem
For Solaris, this is likely to be something like /dev/hme0.
For Linux, it will be ethn, where 'n' is a number.
(default eth0):

Do you want the link to come up on demand, or stay up continuously?
If you want it to come up on demand, enter the idle time in seconds
after which the link should be dropped. If you want the link to
stay up permanently, enter 'no' (two letters, lower-case.)
NOTE: Demand-activated links do not interact well with dynamic IP
addresses. You may have some problems with demand-activated links.
>>> Enter the demand value (default no):


Please enter the IP address of your ISP's primary DNS server.
If your ISP claims that 'the server will provide DNS addresses',
enter 'server' (all lower-case) here.
If you just press enter, I will assume you know what you are
doing and not modify your DNS setup.
>>> Enter the DNS information here:
Please enter the IP address of your ISP's secondary DNS server.
If you just press enter, I will assume there is only one DNS server.
>>> Enter the secondary DNS server address here:


>>> Please enter your PPPoE password:
>>> Please re-enter your PPPoE password:


Please choose the firewall rules to use. Note that these rules are
very basic. You are strongly encouraged to use a more sophisticated
firewall setup; however, these will provide basic security. If you
are running any servers on your machine, you must choose 'NONE' and
set up firewalling yourself. Otherwise, the firewall rules will deny
access to all standard servers like Web, e-mail, ftp, etc. If you
are using SSH, the rules will block outgoing SSH connections which
allocate a privileged source port.

The firewall choices are:
0 - NONE: This script will not set any firewall rules. You are responsible
for ensuring the security of your machine. You are STRONGLY
recommended to use some kind of firewall rules.
1 - STANDALONE: Appropriate for a basic stand-alone web-surfing workstation
2 - MASQUERADE: Appropriate for a machine acting as an Internet gateway
for a LAN
>>> Choose a type of firewall (0-2): 2

** Summary of what you entered **

Ethernet Interface: eth0
User name: test@dataone
Activate-on-demand: No
Primary DNS:
Secondary DNS:
Firewalling: MASQUERADE

>>> Accept these settings and adjust configuration files (y/n)? y
Adjusting /etc/ppp/pppoe.conf
Adjusting /etc/resolv.conf
(But first backing it up to /etc/resolv.conf-bak)
Adjusting /etc/ppp/pap-secrets and /etc/ppp/chap-secrets
(But first backing it up to /etc/ppp/pap-secrets-bak)
(But first backing it up to /etc/ppp/chap-secrets-bak)

Congratulations, it should be all set up!

Type 'adsl-start' to bring up your ADSL link and 'adsl-stop' to bring
it down. Type 'adsl-status' to see the link status.
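For reference, adsl-setup stores these answers in /etc/ppp/pppoe.conf, which is a shell-style variable file. A rough sketch of the entries corresponding to the answers above (the variable names come from rp-pppoe 3.5; the real file carries many more settings, so treat this as illustrative only):

```shell
# Fragment of /etc/ppp/pppoe.conf as written by adsl-setup (illustrative)
ETH='eth0'                # Ethernet interface connected to the ADSL modem
USER='test@dataone'       # PPPoE user name entered during adsl-setup
DEMAND=no                 # stay up continuously, no dial-on-demand
FIREWALL=MASQUERADE       # firewall choice 2 from the setup dialogue
```

If you ever need to change one of these values, editing this file by hand is equivalent to re-running adsl-setup.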

Internet Connection

Now, if you see this Congrats! message, then you are all set & done. You can type:

 # adsl-start
 .. Connected!

That's it. The BSNL broadband setup has been installed & configured.

At this point, you can go through the man pages of appropriate adsl-* commands for details.

Hope this article helps you out in configuring the internet on Linux, especially for Red Hat users. I will soon post an article on configuring network interfaces on Linux and other related topics.


Stepper motor driver GNU/Linux 2.6 kernel

I started work on a project related to a stepper motor driver in Linux. I found one tutorial at the Linux Gazette online magazine, but to my surprise the driver was written for 2.4.x kernels. I soon realised that I needed to do quite a bit of work to make that driver run on a 2.6.x kernel. Then, again to my surprise, I found an excellent tutorial at tldp explaining both 2.6.x and 2.4.x kernel module programming. I went through those tutorials and ported the driver to the 2.6.x kernel.
I used Debian 3.1 sarge with a 2.6.8 kernel. I started off with some basic differences between 2.4.x and 2.6.x: the coding conventions have changed a lot, and these days module programmers prefer macros for most things; there are even macros for defining the init and exit functions. I learned those things and finally came up with code which I suppose will work on most of the 2.6.x series.
Here I present the code:

#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/module.h>
#include <linux/init.h>
#include <asm/io.h>
#include <asm/uaccess.h>

#define LPT_BASE 0x378
#define DEVICE_NAME "Stepper"
#define DRIVER_DESC "*Stepper Motor Driver*"

static int Major;
static int Device_Open = 0;
static int i, j, k = 0;
/* Step patterns indexed as pattern[mode][phase][step]; the initializer
 * was lost from this post, so fill in your own motor's sequences. */
static int pattern[2][4][8];

void step(void)
{
	/* Log the current pattern, put it on the parallel port and
	 * advance to the next step (the outb() and index advance are
	 * my reading of the original, partially garbled body). */
	printk(KERN_INFO "%d\n", pattern[i][j][k]);
	outb(pattern[i][j][k], LPT_BASE);
	k = (k + 1) % 8;
}

static int stepper_open(struct inode *inode, struct file *file)
{
	if (Device_Open)
		return -EBUSY;
	Device_Open++;
	printk(KERN_INFO "Stepper Opened\n");
	return 0;
}

static int stepper_release(struct inode *inode, struct file *file)
{
	printk(KERN_INFO "Stepper Released\n");
	Device_Open--;
	return 0;
}

static ssize_t stepper_write(struct file *file, const char *buffer,
			     size_t len, loff_t *off)
{
	char data;

	if (get_user(data, buffer))
		return -EFAULT;

	/* The i/j assignments below are assumed mode/direction
	 * selections; the original listing did not preserve them. */
	switch (data) {
	case 'h':	/* Half Step */
		i = 1;
		printk(KERN_INFO "Half-Step mode initialized\n");
		break;
	case 'f':	/* Full Step */
		i = 0;
		printk(KERN_INFO "Full-Step mode initialized\n");
		break;
	case 'F':	/* Forward */
		j = 0;
		step();
		break;
	case 'R':	/* Reverse */
		j = 1;
		step();
		break;
	default:	/* Invalid */
		printk(KERN_INFO "Invalid argument\n");
		return -EINVAL;
	}
	return len;
}

struct file_operations fops = {
	.owner   = THIS_MODULE,
	.write   = stepper_write,
	.open    = stepper_open,
	.release = stepper_release,
};

static int __init entry(void)
{
	/* Passing 0 asks the kernel for a dynamically allocated major. */
	Major = register_chrdev(0, DEVICE_NAME, &fops);
	if (Major < 0) {
		printk(KERN_INFO "Cannot get major number for the device %s\n", DEVICE_NAME);
		return Major;
	}
	printk(KERN_INFO "'mknod /dev/%s c %d 0'\n", DEVICE_NAME, Major);
	return 0;
}

static void __exit bye(void)
{
	int ret = unregister_chrdev(Major, DEVICE_NAME);
	if (ret < 0)
		printk(KERN_INFO "Error in un-registering stepper: %d\n", ret);
	else
		printk(KERN_INFO "Stepper un-registered.\n");
}

module_init(entry);
module_exit(bye);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION(DRIVER_DESC);

The messages logged by this driver go to the /var/log/messages file. Anytime we want to see what the driver does or what it outputs we would have to open the file and scroll down to the recent entries. This is made easier by issuing the following command:

tail -f /var/log/messages

As the super user we now need to execute the command below on the terminal. Here the major number is the number allocated by the kernel when the module gets inserted; it is assigned dynamically. The minor number is mostly 0 (in this case).
In my case I had to execute the following command:

mknod /dev/Stepper c 254 0
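Since the major number is assigned dynamically, it is safer to read it back from /proc/devices than to hard-code 254. A small sketch (the helper name is mine; "Stepper" is the DEVICE_NAME the driver registers):

```shell
# Extract the major number that /proc/devices reports for a named
# character device. The optional second argument lets us point the
# lookup at a different file, which is handy for testing.
get_major() {
    awk -v name="$1" '$2 == name { print $1 }' "${2:-/proc/devices}"
}

# Usage (as root): mknod /dev/Stepper c "$(get_major Stepper)" 0
```

This way the mknod line keeps working even if the kernel hands out a different major number on the next boot.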

This creates a device file in the /dev directory, and now if we want to communicate with the motor all we need to do is write data to this file. That can be done by simply using:

echo “h” > /dev/Stepper
echo “F” > /dev/Stepper
(NOTE: What happens when these commands execute can be seen with tail -f /var/log/messages)

Here the data written to the stepper device is any character in the set {h, f, F, R}. The first two initialize the motor in half-step or full-step mode, and the last two actually run the motor in the forward or reverse direction.

In the next article I will explain the user interface program I created to handle this motor. I used the power of GTK+ in creating that interface.

There may be many points I have left out of this article regarding the internal working of the program. For those I assume you have gone through the tutorials mentioned earlier.


How tough is the RHCE exam ?

The toughness of a certification exam can be judged by many parameters, but the most important among them are the course content, the type of examination (practical or theory) and the industry value of the certificate.

As I am currently studying for the RHCE, I created this blog to get the views of students and sysadmins who are taking the RHCE exam or who have taken it earlier. What are your views about the exam and its toughness for a newbie to the Linux operating system?

Please express your views and share them with other RHCE aspirants.