Thursday, July 22, 2010

First impression of Puppet

So I've been using cfengine since around 2001, with much success. It's been a huge part of why my career in web operations has been successful, as a matter of fact. There have been a number of things lacking in cfengine 2.x, many of which seem to be addressed in version 3.x, but I'm not sure that I should just blindly charge forward with a migration to cfengine 3.

With that in mind, I've decided to try out a couple of alternatives - beginning with Puppet.

Thursday morning I was at work at 8, and nobody else really shows up until 9, so I figured I'd see if this old SA can get up and running with puppet in under an hour on Debian Lenny. Here's how it went:

8 am: looking at puppet downloads page, wondering if I should see how hard it is to build from source, quickly decide against it.

9 am: using apt, I have puppet and puppetmaster with all dependencies installed and running, the sample config fixed to avoid the "Cannot access mount[plugins]" error, and sudo file perms set to be fixed automatically. The cert for the puppet client is signed, and I'm discovering how cool it is to have a human-readable cached config stored at /var/lib/puppet/state/localconfig.yaml. The "Introduction to Puppet" page was helpful.
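If you hit that same "Cannot access mount[plugins]" error, the usual cure seems to be declaring an explicit [plugins] mount in /etc/puppet/fileserver.conf. Something along these lines - the allow pattern below is just an illustration, not my actual config:

```
# /etc/puppet/fileserver.conf
# Example [plugins] mount to silence "Cannot access mount[plugins]".
# The allow pattern is only an illustration - restrict it to your
# own network or hostnames.
[plugins]
    allow 192.168.0.0/24
```

Signing the client cert is then a one-liner on the puppetmaster: `puppetca --list` shows waiting requests, and `puppetca --sign client.example.com` (hostname is an example) signs one.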

Then I was off to a meeting.

11 am: out of meeting. Decide to walk all the way through "Getting Started with a Simple Puppet Pattern" and "A More Advanced Puppet Pattern" so that my configs are set up in a sane way. I'm rather impressed by these docs; they're useful, concise, and correct.

By lunch, I have a centrally managed sudoers file being pulled via a "sudo" module from the puppet fileserver.
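For flavor, here's roughly what that module boils down to. This is a minimal sketch following the pattern from those docs, not my exact manifests - the modes and layout are what I'd expect a similar setup to need:

```
# modules/sudo/manifests/init.pp - a minimal sketch of a sudo module
# in the style of the Puppet getting-started docs.
class sudo {
    package { "sudo": ensure => installed }

    # Pull the centrally managed sudoers file from the puppetmaster's
    # fileserver and keep its perms sane. Older Puppet releases may
    # want the shorter "puppet:///sudo/sudoers" source URL instead.
    file { "/etc/sudoers":
        owner   => "root",
        group   => "root",
        mode    => "440",
        source  => "puppet:///modules/sudo/sudoers",
        require => Package["sudo"],
    }
}
```

With the actual sudoers file dropped in modules/sudo/files/sudoers, a node picks this up with a plain `include sudo` in site.pp.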

My initial impression of Puppet is favorable. The idea behind modules - splitting configuration into self-contained pieces you can share - is a good one. The human-readable cache is nice. I still have a lot to learn about using it, but so far the single best thing is that the docs are very beginner-friendly.

I'm going to take this further and figure out how to make sure a Debian package is installed, and have Puppet install it when missing. Wish me luck!
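From skimming the type reference, that next step looks like it should be a near one-liner - something like this, with "ntp" standing in for whatever package I actually pick:

```
# Make sure a Debian package is installed; on Debian, Puppet drives
# apt/dpkg for you. "ntp" is just a stand-in example package.
package { "ntp":
    ensure => installed,
}
```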

Tuesday, July 20, 2010

What devops means to me

This is meant to be a short post with my quick thoughts on devops - it shares a title with a really good recent post by John Willis at http://opscode.com/blog/2010/07/16/what-devops-means-to-me/.

We have a long history in UNIX/Linux system administration of automating tasks. Ready access to tools such as cron and shell scripts made repetitive work simple to automate. Experienced sysadmins tended to have a large ~/bin directory with lots of scripts that did increasingly complex things.

Over time this collection of shell scripts was seen as a problem, and tools such as cfengine started to gather fans and supporters. Infrastructures.org was developed as a way to share hard-won experience with automation across an entire infrastructure. Now with the rapid provisioning of entire infrastructures in the cloud, we have a number of quality tools to choose from: chef, puppet, cfengine, bcfg2, and more.

What do all these tools give us? Over time, my mental concept has changed a bit...

Initially - cfengine allowed me to stop treating my site as a collection of standalone hosts and start treating it as a whole. I could configure a new host as a DNS server, and configure all other hosts to immediately start using it. This was a huge win, and saved lots of time. When you got down to it, though, I wasn't doing things differently. I simply used cfengine to roll out config files and manage processes, but in-house code releases were still done using in-house tools, and generally via painful, manual procedures.

Now, these tools mean something different to me - they allow me to come up with new ways to interact with in-house developers. I can give them new, automated ways to deploy to a staging environment. I can have workflows that allow the deployments to go to production in a robust, repeatable way - one that's tested via the continuous deployments to a staging/QA area.

In short, I can stop thinking of automation merely as a way to do the same things better (edit files, manage processes), and start thinking of it as a way to improve deployment processes and improve the lives of our entire technical organization at work.

It's fitting that we have a new term for this. Though I see it as an evolutionary step in system administration, it can and does have a revolutionary effect on operations at many sites.