I have been writing and deploying applications for the past 7 years. I wish I had friends to set up and run the servers for me, but I wasn't that lucky, so I always had to do it myself. It took about a year for me to really start enjoying server management. I taught myself the art of installing and configuring applications the hard way. When I started working at Pinterest I took care of devops for about 2 years. After those 2 good years I decided to never own or maintain a server again. They demand constant attention, and it takes a lot of automation to make your life easier.

In 2011 Marty Weiner would ask me for a MySQL server, and I would boot an EC2 instance, paste a huge list of bash commands into a terminal, and get back to my work. It took about 20-25 minutes to bake a server.

    $ apt-get install build-essential nginx
    $ echo y | apt-get install mysql-server

I used to update my set of commands once in a while, whenever there was a distro update. Life was OK, but not great. After Ryan Park joined Pinterest and took over devops from me, life got a lot better. During 2012 I had to learn Puppet to make configuration changes. I went to a few meetups to really understand what devops meant and why it mattered, although my friend Dave Fowler did not really see the need for Puppet or Chef. I hated Puppet for the obvious reasons:

  • Had to learn a new DSL
  • The Puppet master is down!!
  • There is a Puppet certificate error
  • You can't boot more than 5 boxes at a time; the Puppet master can't handle it.

Although most of those problems only show up at a certain scale, I still did not enjoy using it. I want to update this memcache cluster immediately. Oh, just send a pull request on the Puppet repo. Merge the change. Wait an hour, because our Puppet clients pull changes at roughly hourly intervals. What if I force all the memcache nodes to pull the latest Puppet changes? Oh, the Puppet master would go down. Alright, I am never going to use Puppet for any of my personal projects.

Around this time I heard a lot about Chef and chef-solo. Chef-solo sounded perfect: I could use it for my small personal projects. Although I still love my bash scripts, Chef looked cool, so I gave it a shot. Using chef-solo was quite a pain until I found the amazing littlechef. Since I am from the Python land, I loved the fact that littlechef uses Fabric to SSH into the node and run chef-solo. I used it for a year, except that the Chef DSL changed quite a bit and I had a hard time keeping up with it. At the end of the day it's Ruby, and that's not my cup of tea.



After hearing a lot about how Ansible was gaining momentum, I decided to give it a try. Voila. Instant love. No master node. No dedicated Ansible process on the node. It's a purely push-based model where you push changes to the nodes via SSH. SSH is already secure, and I use it every day. So I followed a little tutorial; it took about 17 minutes to install nginx and MySQL via Ansible. I wrote the playbook according to my configuration needs.


    $ [sudo] pip install ansible   # on your local machine
    $ mkdir playbook  # you can create this directory inside your application too
    $ vim hosts
    web1 ansible_ssh_host=42.28.11X.XX7

The hosts file contains all the hosts that need to be configured. You can combine hosts into groups. I have one webserver where I intend to install nginx, MySQL, and my Python application to serve my API. For testing purposes I suggest running a Vagrant VM and using Ansible to configure it.
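As a sketch, a slightly larger inventory with groups might look like this (the host names and IPs here are made up for illustration):

    [webservers]
    web1 ansible_ssh_host=192.168.33.10

    [dbservers]
    db1 ansible_ssh_host=192.168.33.11

    # groups of groups are declared with :children
    [production:children]
    webservers
    dbservers

You can then target any group name instead of a single host in your commands and playbooks.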

    $ ansible all -i hosts -m ping [-u vagrant --private-key=aws.key]
    web1 | success >> {
        "changed": false,
        "ping": "pong"
    }

The above command will fail if Ansible cannot SSH into your node, so make sure you can SSH into the machine using a public key or password. A few Ansible options you need to know:

  • all - run against all defined servers from the inventory file
  • -m ping - use the “ping” module, which simply checks that Ansible can reach and run on the host, and returns the result
  • -s - use “sudo” to run the commands
  • -k - ask for a password rather than using key-based authentication
  • -u vagrant - log into servers as the user vagrant
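Putting a few of those together, an ad-hoc command against the Vagrant VM might look like this (assuming the hosts file and vagrant user from above):

    $ ansible webservers -i hosts -m ping -u vagrant -k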

My first task on a clean Ubuntu server is to update the apt package cache and upgrade the distro with all the latest patches. For this I create a new Ansible playbook, update.yml.

    - hosts: webservers
      sudo: true
      tasks:
        - name: update the apt cache
          apt: update_cache=yes
        - name: upgrade the server
          apt: upgrade=full

This playbook tells Ansible to run on the webservers host group using sudo. The apt module runs the apt-get commands to update and upgrade the server. Running this playbook is as easy as:

    $ ansible-playbook update.yml -i hosts -u ubuntu --private-key=aws_key.pem

Now create another playbook to install all the packages we need, like nginx, MySQL, and pip. You can make Ansible install nginx using the apt module, but I need to configure nginx to point to my Python server, so I used a role from Ansible Galaxy. Here is a complete nginx implementation from Ansible Galaxy. To install it you can run:

    $ mkdir roles
    $ ansible-galaxy install jdauphant.nginx -p roles

This will download the role into your roles directory. You can attach the role to any host and that will install nginx on it. Now to configure nginx to proxy to our Python application; I run my Python app on port 8000.

    $ vim deploy.yml
    - hosts: webservers
      sudo: true
      roles:
        - role: jdauphant.nginx
          nginx_events_params:
            - worker_connections 1024
            - use epoll
            - multi_accept on
          nginx_http_params:
            - proxy_set_header X-Real-IP  $remote_addr
            - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for
            - upstream myapp { server 127.0.0.1:8000; }
            - sendfile on
            - access_log /var/log/nginx/access.log
          nginx_sites:
            myapp:
              - listen 80
              - server_name myapp.com
              - location / { proxy_pass  http://myapp/; include /etc/nginx/proxy_params; }

The configuration is pretty self-explanatory. jdauphant.nginx is a well-built role that accepts variables and spits out a config file. In the case of nginx I specified the number of worker connections and the proxy settings. Under nginx_sites I configured my domain and proxied its requests to the upstream myapp, which runs on port 8000.

Similarly, I used the MySQL role from ANXS.

    - role: ANXS.mysql
      mysql_root_password: 'myrootpassword'
      mysql_databases:
        - name: appdb
      mysql_users:
        - name: appuser
          pass: apppwd
          priv: "appdb.*:ALL"
          host: "%"
      monit_protection: false
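One subtlety with this role: setting the root password is not naturally idempotent, so it is usually guarded so re-runs are safe. A rough sketch of such a guard (the task names here are illustrative, not the actual ANXS.mysql code):

    - name: check whether the root password was already set
      stat: path=/root/.my.cnf
      register: mycnf

    - name: set the mysql root password
      mysql_user: name=root password={{ mysql_root_password }}
      when: not mycnf.stat.exists

The stat result is registered and the password task only runs when ~/.my.cnf does not exist yet.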

I modified the MySQL role to install MySQL 5.6, create a root user with a given password, and create the application database. The best part of Ansible playbooks is that they are idempotent, unless you write them in a way that isn't. Installing nginx one or more times should not change anything on the server. Setting the MySQL password is slightly tricky as it's not an idempotent operation, so when I set a password I check for the existence of ~/.my.cnf to make sure the root password is already set. Now I need to install packages for my Python application. Create a new role app in the roles directory.

    $ mkdir -p roles/app/tasks
    $ vim roles/app/tasks/main.yml
    - name: install common packages needed for python application development
      action: apt pkg={{ item }} state=installed
      with_items:
        - libmysqlclient-dev
        - python-setuptools
        - python-mysqldb
        - git-core
    - name: install pip
      action: easy_install name=pip

Now that we have installed Python, MySQL, and pip, we can clone the application code and install its Python dependencies.

    - name: Creates directory
      file: path=/mnt/app state=directory owner=ubuntu mode=0755

    - name: Setup the Git repo
      sudo: false
      git: repo=git@github.com:yashh/app.git dest=/mnt/app accept_hostkey=yes

    - name: Install requirements
      pip: requirements=/mnt/app/requirements.txt
      notify: restart supervisord
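The notify line assumes a handler named restart supervisord exists somewhere in the role. A minimal sketch of roles/app/handlers/main.yml might look like this (assuming supervisor was installed as a system service; adjust the service name to your setup):

    - name: restart supervisord
      service: name=supervisor state=restarted

Handlers run once at the end of the play, only when a task that notifies them reports a change.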

I created a directory with the file module in one line. How easy is that? One line to clone the git repository and one line to install the pip requirements on my server. Now copy your public key so that you don't have to use the AWS pem file every time.

    - authorized_key: user=ubuntu key="{{ lookup('file', '/Users/username/.ssh/id_rsa.pub') }}"

Voila. I love Ansible. I could go on and on about it, but seriously, give it a shot yourself. There is a facts module which discloses everything about the host, like the number of CPUs and the available memory. I hard-coded my MySQL password in the playbook, but you can use Ansible Vault to encrypt sensitive variable files.

    $ ansible-vault create vars/main.yml

Note that Vault will not encrypt your plain files and templates.

    $ ansible-playbook --ask-vault-pass -i .......
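To actually use the encrypted variables, reference the vault file from a playbook; a sketch along these lines (the vault_mysql_root_password variable name is made up for illustration):

    - hosts: webservers
      sudo: true
      vars_files:
        - vars/main.yml   # created with ansible-vault, decrypted at run time
      roles:
        - role: ANXS.mysql
          mysql_root_password: "{{ vault_mysql_root_password }}"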

I hope you’ll like Ansible as well.