Sunday, 21 December 2014

Docker, Rackspace On Metal and Core OS


This post covers Rackspace OnMetal, gives a short introduction to Docker, reviews CoreOS and Fleet, and demonstrates how you could use all of them together to build a multi-tier application.

The associated presentation can be found at http://bit.ly/rs-onmetal-docker and the code at https://github.com/srirajan/onmetal-docker.

Before you start

You will need the following:
  •  A Rackspace cloud account. Get a free tier Rackspace developer account - https://developer.rackspace.com/signup/
  •  If you don't have a Rackspace account, you can still follow along on your own servers. Some of the examples are specific to Rackspace cloud servers, but the ones around Docker, CoreOS and Fleet will work on any server.
  •  Ensure you have novaclient installed. Refer to http://www.rackspace.com/knowledge_center/article/installing-python-novaclient-on-linux-and-mac-os for more details.

Rackspace On Metal


OnMetal provides single-tenant bare-metal servers and you can read more about it here:
http://www.rackspace.com/cloud/servers/onmetal

As of December 2014, OnMetal is only available in the US region IAD, so you need a Rackspace US cloud account.

We will have two groups of servers.
  • First we will build a single Ubuntu 14.04 LTS (Trusty Tahr) OnMetal server.
key=sri-mb

nova boot --flavor onmetal-compute1 \
  --image 6cbfd76c-644c-4e28-b3bf-5a6c2a879a4a \
  --key-name $key \
  --poll play01
  • Now prepare the CoreOS cluster. First, get a discovery URL for your cluster. More on this at https://coreos.com/docs/cluster-management/setup/cluster-discovery/
curl -w "\n" https://discovery.etcd.io/new
  • Edit the cloud-init file named coreos-cluster/cloudinit.yaml and replace the discovery token URL with the one generated above.
  • If you are doing this on OnMetal, use coreos-cluster/cloudinit-onmetal.yaml as a workaround for https://github.com/coreos/coreos-cloudinit/issues/195. Eventually the standard file should work on both flavors.
  • Decide which flavor you are using
#On metal
flavor=onmetal-compute1
image=75a86b9d-e016-4cb7-8532-9e9b9b5fc58b
key=sri-mb
cloudinit=cloudinit-onmetal.yaml


# Performance
flavor=performance1-1
image=749dc22a-9563-4628-b0d1-f84ced8c7b7a
key=sri-mb
cloudinit=cloudinit.yaml


  • Boot 4 servers for the cluster
nova boot --flavor $flavor --image $image  --key-name $key \
--config-drive true --user-data $cloudinit core01

nova boot --flavor $flavor --image $image  --key-name $key \
--config-drive true --user-data $cloudinit core02

nova boot --flavor $flavor --image $image  --key-name $key \
--config-drive true --user-data $cloudinit core03

nova boot --flavor $flavor --image $image  --key-name $key \
--config-drive true --user-data $cloudinit core04
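
  •  Optionally, wait for the builds to finish before moving on. nova list shows the build status; once a node is ACTIVE you can log in as the core user (the server IP placeholder below is whatever nova reports for your node).
nova list

ssh core@<server IP>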


Docker

Now let's play a little with Docker.

  • Install docker on Ubuntu (play01 above)
apt-get update
apt-get install -y docker.io screen git vim
update-rc.d docker.io  defaults
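
  •  A quick sanity check that the Docker daemon is up before going further; docker info talks to the daemon, so it will error out if the service isn't running:
docker version

docker info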

  • Pull our first image and review the listing. You should have three CentOS images, one for each version.
docker pull centos

docker images
  •  Run it. '-i' is for interactive & '-t' allocates a pseudo-tty.
docker run -i -t centos /bin/bash


  • This runs the default CentOS image. Poke around to see what things look like from inside the container.
cat /etc/redhat-release

ps

ls

whoami

cat /etc/hosts

exit


  •  List containers. The -a flag includes ones that have exited.
docker ps -a


  •  Run a different release
docker run -i -t centos:centos6 /bin/bash

cat /etc/redhat-release

exit


  •  Back on the host (play01), look at the networking. Docker uses a combination of Linux bridges and iptables to provide networking inside the container, communication between containers, and communication with the outside world.
ifconfig docker0

iptables -nvL

iptables -nvL -t nat


  •  Let's run something more than bash.

docker run -d  centos python -m SimpleHTTPServer 8888


  •  And check a few things.
docker ps

docker top <container UID>

docker inspect <container UID> |less

curl  http://<container IP>:8888
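
  •  Rather than reading the IP out of the full docker inspect output, you can pull just that field with a Go template and feed it straight to curl:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container UID>

curl http://$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container UID>):8888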


  •  Let's do some more. Clone the repo from git
apt-get install -y git

git clone https://github.com/srirajan/onmetal-docker


  •  Review the Dockerfile. This image installs Nginx and PHP and loads a sample PHP file.
cd /root/onmetal-docker/ubuntu_phpapp

cat Dockerfile

docker build -t="srirajan/ubuntu_phpapp" .

docker images


  •  Run the container and map the port 80 on the container to port 8082 on the host. You can curl the URL to see the site.
docker run -d -p 8082:80 "srirajan/ubuntu_phpapp"

docker top <container UID>

docker logs <container UID>

docker inspect <container UID>



#curl the container IP

curl http://<container IP>/home.php



#curl the public IP and port

curl http://<IP>:8082/home.php
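
  •  docker port is a quicker way to confirm where a published container port landed on the host:
docker port <container UID> 80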


  •  Run docker diff to see which files have changed in the container's filesystem since it started.
docker diff <container UID>


  •  Now let's look at linking containers. Start a mysql container and map the mysql port on the container to the host port 3306
docker run --name db -e MYSQL_ROOT_PASSWORD=dh47dk504dk44dd -d -p 3306:3306 mysql

docker logs db


  •  Build the helper container
cd /root/onmetal-docker/dbhelper

docker build -t="srirajan/dbhelper" .


  •  Start a helper container as a linked container and check its environment variables. Linking allows information to be shared across containers. You can read more about linking at https://docs.docker.com/userguide/dockerlinks/
docker run --name dbhelper --link db:db srirajan/dbhelper env
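
  •  The interesting part of that output is the set of DB_* variables injected by the link (they follow Docker's <alias>_PORT_* naming convention). A throwaway container makes them easy to isolate:
docker run --rm --link db:db srirajan/dbhelper env | grep '^DB_'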


  •  Now if you run the actual container as it is, it will log in to the MySQL instance on the db container and install the world sample database.
docker rm dbhelper

docker run -d --name dbhelper --link db:db srirajan/dbhelper  /usr/local/bin/configuredb.sh

docker logs dbhelper
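
  •  If you want to double-check the import, you can run a throwaway MySQL client against the linked db container. This assumes the official mysql image and the root password used above:
docker run --rm --link db:db --entrypoint mysql mysql \
  -h db -uroot -pdh47dk504dk44dd -e 'SELECT COUNT(*) FROM world.city;'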


CoreOS, Fleet & Docker

  • In the steps above, we created 4 CoreOS machines with cloud-init. Now let's play with CoreOS, etcd and Fleet. Fleet is a distributed cluster-management tool. It relies on etcd, a distributed key-value store, for its operation, and it works with systemd unit files, behaving like a distributed systemd in a multi-node setup. The following should list all 4 machines in the cluster.
fleetctl list-machines
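
  •  If the list comes back empty, the usual culprit is etcd failing to form a cluster (a reused discovery token, for example). Checking the units on one of the nodes is a reasonable first step:
systemctl status etcd fleet

journalctl -u etcd --no-pager | tail -n 50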


  •  Pull our repo on one of the nodes.
git clone https://github.com/srirajan/onmetal-docker


  •  Review and load all the services. We will go into details of each service in following steps.
cd /home/core/onmetal-docker/fleet-services

fleetctl submit *.service


fleetctl list-unit-files

UNIT       HASH DSTATE  STATE  TARGET

db.service      fbf415a launched launched -

dbhelper.service 747c778 inactive inactive -

lbhelper.service 0592528 inactive inactive -

mondb.service  a4f50cc inactive inactive -

monweb@.service  d5ed242 inactive inactive -

web@.service  0ac8be5 inactive inactive -



  •  Run the db service.
fleetctl start db.service

Unit db.service launched on 2aa4e35a.../10.208.201.253



fleetctl list-units

UNIT  MACHINE    ACTIVE SUB

db.service 2aa4e35a.../10.208.201.253 active running



  • You can also review the systemd service file. This one is a fairly simple service that runs a MySQL container on one of the hosts. Wait for this service to start before proceeding. Also note that Fleet decides which host the container runs on.
cat db.service

[Unit]

Description=DB service

Requires=etcd.service



[Service]

EnvironmentFile=/etc/environment

TimeoutStartSec=0

ExecStartPre=/usr/bin/docker pull mysql

ExecStart=/usr/bin/docker run --rm --name db -e MYSQL_ROOT_PASSWORD=dh47dk504dk44dd -p ${COREOS_PRIVATE_IPV4}:3306:3306 mysql

ExecStop=/usr/bin/docker stop db

Restart=always


  •  One oddity with fleet is that to query the status, you have to run the command on the host running the container.
fleetctl status db.service

db.service - DB service

   Loaded: loaded (/run/fleet/units/db.service; linked-runtime)

   Active: active (running) since Thu 2014-11-13 14:11:06 UTC; 8min ago

  Process: 22050 ExecStartPre=/usr/bin/docker pull mysql (code=exited, status=0/SUCCESS)

 Main PID: 22068 (docker)

   CGroup: /system.slice/db.service

           └─22068 /usr/bin/docker run --rm --name db -e MYSQL_ROOT_PASSWORD=dh47dk504dk44dd -p 10.208.201.253:3306:3306 mysql



Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: ec53a4d8-6b3e-11e4-8386-0a1bbbebe238.

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] Server hostname (bind-address): '*'; port: 3306

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] IPv6 is available.

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note]   - '::' resolves to '::';

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] Server socket created on IP: '::'.

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] Event Scheduler: Loaded 0 events

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] Execution of init_file '/tmp/mysql-first-time.sql' started.

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] Execution of init_file '/tmp/mysql-first-time.sql' ended.

Nov 13 14:11:12 core04 docker[22068]: 2014-11-13 14:11:12 1 [Note] mysqld: ready for connections.

Nov 13 14:11:12 core04 docker[22068]: Version: '5.6.21'  socket: '/tmp/mysql.sock'  port: 3306  MySQL Community Server (GPL)


  •  dbhelper.service uses container linking to install the MySQL world database and configure some users for our application; its systemd configuration tells Fleet to run it on the same host as db.service. mondb.service is not a container but uses systemd to run a script that updates etcd with information about the db service. In this case we are just pushing private IPs into etcd, but the same pattern can be leveraged for other things as well (a rough sketch of it follows below).
fleetctl start dbhelper.service

Unit dbhelper.service launched on 2aa4e35a.../10.208.201.253



fleetctl start mondb.service

Unit mondb.service launched on 2aa4e35a.../10.208.201.253
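
  •  For reference, the sort of loop mondb runs looks roughly like the sketch below. The key name and timings here are illustrative, not copied from the repo; the real script is in the onmetal-docker repository.
# Illustrative sketch only
while true; do
  # publish the db host's private IP with a TTL so stale entries expire
  etcdctl set /services/db/private_ipv4_addr "${COREOS_PRIVATE_IPV4}" --ttl 60
  sleep 45
done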


  •  Run fleetctl again to see where our containers are deployed. Because of our systemd definition file, fleet will ensure they run on the same host.
fleetctl list-units

UNIT   MACHINE    ACTIVE SUB

db.service  2aa4e35a.../10.208.201.253 active running

dbhelper.service 2aa4e35a.../10.208.201.253 active running

mondb.service  2aa4e35a.../10.208.201.253 active running



  •  You can also log in to the host running the dbhelper service and review the journal (logs) for the service.
fleetctl journal dbhelper

-- Logs begin at Mon 2014-11-10 21:00:41 UTC, end at Thu 2014-11-13 14:23:43 UTC. --

Nov 11 05:22:51 core04.novalocal systemd[1]: Stopped DB Helperservice.

Nov 11 05:22:51 core04.novalocal systemd[1]: Unit dbhelper.service entered failed state.

-- Reboot --

Nov 13 14:22:43 core04 systemd[1]: Starting DB Helperservice...

Nov 13 14:22:43 core04 docker[22239]: Pulling repository srirajan/dbhelper

Nov 13 14:23:00 core04 systemd[1]: Started DB Helperservice.

Nov 13 14:23:06 core04 docker[22272]: Creating the world database

Nov 13 14:23:13 core04 docker[22272]: Creating application user

Nov 13 14:23:13 core04 docker[22272]: Counting rows in world.city

Nov 13 14:23:13 core04 docker[22272]: COUNT(*)

Nov 13 14:23:13 core04 docker[22272]: 4079


  •  Now let's move on to the web containers. Start one container from the web service. In systemd, a service with @ in its name is a template service, and you can append an instance name to start as many copies as you need. The first container will take a little time because it has to download the image.
fleetctl start web@01.service

Unit web@01.service launched on 6847f4f7.../10.208.201.226



fleetctl list-units

UNIT   MACHINE    ACTIVE  SUB

db.service  2aa4e35a.../10.208.201.253 active  running

dbhelper.service 2aa4e35a.../10.208.201.253 active  running

mondb.service  2aa4e35a.../10.208.201.253 active  running

web@01.service  6847f4f7.../10.208.201.226 active running


  •  Start 9 more web containers. Fleet will distribute them across the different hosts.
fleetctl start web@{02..10}.service

Unit web@04.service launched on ee5398cf.../10.208.201.250

Unit web@10.service launched on 2aa4e35a.../10.208.201.253

Unit web@07.service launched on 6847f4f7.../10.208.201.226

Unit web@09.service launched on ee5398cf.../10.208.201.250

Unit web@03.service launched on 6847f4f7.../10.208.201.226

Unit web@05.service launched on c3f52cb3.../10.208.201.234

Unit web@02.service launched on ee5398cf.../10.208.201.250

Unit web@06.service launched on c3f52cb3.../10.208.201.234

Unit web@08.service launched on c3f52cb3.../10.208.201.234



fleetctl list-units

UNIT   MACHINE    ACTIVE SUB

db.service  2aa4e35a.../10.208.201.253 active running

dbhelper.service 2aa4e35a.../10.208.201.253 active running

mondb.service  2aa4e35a.../10.208.201.253 active running

web@01.service  6847f4f7.../10.208.201.226 active running

web@02.service  ee5398cf.../10.208.201.250 active running

web@03.service  6847f4f7.../10.208.201.226 active running

web@04.service  ee5398cf.../10.208.201.250 active running

web@05.service  c3f52cb3.../10.208.201.234 active running

web@06.service  c3f52cb3.../10.208.201.234 active running

web@07.service  6847f4f7.../10.208.201.226 active running

web@08.service  c3f52cb3.../10.208.201.234 active running

web@09.service  ee5398cf.../10.208.201.250 active running

web@10.service  2aa4e35a.../10.208.201.253 active running



  •  Start the monweb services. These are similar to the mondb.service and update etcd with different values from the running containers.
fleetctl start monweb@{01..10}.service

Unit monweb@04.service launched on ee5398cf.../10.208.201.250

Unit monweb@02.service launched on ee5398cf.../10.208.201.250

Unit monweb@01.service launched on 6847f4f7.../10.208.201.226

Unit monweb@05.service launched on c3f52cb3.../10.208.201.234

Unit monweb@03.service launched on 6847f4f7.../10.208.201.226

Unit monweb@09.service launched on ee5398cf.../10.208.201.250

Unit monweb@07.service launched on 6847f4f7.../10.208.201.226

Unit monweb@10.service launched on 2aa4e35a.../10.208.201.253

Unit monweb@08.service launched on c3f52cb3.../10.208.201.234

Unit monweb@06.service launched on c3f52cb3.../10.208.201.234



fleetctl list-units

UNIT   MACHINE    ACTIVE SUB

db.service  2aa4e35a.../10.208.201.253 active running

dbhelper.service 2aa4e35a.../10.208.201.253 active running

mondb.service  2aa4e35a.../10.208.201.253 active running

monweb@01.service 6847f4f7.../10.208.201.226 active running

monweb@02.service ee5398cf.../10.208.201.250 active running

monweb@03.service 6847f4f7.../10.208.201.226 active running

monweb@04.service ee5398cf.../10.208.201.250 active running

monweb@05.service c3f52cb3.../10.208.201.234 active running

monweb@06.service c3f52cb3.../10.208.201.234 active running

monweb@07.service 6847f4f7.../10.208.201.226 active running

monweb@08.service c3f52cb3.../10.208.201.234 active running

monweb@09.service ee5398cf.../10.208.201.250 active running

monweb@10.service 2aa4e35a.../10.208.201.253 active running

web@01.service  6847f4f7.../10.208.201.226 active running

web@02.service  ee5398cf.../10.208.201.250 active running

web@03.service  6847f4f7.../10.208.201.226 active running

web@04.service  ee5398cf.../10.208.201.250 active running

web@05.service  c3f52cb3.../10.208.201.234 active running

web@06.service  c3f52cb3.../10.208.201.234 active running

web@07.service  6847f4f7.../10.208.201.226 active running

web@08.service  c3f52cb3.../10.208.201.234 active running

web@09.service  ee5398cf.../10.208.201.250 active running

web@10.service  2aa4e35a.../10.208.201.253 active running



  •  Query etcd for values. This will return the unit name, host, public IP address and port of each web container.
for i in {01..10}; do  etcdctl get /services/web/web$i/unit; etcdctl get /services/web/web$i/host; etcdctl get /services/web/web$i/public_ipv4_addr; etcdctl get /services/web/web$i/port; echo "-----" ; done

monweb@01.service

core01

162.242.254.113

18001

-----

monweb@02.service

core03

162.242.255.71

18002

-----

monweb@03.service

core01

162.242.254.113

18003

-----

monweb@04.service

core03

162.242.255.71

18004

-----

monweb@05.service

core02

162.242.254.215

18005

-----

monweb@06.service

core02

162.242.254.215

18006

-----

monweb@07.service

core01

162.242.254.113

18007

-----

monweb@08.service

core02

162.242.254.215

18008

-----

monweb@09.service

core03

162.242.255.71

18009

-----

monweb@10.service

core04

162.242.255.73

18010

-----
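
  •  etcdctl can also dump the whole tree in one go, which is handy when you just want to eyeball what has been registered:
etcdctl ls --recursive /services/web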

  •  Test the site on one of the containers.

curl http://162.242.255.73:18010/home.php

<!DOCTYPE html>

<html>

<body>



<strong>There is no place like 127.0.0.1</strong><br/>Date & Time: 2014-11-13 14:29:18<br/>Container name: dca363b9af75<hr/>



</body>

</html>



  •  At this point, we have a database container and a bunch of web containers running on different hosts, and the communication between them has been established.
  • Optionally, we can run the lbhelper service, which updates a Rackspace cloud load balancer. This requires a cloud load balancer to be pre-configured, and you need to set the following values in etcd:
etcdctl set /services/rscloud/OS_USERNAME <cloud username>

etcdctl set /services/rscloud/OS_REGION <cloud region>

etcdctl set /services/rscloud/OS_PASSWORD <cloud api key>

etcdctl set /services/rscloud/OS_TENANT_NAME <cloud account no>

etcdctl set /services/rscloud/LB_NAME <cloud lb name>

etcdctl set /services/rscloud/SERVER_HEALTH_URL health.php

etcdctl set /services/rscloud/SERVER_HEALTH_DIGEST dbe72348d4e3aa87958f421e4a9592a82839f3d8
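
  • A quick way to confirm the keys landed before starting the service:
etcdctl ls /services/rscloud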


  • Now run the lbhelper service.
fleetctl start lbhelper.service

Unit lbhelper.service launched on 2aa4e35a.../10.208.201.253



fleetctl list-units

UNIT   MACHINE    ACTIVE SUB

db.service  2aa4e35a.../10.208.201.253 active running

dbhelper.service 2aa4e35a.../10.208.201.253 active running

lbhelper.service 2aa4e35a.../10.208.201.253 active running

mondb.service  2aa4e35a.../10.208.201.253 active running

monweb@01.service 6847f4f7.../10.208.201.226 active running

monweb@02.service ee5398cf.../10.208.201.250 active running

monweb@03.service 6847f4f7.../10.208.201.226 active running

monweb@04.service ee5398cf.../10.208.201.250 active running

monweb@05.service c3f52cb3.../10.208.201.234 active running

monweb@06.service c3f52cb3.../10.208.201.234 active running

monweb@07.service 6847f4f7.../10.208.201.226 active running

monweb@08.service c3f52cb3.../10.208.201.234 active running

monweb@09.service ee5398cf.../10.208.201.250 active running

monweb@10.service 2aa4e35a.../10.208.201.253 active running

web@01.service  6847f4f7.../10.208.201.226 active running

web@02.service  ee5398cf.../10.208.201.250 active running

web@03.service  6847f4f7.../10.208.201.226 active running

web@04.service  ee5398cf.../10.208.201.250 active running

web@05.service  c3f52cb3.../10.208.201.234 active running

web@06.service  c3f52cb3.../10.208.201.234 active running

web@07.service  6847f4f7.../10.208.201.226 active running

web@08.service  c3f52cb3.../10.208.201.234 active running

web@09.service  ee5398cf.../10.208.201.250 active running

web@10.service  2aa4e35a.../10.208.201.253 active running


  •  You can look at its logs. This container runs a Python script that populates the load balancer with the IP addresses and port numbers of the web containers. The script queries etcd for the information and also performs a health check to determine the status of each container.
docker logs lbhelper

[11/13/14 14:35:48][INFO]:Authenticated using rtsdemo10

[11/13/14 14:36:32][INFO]:web 01 Found. Processing server

[11/13/14 14:36:32][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:36:32][INFO]:Load balancer pool myworld found

[11/13/14 14:36:32][INFO]:Adding server to load balancer

[11/13/14 14:36:33][INFO]:web 02 Found. Processing server

[11/13/14 14:36:33][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:36:33][INFO]:Load balancer pool myworld found

[11/13/14 14:36:33][INFO]:Adding server to load balancer

[11/13/14 14:36:44][INFO]:web 03 Found. Processing server

[11/13/14 14:36:44][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:36:44][INFO]:Load balancer pool myworld found

[11/13/14 14:36:44][INFO]:Adding server to load balancer

[11/13/14 14:36:55][INFO]:web 04 Found. Processing server

[11/13/14 14:36:55][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:36:55][INFO]:Load balancer pool myworld found

[11/13/14 14:36:55][INFO]:Adding server to load balancer

[11/13/14 14:37:06][INFO]:web 05 Found. Processing server

[11/13/14 14:37:06][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:06][INFO]:Load balancer pool myworld found

[11/13/14 14:37:06][INFO]:Adding server to load balancer

[11/13/14 14:37:12][INFO]:web 06 Found. Processing server

[11/13/14 14:37:12][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:12][INFO]:Load balancer pool myworld found

[11/13/14 14:37:13][INFO]:Adding server to load balancer

[11/13/14 14:37:23][INFO]:web 07 Found. Processing server

[11/13/14 14:37:23][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:23][INFO]:Load balancer pool myworld found

[11/13/14 14:37:24][INFO]:Adding server to load balancer

[11/13/14 14:37:34][INFO]:web 08 Found. Processing server

[11/13/14 14:37:34][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:34][INFO]:Load balancer pool myworld found

[11/13/14 14:37:35][INFO]:Adding server to load balancer

[11/13/14 14:37:45][INFO]:web 09 Found. Processing server

[11/13/14 14:37:45][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:46][INFO]:Load balancer pool myworld found

[11/13/14 14:37:46][INFO]:Adding server to load balancer

[11/13/14 14:37:57][INFO]:web 10 Found. Processing server

[11/13/14 14:37:57][INFO]:Health test passed. Adding to load balancer

[11/13/14 14:37:57][INFO]:Load balancer pool myworld found

[11/13/14 14:37:57][INFO]:Adding server to load balancer

[11/13/14 14:38:08][INFO]:web 11 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 12 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 13 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 14 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 28 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 29 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 30 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 31 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 32 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 33 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 60 does not exist.Skipping...

<truncated>

[11/13/14 14:38:08][INFO]:web 97 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 98 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:web 99 does not exist.Skipping...

[11/13/14 14:38:08][INFO]:Printing summary

[11/13/14 14:38:08][INFO]:Load balancer pool myworld found

[11/13/14 14:38:08][INFO]:Nodes: 1.1.1.1 80

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.113 18001

[11/13/14 14:38:08][INFO]:Nodes: 162.242.255.71 18002

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.113 18003

[11/13/14 14:38:08][INFO]:Nodes: 162.242.255.71 18004

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.215 18005

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.215 18006

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.113 18007

[11/13/14 14:38:08][INFO]:Nodes: 162.242.254.215 18008

[11/13/14 14:38:08][INFO]:Nodes: 162.242.255.71 18009

[11/13/14 14:38:08][INFO]:Nodes: 162.242.255.73 18010

[11/13/14 14:38:08][INFO]:Sleeping 10 seconds...

  • This covers our initial exploration of Docker, CoreOS and Fleet. There is more these tools can do to help with tighter integration, but overall this combination is a good way to manage Docker containers and run serious workloads.

Misc Commands

A collection of random snippets that are useful.
  • Build without cache. This burnt me the first time: Ubuntu removes old package versions from its repos, and if a cached image layer references one of those versions, apt-get install will try to pull it and fail. Needless to say, --no-cache takes longer to build.
docker build --no-cache .
  •  Review logs of etcd and fleet
journalctl -u etcd

journalctl -u fleet



  •  Delete all containers
docker stop $(docker ps -a -q)

sleep 2

docker rm $(docker ps -a -q)



  •  Delete all images
docker rmi $(docker images -q)



  • Cleanup fleet
fleetctl destroy $(fleetctl list-units -fields=unit -no-legend)

fleetctl destroy $(fleetctl list-unit-files -fields=unit -no-legend)

sleep 5

fleetctl list-unit-files

fleetctl list-units

  • Restart Fleet
sudo systemctl restart fleet.service

Resources

  • Free Rackspace developer account - https://developer.rackspace.com/signup/
  • Core OS - https://coreos.com/docs/
  • A good overview of why Docker was created, by dotCloud founder and CTO Solomon Hykes - https://www.youtube.com/watch?v=Q5POuMHxW-0
  • Getting Started with systemd - https://coreos.com/docs/launching-containers/launching/getting-started-with-systemd/

Thursday, 8 May 2014

Chromecast your life


Chromecast is fundamentally simple and powerful. The price is amazingly low, but that does not mean it lacks functionality. To me, it simplifies the smart TV model by putting the "smart" elsewhere: in this case, your device, which could be a laptop, phone or tablet. Because of Google's weight behind it, it already has a ton of apps that support it.

So how am I using it? Simply put, in two ways.

TV
This is straightforward and works fine out of the box. Connect your Chromecast and follow the instructions on the screen.

Audio
This needed some additional hardware: an HDMI audio extractor, which pulls the audio off the HDMI signal and does not require a separate power source. You can then attach it to any speaker.

Now I have four such setups on different Chromecast devices. My speaker collection is a mix of cheap devices and some better speakers. For a couple of rooms, like the kitchen, I am using cheap Logitech Z120 stereo speakers, which work fine for simple audio. This setup does not do advanced things like playing the same audio in every room, although given that the API is open, you could write an app to do that.

My App Ecosystem
YouTube
Plex
BeyondPod
Google Play
Netflix
AllCast



Tuesday, 18 March 2014

Chef Backups

There are a few ways to back up a Chef server. Opscode has some documentation on their wiki: https://wiki.opscode.com/display/chef/Backing+Up+Chef+Server

Some of this is outdated now because Chef no longer uses CouchDB under the hood. However, there is a little gem (pun intended) called knife-backup. To put it to the test, install it first:
    gem install knife-backup

More details are at https://github.com/mdxp/knife-backup. An execution looks like this:
    knife backup export -D ./backups
    Backing up clients
    Backing up clients chef-validator
    Backing up clients chef-webui
    Backing up clients test01.example.com
    Backing up clients web01.example.com
    Backing up nodes
    ...Output Truncated...

This nicely exports all your settings, such as nodes, clients, roles, environments and cookbooks, into the backups directory. Typically your cookbooks would be version-controlled via Git or some other revision control system and you can restore them from there as well. This method allows you to completely mirror cookbooks from one Chef server to another, along with everything else, in a couple of simple commands.
    ls backups
    clients      cookbooks    data_bags    environments nodes        roles
To test your backup, spin up a new server and install Chef Server for your OS (http://www.opscode.com/chef/install). If you want to try this on an existing server, you can use the following.

  ** This will erase all your chef server data **
    sudo chef-server-ctl cleanse
    sudo chef-server-ctl reconfigure

Copy _/etc/chef-server/admin.pem_ from the new server to your local workstation. You will use this user to perform the restore. Once you have restored, you can go back to the other clients/users that you were using with the original server.
    knife backup restore -D ./backups -u admin -k <path to admin key> -s  <new server url>
This will restore everything to the new server with the exception of a few things, because knife-backup does not overwrite existing clients:

 1. The _admin_ user and its credentials.

 2. The _chef-webui_ client. This is used by the web interface, so it makes sense to leave it.

 3. The _chef-validator_ client. This has some implications: chef-validator's key is used on a node when it runs chef-client for the first time, in order to obtain an API client identity. Since this key is now different from your original server's, if you are using knife to bootstrap nodes you will need to re-copy it to your knife workstation setup. Existing nodes don't need it, as they are already registered.

All said, this is a handy tool, and with a little bit of scripting you can run these backups hourly or daily into time-stamped directories, as sketched below.
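
A minimal wrapper along these lines could be dropped into cron; the paths and retention policy are illustrative.
    #!/bin/bash
    # Hypothetical wrapper around knife-backup; adjust paths to taste.
    set -e
    BACKUP_ROOT=/var/backups/chef
    STAMP=$(date +%Y%m%d-%H%M)
    mkdir -p "$BACKUP_ROOT/$STAMP"
    knife backup export -D "$BACKUP_ROOT/$STAMP"
    # keep only the 14 most recent backups
    ls -1dt "$BACKUP_ROOT"/*/ | tail -n +15 | xargs -r rm -rf
A daily crontab entry such as "0 2 * * * /usr/local/bin/chef-backup.sh" would then give you dated restore points.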

Monday, 29 April 2013

Chef Experiments - Create Users



The objective here is to create a users cookbook driven by data bags.

Create the data bag

knife data bag create user_config

Create the user json file

data_bags/users/usr_sri.json

{
    "id": "sri",
    "comment": "Sriram Rajan",
    "uid": 2000,
    "gid": 0,
    "home": "/home/sri",
    "shell": "/bin/bash",
    "pubkey": "<replace with the SSH public key>"
}


Import the file

knife data bag from file user_config usr_sri.json


Create a key for the encrypted data bag

openssl rand -base64 512 > data_bags/users/enckey


Create the encrypted data bag

knife data bag create --secret-file  data_bags/users/enckey password_config pwdlist



Edit the data bag

knife data bag edit --secret-file data_bags/users/enckey password_config pwdlist

{
    "id": "pwdlist",
    "sri": "Replace with SHA password string"
}


At this point you should have a data bag with users and an encrypted data bag with passwords. Now we move on to the cookbook.

Create the cookbook

knife cookbook create user_config

The recipe looks like this. We add the user and also ensure the .ssh directory is created and populated with the public keys. The password is pulled from the encrypted data bag.


decrypted = Chef::EncryptedDataBagItem.load("password_config", "pwdlist")
search(:user_config, "*:*").each do |user_data|
    user user_data['id'] do
        comment user_data['comment']
        uid user_data['uid']
        gid user_data['gid']
        home user_data['home']
        shell user_data['shell']
        manage_home true
        password decrypted[user_data['id']]
        action :create
    end
  
    ssh_dir = user_data['home'] + "/.ssh"
    directory ssh_dir do
        owner user_data['uid']
        group user_data['gid']
        mode "0700"
    end

    template "#{ssh_dir}/authorized_keys" do
        owner user_data['uid']
        group user_data['gid']
        mode "0600"
        variables(
             :ssh_keys => user_data['pubkey']
             )
        source "authorized_keys.erb"
    end
end

The template file

user_config/templates/default/authorized_keys.erb

<% Array(@ssh_keys).each do |key| %>
<%= key %>
<% end %>



Finishing up
knife cookbook upload user_config

Ensure the secret key for the encrypted data bag is also sent to the node and stored under  /etc/chef/encrypted_data_bag_secret. You can bootstrap this file into the node build. See http://docs.opscode.com/essentials_data_bags_encrypt.html

Then add the recipe to a role or node run list and run the chef-client to test.
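
For example, with knife (the node name here is just a placeholder):
knife node run_list add node01.example.com 'recipe[user_config]'

ssh node01.example.com 'sudo chef-client'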

Designing in the cloud


Service based model
This is not a very new concept (http://en.wikipedia.org/wiki/Service-oriented_architecture), but the cloud model makes it very important. Build your business model such that it can be consumed as a service. This also forces you to modularize components, which in turn gives you a high degree of portability.


Build for failure
Cloud is multi-tenant in most cases, and with that come challenges like noisy neighbours and failures of individual components. Build for these scenarios. The Simian Army (http://techblog.netflix.com/2011/07/netflix-simian-army.html) talks a lot about this and is an interesting read. Most importantly, plan for "what happens when".

In building for failure you are also creating a good recovery model. One of the benefits of running everything as code is that you can recover faster, which translates into better uptime.

Cloud is all about APIs and pluggability. Think about building a top-level API for your business model, then use vendor APIs and plug them into yours. Wherever possible, loosely couple your application interactions; for example, instead of direct database calls, use an API.


Monitoring
Monitoring becomes more than just making sure your applications are working fine. If you leverage multiple cloud providers, you can use it to make operational decisions: to go with the best-performing provider and save costs, or to find low-performing instances within a single provider. One important point here is to make sure your monitoring is vendor-agnostic and, wherever possible, not a tool provided by the vendor. Frameworks like Sensu (http://www.sonian.com/cloud-monitoring-sensu/) or tools like Riemann (http://riemann.io) can help.


Automation
Cloud will force automation to a large extent, and you need to embrace it. Automation also allows you to build across different vendors. When using multiple vendors, use a model that works on all platforms; open-source libraries like libcloud provide vendor-agnostic ways to do this. Be careful with automation providers, as you can end up with vendor lock-in in a different way. While building your own autoscale model is complex, in the long term there is a lot more to gain because it will fit your business model.


Think about data
Cloud provides commodity services for things like compute and storage, but your data is not a commodity. So think about distributing it across different vendors, or build that into your recovery model.


Think about security
Security in the cloud is a hot topic, and it is safe to say that it is still evolving. It is also something that gets overlooked while you plug in the other nuts and bolts. Make sure things like identity management and access-control models are at the heart of your cloud strategy. Even if security is not an immediate requirement, you can build these as services that can be implemented at a later stage.