Docker is undoubtedly one of the most talked-about infrastructure technologies of the last year. It’s easy to find hundreds of blog posts, tweets, and tech talks about how it will revolutionize the way we write and deploy code. While there is an overwhelming amount of information about getting started and using it to simplify development, not many people are talking about how they’ve succeeded or struggled with getting containers into production.
In this lightning talk at DevOpsDays Rockies, I gave extremely brief introductions to tools you may want to use to get containers running in production, including CoreOS (with etcd and fleet), Amazon EC2 Container Service, Docker Swarm, Apache Mesos, Google Kubernetes, Deis, and Flynn.
Load balancing and routing traffic to a single application is easy, but sending traffic to an ever-changing number of applications is quite a challenge. In the last year, Belly has migrated from a monolithic Rails app to a service-oriented architecture with over fifty applications.
A large portion of my day-to-day work at Belly is in a supporting role, which means I get interrupted quite a bit. To combat this, I try to work remotely every once in a while. It’s a great change of pace and allows me to make significant progress on a single task. Usually this means I work from home, but sometimes a couple of us on the backend team work together out of coffee shops on a single project.
At the end of August, I took a short mid-week trip to The Twin Cities to visit a couple of friends and go to the Minnesota State Fair. Trust me - the state fair is way more awesome than you think it is. Thanks to my friend Eryn, I was able to work for two days out of Clockwork Active Media’s incredible office. While I was there, Eryn asked me, “What do you sysadmins actually do all day? Look at graphs and cat pictures?” I responded with, “Not quite, but this could be a most excellent blog post.”
Logs are everywhere: application logs, database logs, server logs, event logs. Managing the volume of logs, and even more important, using the data effectively, can be extremely difficult.
At Belly, we are all about data. We make frequent iterations to products and internal processes and track how well the iterations succeed. We understand the importance of A/B testing, and one of the many tools we use to make decisions from those tests is Apache Hive.
Hive is a data warehouse tool for Apache Hadoop originally developed at Facebook. Hive provides a SQL-like interface to large data sets in Hadoop, which makes summarizing large amounts of data easy for both developers and non-developers. Hive Query Language (HiveQL) saves users the trouble of having to write or understand typical map/reduce jobs.
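As a rough illustration of the idea, a HiveQL query looks almost identical to standard SQL; Hive compiles it into map/reduce jobs behind the scenes. The table and column names below (`events`, `event_type`, `event_date`) are hypothetical, not from Belly's actual schema:

```sql
-- Count events per type for a single day; Hive translates this
-- GROUP BY into a map/reduce job automatically.
SELECT event_type, COUNT(*) AS event_count
FROM events
WHERE event_date = '2014-09-01'
GROUP BY event_type
ORDER BY event_count DESC;
```

Writing the equivalent logic as a hand-rolled Java MapReduce job would take dozens of lines of mapper and reducer code, which is exactly the boilerplate HiveQL is meant to eliminate.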