This month's thoughts are around containers and the shift back to self-hosted infrastructure:
Containers have been around now for quite a few years. We can trace the concept back to 1979 and the introduction of the chroot system call, but it wasn't until BSD Jails, Solaris Zones and LXC arrived in 2000, 2004 and 2008 respectively that the technology started to mature. Zones in particular became incredibly stable very early on, offering a very high level of isolation and performance and making multi-tenant systems practical.
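To make that lineage concrete, here is a minimal sketch (mine, not taken from any of the articles below) of the chroot primitive in Go: the process re-roots its view of the filesystem at an assumed directory, /tmp/newroot, and anything it runs afterwards can only see files under that directory. This is essentially the isolation idea that Jails, Zones and LXC later hardened with resource controls and namespacing.

```go
// chroot_demo.go — a minimal sketch of the chroot primitive.
// Assumes /tmp/newroot exists, contains a statically linked /bin/sh,
// and that the program is run as root (chroot requires privileges).
package main

import (
	"log"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	newRoot := "/tmp/newroot" // hypothetical root prepared ahead of time

	// Re-root the filesystem view for this process.
	if err := syscall.Chroot(newRoot); err != nil {
		log.Fatalf("chroot: %v", err)
	}
	// Move into the new root so relative paths resolve inside it.
	if err := os.Chdir("/"); err != nil {
		log.Fatalf("chdir: %v", err)
	}

	// Anything exec'd from here only sees files under /tmp/newroot.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
```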
With the rise of VMware and IaaS providers like AWS, container technologies took a back seat as the masses embraced cloud computing. Containers weren't fully able to satisfy the demands of ephemeral, dynamically scaling systems. In more recent years, however, Docker has revitalised interest in the technology by introducing the idea of application containers along with a powerful set of tools and infrastructure for building and maintaining container images.
Expanding the benefits beyond performance and resource utilisation gains, Docker improved standardisation, configuration management and portability, meaning containers are fast becoming the next hot technology (if they're not already). However, they still present some challenges in the cloud: monitoring, orchestration (e.g. automated scheduling and auto-scaling) and service discovery all become an additional burden.
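As a small illustration of that burden, below is a minimal sketch (an assumption of mine, not something prescribed by Docker or any orchestrator) of the kind of health-check endpoint a containerised service ends up exposing so that a scheduler or service-discovery layer can decide whether to route traffic to it. The /healthz path and port 8080 are arbitrary choices.

```go
// health_demo.go — a minimal health-check endpoint sketch.
// An orchestrator or service registry would poll this periodically
// to decide whether the container is alive and ready for traffic.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// Report healthy; a real service would check its dependencies here.
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	log.Println("serving health checks on :8080/healthz")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```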
Optoro's Shift to Self-hosted Infrastructure - Optoro
Since 2010, Optoro has used Amazon Web Services (AWS) as its cloud-computing provider. We relied on them to supply the horsepower needed to drive our IT resources and applications. However, after some hard analysis, we decided to move away from AWS and onto our own infrastructure. At a time when so many SaaS/IaaS/PaaS offerings exist, why would we decide to run a data-center's worth of gear? AWS has been a large drain on our budget at scale, and we wanted a more cost-efficient solution.