Containers have been around for quite a few years now. We can trace the concept back to 1979 and the introduction of the chroot system call, but it wasn't until BSD Jails, Solaris Zones and LXC arrived in 2000, 2004 and 2008 respectively that the technology started to mature. Zones in particular became incredibly stable very early on, offering a very high level of isolation and performance, and proving capable of supporting multi-tenant systems.

With the rise of VMware and IaaS providers like AWS, container technologies took a back seat as the masses embraced cloud computing. Containers weren't fully able to satisfy the demands of ephemeral and dynamically scaling systems. In more recent years, however, Docker has revitalised interest in the technology by introducing the idea of application containers, along with a powerful set of tools and infrastructure for maintaining container images.

Expanding the benefits beyond performance and resource utilisation gains, Docker improved standardisation, configuration management and portability, meaning containers are fast becoming the next hot technology (if they're not already). However, they still present some challenges in the cloud: monitoring, orchestration (e.g. automated scheduling and auto-scaling) and service discovery are an additional burden.

What we mean by "operations" has changed over the years, and the pace of change has created some ambiguity. I'd like to discuss some of these changes.

Firstly, let me explain why I believe I know a little about the subject. I'd like to invite you to come back in time to the Technology Management Centre of a large Telco in the early 00s, where a young man has just sat down for his first day on the job. His supervisor, Spencer, hands him a drive bay (hot desking was serious business here) and says, "I recommend you do a stage 1 install of Gentoo because it'll be a good learning exercise to set up the Operating System from scratch. Then, when you're done, we'll go over this script I'm working on to automate some tests on our new Cisco 10K routers."

I'd never compiled an operating system before that point, so we never made it to the script, but it was the first time in my career that I was suddenly plunged into a world of highly skilled engineers and architects doing some amazing things under very tight requirements, needing to be "DevOps" just to ensure their success. Scripting and automating tests, building our own configuration management system, measuring everything that moved, working cross-functionally, and collaborating and sharing information across teams were all just the norm. We'd also virtualised our environments and were even running containers in production over ten years ago.

From that point, my career continued in much the same way. Sure, there have been some challenges trying to help people see the vision, but now that there is a DevOps community and a wealth of literature, those challenges have mostly gone away, and the approach has shifted from pushing an agenda to simply agreeing with people's ideas as they embrace the philosophies themselves.

Where did it all start?

Hybrid Cloud

For several years, cloud computing has been the focus of IT decision makers and corporate bean counters, but the extremely security-conscious have been hesitant to move their data and workloads into the cloud. Now, with the underlying technology behind cloud services available for deployment inside organizations, a new model of cloud computing is gaining a foothold in business: the hybrid cloud.

What is Hybrid Cloud?

About Mesoform

For more than two decades we have been implementing solutions to wasteful processes and inefficient systems in large organisations like Tiscali, and bringing our cloud-based IT Operations expertise to well-known brands such as RIM, Sony, Samsung and SiriusXM.
