Google And Microsoft Make Containers More Useful

With continued investment, apparently inexhaustible media attention, and real enthusiasm from developers, it would be easy to assume that San Francisco startup Docker invented the increasingly popular concept of the container. It did not. Instead, the company took an existing technology and made it easier to use.

VMware first entered the enterprise server market over a decade ago with a viable virtualization product. Businesses that depended upon Windows and Linux servers soon found new ways to make more efficient use of their hardware investment, clustering physical servers together to create pools of virtual machines that could be shared across different business units, almost on-demand. The model suited enterprise system administrators. It also, broadly, suited the individual business units that no longer needed to procure and maintain their own physical servers. Instead, they ran their applications in virtual machines sitting on hardware that was someone else’s problem to maintain. And lastly, the model suited the dominant applications of the day. They were typically large and mostly self-contained. The pieces were well understood, few in number, relatively slow to change, and almost exclusively installed and maintained in-house. Operating at scale, the potential operational efficiencies and cost savings of virtualization would prove significant.

Containers on the dock in Tokyo. (KAZUHIRO NOGI/AFP/Getty Images)

At one level, containers offer similar benefits to the VMware-style virtual machine. Containers, too, allow an administrator to share the resources of a physical computer between two or more ‘guest’ virtual computers. But while each guest in the model pursued by VMware and others includes its own operating system (Microsoft Windows, Ubuntu Linux, Red Hat Linux, etc.), containers rely upon a single copy of Linux running on the physical machine beneath them. This makes containers smaller than an equivalent virtual machine (as there’s no need to carry the bloat of an entire operating system) and typically much faster too (as there’s no need to wait for that operating system to boot). In certain circumstances, lightweight versions of Linux like CoreOS make the operating system itself even smaller.
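For readers who want to see that difference in practice, here is a minimal sketch using Docker's Python SDK (purely an illustration under those assumptions, not code from Docker, Google, or the article). It starts a small Alpine Linux container and asks it which kernel it is running; because the container shares the host's Linux kernel rather than booting its own operating system, it starts almost instantly and reports the host machine's kernel version.

# A minimal sketch using Docker's Python SDK (docker-py).
# Assumes Docker is installed locally and the daemon is running.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run a throwaway container. No operating system boots here: the container
# is just an isolated process sharing the host's Linux kernel, so it starts
# in well under a second.
output = client.containers.run(
    "alpine:latest",      # a small Linux userland image, a few megabytes
    ["uname", "-r"],      # ask the container which kernel it sees
    remove=True,          # clean up the container when it exits
)
print(output.decode().strip())  # matches the kernel of the machine beneath it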

Even before the rise of Docker, organizations were finding uses for containers. Back in June of last year, Google VP of Infrastructure Eric Brewer wrote,

Everything at Google, from Search to Gmail, is packaged and run in a Linux container. Each week we launch more than 2 billion container instances across our global data centers, and the power of containers has enabled both more reliable services and higher, more-efficient scalability.

Cloud provider Joyent, too, has used container-like technology behind the scenes, and has recently begun to promote it as Triton.

For Google, Joyent, and practically every organization using containers before 2014, they represented a powerful tool in the hands of operations staff. And that’s an important point. Before Docker, containers – like mainstream virtual machines – were all about efficiently managing infrastructure at scale. Containers were for operators. Docker’s strength lay in turning existing container technology into a powerful proposition for a far larger market: developers.

Competing container offerings like CoreOS’s Rocket and Joyent’s Triton (despite its heritage) broadly repeat Docker’s developer-friendly pitch. Amazon Web Services’ Container Service does too, although its Docker-specific language continues to evolve in ways that suggest greater choice may not be far away.

But to ensure long-term viability, and to effectively transition from the laptops and test environments of developers to the mission-critical production servers at the heart of so many enterprise applications, Docker and its competitors must also appeal to operations teams. Containers must be managed at scale and effectively integrated with everything else in a company’s IT estate.