5 ridiculous Docker myths – busted!
Run a Google search on Docker and you’ll see a strange phenomenon – 75 per cent of the world is loudly singing its praises, while the remaining quarter is grumbling and moaning with no apparent justification. The latter is mainly down to outdated information about Docker, as well as some myths that were never true to begin with.
Confusion is never a good starting point when you’re choosing technology for the enterprise, so we’ve decided to tackle it head on. Below, in no particular order, are our top five spurious Docker myths – busted!
Busted Myth #1 – Docker is insecure
Security is incredibly important and there’s no way we would recommend a technology that doesn’t deliver on that front. Docker Enterprise Edition adds a number of key elements to ensure enterprise-class security, alongside a rolling updates programme that ensures you’re always running the most secure version.
Importantly, Docker information is encrypted both in transit and at rest, unlike some competitors. In addition, Role Based Access Control (RBAC) gives you fine-grained control over who can access and make changes to applications.
Trusted Images is Docker’s image signing system, using a key owned by your company; the corporate server can be configured to run only signed images. Tying into this is Docker’s Vulnerability Scanning, which checks containers against a centralised, constantly updated database of known vulnerabilities, helping developers avoid costly mistakes.
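As an illustration of how image signing works in practice (these commands are our sketch, not part of the product description above, and the registry and image names are invented), Docker’s signing can be switched on per-shell with the `DOCKER_CONTENT_TRUST` environment variable:

```
# Enable Docker Content Trust for this shell session:
# pushes are signed, and pulls verify signatures.
export DOCKER_CONTENT_TRUST=1

# Pulling now fails unless the tag carries a valid signature.
docker pull registry.example.com/myteam/app:1.0   # hypothetical image

# Pushing a tag signs it with your repository key
# (Docker prompts you to create the keys on first push).
docker push registry.example.com/myteam/app:1.0
```

With the variable unset (the default), the same commands run without signature checks, which is why enforcing it on the corporate server matters.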
Busted Myth #2 – Docker isn’t reliable
Docker used to be a classic Open Source project with features added at breakneck speed. As with any developing technology, this meant there were some caveats around using it in development and production environments.
Docker Enterprise Edition is different. This is Docker with a paid rolling support programme, bringing reliability and stability via a different release model to Docker Community Edition. It provides a certified apps platform on enterprise Windows or Linux operating systems and Cloud providers.
Busted Myth #3 – Docker requires the Cloud
This myth is particularly easy to bust – Docker doesn’t need Cloud, it’s as simple as that. Docker’s Build/Ship/Run ethos means that a developer can, if they wish, run an image on their local machine, then promote that same image all the way through into production. And that local machine can be pretty much anything – we’ve run Docker on Mac, Windows, Linux boxes, even a Raspberry Pi board with not a Cloud in sight.
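To make the Build/Ship/Run point concrete, here is a minimal sketch of a Dockerfile (the image contents are invented for illustration) – the same definition builds an identical image on a laptop, a server or a Pi, no Cloud required:

```dockerfile
# Hypothetical minimal image: this same file builds and runs
# unchanged on Mac, Windows, Linux or a Raspberry Pi
# (given a base image matching the CPU architecture).
FROM alpine:3.6
COPY hello.sh /hello.sh
CMD ["/bin/sh", "/hello.sh"]
```

A developer builds and runs it locally with `docker build -t myapp .` and `docker run myapp`, then promotes that exact image through test and into production by pushing it to a registry.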
That’s not to say the Cloud can’t come in handy. All the reasons we love the Cloud still apply, including advantages such as disaster recovery, multiple instances via the likes of Amazon and Azure, and remote collaborative working. It’s just that Docker is totally flexible – it fits into the way you want to work.
Busted Myth #4 – Docker can’t compete because it’s Open Source
The idea that Open Source is inherently untrustworthy is a misconception that went out with the dinosaurs. In fact, Docker has significant advantages, not least its Secrets functionality, which keeps sensitive data encrypted both in transit and at rest.
What we’re seeing is that Docker Enterprise Edition is being implemented by the likes of banks and retailers. These are huge enterprises that are risk averse to the nth degree – they need to get things right first time, every time.
Busted Myth #5 – Docker’s containers are too limiting
Docker’s containers are specifically designed to share a single kernel rather than each carrying a full operating system. This is in fact an advantage over Virtual Machines, rather than a limitation: it yields dramatic efficiency gains, giving developers the ability to run far more apps on the same servers.
What’s more, the latest version of Docker Datacenter introduces the ability to run a Swarm cluster that mixes Linux, Windows and even some mainframe hosts. We particularly like the way you can deploy containers to your Swarm and let Docker do the tedious work of determining which operating system each one needs to run on.
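As a hedged sketch of how a mixed-OS Swarm can be driven explicitly (the service and image names are invented; this uses Swarm’s documented placement constraints rather than any automatic detection), a Compose v3 stack file can pin each service to the right operating system:

```yaml
# Hypothetical docker-compose v3 stack file for a mixed Swarm.
# Placement constraints steer each service to a matching node.
version: "3.2"
services:
  web:
    image: myorg/web:latest          # illustrative image
    deploy:
      placement:
        constraints:
          - node.platform.os == linux
  legacy:
    image: myorg/legacy-net:latest   # illustrative image
    deploy:
      placement:
        constraints:
          - node.platform.os == windows
```

Deployed with `docker stack deploy -c docker-compose.yml mystack`, the Linux service lands only on Linux nodes and the Windows service only on Windows nodes.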