How hard is it to get started with Kubernetes? Hard enough, at times, to make you question your career choices.
Over the past few years, even the infrastructure engineers at major Internet companies have had to grapple with it.
“While we’ve seen more and more enterprises embrace Kubernetes over the past few years, they’ve gotten stuck,” Drew Bradstock, product lead for Google Kubernetes Engine (GKE), said in a recent public statement.
Kubernetes is a double-edged sword: it is the best container orchestration technology available, yet it is also highly complex, with a steep barrier to entry that often leads to common mistakes.
In 2019, Atlassian, a well-known software development service provider, reached the same conclusion after three years of trying to deploy Kubernetes: it is simply too complicated to deploy.
Today, even Google itself, the creator and core promoter of Kubernetes, acknowledges the problem.
Why is Kubernetes so hard?
In China, Kubernetes adoption did not begin to mature until after 2017, driven largely by the rapid growth of enterprise practice in the cloud computing market itself.
In an interview, Zhang Lei, a senior technical expert at Alibaba, analyzed the nature of Kubernetes. He pointed out:
“Kubernetes itself is a distributed system, not a simple SDK or programming framework, which already raises its complexity to that of a system-level distributed open source project. Moreover, Kubernetes was the first project to popularize the idea of declarative APIs in the open source infrastructure field, and on that basis it introduced a series of usage paradigms such as container design patterns and the controller model. These advanced, forward-looking designs are what won the project public acceptance, but they also come with a certain learning curve.”
In other words, the complexity of Kubernetes has two sources: the inherent difficulty of the technology itself, and the still-maturing awareness and acceptance among developers and the market.
How Google, the originator, refuses to give up
Since Google launched its cloud-hosted Kubernetes service, Google Kubernetes Engine (GKE), in 2015, it has drawn attention and adoption from the outside world. Over that period, Google has continually released new versions to improve its usability.
Not long ago, Google introduced a new feature, Autopilot, to simplify the challenge of deploying and managing Kubernetes configurations.
GKE is a Kubernetes management platform that runs primarily on Google Cloud Platform, but it can also manage clusters on other cloud platforms or on-premises via Anthos.
GKE now offers two modes of operation: Standard, with manual control, and the new automatic mode, Autopilot. Autopilot can be summed up as a fully managed GKE deployment platform, and it must run on Google Cloud Platform. Although GKE itself is already a managed service, Autopilot goes further: it is more autonomous and automated than standard GKE.
Kubernetes itself involves clusters (groups of physical or virtual servers), nodes (individual servers), pods (management units representing one or more containers on a node), and containers. Standard GKE primarily manages the cluster layer, while Autopilot extends management to nodes and pods.
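The difference between the two modes shows up right at cluster creation. A minimal sketch using the gcloud CLI (the cluster names and region here are hypothetical, and the commands assume an authenticated Google Cloud project with billing enabled):

```shell
# Standard mode: you choose and manage the node pools yourself,
# so zone, machine sizing, and node count are your responsibility.
gcloud container clusters create hello-standard \
    --zone=us-central1-a \
    --num-nodes=3

# Autopilot mode: no node configuration at all; Google provisions
# and scales nodes based on the pods you deploy. Note that an
# Autopilot cluster is always regional, never zonal.
gcloud container clusters create-auto hello-autopilot \
    --region=us-central1
```

The absence of any node flags in the `create-auto` invocation is the whole point: in Autopilot, node management is no longer the operator's concern.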
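To make those layers concrete, here is a minimal pod manifest applied from the command line (the pod name and image are illustrative; the command assumes `kubectl` is configured against a running cluster). In Autopilot, the resource requests are especially significant, since pods are scheduled, and billed, according to what they request:

```shell
# Define and submit a single-container pod. The cluster decides
# which node the pod lands on; each container in the pod runs
# from the image it declares.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: hello
    image: nginx:1.25
    resources:
      requests:
        cpu: 250m      # a quarter of a CPU core
        memory: 64Mi
EOF
```

Under standard GKE you would also have sized the node this pod runs on; under Autopilot, the requests above are all the capacity planning you do.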
Google admits that Kubernetes is too complicated, so it opens Autopilot in the container world
Google Cloud typically has three or more data centers in each location. Placing all resources in a single data center offers less resilience than spreading them across several; distributing workloads across multiple data centers maximizes fault tolerance. Autopilot mode always operates at the regional level, which is good for resilience and scalability, but it costs more.
P.S. Cloud providers usually partition capacity into regions and zones: a region is a geographic area, and a zone is a specific data center within it.
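The region/zone relationship is easy to inspect with the gcloud CLI (the region name is illustrative; the command assumes an authenticated gcloud installation):

```shell
# List the zones (individual data centers) that make up one region.
# A regional Autopilot cluster spreads its control plane and nodes
# across several of these zones.
gcloud compute zones list --filter="region:us-central1"
```

For `us-central1` this typically returns multiple zones such as `us-central1-a` through `us-central1-f`, which is what gives a regional cluster its fault tolerance.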
However, Autopilot mode also has its limitations. Nodes always run Google's own Container-Optimized OS for Linux, ruling out Docker-based node images and Windows Server nodes. Also, the maximum number of pods per node is 32, compared with 110 for standard GKE.
At the same time, the pricing model differs: each Autopilot cluster carries an additional fee of 1 cent per hour.
Whether Autopilot or standard GKE ends up more expensive has no obvious answer. “It also has a premium over GKE because we get Site Reliability Engineering (SRE) and SLAs, it’s not just a product feature.”
That said, poorly sized standard GKE deployments may cost more than Autopilot, because estimating the right compute instance specifications is difficult.
Overall, the new Autopilot service gives Kubernetes users more options, even if it may raise costs, reduce flexibility, or pose challenges for IT operators. And that is before considering satisfaction with customer support.
It’s worth mentioning that software engineer Kevin Lin, a former Amazon employee, recently compared the two companies’ cloud services, describing his experience with AWS and Google Cloud: Google’s customer support was basically unhelpful, while Amazon’s technical support was fast and useful.
Returning to the question Leifeng.com raised at the outset: the complexity of Kubernetes is a technical problem that has long puzzled developers. As Kubernetes becomes the mainstream open source container orchestration technology of the entire cloud native community and its adoption in production environments keeps rising, its complexity can be expected to grow in step.
Do you have best practices for managing this complexity? Your solutions are welcome.