Kubernetes is definitely the simplest and easiest way to meet the needs of complex Web applications.
In the late ’90s and early ’00s, it was fun to work for a big website. My experience reminds me of American Greetings Interactive, where on Valentine’s Day we had one of the top 10 sites on the Internet (measured by web visits). We provided e-cards for AmericanGreetings.com, BlueMountain.com, and for partners like MSN and AOL. Veterans of the group still remember the epic battles with other e-card sites like Hallmark. As a side note, I also ran big websites for Holly Hobbie, Care Bears, and Strawberry Shortcake.
I remember it like it was yesterday, the first time we had a real problem. Normally, our front door (routers, firewalls, and load balancers) saw about 200Mbps of incoming traffic. Then, all of a sudden, the Multi Router Traffic Grapher (MRTG) graphs jumped to 2Gbps in a matter of minutes. I was running around like crazy. I knew our entire technology stack, from the routers, switches, firewalls, and load balancers, to the Linux/Apache web servers, to our Python stack (a meta-version of FastCGI), and the network file system (NFS) servers. I knew where all the configuration files were, I had access to all the administrative interfaces, and I was a seasoned, battle-hardened system administrator with years of experience solving complex problems.
But I couldn’t figure out what was going on…
Five minutes can feel like an eternity when you’re frantically typing commands across a thousand Linux servers. I knew the site could crash at any moment, because it was all too easy to crush a web cluster of a thousand nodes, even when it was divided into smaller clusters.
I quickly ran to my boss’s desk and explained the situation. He barely looked up from his email, which frustrated me. Then he glanced up, smiled, and said, “Yeah, marketing is probably running an ad campaign. This happens sometimes.” He told me to set a special flag in the application that would offload traffic to Akamai. I ran back to my desk, set the flag on thousands of web servers, and a few minutes later the site was back to normal. Disaster averted.
I could share 50 more stories like this, but by now you’re probably wondering: where is this going?
The point is, we had a business problem. Technical problems become business problems when they prevent you from doing business. In other words, if your website is down, you can’t process customer transactions.
So what does all this have to do with Kubernetes? Everything! The world has changed. Back in the late ’90s and early ’00s, only large websites had large, web-scale problems. Now, with microservices and digital transformation, every enterprise faces a web-scale problem, and probably more than one.
Your business needs to be able to manage a complex, web-scale property made up of many different, often complex services built by many different people. Your sites need to handle traffic dynamically, and they need to be secure. These properties need to be API-driven at every layer, from the infrastructure to the application.
Enter Kubernetes
Kubernetes is not complicated; your business is complicated. When you want to run applications in production, meeting performance (scalability, jitter, and so on) and security requirements demands a minimum level of complexity. Things like high availability (HA), capacity requirements (N+1, N+2, N+100), and eventually consistent data technologies become requirements. These are the production requirements of every company going digital, not just of big websites like Google, Facebook, and Twitter.
In the old days at American Greetings, every time we added a new service, it looked something like the list below. All of this was handled by the site operations team; none of it was offloaded to other teams through ticketing systems. This was DevOps before DevOps existed:
- Configure DNS (typically internal service-layer entries and external public-facing ones)
- Configure load balancers (again, both internal and public-facing)
- Configure shared access to files (large NFS servers, clustered file systems, etc.)
- Configure clustering software (databases, service layers, etc.)
- Configure the web server cluster (could be 10 or 50 servers)
Most of this was automated with configuration management, but it was still complicated, because every one of these systems and services had different configuration files in completely different formats. We looked at tools like Augeas to simplify things, but we decided that using a converter to try to standardize a bunch of different configuration files was an anti-pattern.
Now, with Kubernetes, launching a new service essentially looks like this (a minimal sketch of the YAML follows the list):
- Configure the service’s Kubernetes YAML/JSON.
- Submit it to the Kubernetes API (`kubectl create -f service.yaml`).
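To make that concrete, here is a minimal sketch of what such a `service.yaml` could contain. The name `hello-web`, the image `example.com/hello-web:1.0`, and the ports are hypothetical placeholders for illustration, not anything from the original deployment:

```yaml
# service.yaml -- hypothetical example of "Kubernetes YAML" for a new service:
# a Deployment to run the pods plus a Service to expose them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3                    # capacity is declared, not configured server by server
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-web
        image: example.com/hello-web:1.0   # placeholder image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web               # routes traffic to the pods above
  ports:
  - port: 80
    targetPort: 8080
```

Submitting this file with `kubectl create -f service.yaml` (or `kubectl apply -f service.yaml` in a declarative workflow) covers, in one step, much of the old checklist: the Service gets an in-cluster DNS name, traffic is load-balanced across the pods, and the replica count expresses capacity.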
Kubernetes greatly simplifies launching and managing services. Service owners, whether they are system administrators, developers, or architects, can create YAML/JSON files in the Kubernetes format. With Kubernetes, every system and every user speaks the same language. All users can commit these files to the same Git repository, which enables GitOps.
Furthermore, services can be deprecated and deleted. Historically, deleting DNS entries, load-balancer entries, web-server configuration, and so on was scary, because you would almost certainly break something. With Kubernetes, everything lives in a namespace, so you can remove an entire service with a single command. While you still need to make sure other applications don’t use it (a downside of microservices and function-as-a-service [FaaS]), you can be much more confident that removing a service won’t break the infrastructure environment.
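As a sketch of how that works in practice (the namespace name `greetings` is made up for illustration), every object belonging to a service is created inside one namespace, and retiring the whole thing is a single, scoped command:

```yaml
# Hypothetical namespace that holds everything belonging to one service.
apiVersion: v1
kind: Namespace
metadata:
  name: greetings
# Deployments, Services, ConfigMaps, Secrets, etc. for the service are then
# created in this namespace (metadata.namespace: greetings, or kubectl -n greetings).
# Retiring the service later is one command:
#   kubectl delete namespace greetings
# which deletes every object inside it without touching anything else.
```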
Build, manage, and use Kubernetes
Too many people focus on building and managing Kubernetes instead of using it (see Kubernetes is a dump truck).
Building a simple Kubernetes environment on a single node isn’t much more complicated than installing a LAMP stack, yet we endlessly debate build versus buy. It’s not Kubernetes that is hard; it’s running applications at scale with high availability. Building a complex, highly available Kubernetes cluster is difficult because building any cluster at that scale is difficult. It takes planning and a lot of software. Building a simple dump truck isn’t complicated, but building one that can carry 10 tons of garbage and cruise steadily at 200 miles per hour is.
Managing Kubernetes can be complex because managing large, web-scale clusters can be complex. Sometimes it makes sense to manage this infrastructure yourself; sometimes it doesn’t. Since Kubernetes is a community-driven, open-source project, the industry can manage it in many different ways: vendors can sell hosted and managed versions, and users can manage it themselves if they need to. (But you should question whether you actually need to.)
Using Kubernetes is by far the easiest way to run a large-scale website. Kubernetes is popularizing the ability to run a set of large, complex Web services — much like Linux did in Web 1.0.
Since time and money are a zero-sum game, I recommend focusing on using Kubernetes. Spend your time and money on mastering Kubernetes primitives, or on the best way to handle liveness and readiness probes (another example of how difficult large, complex services can be). Don’t focus on building and managing Kubernetes; there are plenty of vendors who can help with both.
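As one small example of such a primitive, liveness and readiness probes are declared per container in the pod spec. The endpoints, ports, and timings below are hypothetical placeholders; sensible values depend entirely on the application:

```yaml
# Hypothetical container fragment from a Deployment's pod template.
containers:
- name: hello-web
  image: example.com/hello-web:1.0
  ports:
  - containerPort: 8080
  readinessProbe:              # keep traffic away until the app can actually serve it
    httpGet:
      path: /healthz/ready
      port: 8080
    initialDelaySeconds: 5
    periodSeconds: 10
  livenessProbe:               # restart the container if it wedges
    httpGet:
      path: /healthz/live
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 20
```

Getting fragments like this right for dozens of services is where the real effort pays off, which is exactly the argument for spending your budget on using Kubernetes rather than on operating it.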
Conclusion
I remember troubleshooting countless issues like the one I described at the beginning of this article: NFS quirks in the Linux kernel of the day, our own CFEngine setup, redirect problems that showed up on only some of the web servers, and so on. There was no way for developers to help me solve these problems; in fact, unless a developer had the skills of a senior system administrator, they couldn’t even jump in as a second set of eyes. There were no dashboards or “observability” tools; the observability lived in my head and in the heads of the other system administrators. Today, with Kubernetes, Prometheus, Grafana, and the rest, all of that has changed.
The key takeaways:
- Times are different. Today, all web applications are large distributed systems. Requirements that once made AmericanGreetings.com exceptional, such as scalability and HA, are now expected of every site.
- Running large distributed systems is hard. Absolutely. But this is a business requirement, not a Kubernetes problem, and switching to a simpler orchestration system is not the solution.
Kubernetes is definitely the simplest and easiest way to meet the needs of complex web applications. This is the era we live in, and Kubernetes is good at it. You can debate whether you should build or manage Kubernetes yourself; there are many vendors that can help with both, but it’s hard to deny that it’s the easiest way to run complex web applications at scale.
Via: opensource.com/article/19/…
By Scott McCarty, lujun9972
This article was originally translated by LCTT and published by Linux China.