This article was published in High Availability Architecture.

In 2015 I was just a container user; in 2016, as a container and cloud developer, I paid closer attention to the container ecosystem. This article analyzes the current container-related products from an ecosystem perspective, as a personal annual summary (it was meant to be published before the new year, but unfortunately slipped until after it).

  1. This article represents only my personal views; omissions are unavoidable, and corrections are welcome.
  2. This article takes a technology-ecosystem perspective, not a usage or market-share perspective; that is, it asks what gap in the technology ecosystem a product fills in order to survive, not how it fares in the market.

Docker

You can't talk about containers without talking about Docker. Container technology has been around for a long time, but Docker made it big. For a long time, many people simply equated containers with Docker. Once, a product-manager friend asked me what I was working on. When I said it was related to containers, she didn't understand; when I said Docker, she understood immediately — which shows how popular Docker has become. This year saw several major Docker releases. Let's review what Docker was and how it has changed.

  1. Docker's image mechanism provides a more advanced, general-purpose artifact package: the container image. Operating systems and programming languages have long had their own artifact mechanisms — Ubuntu has deb, Red Hat has RPM, Java has JAR and WAR — each with different dependency management and artifact repositories, so it was hard to abstract a common process from source code, to package, to distribution, to deployment on servers. With Docker's image and image-registry standards, this process can finally be standardized, and image registries have sprung up everywhere — hard to imagine in the old artifact-management world, where about the only comparable products were Nexus and Artifactory in the Java ecosystem.
  2. Docker's image mechanism and build tools have replaced part of Ansible/Puppet's role: at least the definition and maintenance of software stacks and dependencies on hosts has been taken over by containers, and the operators' dream of immutable infrastructure has been realized by Docker.
  3. Docker's encapsulation of the cgroups/namespace mechanisms and of networking makes it easy to simulate a multi-node environment locally, which is convenient for developing and debugging complex, distributed software systems.
  4. The Docker daemon takes over part of the role of init/process managers such as systemd and Supervisor: services run through Docker are not managed by systemd; the Docker daemon maintains their life cycle.
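The image mechanism in point 1 is easiest to see in a minimal Dockerfile — a sketch only; the base image, packages, and `app.jar` are illustrative:

```dockerfile
FROM ubuntu:16.04
# The software stack is declared as code, instead of being maintained
# on hosts with Ansible/Puppet
RUN apt-get update && \
    apt-get install -y openjdk-8-jre-headless && \
    rm -rf /var/lib/apt/lists/*
# The artifact (a hypothetical app.jar) is layered on top of its dependencies
COPY app.jar /opt/app/app.jar
CMD ["java", "-jar", "/opt/app/app.jar"]
```

After `docker build -t myapp:1.0 .` and a push to a registry, the same artifact flows unchanged from a developer machine to any server — the standardized packaging-to-deployment pipeline described above.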

The first three are basically container-related capabilities or extensions of them, but the fourth is what other container-scheduling vendors criticize. Any distributed management system installs an agent on each host to manage the application processes on that node, and from this point of view it conflicts with the Docker daemon: the host's systemd can be ignored, but the Docker daemon is indispensable for running Docker containers. Imagine your boss asking you to manage a team, but all your instructions having to be relayed through someone else. So the conflict between Docker and the other container-scheduling systems was planted from the start. The Docker team didn't think about this problem at first — Docker's Swarm also used an independent agent — but later found that a bit redundant: why not build Swarm's agent and scheduler into the Docker daemon itself? The daemon already supported a remote API and was designed as a decentralized, peer-to-peer cluster, making deployment and maintenance more convenient. So Docker 1.12 shipped with Swarm built in (SwarmKit, to be exact): a few commands turn multiple standalone Docker daemons into a cluster, with support for the concept of a service (an abstraction over multiple container instances). Version 1.13 added the concepts of a stack (a combination of services) and Compose (an orchestration definition file), and a reasonably complete orchestration system was in place.
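The "few commands" in Docker 1.12 look roughly like this — a sketch; the addresses and token are placeholders:

```shell
# On the first node: turn a standalone daemon into a swarm manager
docker swarm init --advertise-addr 10.0.0.1

# On each additional node: join using the token printed by `swarm init`
docker swarm join --token <worker-token> 10.0.0.1:2377

# Declare a service (an abstraction over multiple container instances);
# the built-in scheduler spreads the 3 replicas across the nodes
docker service create --name web --replicas 3 -p 80:80 nginx
```

No separate agent, no separate scheduler to install — which is exactly the redundancy the Docker team eliminated.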

This change also represents a break with the other container-scheduling systems. Originally Docker was just a container; everyone built their scheduling systems on top of it, and all could coexist peacefully. The container is like a wheel, and the orchestration-and-scheduling system like a car — then one day the wheel maker announces that it has joined several of its own wheels together into a car. It would be strange if the car makers were not anxious; this amounts to a "dimensionality reduction" attack on them.

With a scheduling system in hand, it was also natural for Docker to launch a Store. Most server-side applications are not single-node, single-process applications; they assemble multiple services, and people have long looked for a way to achieve one-click installation and deployment of server-side applications. Docker's application package bundles the orchestration file with the image definitions, relies on the standard environment the orchestration and scheduling system provides, and uses the Store as the distribution channel — the intent is very clear. However, I feel Docker is in a bit of a hurry: the scheduling system is not yet mature, yet it is rushing to launch higher-level applications, which may drag down the subsequent evolution. Presumably this is driven by the pressure to commercialize.
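The orchestration file for such a package is a Compose v3 file as introduced with stacks in 1.13 — a minimal sketch; the service names and images are illustrative:

```yaml
# docker-compose.yml (version "3" is required by `docker stack deploy`)
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 3        # desired number of container instances
  cache:
    image: redis
```

A single `docker stack deploy -c docker-compose.yml mystack` then installs the whole multi-service application — the one-click deployment the Store model depends on.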

Of course, Docker faces huge challenges with this decision. The fragmentation of the container ecosystem is forcing Docker to reshape its ecosystem on its own: its libnetwork standard has not been accepted by other vendors, and other vendors are reducing their dependence on Docker (discussed in detail later). Swarm is not yet mature enough for production: its networking currently supports only overlay, it cannot yet support its own network-plugin standard, and the orchestration file format is incomplete. On the storage side, Docker last year acquired Infinit, a distributed-storage company (Infinit had long promised to open-source its code, but the release never came before the acquisition). If Docker solves the two pain points of distributed systems — networking and storage — in 2017, Swarm's future looks promising.

As a developer myself, I actually like this design of Docker, even though it came a little late. The pattern matches how development teams gradually adopt containers. At first, Docker can be used merely as an artifact-management tool: with the network in host mode, deployment is no different from maintaining multiple applications on a host directly, except that dependency and package maintenance is taken over by Docker. Then, once the packaging step of the development process has been reworked and developers' habits have gradually changed, a Docker network solution can be introduced: the reworked applications communicate directly over the container network, while the original deployment and scheduling model is kept. Finally, Swarm mode is switched on and applications are deployed and scheduled through Swarm, completing the containerization of the application. It's like falling in love: date first, hold hands, then (a few words omitted), then talk about marriage and decide whether to make a lifelong commitment. Other orchestration systems start by asking the user: are you ready to give me everything you have and the rest of your life? A lot of people can only hesitate.

To sum up, Docker is no longer the original Docker this year: it now denotes a container scheme, a piece of system software, a trademark, a company, and a container-scheduling system. The war of the containers has only just begun. But whatever the final outcome, Docker — a start-up that began with developer tools just three years ago, spread through the global technology community, stirred up the whole server-side technology stack, and then entered the enterprise-application market to take on the giants with its users behind it — is unprecedented. In the traditional enterprise market, decisions are made by leadership that may not understand the technology, so how friendly a solution is to developers has not been a key selling point. But judging from Docker, and from some recent cases abroad (such as Slack and Twilio), this is changing: it reflects the growing maturity and voice of developers, and how developer-friendly an enterprise application is will become a key point of competition.

Kubernetes

When I analyzed the Kubernetes architecture in late 2015, version 1.2 had not yet been released; by now 1.6 is in beta. Last year was a breakout year for Kubernetes, as you can see from community articles and meetups — many articles could hardly wait to declare that Kubernetes had already won the container wars.

The advantage of Kubernetes is that many of its conceptual abstractions fit an ideal distributed scheduling system very well. Even if you designed such a system yourself, after continuous optimization and abstraction you would find yourself inevitably converging on Kubernetes: Pod, Service, ClusterIP, ReplicationController (now ReplicaSets), labels and label-based selectors, and the DaemonSets and StatefulSets introduced last year.
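Labels and selectors are what tie these abstractions together: a Service, for example, selects its backing Pods purely by label — a sketch; the names and ports are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # any Pod carrying this label becomes a backend
  ports:
    - port: 80        # the stable ClusterIP port
      targetPort: 8080  # the port the Pod's container listens on
```

The Service gets a stable virtual IP (the ClusterIP), so Pods can be created, killed, and rescheduled freely without clients noticing — the kind of abstraction a hand-rolled system ends up reinventing.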

DaemonSets define services that need to be deployed on every host and do not need to migrate dynamically, such as monitoring agents. Swarm has yet to introduce this concept, though it could be implemented in other ways, such as plugins. Services defined via a DaemonSet can be understood as agent extensions of the scheduling system itself.
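A monitoring agent of the kind described above might be declared like this — a sketch; the agent image is hypothetical, and the `apiVersion` reflects the pre-1.6 API group:

```yaml
apiVersion: extensions/v1beta1   # DaemonSets still lived in extensions at the time
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  template:
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
        - name: agent
          image: example/monitor-agent:1.0   # hypothetical agent image
```

Kubernetes guarantees exactly one such pod per node, including nodes added later — no per-host installation step.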

StatefulSets (PetSets prior to version 1.5) are designed to address stateful-service deployment by guaranteeing the stability of each pod's unique network identity (primarily the pod's hostname; whether its IP stays fixed depends on the network implementation). They also support the PersistentVolume specification, which encapsulates existing distributed-storage and IaaS cloud-storage interfaces.
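A StatefulSet combines that stable identity with a PersistentVolumeClaim per pod — a sketch; the names, password, and storage size are illustrative:

```yaml
apiVersion: apps/v1beta1   # StatefulSets appeared in apps/v1beta1 with 1.5
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless Service giving stable DNS names db-0, db-1, ...
  replicas: 2
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example        # illustrative only
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:            # each pod gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

If `db-0` is rescheduled, it comes back with the same hostname and the same volume, which is precisely what stateful clusters need.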

On the network side, Kubernetes launched the CNI (Container Network Interface) standard. The standard is deliberately simple: it only specifies the calling convention of a command-line tool. To provide your own implementation, you write an executable, in any language, place it on the system path, and have it allocate the container's network according to the CNI parameters. It has almost no coupling to Kubernetes, so it can easily be adopted by other scheduling systems (such as Mesos).
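The CNI contract really is just a JSON config plus a binary invoked with environment variables — a sketch of the network config; the bridge name and subnet are illustrative:

```json
{
  "cniVersion": "0.2.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The runtime executes the `bridge` binary found on its plugin path with `CNI_COMMAND=ADD` and the container's network namespace, and reads the resulting IP from the plugin's stdout. Any language that can read environment variables and stdin can implement a plugin — hence the near-zero coupling.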

As can be seen from its evolution, Google's ambition for Kubernetes lies in defining standards, and the most central point is the ability to describe. What is Kubernetes? Here is the official description:

Kubernetes is not a mere “orchestration system”; it eliminates the need for orchestration. The technical definition of “orchestration” is execution of a defined workflow: do A, then B, then C. In contrast, Kubernetes is comprised of a set of independent, composable control processes that continuously drive current state towards the provided desired state. It shouldn’t matter how you get from A to C: make it so. 

Its goal is not just to be an orchestration system, but to provide a specification that lets you describe the architecture of a cluster and define the desired end state of its services, and then helps your system reach and maintain that state. This is similar in purpose to Puppet/Ansible-style configuration management, except that those tools can only converge to a state, not maintain it (they have no dynamic scaling or failover), and their level of abstraction is lower. Kubernetes chose the container because containers make it easier to decouple applications from hosts; if another technology could achieve the same thing, it would not affect Kubernetes' positioning. Hence CRI (Container Runtime Interface), launched last year, which further decouples Kubernetes from any specific container runtime — on the one hand a counter to Docker embedding Swarm, on the other an inevitable step in the evolution toward its goal.
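The "make it so" philosophy shows in how the system is used: you declare an end state and let the controllers converge on it — a sketch; the image is illustrative, and the `apiVersion` is the pre-1.6 one:

```yaml
apiVersion: extensions/v1beta1   # Deployments lived here before apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3        # the desired state; controllers keep driving toward it
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.11
```

`kubectl apply -f web.yaml` states the end state once. If a pod dies or a node fails, the controller recreates replicas until the observed state matches the declared one — the continuous maintenance that a one-shot Puppet/Ansible run cannot provide.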

As a result, it has been able to cede a significant portion of functionality to the IaaS cloud (such as networking and storage), without rushing out concrete solutions or compatibility shims for existing distributed applications, focusing instead on specification definition and system refinement for more than two years after its release. Kubernetes comes from a prestigious family and does not worry about commercialization — like a girl from a good family who doesn't worry about marriage and has no need to cater to anyone; suitors will naturally adapt themselves to her.

Of course, the ideal is beautiful; the reality is that a large number of existing distributed systems either re-implement much of what Kubernetes already provides or use cluster mechanisms that Kubernetes' current abstractions do not cover, so it is hard to run them on Kubernetes today (Redis Cluster, Hadoop, etcd, ZooKeeper, MySQL clusters with automatic master/slave failover, and so on) — any abstraction comes at the cost of specialization. I'm not saying ZooKeeper can't run on Kubernetes (see the official ZooKeeper example), but one-click deployment, automatic scaling, and automatic configuration changes are very hard to achieve; many operations still have to be done by hand.

CoreOS addressed this problem by introducing the concept of the operator: for services not easily described by Kubernetes' current abstractions (stateful services or special cluster services), a dedicated operator implements scaling, recovery, upgrade, backup, and other functions by calling the Kubernetes API. While this seems to conflict with Kubernetes' declarative philosophy and cannot be driven directly through kubectl (a kubectl plugin mechanism has been proposed, so this may become possible), it is currently a viable approach. (Note: CoreOS's Xiang Li corrected this description after the article was published: operators are based on Kubernetes' Third Party Resources and can be operated through kubectl. Corrected accordingly.)

In addition, Kubernetes lags behind Mesos in big data. Last year Kubernetes added the concept of a Job: the pods a Job generates exit once they finish running, whereas the original services can be understood as infinite-loop Jobs. Kubernetes thus gained the ability to distribute batch tasks, but it still lacks the upper-layer application interfaces. In theory it is feasible to port Hadoop's MapReduce to Kubernetes and let Kubernetes replace the underlying resource scheduling of YARN. There is a project to integrate YARN and Kubernetes, but it has not been updated for a long time, and personally I am not optimistic about it: the two systems conflict seriously and can substitute for each other, unlike the integration of Mesos and Kubernetes (analyzed in the Mesos section below).
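A Job, in contrast to a service, runs pods to completion — a sketch; the command is illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: batch-task
spec:
  completions: 5      # run the pod to successful completion 5 times
  parallelism: 2      # at most 2 pods at once
  template:
    spec:
      restartPolicy: Never   # a Job pod exits instead of looping forever
      containers:
        - name: task
          image: busybox
          command: ["sh", "-c", "echo processing one batch shard"]
```

This gives Kubernetes batch distribution, but the MapReduce-style application interface on top of it is exactly what is still missing.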

Kubernetes has long been criticized for its complex deployment; when I tested it in 2015, deploying a cluster was a complicated undertaking. Last year Kubernetes introduced kubeadm, which greatly reduced the complexity. Now everything except the kubelet, which must be started directly on the host, can be hosted by Kubernetes itself — a degree of "self-hosting" — though ease of use still lags behind Swarm.
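The kubeadm workflow reduces deployment to roughly the following — a sketch; the token and address are placeholders, and the exact flags varied across early versions:

```shell
# On the master: only the kubelet runs directly on the host; the other
# components (apiserver, scheduler, controller-manager) come up as hosted pods
kubeadm init

# On each worker, using the token printed by `kubeadm init`
kubeadm join --token <token> 10.0.0.1:6443
```

Still a step or two more than `docker swarm init`, which is the ease-of-use gap noted above.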

As Kubernetes matures, other vendors have begun making their own distributions, such as CoreOS. CoreOS's original container idea was to merge multiple CoreOS hosts into a cluster through its customized container operating system and etcd, use a modified systemd as the agent, and add a scheduler on top to achieve container scheduling — also a feasible idea: if a single server process can be managed through systemd, isn't wiring together the systemd instances of multiple machines exactly distributed service-process management? Since then, however, CoreOS has changed strategy, presumably judging the adoption cost of that solution too high (users would have to change their host operating system and their application-deployment habits at the same time), and its container strategy is now a customized Kubernetes. The CoreOS system itself was renamed Container Linux (presumably partly to avoid the clash between the product and company names, and partly revealing a shift in product focus) to concentrate on the host operating system (examined in the operating-system section below).

To sum up, Kubernetes is fairly complete after a year of rapid development; the performance issues of the original kube-proxy (prior to version 1.2) have also been resolved, and users who have been waiting and watching can now take the plunge. As for the final package definition of an application, Kubernetes has not yet released its own solution — it is not clear whether this will come officially or be left to distribution vendors — but a solution can be expected in 2017.

Mesos

If Kubernetes was born for standards, Mesos was born for resources. Its idea is:

define a minimal interface that enables efficient resource sharing across frameworks, and otherwise push control of task scheduling and execution to the frameworks

That is to say, its perspective is the resource perspective and its goal is resource sharing. This and Kubernetes' goal are really two sides of the same coin, one working from description and definition, the other from resources. Mesos was one of the earliest players in the container space, but its containers were originally simple — used for basic resource isolation, invisible to users. When Docker caught on, Docker support was added, but as discussed earlier, Docker inside Mesos was always a bit awkward, so Mesos (or rather Mesosphere's DC/OS) created the Universal Container: the original Mesos container was improved to support the Docker image format, so users can use Docker in their development environment while production switches directly to the Mesos container. There may be some compatibility issues, but the image format does not change quickly, so the problem is under control.

This also shows, from another angle, that the Docker image format has basically become a de facto standard, and the key to whether a container solution can cut into the ecosystem is whether it supports that format. Of course, some on the Mesos side resist this, noting in posts that Mesos also supports deploying other package formats, such as gzip archives, and that in large-scale server scenarios pulling images from a Docker registry is a bottleneck. I don't buy this argument: at large scale, pulling any installation package in any format from any central node is a problem, which is why Twitter does P2P distribution of installation packages — and Docker images can be distributed via P2P just as well; where there is a need, someone will build it.

On container networking, the original Mesos solution was port mapping, which required applications to adapt to Mesos. Mesos now supports the CNI network standard initiated by Kubernetes, which solves this problem.

Through its framework mechanism, Mesos takes over some functions of existing distributed systems, making it possible for multiple distributed systems to share one resource pool; container orchestration is just one application on Mesos (Marathon). The advantage of this approach is that complex distributed systems can be integrated: because a framework is a programming interface, and therefore flexible, many complex applications that do not easily run on Kubernetes can be built on Mesos, such as Hadoop and other big-data applications.
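Container orchestration being "just an application" is visible in how Marathon is driven: an app definition is plain JSON posted to Marathon's REST API — a sketch; the values are illustrative:

```json
{
  "id": "/web",
  "instances": 3,
  "cpus": 0.5,
  "mem": 256,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx"
    }
  }
}
```

Marathon turns this into resource requests that Mesos offers alongside the requests of any other framework (Hadoop, Spark, and so on) sharing the same pool.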

Because the goals of Mesos and Kubernetes are two sides of the same coin and their capabilities complement each other, there was a very early attempt to integrate them by putting Kubernetes on top of Mesos: Kubernetes would take over container scheduling, replacing Marathon, while Mesos handled the other complex distributed applications, achieving resource sharing across all of them. But the kubernetes-mesos project fizzled out, partly because Kubernetes changed too quickly and partly, presumably, because Mesosphere found itself competing with Kubernetes more than cooperating. IBM has since developed a kube-mesos-framework in the Kubernetes incubator, but it is not yet usable. My personal feeling is that truly integrating the two still requires a great deal of work, but the project is a big variable for the ecosystem and could break the three-way standoff.

Mesosphere's DC/OS develops a package specification for distributed applications on top of Mesos. Whereas application releases used to be limited to packages for a standalone operating system, Mesosphere defines a distributed package specification through Marathon, images, and configuration files, letting users deploy a complex distributed application from a repository to Mesos with a single command. The idea is similar to Docker's Store.

To sum up, Mesos' advantage is customizability. It was born in, and is popular with, Internet companies like Twitter: such companies build most of their infrastructure and systems themselves and can require their applications to adapt to the scheduling system through internal specifications, so their need for standardization and abstraction is weaker than their need for resource sharing. Its disadvantage is that the surrounding ecosystem is relatively complex, and its interfaces and standards are not as "elegant" as Kubernetes'.

Rancher

Given three container orchestration systems, each with its own strengths, what opportunities are left? Rancher tries another approach. It defines itself as an IaaS — a container-based IaaS. First, it uses containers to reduce deployment cost: a single command per node deploys a Rancher system, far simpler than a complex IaaS like OpenStack. Second, it positions itself as an IaaS dedicated to running container orchestration systems: Kubernetes, Docker Swarm, and Mesos can each be deployed on it with one click. Running an orchestration system directly on physical machines still requires some adaptation and brings upgrade and operations difficulties, so Rancher's idea is to abstract a very thin IaaS layer to solve this.

Its thinking: since picking one of the three is a tough decision, why not pick Rancher? Rancher lets several orchestration systems coexist and reduces the cost of switching later, so the difficult decision can be deferred.

It also publishes its own application specification and App Catalog. Its catalog is compatible with the catalog definition files of the other container orchestration systems, making it a superset of their application catalogs. In this way, infrastructure-related applications can use Rancher's own specification, while fast-changing business applications that need dynamic scaling can live in a container orchestration system, avoiding trouble when switching later.

The problem with Rancher's positioning, however, is that if one container scheduling system becomes dominant in the future, its room to live will shrink.

Where are container platforms going? CaaS, IaaS, PaaS, SaaS?

Some people position Container as a Service (CaaS) as a new application service on IaaS, similar to a database service (RDS) on IaaS. In my opinion, CaaS is actually a false requirement: what users need is not containers, just as IaaS was never called VaaS (VM as a Service). A container service provides a running environment, not a specific service.

A container scheduling platform is primarily a tool platform for developers to schedule applications — the biggest difference from today's IaaS, which is aimed mainly at administrators and at scheduling resources. That is why the main entry point of IaaS is the console: although APIs exist, imperative APIs are much more complex to use than declarative ones, and their consumers are mostly operations tools; they are rarely integrated into business systems. This is dictated by the historical mission of the current IaaS cloud. The first step of the cloud was to let users migrate applications from their own machine rooms, and only by providing an environment that fully simulates the user's physical machine room (virtual machines emulating hardware to support a full-featured OS, SDN networking, cloud-disk storage), with no intrusion into applications, could migration costs stay low.

But once that step is complete, users discover that although hardware maintenance costs are saved, the cost of maintaining application development has not fallen — their essential need was never resources, but a running environment for applications. That requirement sounds like PaaS, but the problem with the original PaaS is that it is too intrusive to applications and bound to development languages, so only a limited set of applications can be deployed on it directly. What users need is a more generic PaaS — call it "Generic Platform as a Service", a term I coined. The container platform is such a platform service: virtually any complex application can be deployed on it.

Although the container platforms have different histories, focuses, and solutions, in the end they point to the same goal: shield users from the resource-management details of distributed systems, provide a standard running environment for distributed applications, and define a distributed application package — reducing the development cost of distributed systems for developers, the maintenance cost of distributed applications for users, and the distribution cost for vendors. In one phrase: a data-center OS, or distributed OS. The term DC/OS has been taken by Mesosphere for its own product, but I personally see it as the ultimate goal of all container platforms. Whether this container wave will actually reach that goal is far from certain, but at least for now it seems hopeful.

In that case, PaaS becomes a DevOps tool stack on top of the DCOS, and many PaaS platforms are evolving in that direction; IaaS becomes the service that provides the running environment for the DCOS, with the DCOS taking over the applications above and the services below; and of course other SaaS-style services remain on IaaS. (Because the boundary between SaaS and PaaS is not precisely defined — developer-facing SaaS middleware is often called a PaaS component — I will say that fully pay-as-you-go services, such as object storage, should count as SaaS. The SaaS model has the highest resource utilization and the lowest user cost, and users generally will not replace such services with independently deployed ones.) So a user deploys a DCOS on top of IaaS: independently deployed services and self-developed services run in the DCOS, while middleware that can be abstracted into standard SaaS services is preferentially provided by the IaaS. IaaS thus hands the independently deployed infrastructure services, and the deployment and scheduling of users' own services, over to the DCOS. In the end, it is not impossible that IaaS itself becomes a multi-tenant DCOS, scheduling applications directly rather than resources — but the scheduling layer would face enormous pressure and the number of nodes a data center could support would be limited, so for now two-layer scheduling looks more feasible.

This has a big impact on IaaS as a whole, which is why some say the threat to AWS may not be Google Cloud but Kubernetes (the DCOS). Even as the DCOS matures, though, how it will eventually be commercialized remains highly variable. Will the ideal server-side application marketplace — an AppStore for server software — ultimately be provided by the DCOS, by SaaS, or by IaaS vendors? The DCOS has the advantage of owning the standards and defining the application specification; SaaS providers face end users directly and may hold the entrance to enterprise applications; IaaS providers control the applications' running environment, a great advantage for both billing and anti-piracy. In the private cloud, however, IaaS products will face a bigger impact from the DCOS: where multi-tenant isolation is not a strong requirement, deploying a DCOS directly is the simpler, more efficient choice.

Containers versus virtual machines

The container-versus-virtual-machine debate has never stopped since Docker became popular, but over the past year attention has shifted to the upper layers, and the debate has largely come to an end. This is mainly because the extension of the container concept itself has changed.

A container, first of all, is not a technical term but an abstraction. As the name suggests, it is a vessel for holding things. And what does it hold? Applications.

In fact, the concept of the container has long been used in the J2EE field: a J2EE server is also called a web container or application container, and its goal is to provide a standardized running environment for Java web applications — easy to develop, deploy, and distribute — though that container is bound to its platform.

The Linux container, since its inception, has been a collection of process-isolation techniques. With the competition and evolution in the container field — the OCI (Open Container Initiative) standard, Docker, Unikernels, CoreOS's Rocket (rkt), the Mesos Universal Container, Hyper's HyperContainer (a VM-based container solution), and so on — the concept of the container has gradually become clear, and we can define it:

A container is a standardized encapsulation of an application process. Whether that encapsulation is achieved with Linux cgroup/namespace technology or with hypervisor-based virtualization is not the point; the point is the goal: the container is born for application standardization.

That is to say, the container was at first regarded as a lighter-weight isolation technique than the virtual machine, so people compared the two on isolation overhead and isolation security. But now the extension of the container has changed: its essential difference from the traditional virtual machine lies not in the encapsulation technique but in the encapsulation target. The traditional virtual machine encapsulates an operating system, providing it with a virtualized hardware environment; the container encapsulates an application process, and the operating system embedded in the image is merely a dependency of the application.

On the other hand, scheduling systems based on virtual machines, such as OpenStack, can also use containers as the compute driver of their own scheduling systems — that is, treat containers as virtual machines to reduce isolation cost. So the battle between containers and virtual machines is essentially a battle of scheduling philosophies, not of isolation technologies.

Impact of containers on operating systems

Before the emergence of Docker, the server-side operating system was basically dominated by the traditional Linux vendors, with Ubuntu managing to win a place on the momentum of its desktop edition. A start-up basically had no chance to enter this field, but the emergence of Docker brought an opportunity: an operating system must now consider whether its goal is to manage hardware or to manage applications. If it manages hardware, that is, it runs on the physical machine, its goal is to provide the kernel and manage the hardware (disks, network, and so on), while the application layer is delegated to containers; the hardware-layer operating system then no longer needs to worry about compatibility with every application software stack, which reduces its maintenance cost. If it manages applications, it can give up managing the hardware layer and focus on maintaining the software stack. That is why several startups have appeared in this space:

  1. The main goal of CoreOS's Container Linux and Rancher's RancherOS is to take over the hardware, freeing the host operating system from the complex application software stack so it can concentrate on managing hardware and supporting containers. Traditional Linux distributions pay a significant cost to maintain compatibility with the software stack above them, and one of their advantages is comprehensive stack compatibility testing. Once the two are decoupled, that advantage disappears, and the core of the competition becomes who can provide better support for containers and for system and kernel upgrades.
  2. Alpine Linux takes the opposite approach: it is a stripped-down Linux distribution that focuses on maintaining the software stack and providing a runtime environment for applications inside the container. It can therefore be very small: a few megabytes by default, and only a little over ten megabytes even with bash included. That is why Docker officially adopted it as the default base image.
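To illustrate how small an Alpine-based image stays, here is a minimal Dockerfile sketch (the `3.4` tag is just an example of a release current at the time; `apk add --no-cache` installs a package without keeping the package index in the layer):

```dockerfile
# Start from the few-megabyte Alpine base image.
FROM alpine:3.4

# apk is Alpine's package manager; --no-cache keeps the layer small
# by not storing the package index on disk.
RUN apk add --no-cache bash

CMD ["bash"]
```

The resulting image carries only the application's actual dependencies, which is exactly the "operating system as a dependency of the application" idea described earlier.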

The evolution and replacement of operating systems is a long-term undertaking, usually paced by hardware obsolescence, so this shift will play out over at least five to ten years; it is not urgent, but it is foreseeable.

Thinking about technology entrepreneurship

Technology startups need to think clearly about whether their opportunity lies in the market or in the technology ecosystem. Of course, the former is easier than the latter: an ecosystem position must eventually become a marketable product to make money, and even a firm position in the tech ecosystem does not guarantee a business model, but the latter has more long-term potential. Domestic startups generally lean toward the former, while foreign startups lean toward the latter; as a result, domestic competition is fiercely homogeneous, and there are few differentiated contributions to ecosystem building. This resembles the consumer (2C) field, where successful cases of the C2C (Copy to China) model have become fewer and fewer in recent years. The 2B field will certainly need a few more years to get there, but it does not feel too far off.

In my opinion, competition in the container-scheduling field will rise to the application level in 2017: multi-language application frameworks on DCOS (such as microservice frameworks and Actor-model frameworks), the containerization and standardization of existing applications and infrastructure, the transformation of existing applications into Cloud Native applications, and so on.

Of course, there is a gulf between the ideal and reality. Wave after wave of technological change is essentially an attempt to free development and operations people from trivial, repetitive, and boring tasks so they can focus on creative work. But the hardest things to change are people's ways of thinking and working, and organizational obstacles. You say the cloud should be on-demand, dynamic, and self-service, but the user's operations department likes to approve developers' resource requests, so the cloud becomes a virtual machine distribution system. You say resources should be dynamically scheduled and that co-locating applications can improve resource utilization, but the user's security department requires application services to be layered, with each layer deployed on dedicated servers and ports for cross-layer calls requiring approval. So before a technological change can finally grow into a business model, it must first spread as an idea and change users' ways of thinking and their organizational systems; this cannot happen overnight and is a long, iterative process. It must also wait to be driven by business demand: when business growth pushes software complexity past a certain scale, people naturally turn to new technology to tame that complexity.

Conclusion

After all this writing, some readers may say: I am still not sure which one to choose. The future cannot be predicted, but it can be created. The moment you make a choice, the future you want moves one step closer to you.

Related links


  1. Kubernetes architecture analysis, which I wrote at the end of 2015

  2. Mesos architecture analysis, which I wrote in 2016

  3. What is Kubernetes? The official Kubernetes introduction

  4. The official Kubernetes ZooKeeper deployment example

  5. The official CoreOS Operator introduction

  6. The kubernetes-mesos project

  7. The kube-mesos-framework project

  8. The HyperContainer official website