Needless to say, application development is a rapidly evolving industry, especially in recent years. From web and mobile to cloud, containers, DevOps, big data, the "big front end", VR, blockchain, and artificial intelligence, we seem to face a never-ending stream of new technologies. It's also clear that programmers, as a community, are passionate about pursuing newer, cooler, more powerful, more elegant technologies. Whether this is a good thing or a bad thing is hard to assess comprehensively; we seem to have simply accepted it as a truism.
But lately, I've been hearing some unusual voices from developers with different technical backgrounds and different perspectives, and they all seem to point to the same conclusion: some technologies considered "old" are actually undervalued, while some new technologies that many developers have embraced may not deliver the productivity gains expected; the "old" technologies can often achieve the same results. At any rate, these are the actual experiences of real developers, and I want you to hear their voices.
In my career, no skill has been more useful than SQL!
- The original English
- InfoQ translation
The author is the head of a PostgreSQL cloud service provider. His core points are as follows:
I've learned many skills over the course of my career, but none more useful than SQL. In my opinion, SQL is the most valuable skill because:
- It is valuable for different professional roles and disciplines;
- Once you have learned it, you do not need to learn it again.
- It makes you look like a superhero. Once you master it and others don’t, you are especially powerful.
My opinion: I partly agree with the author. One interesting trend I've observed over the years is that various NoSQL technologies start out by differentiating themselves from traditional RDBMSs, but as the products evolve, they eventually find themselves needing SQL (or something very like it). However, I think SQL has a problem of its own: as a technology designed decades ago, it says nothing about how to perform distributed execution, something that data platforms have been working on from Google's early papers to the present day. Moreover, there seems to be a nasty tendency for every company to claim it is doing big data; I believe that many small-business data problems can be solved at the plain-SQL level.
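To make that last point concrete, here is a minimal sketch (using Python's built-in sqlite3 module and an entirely hypothetical orders table) of the kind of "analytics" question that needs no big-data platform at all, just one SQL query:

```python
import sqlite3

# Hypothetical small-business dataset: a few orders in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("alice", 120.0), ("bob", 80.0), ("alice", 45.0), ("carol", 200.0)],
)

# A typical reporting question answered with a single SQL statement:
# total revenue per customer, highest first.
rows = conn.execute(
    "SELECT customer, SUM(amount) AS total "
    "FROM orders GROUP BY customer ORDER BY total DESC"
).fetchall()
print(rows)  # [('carol', 200.0), ('alice', 165.0), ('bob', 80.0)]
```

The same `GROUP BY`/`ORDER BY` query runs unchanged against PostgreSQL or almost any other SQL engine, which is precisely the kind of durability the author is praising.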
One interesting example of the author’s point is the TIOBE Programming Language Trends Report.
As you can see from the chart, each of the top 20 languages has had its ups and downs, but SQL has remained remarkably stable since 2004, barely budging at all. TIOBE doesn't publish much detail on why SQL maintains such a stable score, but it does go some way toward supporting the author's conclusion. In the long run, SQL should probably be seen as a very stable and reliable investment with a modest rate of return. Before you dive into the intricacies of various data technologies, you might want to learn a name that's been around for 50 years: SQL.
Why is TypeScript not suitable for large projects?
- The original English
- InfoQ translation
Despite the title, reading through the article you can see that the author actually likes TypeScript and has used it at scale in production. He evaluated the results of the project and concluded:
I’ve gained a deeper understanding of TypeScript’s benefits, costs, and drawbacks. I would say that it was not as successful as I had hoped. I won’t use TypeScript in another large project unless it improves a lot.
In other words, TypeScript isn't worthless; it's just not as valuable as expected, and the cost of introducing it makes the investment less worthwhile.
What interests me is the author's method of project evaluation: he rates the impact of introducing TypeScript across development tooling, API documentation, refactoring, training, recruiting, and more. The method is a useful reference, and the author also gives a detailed analysis of each metric, which is worth a look. He also makes a valuable (and possibly controversial) point: TypeScript's emphasis on type safety is actually less effective at reducing bugs than one might expect, while more traditional techniques, including unit tests and code reviews, cover problems more thoroughly. To quote again:
TypeScript supporters often talk about the benefits of type safety, but there is little evidence that type safety has a significant impact on bug density. This matters because code reviews and TDD do make a big difference (40 to 80 percent for TDD alone). Combining TDD with design reviews, specification reviews, and code reviews has been shown to reduce bug density by more than 90%. Some of these processes (TDD in particular) catch many bugs that TypeScript cannot, in addition to the ones it does.
I suspect the author's conclusion will sting TypeScript advocates a bit. Personally, though, I definitely agree with the author that unit testing and code review should play a bigger role in development projects than the improvements brought by various new technologies.
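The gap between type safety and correctness is easy to demonstrate. Here is a hedged sketch (hypothetical `average` functions, Python type annotations standing in for a static type system) of a bug that any type checker waves through but a single unit test catches immediately:

```python
def average_buggy(values: list[float]) -> float:
    # Fully type-annotated and accepted by a static type checker,
    # yet it divides by the wrong count: a logic bug, not a type error.
    return sum(values) / (len(values) + 1)

def average(values: list[float]) -> float:
    # The TDD-style fix: a test on a known input exposes the bug.
    return sum(values) / len(values)

# A type checker accepts both versions; only the test distinguishes them.
assert average_buggy([2.0, 4.0]) != 3.0  # the bug slips past the type system
assert average([2.0, 4.0]) == 3.0        # behavior pinned by the test
```

The types say both functions map a list of floats to a float, and that is all they say; what the function actually computes is exactly what tests and reviews exist to check.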
Docker: A Big Gamble You'll Regret
The main point of this article is that betting everything on Docker is dangerous and unreasonable. The author favors another path: deploying programs as so-called fat binaries or uber-JARs (languages best suited to this pattern include Java, Go, Clojure, etc.) to minimize dependency problems. In this scenario, traditional O&M tools such as Chef, Puppet, and Ansible, rather than Docker, can still manage servers in an orderly way and avoid Docker's problems. At the same time, the author believes the orchestration brought by Kubernetes is still a positive, but he raises the question: does orchestration have to be container-based? In other words, does it have to be Docker-based?
The text is long and wide-ranging, and may require repeated readings to understand (I only became confident I had understood it after reading it a third time). In addition, this article is a follow-up to the author's earlier piece, Why Would Anyone Choose Docker over Fat Binaries?, so reading that first will help you better understand the context.
- The original English
- InfoQ translation
- The author's previous article (no Chinese translation)
To be honest, before reading this article, although I knew Docker had some flaws, I did not understand what those flaws meant from a systems perspective. In that sense, even though I don't agree with some of the author's final conclusions, I still think his perspective is worth understanding.
The author offers the following advice:
Docker is best thought of as an advanced optimization. Yes, it’s cool and quite powerful, but it also adds complexity to the system, and only a professional system administrator can understand how to safely use Docker in production and deploy it in mission-critical systems.
For now, you need considerable systems expertise to use Docker. In fact, almost all Docker-related articles show overly simplistic use cases and ignore the complexity of using Docker on multi-host production systems. This has given people the wrong impression that Docker is easy to use in real production.
There is an old adage in computer programming that "premature optimization is the root of all evil." Yet most of our customers this year have insisted that "we have to fully Docker-ize from the ground up." So instead of first building a working system, putting it into production, and then evaluating whether Docker brings comparative advantages, customers began blindly pushing for Docker-standardized development and deployment.
In addition, the article cites another developer who is even more critical of Docker because of the many major issues he encountered running Docker in production, to the point where he advises against deploying Docker in any serious setting. That view may be too radical, but it is drawn from front-line production practice, and it includes some inside detail on the evolution of Docker's file systems and interfaces, which is well worth a look.
Docker in Production: A History of Failure
Afterword
There is no doubt that new technologies have new value, and I don't mean to discourage developers from innovating and changing. But as you chase new technologies, don't forget our "old" technologies, which are there whether you use them or not.
Finally, I'd like to quote a passage about old technology from Docker: A Big Gamble You'll Regret, because it really stuck with me:
The nice thing about boring (highly constrained) technology is that what it can do is relatively easy to understand. More importantly, its failure modes are much easier to understand. … But with shiny new technology, there are far more unknowns, and that matters.
In other words, software that’s been around for a decade is easier to understand and has fewer unknowns. Fewer unknowns means lower operating costs, which is never a bad thing.