Good Programmers
It is hard to define what makes a good programmer. Strong abstract thinking, solid hands-on skills, a drive for efficiency, a fondness for automation, a willingness to keep learning, high standards for code quality: each of these dimensions has its merit, but they are all somewhat abstract and subjective.
(Photo: http://t.cn/R6I1yhJ)
I have my own yardstick for a good programmer: how familiar with, and fond of, the command line he or she is. In my experience this is a reliable indicator of a good (or potentially good) programmer. I have worked with many excellent programmers, and all of them are very proficient on the command line. What does it mean to know the command line well? Simply put, it means being able to do 90% of your daily work from the command line.
Of course, the love of and habit of using the command line may be only the surface; what lies beneath it is the substance of what makes a good programmer good.
Automation
Larry Wall, the inventor of Perl, famously said:
The three chief virtues of a programmer are: Laziness, Impatience and Hubris. — Larry Wall
Laziness tops the list of the programmer's three great virtues: it drives programmers to automate their daily tasks instead of wasting time on repetitive work. By contrast, it has to be said that GUI applications are inherently hard to automate (no offense intended; GUIs target a completely different audience).
(Photo: http://t.cn/R6IBgYV)
A GUI is about direct interaction with a human being: information is presented visually, in layers, with visual elements guiding the user, until the system finally performs the actual processing in the background and presents the result, again visually.
This emphasis on interaction makes automation very difficult. Moreover, precisely because a GUI is designed for interaction, it cannot run too fast: it must at least leave the operator time to react (some user actions even require artificial delays to improve the user experience).
A programmer’s day job
Programmers do more than write code: there is automated testing, infrastructure configuration and management, continuous integration/continuous release environments, and in some teams even operations work (production monitoring, environment monitoring, and so on).
- Development/testing
- Infrastructure management
- Continuous integration/continuous release
- Operation and maintenance (monitoring) work
- Entertainment
Behind this series of tasks lies a hidden demand for automation. A good programmer tries to automate all of it: using existing tools where they exist, and writing new ones where they don't. This philosophy of making everything as automated as possible has its roots in the UNIX world.
The UNIX philosophy finds its practical embodiment on the command line.
Where there is a shell, there is a way.
UNIX Programming Philosophy
There are several versions of the UNIX philosophy; here is one of the more detailed lists. The versions differ, but they agree on a great deal:
- Small is beautiful
- Let the program do only one thing
- Prototype as early as possible (and evolve)
- Data should be saved as a text file
- Avoid user interfaces that are not customizable
Looking at these items, we can see that they all contribute to making automation of everything possible. Below are some small examples of how command-line tools, built on this philosophy, simplify work and raise productivity. Once you have mastered them you will not want to live without them, let alone go back to inefficient, cumbersome GUI tools.
How does the command line improve efficiency
An advanced calculator
One of the most exciting books I read early in my programming career was The UNIX Programming Environment, a relatively niche title compared with other classics on UNIX fundamentals. By the time I was a sophomore, the book had been in print for almost 22 years (the Chinese edition for 7), and some of its content was dated: main functions with no return value, old-style (K&R) parameter lists, and so on. Even so, working through the entire development of HOC (High Order Calculator) left a deep impression on me.
In short, developing the HOC language requires several components:
- A lexical analyzer (lex)
- A parser (yacc)
- The standard math library (stdlib)
There are also some custom functions and so on, and everything is finally linked together by make. I followed the book and typed in all of its code, all on a very old IBM ThinkPad T20, and all on the command line (while listening to music on the command line, of course).
It was also the first time I was completely blown away by the UNIX philosophy:
- Each tool does only one thing and does it well
- Tools can work together
- Everything text-oriented
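A classic illustration of these principles in action is Doug McIlroy's word-frequency pipeline: each small tool does one thing, and plain text flows between them. A minimal sketch (the file name is arbitrary):
# top 5 most frequent words in a text file: split into words,
# lowercase, sort, count duplicates, sort by count, take the top
$ tr -cs 'a-zA-Z' '\n' < input.txt | tr 'A-Z' 'a-z' | sort | uniq -c | sort -rn | head -5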
Here's the Makefile from the book: with a simple configuration, it brings together a group of small tools, each doing its own job, to preprocess, compile, link, and produce the binary of a programming-language implementation.
YFLAGS = -d
OBJS = hoc.o code.o init.o math.o symbol.o

hoc5:	$(OBJS)
	cc $(OBJS) -lm -o hoc5

hoc.o code.o init.o symbol.o:	hoc.h

code.o init.o symbol.o:	x.tab.h

x.tab.h:	y.tab.h
	-cmp -s x.tab.h y.tab.h || cp y.tab.h x.tab.h

pr:	hoc.y hoc.h code.c init.c math.c symbol.c
	@pr $?
	@touch pr

clean:
	rm -f $(OBJS) [xy].tab.[ch]
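For the curious, here is approximately what make does behind the scenes when building hoc5 from this Makefile (via its built-in yacc rules; exact flags and intermediate file names vary by system):
$ yacc -d hoc.y                                          # YFLAGS = -d: produce y.tab.c plus the token header y.tab.h
$ cc -c y.tab.c code.c init.c math.c symbol.c            # compile each translation unit
$ cc y.tab.o code.o init.o math.o symbol.o -lm -o hoc5   # link with the math library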
Although much of the book is now out of date (it was first published nearly 30 years ago), it is still worth reading for anyone interested. There is also a small lex/yacc example for the curious.
Of course, if you use one of today's most advanced IDEs (typical GUI tools), the same principle applies: behind the scenes the IDE generates a Makefile and then calls it for you.
Infrastructure automation
During development, engineers also need to care about the environment the software runs in. When I first started learning Linux as a student, I would install the virtual machine software VMware on my Windows machine and then install Red Hat Linux 9 inside it.
(Photo: http://t.cn/R6IBSAu)
This way, whenever I accidentally messed Linux up, I just reinstalled it without affecting my other data (coursework, documents, and so on). However, every reinstall meant finding the ISO image file, mounting it on a local virtual drive, and then installing from it in VMware.
And all of this happens in the GUI, with many repetitive steps each time: finding the image file, mounting it with virtual-drive software, starting VMware, installing Linux, configuring personal preferences, setting user names/passwords, and so on. Even once proficient, I needed 30-60 minutes to install and configure a new environment.
Vagrant
Vagrant, which I discovered later, lets developers describe a machine in a configuration file and then install and start it from the command line. For example, this configuration:
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "precise64"
  config.vm.network "private_network", :ip => "192.168.2.100"
end
It defines a virtual machine using an Ubuntu Precise 64 image and configures the network address 192.168.2.100 for it. Once it is defined, all I need to do is:
$ vagrant up
My machine can be installed in a matter of minutes, and because this action happens on the command line, I can do the same thing in a continuous integration environment – with just one command. The definition file can be shared across the team and put into version control, and any member of the team can have an environment identical to mine within minutes.
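The rest of the machine's life cycle is just as scriptable, using standard Vagrant subcommands:
$ vagrant ssh                       # log into the running VM
$ vagrant halt                      # shut it down
$ vagrant destroy -f && vagrant up  # throw it away and rebuild an identical machine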
Ansible
In general, a brand-new operating system is of little use to a software project by itself. For the application to run we need much more: a web server, a Java environment, CGI paths, and so on. Besides installing software there is plenty of configuration work to do, such as the Apache httpd server's document root, the JAVA_HOME environment variable, and so on.
(Photo: http://t.cn/R6IBZKm)
Once these tasks are done, an environment is ready. I remember that on my last project I accidentally deleted the test environment's Tomcat directory, and it took a colleague three or four hours to restore the environment (reinstalling Tomcat, configuring JAVA_OPTS, redeploying the application, and so on).
Fortunately, there are many tools that help developers automate environment setup: Chef, Puppet, Ansible. With some simple configuration and a single command-line invocation, the whole process can be automated:
- name: setup custom repo
  apt: pkg=python-pycurl state=present

- name: enable carbon
  copy: dest=/etc/default/graphite-carbon content='CARBON_CACHE_ENABLED=true'

- name: install graphite and deps
  apt: name={{ item }} state=present
  with_items: packages

- name: install graphite and deps
  pip: name={{ item }} state=present
  with_items: python_packages

- name: setup apache
  copy: src=apache2-graphite.conf dest=/etc/apache2/sites-available/default
  notify: restart apache

- name: configure wsgi
  file: path=/etc/apache2/wsgi state=directory
The configuration above describes installing Graphite and Carbon, configuring Apache, and a good deal of other manual labor. The developer now only needs to run the playbook (assumed here to be saved as playbook.yml):
$ ansible-playbook playbook.yml
and the whole process is automated. Now if I accidentally delete Tomcat, it takes only a few minutes to configure a new one, and the whole process is automatic. This is completely unimaginable in a GUI, especially in a scenario with this much customization.
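Re-running is cheap and safe, too: Ansible tasks are idempotent, so only the missing pieces are changed, and a dry run can preview the work first (the inventory file hosts is illustrative):
$ ansible-playbook -i hosts playbook.yml --check   # dry run: report what would change
$ ansible-playbook -i hosts playbook.yml           # apply; already-satisfied tasks are skipped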
Continuous integration/continuous release
In addition to writing code and configuring environments, the other big part of daily development work is continuous integration/continuous release. With the help of the command line, this work too can be made highly efficient and automated.
Jenkins
Continuous integration/continuous release has become standard equipment in enterprise IT. Teams use the CI environment to compile code, run static checks, execute unit tests and end-to-end tests, generate reports, package artifacts, deploy to test environments, and more.
Take Jenkins as an example: in earlier releases, configuring a build task required lots of GUI operations, but in newer releases most of those operations can be scripted.
In this way automation becomes possible: copying an existing Pipeline or modifying configurations, commands, and variables no longer requires pointing and clicking. Moreover, the code can live in the project's code base, managed and maintained together with the rest of the code, with a change history that is easy to track and roll back (on GUIs, especially web-based ones, rollback is almost impossible).
node {
    def mvnHome
    stage('Preparation') { // for display purposes
        git 'https://github.com/jglick/simple-maven-project-with-tests.git'
        mvnHome = tool 'M3'
    }
    stage('Build') {
        sh "'${mvnHome}/bin/mvn' -Dmaven.test.failure.ignore clean package"
    }
    stage('Results') {
        junit '**/target/surefire-reports/TEST-*.xml'
        archive 'target/*.jar'
    }
}
The above Groovy script defines three stages, each with its own commands. This code-driven approach is clearly more efficient than editing in a GUI, and it makes automation possible.
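By convention the script lives in a file named Jenkinsfile at the root of the repository, checked in like any other code, which is exactly what gives it the trackable history mentioned above:
$ git add Jenkinsfile
$ git commit -m "define build pipeline as code"
$ git log --oneline -- Jenkinsfile   # the pipeline's full change history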
Operational work
Automatic monitoring
Graphite is a powerful monitoring tool, but the idea behind it is simple:
- Store timeline-based data
- Render the data into graphs and refresh periodically
Users only need to regularly send data to Graphite in a certain format, and Graphite does the rest. For example, it can consume data like this:
instance.prod.cpu.load 40 1484638635
instance.prod.cpu.load 35 1484638754
instance.prod.cpu.load 23 1484638812
The first field is the metric name: instance.prod.cpu.load, for example, means the CPU load of the prod instance. The second field is the value, and the last is the timestamp.
In this way, Graphite plots all the values under the same name in chronological order.
(Photo: http://t.cn/R6IxKYL)
By default, Graphite listens on a network port to which users send data; it persists whatever arrives and refreshes the graphs periodically. In short, sending a data point takes just one command:
$ echo "instance.prod.cpu.load 23 `date +%s`" | nc -q0 graphite.server 2003
`date +%s` generates the current timestamp, which echo splices into a complete string such as:
instance.prod.cpu.load 23 1484638812
The pipe (|) then hands the string to nc, which sends it over the network to port 2003 on graphite.server, where the data point is recorded.
Scheduled tasks
If we want data sent to graphite.server automatically at regular intervals, we just need to build on this command to:
- Get the current CPU load
- Get the current timestamp
- Combine the two into a single string
- Send it to port 2003 on graphite.server
- Repeat steps 1-4 every 5 minutes
Getting the current CPU load is easy on most systems:
$ ps -A -o %cpu
The parameters:
- -A: count all current processes
- -o %cpu: display only the value of the %cpu column
This prints the CPU usage of each process:
%CPU
12.0
8.2
1.2
...
The next step is to add up the figures. This can easily be done with the awk command:
$ awk '{s+=$1} END {print s}'
For example, to calculate the sum of 1, 2, and 3:
$ echo "1\n2\n3" | awk '{s+=$1} END {print s}'
6Copy the code
The two can be connected through a pipeline:
$ ps -A -o %cpu | awk '{s+=$1} END {print s}'
Let’s test the effect:
$ ps -A -o %cpu | awk '{s+=$1} END {print s}'
28.6
Finally, the whole thing goes into a script that crontab will call periodically:
#!/bin/bash
SERVER=graphite.server
PORT=2003
LOAD=`ps -A -o %cpu | awk '{s+=$1} END {print s}'`
echo "instance.prod.cpu.load ${LOAD} `date +%s`" | nc -q0 ${SERVER} ${PORT}
Of course, with a UI-focused tool like Grafana it is easy to do something even cooler:
(Photo: http://t.cn/R6IxsFu)
Think about how to do this with GUI applications.
Entertainment
Command line MP3 player
In the early days there was a command-line tool called mpg123 for playing MP3 files. It was not free software, though, so someone wrote mpg321, essentially an open-source clone of mpg123. (mpg123 itself later went open source, but that is another story.)
I save the paths of all my MP3 files into a single file, which serves as my playlist:
$ ls /Users/jtqiu/Music/*.mp3 > favorites.list
$ cat favorites.list
...
/Users/jtqiu/Music/Rolling In The Deep - Adele.mp3
/Users/jtqiu/Music/Wavin' Flag - K'Naan.mp3
/Users/jtqiu/Music/Blue Lotus - Xu Wei.mp3
...
Then I hand the playlist to mpg321 to play in the background:
$ mpg321 -q --list favorites.list &
[1] 10268
This way I can listen to music while writing code. When I get bored, I bring the background task to the foreground with fg, and then I can turn it off:
$ fg
[1] + 10268 running mpg321 -q --list favorites.list
Summary
In all the work above, good programmers can multiply their productivity (sometimes by orders of magnitude) with the help of the command line, giving themselves more time to think, to learn new skills, or to develop new tools that automate yet another task. That is what makes good programmers good. Manual-oriented, primitive graphical interfaces slow this down, drowning much of what could be automated in the "simplicity" of GUIs.
(image from: http://cargocollective.com/)
Finally, the point of this article is the relationship between good programmers and the command line, not a contest between GUI programs and the command line. GUI programs certainly have their place: 3D modeling, GIS systems, design tools, graphic word processors, movie players, web browsers, and so on.
It would be more accurate to say that the command line and good programmers are correlated rather than causally linked; a programmer's daily work simply presents more scenarios that call for command-line tools. If you take it to the extreme and force the command line into situations it does not suit, sacrificing efficiency along the way, you are overdoing it.
For more insights, follow our WeChat official account: Sitvolk