We picked up notions of good and bad programming early on, without realizing how important they were. Knowledge that can immediately improve your programming and help you write "good" code includes the following:

• The five basic principles of object orientation;
• Three common architectures;
• Drawing diagrams;
• Choosing good names;
• Optimizing nested if else code;

Of course, breadth in other technical knowledge also shapes the quality of a program's design. For example, a message queue can be introduced to absorb the performance gap between two ends, and a cache layer can be added to improve query efficiency. Let's look at what the points listed above cover and how they can improve your code and programming.

Five basic principles of object orientation

The author is a member of the class of 2010, and object orientation is the programming paradigm the author grew up on. Its five basic principles are:

• Single responsibility principle;
• Open and closed principle;
• Dependency inversion principle;
• Interface isolation principle;
• Composite reuse principle;

Let's look at the impact of these five principles on code quality through comparisons and hypothetical scenarios.

The single responsibility principle: immediate results

Yes, the results are immediate and remarkable. Those of us who taught ourselves to program focused on getting features working, with no time to consider code optimization or maintenance costs. Only after being around programming for a long time did I realize how important this is.

As the saying goes, as long as the code is bad enough, the improvement will be obvious. Take the example of matching key data from a file's contents and making a network request based on the match, and see how many programmers would write it:

import re
import requests


FILE = "./information.fet"


def extract(file):
    fil = open(file, "r")
    content = fil.read()
    fil.close()
    find_object = re.search(r"url=(\S+)", content)
    find = find_object.group(1)
    text = requests.get(find).text
    return text


if __name__ == "__main__":
    text = extract(FILE)
    print(text)

The requirement has been met, of course, but here’s the problem:

• What if an exception occurs while reading the file?
• What happens if the data source changes?
• What if the data returned by the network request does not meet the final requirements?

If your first thought is to change the code, pay attention. When requirements change mid-task, changing the code is inevitable, but written this way, not only the code but also the logic will get messier with each change. The single responsibility principle expresses the idea that a function should do only one thing, rather than cramming several things into one function.

If the above code is redesigned, I think it should at least look like this:

def get_source():
    """Read the raw content from the data source."""
    return


def extract_(val):
    """Match the key data in the content."""
    return


def fetch(val):
    """Make the network request based on the match."""
    return


def trim(val):
    """Trim the response into the final result."""
    return


def extract(file):
    source = get_source()
    content = extract_(source)
    text = trim(fetch(content))
    return text


if __name__ == "__main__":
    text = extract(FILE)
    print(text)

The steps that were crammed into a single function are broken up into smaller functions, each of which does only one thing. When the data source changes, you only need to change the code associated with get_source; if the data returned by the network request does not meet the final requirements, you can adjust it in the trim function. As a result, the code copes with change far better, and the flow becomes clearer and easier to understand. The before-and-after changes are as follows:

At the heart of the single responsibility principle is decoupling and enhanced cohesion. If a function takes on too many responsibilities, those responsibilities are coupled together, and the coupling leads to a fragile design: when changes arrive, the original design can suffer unexpected damage. The single responsibility principle breaks the work into multiple steps, so each code change has very little impact.
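As a hedged sketch of where that redesign can end up (the function names, the regex, and the stubbed network call are illustrative, not from the original), each responsibility becomes small enough to test on its own:

```python
import re


def get_source(path):
    # Read the raw content; only this function knows about the file system.
    with open(path, "r") as f:
        return f.read()


def extract_key(content):
    # Match the key data; only this function knows the pattern.
    match = re.search(r"url=(\S+)", content)
    return match.group(1) if match else None


def fetch(url):
    # The network request lives here; stubbed out in this sketch.
    return "response for " + url


def trim(text):
    # Shape the response into the final result.
    return text.strip()
```

Because the pieces are isolated, a change of data source touches only get_source, and a change in the response format touches only trim.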

The open and closed principle and dependency inversion principle: soaring code stability

In the open and closed principle, open means open to extension, and closed means closed to modification. Requirements always change: one month the business wants the data stored in MySQL, the next month it wants it exported to Excel, and each time you have to change your code. The scenario resembles the single responsibility example above in that it also faces code change, but while that example reduced the impact of change through decoupling, this one improves the program's ability to cope with change, and thus its stability, by staying open to extension and closed to modification.

How to understand the word stability?

Code that changes little or not at all is considered stable, and stable means that other code calling this object gets results that are deterministic and reliable overall.

The code for a data store would look something like this:

class MySQLSave:

    def __init__(self):
        pass

    def insert(self):
        pass

    def update(self):
        pass


class Business:
    def __init__(self):
        pass

    def save(self):
        saver = MySQLSave()
        saver.insert()

There is no doubt the functionality can be implemented. Now consider how it copes with change: if you switch the storage, you need to change the code. Following the example above, there are two options:

• Write a new ExcelSave storage class;
• Modify the MySQLSave class;

Either way, two classes change, because not only the storage class but also the calling code must be modified. Both are therefore unstable. Implemented differently, following the dependency inversion guideline, this problem is easily addressed. Read the code along with the explanation:

import abc


class Save(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def insert(self):
        pass

    @abc.abstractmethod
    def update(self):
        pass


class MySQLSave(Save):

    def __init__(self):
        self.classify = "mysql"
        pass

    def insert(self):
        pass

    def update(self):
        pass


class Excel(Save):
    def __init__(self):
        self.classify = "excel"

    def insert(self):
        pass

    def update(self):
        pass


class Business:
    def __init__(self, saver):
        self.saver = saver

    def insert(self):
        self.saver.insert()

    def update(self):
        self.saver.update()


if __name__ == "__main__":
    mysql_saver = MySQLSave()
    excel_saver = Excel()
    business = Business(mysql_saver)

An abstract base class is implemented with the built-in abc module. Its purpose is to force subclasses to implement the required methods, unifying the subclasses' interfaces. Once the interfaces are unified, whichever subclass is called, it is stable: the caller never needs to change a method name or the parameters passed.

The inversion in dependency inversion refers to inverting the direction of the dependency. In the earlier code, the caller Business depended on the MySQLSave object, and once MySQLSave needed replacing, Business had to change. Previously the dependency pointed at a concrete entity; once the dependency is abstracted, the situation is reversed:

Having the concrete Business depend on an abstraction has one big advantage: abstractions are stable, more stable than mutable concrete classes. The dependencies change significantly before and after: originally Business depended directly on the concrete MySQLSave, while after the inversion Business, ExcelSave, and MySQLSave all depend on the abstraction.

The benefit is that if you need to replace the storage, you only create a new storage class and pass it in when calling Business. Business's own code never changes, in line with the open and closed principle: closed for modification and open for extension.

The implementation of dependency inversion uses a technique called dependency injection. In fact, dependency injection alone, without dependency inversion, can also satisfy the open and closed principle. Interested readers may wish to try it.
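As a quick sketch of that exercise (the CsvSave class and its return values are my own illustration, not from the original): inject the concrete saver with no abstract base class at all, and Business still never changes when the storage is swapped, as long as every saver follows the same implicit insert/update protocol:

```python
class CsvSave:
    # A concrete saver with no abstract base class; it simply follows
    # the same implicit protocol (insert/update) as the other savers.
    def insert(self):
        return "csv insert"

    def update(self):
        return "csv update"


class Business:
    def __init__(self, saver):
        # Dependency injection: the saver is passed in, never created here,
        # so Business needs no modification when the storage is swapped.
        self.saver = saver

    def insert(self):
        return self.saver.insert()

    def update(self):
        return self.saver.update()
```

The trade-off is that without the abstract base class nothing forces a new saver to implement the full protocol; that guarantee is exactly what dependency inversion adds.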

The picky interface isolation principle

The interfaces in the interface isolation principle are not the Restful interfaces of web applications, though in practice the two can be treated as analogous. At the design level, the interface isolation principle serves a purpose similar to the single responsibility principle. Its guiding rules are:

• A caller should not be forced to depend on interfaces it does not need;
• Dependencies should be built on the smallest possible interfaces;

In other words, slim the interface down: an interface with too many functions can be optimized by splitting it. For example, suppose we design an abstract class for a library book:

import abc


class Book(metaclass=abc.ABCMeta):
    @abc.abstractmethod
    def buy(self):
        pass

    @abc.abstractmethod
    def borrow(self):
        pass

    @abc.abstractmethod
    def shelf_off(self):
        pass

    @abc.abstractmethod
    def shelf_on(self):
        pass

Books can be bought, borrowed, taken off the shelf, and put on the shelf, which seems fine. But this abstraction only suits administrators; the user side needs a different abstract class, because users cannot be allowed to put books on or take them off the shelf. The interface isolation principle recommends splitting shelving and buying/borrowing into two abstract classes: the management-side book class inherits both, while the client-side book class inherits only one. This may sound convoluted, but don't panic; the diagram makes it clear:

Doesn't that make it immediately clear? This guideline matters not only for designing abstract interfaces; it also guides the design of Restful interfaces and helps us spot problems in existing ones, so we can design more reasonable programs.
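The split can also be sketched in code (the class names TradableBook, ShelvableBook, UserBook, and AdminBook are my own, chosen for the sketch): each abstract class stays minimal, and only the management-side class inherits both:

```python
import abc


class TradableBook(metaclass=abc.ABCMeta):
    # Minimal interface for readers: buying and borrowing only.
    @abc.abstractmethod
    def buy(self):
        pass

    @abc.abstractmethod
    def borrow(self):
        pass


class ShelvableBook(metaclass=abc.ABCMeta):
    # Minimal interface for administrators: shelving operations.
    @abc.abstractmethod
    def shelf_on(self):
        pass

    @abc.abstractmethod
    def shelf_off(self):
        pass


class UserBook(TradableBook):
    # Client-side book: depends only on the interface it needs.
    def buy(self):
        return "bought"

    def borrow(self):
        return "borrowed"


class AdminBook(TradableBook, ShelvableBook):
    # Management-side book: inherits both minimal interfaces.
    def buy(self):
        return "bought"

    def borrow(self):
        return "borrowed"

    def shelf_on(self):
        return "on shelf"

    def shelf_off(self):
        return "off shelf"
```

Now the client-side class never even sees shelf_on/shelf_off, so it cannot depend on methods it should not use.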

The lightweight composite reuse principle

The composite reuse principle says to achieve reuse through object composition rather than inheritance. Its purpose is to reduce dependencies between objects: inheritance is a strong dependency, since a subclass takes on its entire parent class regardless of which attributes it actually uses, whereas composition associates objects differently and keeps the dependency smaller.

Why is it recommended to use composite reuse in preference to inheritance?

Because of inheritance's strong dependency, if the depended-upon object (the parent class) changes, the dependent (the subclass) must change too. Composite reuse avoids this. Note that composition is recommended in preference to inheritance, not as a rejection of it; inheritance should still be used where it belongs. Let's illustrate the difference with a piece of code:


class Car:

    def move(self):
        pass

    def engine(self):
        pass


class KateCar(Car):

    def move(self):
        pass

    def engine(self):
        pass


class FluentCar(Car):

    def move(self):
        pass

    def engine(self):
        pass

Car, as the parent class, has two important behaviors, move and engine. If you need to give the car a paint color, you must add a color attribute, touching all three classes. With composite reuse, you can instead write:

class Color:
    pass


class KateCar:

    color = Color()

    def move(self):
        pass

    def engine(self):
        pass


class FluentCar:

    color = Color()

    def move(self):
        pass

    def engine(self):
        pass

The concrete operation of composite reuse is to hold an instance of one class inside another class and call it when needed. If the code is not intuitive enough, the diagram will help:

This example only illustrates how inheritance and composite reuse are implemented and how one converts to the other. There is no need to dwell on the Car inheritance itself; if you obsess over why the two cars on the right no longer inherit, you will miss the point.

The composite reuse here instantiates the Color class inside both car classes. It is equally possible to instantiate Color externally and pass the instance to both cars via dependency injection.
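That external-injection variant might look like the following sketch (giving Color a name attribute is my own addition for illustration); the car now holds a reference to a Color built outside it:

```python
class Color:
    def __init__(self, name):
        self.name = name


class KateCar:
    def __init__(self, color):
        # Composition via injection: the Color instance is built outside
        # and passed in, so the car holds a reference instead of inheriting.
        self.color = color

    def move(self):
        return "moving a {} car".format(self.color.name)
```

Swapping the color, or replacing Color with any object exposing a name, requires no change to KateCar itself.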

Three common architectures

Understanding different architectures broadens our knowledge and lets us propose alternative solutions to a class of problems. It also lets us plan during the design phase and avoid frequent refactoring later. The three common architectures are:

• Monolithic architecture;
• Distributed architecture;
• Microservices architecture;

Monolithic architecture

Monolithic architectures are the ones we deal with most and are relatively easy to understand: a monolithic architecture brings all functionality together in one application. We can picture the architecture simply as:

This architecture is simple and easy to deploy and test, and most applications start out monolithic. But the monolithic architecture also has several obvious disadvantages:

• High complexity: all functionality is integrated into one application, with many modules and fuzzy boundaries. As time passes and the business grows, the project and its code keep growing and overall efficiency gradually declines;
• Low release/deployment frequency: releases take a lot of work, launching new features or bug fixes requires coordination, and release dates slip. Large projects take longer to build, and the chance of a failed build increases;
• Obvious performance bottlenecks: however powerful one ox is, it cannot match a team of oxen pulling together. As data volume and concurrent requests grow, weak read performance is exposed first, and then other aspects fail to keep up;
• Hindered technological innovation: a monolith is usually developed in one language or framework, and it is hard to introduce new technology or integrate modern services;
• Low reliability: once one part of the service has a problem, the impact is huge.

Distributed architecture

Compared with the monolith, the distributed architecture solves most of the monolith's problems by splitting, performance bottlenecks among them. If a monolithic architecture is one ox, a distributed architecture is a team of oxen:

When a monolithic architecture hits performance bottlenecks, the team can consider converting it to a distributed architecture to increase service capacity. Of course, distribution is not a panacea: it solves the monolith's performance bottlenecks and low reliability, but the problems of complexity, technological innovation, and low release frequency remain, and that is when microservices come into consideration.

Microservices Architecture

The key word of the microservices architecture is splitting: the many functions previously combined in one application are split into multiple small applications. These small applications link together into a complete application with the same functionality as the earlier monolith. In detail:

Each microservice runs independently, and they interact over network protocols. Each microservice can be deployed in multiple instances, providing the same capacity as a distributed architecture. Releasing or deploying a single service barely affects the others and is not coupled at the code level, so new versions can ship frequently. The complexity problem is also eased: after the split, the architecture's logic is clear, each functional module has a single responsibility, and growth in features and code no longer drags down overall efficiency. Once services are independent, the project becomes language-agnostic; one service can be implemented in Java and another in Golang, no longer constrained by a single language or framework, which relieves the technological innovation problem.


Isn’t this a lot like the single responsibility principle and interface isolation principle?

Distributed and microservices are not silver bullets

From the above comparison, it seems that distributed architecture is better than monolithic architecture, and microservice architecture is better than distributed architecture. So microservice architecture > distributed architecture > monolithic architecture?

This is not true. Architecture needs to be chosen according to scenarios and requirements. Microservice architectures and distributed architectures look great, but they also create new problems. Take the microservices architecture as an example:

• High O&M costs: with a monolith, operations only needs to keep one application running and may only watch hardware resource consumption. With microservices there may be hundreds of applications, and when one misbehaves or coordination between several goes wrong, the operations staff's headache grows;
• The inherent complexity of distributed systems: network partitions, distributed transactions, and load balancing all burden developers and operators;
• High interface adjustment costs: an interface may have many callers, and if the design does not follow the open and closed principle and interface isolation principle, adjusting an interface means a great deal of work;
• Limited interface performance: interactions that used to complete quickly in memory as function calls slow down noticeably once they become network calls;
• Repetitive work: common modules exist, but if services are language-independent and performance rules out interface calls, the same code must be implemented once per language;

Which architecture to choose depends on the specific scenario. If your system's complexity and performance requirements are modest, say a crawler application handling only tens of thousands of records a day, a monolithic architecture solves the problem perfectly well, and there is no need to force distribution or microservices on it; that would only increase your own workload.

Drawing diagrams

Whenever relationships need to be expressed or logic sorted out, diagrams beat code. The industry has a popular saying, "design before development": a program should be thought through and designed before it is written. If you cannot articulate object relationships and logic, can you really write good code?

Use case diagrams can mine requirements and functional modules when a project is being conceived; collaboration diagrams can sort out module relationships during architecture design; class diagrams can plan interactions when designing interfaces or class objects; state diagrams can help mine an entity's properties during feature design…

Once you understand the importance of drawing, you can learn more about specific drawing methods and techniques in the Engineer’s Drawing Guide section of this book (The Python Programming Reference).

Pick a good name

Do you remember names like these from your own code:

reversalList, get_translation, get_data, do_trim, CarAbstract

A good, appropriate name makes your code look consistent and clear. Choosing a good name is not just a matter of grammar; it is also a matter of style and purpose. For detailed naming methods and techniques, see the naming and style guide in The Python Programming Reference.

Optimize nested if else code

It is normal for code to contain control statements, but too many nested if else statements become painful, and your code ends up looking like this:

This structure comes from using an if statement to check a precondition: continue to the next line if the condition holds, and stop if it does not. We can invert the precondition checks, and the code then looks something like this:

if "http" not in url:
    return
if "www" not in url:
    return
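Wrapped in a complete function (the URL checks and the return value are illustrative, added so the fragment can run), the guard-statement version reads top to bottom with no nesting:

```python
def check_url(url):
    # Guard statements: bail out early on each failed precondition,
    # so the main logic stays at the top indentation level.
    if "http" not in url:
        return None
    if "www" not in url:
        return None
    return "fetching " + url
```

Each guard handles exactly one precondition, so adding or removing a check never reshuffles the indentation of the rest of the function.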

This has been my usual optimization method; I later learned its name, the guard statement, from teacher Zhang Ye's paid column.

Of course, a guard statement handles this kind of simple if else control flow very effectively, but when the logic gets more complex, guards alone do not work as well. Suppose a 4S dealership has discount limits: ordinary sales staff may grant a discount on cars priced under 300,000 yuan, elite sales need authorization for amounts of 300,000 yuan or more but under 800,000 yuan, and discounts on more expensive cars must be authorized by the store manager. The feature boils down to determining approval authority by amount, and the corresponding code is as follows:

def buying_car(price):
    if price < 300000:
        print("General sales approve the discount for {}".format(price))
    elif 300000 <= price < 800000:
        print("Elite sales approve the discount for {}".format(price))
    else:
        print("The store manager approves the discount for {}".format(price))

The code is clear, but the problems are obvious: if pricing tiers are extended later, more if else branches pile up and the code bloats. The order of the branches is also fixed in the code; to adjust the order, you can only modify the control statements.

So the question is, is there a better way to do this than if else?

Consider a design pattern called the chain of responsibility. It is defined as follows: to avoid coupling the request sender to multiple request handlers, all handlers are linked into a chain, each one holding a reference to the next. When a request arrives, it is passed along the chain until some object handles it.

It seems a little convoluted, but let’s look at the code diagram to understand it better:

Each handler class uses its precondition to decide whether it can handle the request itself; if not, it passes the request to the next node in the chain via next_handler. The chain-of-responsibility implementation of the example above is as follows:

class Manager:
    def __init__(self):
        self.obj = None

    def next_handler(self, obj):
        self.obj = obj

    def handler(self, price):
        pass


class General(Manager):
    def handler(self, price):
        if price < 300000:
            print("General sales approve the discount for {}".format(price))
        else:
            self.obj.handler(price)


class Elite(Manager):
    def handler(self, price):
        if 300000 <= price < 800000:
            print("Elite sales approve the discount for {}".format(price))
        else:
            self.obj.handler(price)


class BOSS(Manager):
    def handler(self, price):
        if price >= 800000:
            print("The store manager approves the discount for {}".format(price))

After creating the base class and the concrete handler classes, there is not yet any relationship between them. We need to link them together into a chain:

general = General()
elite = Elite()
boss = BOSS()
general.next_handler(elite)
elite.next_handler(boss)

The chain built here runs General -> Elite -> BOSS. The caller only needs to hand the price to General: if General lacks the authority to grant the discount it hands off to Elite, and if Elite lacks it, Elite hands off to BOSS. The corresponding code is as follows:

prices = [550000, 220000, 1500000, 200000, 330000]
for price in prices:
    general.handler(price)

This is just like buying a car at a 4S dealership: as customers we only decide which car to buy, and however the dealership applies for the discount and whoever authorizes it, we simply receive the corresponding discount.

So now we’re done with if else optimization.
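Putting the pieces together as one runnable sketch (the handler names follow the earlier example; returning strings instead of printing is my change, made only so the results can be checked):

```python
class Manager:
    def __init__(self):
        self.obj = None

    def next_handler(self, obj):
        # Link this handler to the next node in the chain.
        self.obj = obj

    def handler(self, price):
        raise NotImplementedError


class General(Manager):
    def handler(self, price):
        if price < 300000:
            return "general approves {}".format(price)
        return self.obj.handler(price)  # pass along the chain


class Elite(Manager):
    def handler(self, price):
        if price < 800000:
            return "elite approves {}".format(price)
        return self.obj.handler(price)  # pass along the chain


class BOSS(Manager):
    def handler(self, price):
        # End of the chain: the manager handles everything that remains.
        return "boss approves {}".format(price)


general, elite, boss = General(), Elite(), BOSS()
general.next_handler(elite)
elite.next_handler(boss)
```

Adding a new approval tier now means writing one new handler class and adjusting one next_handler call, with no control statements rewritten.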


Master the points above, and I believe your code quality and programming level will improve noticeably. Keep going!