Today I dug into Golang’s network programming and summed up what I learned about TCP and HTTP, writing it down so I can look it up again later. Having a poor memory really hurts! 👴 tired.

If you’ve read my earlier articles, you know that I am a Javaer who only recently switched to Golang, though I had dabbled in it a little before. My company uses Golang now, so I had no choice but to pick it up, and honestly, Golang is really comfortable to write! Life is short, Let’s Go!

TCP

Golang’s TCP programming looks like most languages’: a ServerSocket (TcpListener) listens on a port and hands back a Socket (Connection) whenever a connection is established. The difference is that Golang has Goroutines. If you remember Java/C socket programming, you know the story: one thread per connection, then the thread count explodes as connections grow, then NIO, then non-blocking programming plus asynchronous callbacks… which correspond to Java’s Netty and Rust’s async, respectively. I have been through all of that, so the first time I used a Goroutine I thought: wow, this really is simple.

Of course, this is not the place to debate Goroutine performance or Golang’s scheduler. If you accept the user-level thread design built into the language, you should also accept the rules it sets for you, and hardly anyone ever runs into those limits anyway, so I won’t go into the internals here. First, it isn’t necessary: you came to Golang to save your hair, not to manage your own scheduling. Second, I haven’t learned it yet, haha!

Let’s get started!

Set up a listener first:

func Serve() {
	address := "127.0.0.1:8190"
	tcpAddr, err := net.ResolveTCPAddr("tcp4", address)
	if err != nil {
		log.Fatal(err)
	}
	// The listener corresponds to a ServerSocket
	serverSocket, err := net.ListenTCP("tcp4", tcpAddr)
	if err != nil {
		log.Fatal(err)
	}
	for {
		// Each accepted connection corresponds to a Socket
		socket, err := serverSocket.AcceptTCP()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("connection established...")
		// Start a Goroutine to handle the new connection
		go server(socket)
	}
}

This part is much like any other language. The key bit is at the end: the go keyword starts a Goroutine to handle the new connection, and inside that function we do the reads and writes and respond to close operations from the client.

Basic version

Now let’s see how the server handles read/write requests:

// The most basic version
func server(socket *net.TCPConn) {
	defer func(tcpConn *net.TCPConn) {
		err := tcpConn.Close()
		if err != nil {
			log.Fatal(err)
		}
	}(socket)
	for {
		request := make([]byte, 1024)
		readLen, err := socket.Read(request)
		if err == io.EOF {
			fmt.Println("Connection closed")
			return
		}
		if err != nil {
			fmt.Println("Abnormal: " + err.Error())
			return
		}
		msg := string(request[:readLen])
		fmt.Println(msg)
		msg = "echo: " + msg
		_, _ = socket.Write([]byte(msg))
	}
}

A byte slice receives data, processes it, and writes it out.

Here is how the client is written:

func client() {
	tcpAddr, _ := net.ResolveTCPAddr("tcp", "127.0.0.1:8190")
	socket, _ := net.DialTCP("tcp", nil, tcpAddr)
	defer func(socket *net.TCPConn) {
		_ = socket.Close()
	}(socket)
	var input string
	fmt.Println("input for 5 loops")
	for i := 0; i < 5; i++ {
		_, _ = fmt.Scanf("%s", &input)
		_, _ = socket.Write([]byte(input))
		response := make([]byte, 1024)
		readLen, _ := socket.Read(response)
		fmt.Println(string(response[:readLen]))
	}
}

However, there are problems. For example, if the peer sends a lot of data at once, how do I know what slice size to use? I can’t just keep turning it up, can I?!

TCP also has the “sticky packet” and “split packet” problems: how do I make sure my data doesn’t get split apart, and how do I deal with messages that arrive stuck together? In practice, protocols built on TCP usually solve this with either delimiters or lengths. For example, an HTTP request that uploads files states the overall length up front, declares a delimiter string (the boundary), and then the different files are separated by that boundary.

Now let’s look at these two ways.

Delimiter-based

Based on delimiters:

// Delimiter-based version
func serverClientDelimiterBased(socket *net.TCPConn) {
	defer func(socket *net.TCPConn) {
		err := socket.Close()
		if err != nil {
			log.Fatal(err)
		}
	}(socket)
	// Build a Reader that keeps reading from the Socket
	reader := bufio.NewReader(socket)
	for {
		// Effectively splits the byte stream on the delimiter until it becomes unreadable
		data, err := reader.ReadSlice('\n')
		if err != nil {
			if err == io.EOF {
				// The connection was closed
				break
			}
			fmt.Println("Abnormal: " + err.Error())
			continue
		}
		// Discard the delimiter
		data = data[:len(data)-1]
		text := string(data)
		fmt.Println("The server reads: " + text)
		resp := fmt.Sprintf("Hello, client. I have read: [%s] from you.", text)
		_, _ = socket.Write([]byte(resp))
	}
	fmt.Println("Connection closed")
}

The main difference from the basic version is that a delimiter is used to split the input stream. We wrap the Socket in a bufio.Reader, which keeps reading from the Socket; each time it hits the delimiter (here '\n') it cuts the stream there and returns the part before it, then moves its read position to just past the delimiter. This goes on until the connection is closed, the Socket becomes unreadable, and EOF is returned.
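
Incidentally, bufio.Scanner is another common way to do this kind of delimiter framing: by default it splits the stream on '\n'. Here is a minimal sketch of the same server loop written that way (the function name is mine, not from the code above):

func serverScannerBased(socket *net.TCPConn) {
	defer socket.Close()
	// bufio.Scanner splits on '\n' by default (ScanLines also trims a trailing '\r')
	scanner := bufio.NewScanner(socket)
	for scanner.Scan() {
		text := scanner.Text() // one message, the delimiter already stripped
		fmt.Println("The server reads: " + text)
		resp := fmt.Sprintf("Hello, client. I have read: [%s] from you.", text)
		if _, err := socket.Write([]byte(resp)); err != nil {
			break
		}
	}
	// Err() returns nil on a clean EOF, i.e. the client closed the connection
	if err := scanner.Err(); err != nil {
		fmt.Println("Abnormal: " + err.Error())
	}
	fmt.Println("Connection closed")
}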

Take a look at a possible client implementation:

func clientDelimiterBased() {
	tcpAddr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:8190")
	if err != nil {
		log.Fatal(err)
	}
	socket, err := net.DialTCP("tcp", nil, tcpAddr)
	if err != nil {
		log.Fatal(err)
	}
	var input string
	fmt.Println("input for 5 loops")
	for i := 0; i < 5; i++ {
		_, _ = fmt.Scanf("%s", &input)
		// Append the delimiter
		input = input + "\n"
		_, _ = socket.Write([]byte(input))
		response := make([]byte, 1024)
		readLen, _ := socket.Read(response)
		fmt.Println(string(response[:readLen]))
	}
	err = socket.Close()
	if err != nil {
		log.Fatal(err)
	}
}

With delimiters we no longer worry about sticky or split packets, and we don’t have to guess the request length: just keep reading and split on the delimiter. We could have done the same on the client side for the responses, but I skipped that to keep the code short.

A minor drawback of the delimiter-based approach is that the delimiter becomes awkward to handle if it can also appear in the content itself, and splitting on a delimiter means scanning through the read buffer, which costs a little performance.

Length-based

If we can state the length of a request somewhere, and that length field is always present and always read first, then we can split requests simply by counting how many bytes we have read so far. I once wrote an IM system with a custom message body that carried a length field, and used Netty’s length-based frame decoder to split the different messages by length.

Of course, for this demo we won’t get fancy: we just write the length first, followed by the data. The length is an int32 (4 bytes), and network byte order is big-endian, so remember to specify the byte order when writing.

Here is the client:

func clientLengthBased() {
	tcpAddr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:8190")
	if err != nil {
		log.Fatal(err)
	}
	socket, err := net.DialTCP("tcp", nil, tcpAddr)
	if err != nil {
		log.Fatal(err)
	}
	var input string
	fmt.Println("input for 5 loops")
	for i := 0; i < 5; i++ {
		_, _ = fmt.Scanf("%s", &input)
		data := []byte(input)
		var buffer = bytes.NewBuffer([]byte{})
		// Write the length first
		_ = binary.Write(buffer, binary.BigEndian, int32(len(data)))
		// Then write the data
		_ = binary.Write(buffer, binary.BigEndian, data)
		_, _ = socket.Write(buffer.Bytes())
		response := make([]byte, 1024)
		readLen, _ := socket.Read(response)
		fmt.Println(string(response[:readLen]))
	}
	err = socket.Close()
	if err != nil {
		log.Fatal(err)
	}
}
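
On the server side, the matching read loop just reads the 4-byte length first and then reads exactly that many bytes; io.ReadFull is handy for “read exactly N bytes”. Here is a minimal sketch of what that can look like (the function name and the error handling are mine, assuming the same big-endian int32 prefix the client writes):

func serverLengthBased(socket *net.TCPConn) {
	defer socket.Close()
	reader := bufio.NewReader(socket)
	for {
		var length int32
		// binary.Read pulls exactly 4 bytes for the int32 length prefix
		if err := binary.Read(reader, binary.BigEndian, &length); err != nil {
			if err == io.EOF {
				fmt.Println("Connection closed")
				return
			}
			fmt.Println("Abnormal: " + err.Error())
			return
		}
		data := make([]byte, length)
		// io.ReadFull blocks until the whole payload has arrived
		if _, err := io.ReadFull(reader, data); err != nil {
			fmt.Println("Abnormal: " + err.Error())
			return
		}
		text := string(data)
		fmt.Println("The server reads: " + text)
		resp := fmt.Sprintf("Hello, client. I have read: [%s] from you.", text)
		_, _ = socket.Write([]byte(resp))
	}
}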


Easy to understand, right? Good, that’s it for TCP. Socket programming in Golang doesn’t have much more to it; unlike Java there is no Reactor model to wrestle with, you just go. So, life is short, CS:GO.

HTTP

Golang was created partly to solve Google’s web programming pain points, such as needing a pile of packages and frameworks just to get a WebApp running. (I’m not talking about Java.)

Golang is much simpler: listen for HTTP directly, set up routes, give each path an HTTP handler function, and every request runs in its own Goroutine. Handling tens of thousands of concurrent requests is easy, and you don’t have to think about blocking; you just write straight-line code.
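
In its smallest form that really is the whole story: one handler function plus ListenAndServe. A minimal sketch (the path and port are picked arbitrarily):

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Every request to /hello runs this function in its own Goroutine
	http.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello, "+r.URL.Query().Get("name"))
	})
	// nil means "use the default router (DefaultServeMux)"
	_ = http.ListenAndServe(":8190", nil)
}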

Now let’s hand-write the router ourselves to see what is going on underneath:

package server

import (
	"fmt"
	"net/http"
	"strings"
)

type HandlerFunc func(w http.ResponseWriter, r *http.Request)

type myHandler struct {
	// The routing table is only written during setup, so no synchronization is required
	handlers map[string]HandlerFunc
}

func NewMyHandler() *myHandler {
	return &myHandler{
		handlers: make(map[string]HandlerFunc),
	}
}

func (h *myHandler) AddHandler(path, method string, handler http.Handler) {
	key := path + "#" + method
	h.handlers[key] = handler.ServeHTTP
}

func (h *myHandler) AddHandlerFunc(path, method string, f HandlerFunc) {
	key := path + "#" + method
	h.handlers[key] = f
}

type notFound struct{}

func (n *notFound) ServeHTTP(writer http.ResponseWriter, request *http.Request) {
	// Answer with a plain 404
	writer.WriteHeader(http.StatusNotFound)
}

var handler404 = notFound{}

func (h *myHandler) getHandlerFunc(path, method string) HandlerFunc {
	key := path + "#" + method
	handler, ok := h.handlers[key]
	if !ok {
		// TODO: return a dedicated 404 handler
		return handler404.ServeHTTP
	}
	return handler
}

func (h *myHandler) ServeHTTP(writer http.ResponseWriter, request *http.Request) {
	url := request.RequestURI
	method := request.Method
	uri := strings.Split(url, "?")[0]
	h.getHandlerFunc(uri, method)(writer, request)
}

func ServeHttp() {
	myHandler := NewMyHandler()
	myHandler.AddHandlerFunc("/hello", "GET", func(w http.ResponseWriter, r *http.Request) {
		// Must call ParseForm first, otherwise the form values are empty
		_ = r.ParseForm()
		fmt.Println(r.Form.Get("name"))
		_, _ = w.Write([]byte("ok"))
	})
	myHandler.AddHandlerFunc("/hello", "POST", func(w http.ResponseWriter, r *http.Request) {
		_ = r.ParseForm()
		fmt.Println(r.PostForm.Get("name"))
		_, _ = w.Write([]byte("ok"))
	})
	myHandler.AddHandlerFunc("/upload", "POST", func(w http.ResponseWriter, r *http.Request) {
		// Limit the in-memory size to 8MB
		_ = r.ParseMultipartForm(8 << 20)
		fileHeader := r.MultipartForm.File["my_file"][0]
		fmt.Println(fileHeader.Filename)
		_, _ = w.Write([]byte("ok"))
	})
	_ = http.ListenAndServe(":8190", myHandler)
}

Before going back to this code, let’s walk through how Golang actually processes an HTTP request.

Behind the veil

Everything starts with http.ListenAndServe():

It constructs a Server object and calls its ListenAndServe() method, with the listening address and the handler set; for now, think of the handler as the router.
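
In other words, http.ListenAndServe(addr, handler) is just a convenience wrapper: you can construct the Server yourself, which is also how you get to set timeouts and other options. A small sketch (serveWithOptions is my own name), reusing the myHandler router from the example above:

func serveWithOptions(myHandler http.Handler) error {
	// Equivalent to http.ListenAndServe(":8190", myHandler), but with the
	// http.Server spelled out so options such as timeouts can be set.
	server := &http.Server{
		Addr:         ":8190",
		Handler:      myHandler,
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
	}
	return server.ListenAndServe()
}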

Golang’s default way of handling HTTP is: the developer provides a router; for each request, Golang calls the router, and the router dispatches to the matching handler, which is normally also supplied by the developer. In other words, you provide a router, Golang hands you the request, and you look up the function that handles that URI. All of this runs in a separate Goroutine, so requests are isolated from one another.

Server.ListenAndServe() creates the Listener and hands it off to Serve(), whose implementation is fairly simple:

It loops on the port forever: Accept() blocks until a connection is established. When one arrives, it sets up the context that the connection will use afterwards and gets back a Socket, named rw in the source because the Socket is both readable and writable.
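
Heavily paraphrased, the shape of that loop is roughly the following (this is not the real net/http source, just a sketch of the idea; the next few paragraphs walk through the pieces):

// The core of (*Server).Serve, simplified and paraphrased
for {
	rw, err := listener.Accept() // blocks until a connection arrives
	if err != nil {
		return err
	}
	c := srv.newConn(rw) // wrap the raw socket into a connection object
	go c.serve(connCtx)  // one Goroutine per connection
}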

That context carries, among other things, the ServerSocket (Listener) that produced the Socket, so every Socket can find it. The context implementation itself is interesting: you extend a context, or attach a value to it, by deriving a child context from a parent, and lookups are recursive: if the current context doesn’t have the key, it asks its parent.
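
That lookup behaviour is easy to see with context.WithValue; here is a tiny, self-contained illustration (the keys and values are made up):

package main

import (
	"context"
	"fmt"
)

type ctxKey string

func main() {
	// Each WithValue call derives a child context from its parent
	parent := context.WithValue(context.Background(), ctxKey("server"), "the ServerSocket")
	child := context.WithValue(parent, ctxKey("local-addr"), "127.0.0.1:8190")

	// "server" is not stored on child, so Value() falls back to the parent chain
	fmt.Println(child.Value(ctxKey("server")))     // the ServerSocket
	fmt.Println(child.Value(ctxKey("local-addr"))) // 127.0.0.1:8190
}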

Next, the serverSocket’s newConn() method builds a connection object. It holds the Socket used for reading and writing, a reference to the serverSocket, and so on; think of it as the new, more complete Socket (it is still what reads from and writes to the remote client).

Finally, a new Goroutine is started to handle the reads and writes on the new connection. This is the secret of how Golang withstands huge numbers of connections. You may have guessed by now that c.serve() mostly wraps the raw bytes into HTTP requests and responses. Indeed! So let’s dig into that method:

The method is straightforward: it builds a Response object on top of the Socket’s read/write methods and buffers, then calls the handler’s ServeHTTP() with the request that is bound to that Response object.

You can see that although a connection and Response object are constructed for every request, all connections effectively share the same ServerSocket and the same handler.

The handler here is really a routing table: it records the mapping URI (+ Method) = Func, where Func is the business-logic function. If you don’t supply a handler, the default router is used. The default router’s implementation is worth a look, because we can imitate it to build a router of our own:

This is the structure of the default router. The read-write lock protects the routing table: a read lock while looking up a route, a write lock while registering one. The default router implements ServeHTTP(), which looks up the entry in the map by the registered path pattern and calls that entry’s handler.ServeHTTP() method.
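
Conceptually, such a router boils down to something like this (a simplified sketch in the spirit of ServeMux, not the actual source):

// A simplified picture of a ServeMux-style router
type simpleMux struct {
	mu sync.RWMutex            // protects the routing table
	m  map[string]http.Handler // pattern -> handler
}

func (mux *simpleMux) Handle(pattern string, h http.Handler) {
	mux.mu.Lock() // write lock while updating the table
	defer mux.mu.Unlock()
	if mux.m == nil {
		mux.m = make(map[string]http.Handler)
	}
	mux.m[pattern] = h
}

func (mux *simpleMux) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	mux.mu.RLock() // read lock while looking up
	h, ok := mux.m[r.URL.Path]
	mux.mu.RUnlock()
	if !ok {
		http.NotFound(w, r)
		return
	}
	h.ServeHTTP(w, r)
}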

Note ⚠️: although there are two ServeHTTP() methods here, the former does no business processing and only forwards the request, while the latter is the business function the request is forwarded to, and it does the actual work.

Back to the original

For every request, Golang calls the ServeHTTP() method of the object we pass in, so that method is where we implement our own routing logic. You are free to structure the routing however you like, as long as it can find a Handler/Func capable of handling the request. For convenience, the default router uses the same function signature for business handlers as for the routing function (and so do I).

Don’t reinvent the wheel

Finally, don’t reinvent the wheel. Golang’s HTTP package is already pretty good, but that is no reason to hand-roll a framework. Here is one I like:

Gin

Here are some basic uses:

package src

import (
	"fmt"
	"github.com/gin-gonic/gin"
	"strings"
)

func wrapStr(context *gin.Context, str string) {
	context.String(200, "%s", str)
}

// CRUD is a simple RESTful-style set of requests
func CRUD() {
	router := gin.Default()
	// Each request gets its own Context
	router.GET("/isGet", func(context *gin.Context) {
		context.String(200, "%s", "ok")
	})
	router.POST("/isPost", func(context *gin.Context) {
		context.String(200, "%s", "ok")
	})
	router.DELETE("/isDelete", func(context *gin.Context) {
		context.String(200, "%s", "ok")
	})
	router.PUT("/isPut", func(context *gin.Context) {
		context.String(200, "%s", "ok")
	})
	router.Run("127.0.0.1:8190")
}

func PathVariable() {
	router := gin.Default()
	// Path parameters
	router.GET("/param/:name", func(context *gin.Context) {
		wrapStr(context, "name is: "+context.Param("name"))
	})
	// Exact match, takes priority over the path parameter regardless of registration order
	router.GET("/param/msl", func(context *gin.Context) {
		wrapStr(context, "just msl")
	})
	// The * parameter may also match an empty path
	router.GET("/param/nullable/:name1/*name2", func(context *gin.Context) {
		wrapStr(context, "nullable name: "+context.Param("name1")+","+context.Param("name2"))
	})
	router.Run(":8190")
}

func GetAndPost() {
	r := gin.Default()
	r.GET("/get", func(context *gin.Context) {
		// Query parameters; a default value can be supplied
		name := context.DefaultQuery("name", "msl")
		age := context.Query("age")
		wrapStr(context, "name: "+name+", age: "+age)
	})
	r.POST("/post", func(context *gin.Context) {
		name := context.DefaultPostForm("name", "msl")
		age := context.PostForm("age")
		wrapStr(context, "name: "+name+", age: "+age)
	})
	// Of course, query parameters and form parameters can be mixed
	r.POST("/map", func(context *gin.Context) {
		// Map-style query parameters, e.g.: /map?ids[0]=1&ids[1]=2
		// Request body: names[0]=msl&names[1]=cwb
		ids := context.QueryMap("ids")
		names := context.PostFormMap("names")
		context.JSON(200, gin.H{
			"ids":   ids,
			"names": names,
		})
	})
	r.Run(":8190")
}

func FileUpload() {
	r := gin.Default()
	// Limit the memory used for multipart forms to 8MB
	r.MaxMultipartMemory = 8 << 20
	r.POST("/upload", func(context *gin.Context) {
		file, _ := context.FormFile("file")
		wrapStr(context, "get file: "+file.Filename+", size: "+fmt.Sprintf("%d", file.Size))
	})
	// Multiple file uploads
	r.POST("/uploads", func(context *gin.Context) {
		form, _ := context.MultipartForm()
		files := form.File["files"]
		stringBuilder := strings.Builder{}
		for _, file := range files {
			// Save the file if needed
			// context.SaveUploadedFile(file, "")
			stringBuilder.WriteString(file.Filename)
			stringBuilder.WriteString(",")
		}
		wrapStr(context, stringBuilder.String())
	})
	r.Run(":8190")
}

func MiddleWare() {
	r := gin.New()
	r.GET("/test1", func(context *gin.Context) {
		wrapStr(context, "ok")
	})
	// Intercept all requests beginning with /a
	auth := r.Group("/a")
	// Similar to adding a request interceptor
	auth.Use(func(context *gin.Context) {
		fmt.Println("need auth")
	})
	// The curly braces are purely cosmetic
	// All requests beginning with /a are handled here
	{
		auth.POST("/signIn", func(context *gin.Context) {
			username := context.PostForm("username")
			password := context.PostForm("password")
			context.JSON(200, gin.H{
				"username": username,
				"password": password,
			})
		})
	}
	// Uniform interception, independent of where the route is written
	r.GET("/test2", func(context *gin.Context) {
		wrapStr(context, "ok")
	})
	r.Use(gin.CustomRecovery(func(context *gin.Context, err interface{}) {
		// Handle panics here
	}))
	r.Run(":8190")
}
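
One note on the middleware example above: printing "need auth" doesn’t actually stop anything. A real interceptor would check something and abort the handler chain, roughly like this (a sketch; the header name and the check are made up):

// A middleware that actually gates the /a group: abort the chain when the
// (made-up) token header is missing, otherwise let the request through.
auth.Use(func(context *gin.Context) {
	if context.GetHeader("X-Token") == "" {
		context.AbortWithStatusJSON(401, gin.H{"msg": "need auth"})
		return
	}
	context.Next() // continue to the real handler
})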

Some ideas

Golang’s Goroutine is easy to use and can easily withstand high connections, but it’s not a panacea, especially when it comes to performance.

These days, handling huge numbers of connections comes down to either coroutines (Golang/Kotlin) or asynchrony (Java/Rust/C++; I hear C++ is starting to support coroutines too). Each approach has its pros and cons.

Coroutines first: the advantages are obvious, namely simplicity, fewer ways to get it wrong, and a low learning cost. The less obvious downside is that user-level threads carry their own execution context, which can cause more cache misses and makes it harder for the CPU to apply instruction-level optimizations such as branch prediction.

Asynchrony is the opposite: its advantage is less visible, but it is still essentially plain function calls, which lets the compiler and the CPU optimize more aggressively at the instruction level. The downside is obvious: if you have ever written against a framework like Java’s Netty, you know you have to think about blocking and threads everywhere, which is bug-prone and slow to develop.

Finally, life is short, Let’s Go!