Original link: ewanvalentine.io. The translation is authorized by the author, Ewan Valentine.
The full code for this article: GitHub
In the previous section, we re-implemented the microservices using Go-Micro and Dockerized them, but maintaining a separate Makefile for each microservice is cumbersome. This section introduces docker-compose to manage and deploy the microservices in a unified way, adds a third microservice, user-service, and stores data in databases.
MongoDB and Postgres
Data storage for microservices
Until now, the consignment data shipped via consignment-cli has been stored directly in memory managed by consignment-service, so it is lost whenever the service restarts. To make the consignment information easy to manage and search, it needs to be stored in a database.
It is possible to give each microservice its own independently running database, but few teams do so because of the administrative complexity. For guidance, see: how to choose a database for your microservices.
Choosing between relational databases and NoSQL
NoSQL is a good choice when the reliability and consistency requirements for the stored data are not strict, because it stores data in very flexible formats; data is often stored as JSON documents. This section uses MongoDB for both its performance and its ecosystem.
If the data you want to store is fairly complete in itself and strongly relational, choose a relational database. Think through the structure of the data in advance: does your business read more or write more? Are the high-frequency queries complex? Given the small amount of data and few operations in this article, the author chooses Postgres; readers can switch to MySQL on their own.
More references: how to choose a NoSQL database; a rundown of relational database vs. NoSQL use cases.
docker-compose
Why introduce it
In the previous section we Dockerized the microservices so that each runs in a lightweight container containing only the dependencies it needs. So far, starting a microservice container means running docker run from its Makefile and setting environment variables there, which becomes troublesome to manage as the number of services grows.
Basic usage
The docker-compose tool manages multiple containers from a single docker-compose.yaml file, which also records each container's metadata and runtime environment (environment variables). The services configuration item in that file starts containers the way the earlier docker run command did. Here's an example:
Managing a container with docker commands:

```shell
$ docker run -p 50052:50051 \
    -e MICRO_SERVER_ADDRESS=:50051 \
    -e MICRO_REGISTRY=mdns \
    vessel-service
```
The equivalent docker-compose configuration:
```yaml
version: '3.1'

services:
  vessel-service:
    build: ./vessel-service
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
```
To add, remove, or reconfigure microservices, simply edit docker-compose.yaml directly, which is very convenient.
More references: Orchestrating containers using Docker-compose
Orchestrating the current project's containers
Now create a docker-compose.yaml in the project root to manage the containers from the previous section:
```yaml
# docker-compose.yaml also follows strict indentation
version: '3.1'

# services defines the list of containers
services:
  consignment-cli:
    build: ./consignment-cli
    environment:
      MICRO_REGISTRY: "mdns"

  consignment-service:
    build: ./consignment-service
    ports:
      - 50051:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
      DB_HOST: "datastore:27017"

  vessel-service:
    build: ./vessel-service
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
```
First we specify docker-compose file format version 3.1, then list the three containers to manage under services.
Each microservice defines its own container. The build option points at a directory whose Dockerfile is used to build the image; you can also point directly at a prebuilt image with the image option (used later). The remaining options set the container's port mappings, environment variables, and so on.
Run docker-compose build to build the three corresponding images; run docker-compose up -d to start the containers in the background (-d for detached). Use docker stop $(docker ps -aq) to stop all running containers.
The result
Running docker-compose up looks like this:
Protobuf and database operations
Reuse and its limitations
So far, our two protobuf files define the data structures for requests and responses between microservice clients and servers. Because protobuf is standardized, the structs it generates can also serve as the table models for database operations. This reuse has its limits: the protobuf data types must match the database table fields exactly, which couples the two tightly. Many people dislike using protobuf-generated structs as database table structures; see the discussion "Do you use Protobufs in place of structs?"
Mid-tier logic transformation
Typically, when the table structure changes and no longer matches the protobuf definition, a layer of conversion logic is needed between the two to handle the fields that differ:
```go
// method name assumed; the original shows only the receiver
func (service *Service) CreateUser(ctx context.Context, req *proto.User, res *proto.Response) error {
	entity := &models.User{
		Name:     req.Name,
		Email:    req.Email,
		Password: req.Password,
	}
	err := service.repo.Create(entity)
	// without the conversion layer it would simply be:
	// err := service.repo.Create(req)
	...
}
```
Separating the database entity models from the proto.* structs this way looks convenient. However, when messages are nested in .proto, the models must be nested as well.
Whether to add this conversion layer is up to the reader. Personally, I don't find the intermediate models necessary; protobuf is standardized enough to use directly.
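If you do keep a models layer, the conversion in each direction can live in one small helper, so the wire format and the table schema meet in exactly one place. A self-contained sketch — the field set and the `hash` stand-in are assumptions, not the project's actual code:

```go
package main

import "fmt"

// ProtoUser mimics a protobuf-generated struct (the wire format).
type ProtoUser struct {
	Name, Email, Password string
}

// ModelUser is the database model, free to diverge from the wire format.
type ModelUser struct {
	Name, Email, PasswordHash string
}

// toModel is the single place where the two representations are reconciled.
func toModel(p *ProtoUser) *ModelUser {
	return &ModelUser{
		Name:         p.Name,
		Email:        p.Email,
		PasswordHash: hash(p.Password), // the "difference field" handled here
	}
}

// hash is a stand-in only; a real service would use bcrypt or similar.
func hash(s string) string { return "hashed:" + s }

func main() {
	m := toModel(&ProtoUser{Name: "Ewan", Email: "e@example.com", Password: "pw"})
	fmt.Println(m.Name, m.PasswordHash)
}
```

The cost, as noted above, is that every nested message needs a matching nested model and conversion.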
consignment-service refactoring
Looking back at our first microservice, consignment-service, the server setup and the interface implementation are all crammed into main.go, and the file has become bloated. The code should now be split up to make the project structure clearer and easier to maintain.
MVC code structure
If you are familiar with the MVC development pattern, you may want to split the code into different directories by function, for example:
```
main.go
models/
    user.go
handlers/
    auth.go
    user.go
services/
    auth.go
```
Microservice code structure
But this organization is not Go style, since microservices are meant to be carved out as standalone, concise units. A large Go project should be organized more like this:
```
main.go
users/
    services/
        auth.go
    handlers/
        auth.go
        user.go
    users/
        user.go
containers/
    services/
        manage.go
    models/
        container.go
```
This organization is domain-driven rather than the function-driven MVC layout.
Refactoring consignment-service
Because a microservice is small, we put all of the service's code in one folder and give each file a name that reflects its responsibility.
Create three files under consignment-service/: handler.go, datastore.go, and repository.go.
```
consignment-service/
├── Dockerfile
├── Makefile
├── datastore.go      # connects to MongoDB
├── handler.go        # implements the gRPC interface and business logic
├── main.go           # registers and starts the service
├── proto/
└── repository.go     # database interaction
```
datastore.go connects to MongoDB
```go
package main

import "gopkg.in/mgo.v2"

// CreateSession returns a session connected to the given MongoDB host
func CreateSession(host string) (*mgo.Session, error) {
	s, err := mgo.Dial(host)
	if err != nil {
		return nil, err
	}
	s.SetMode(mgo.Monotonic, true)
	return s, nil
}
```
The connection code is very compact: pass in the database address and get back a session plus a possible error. The microservice connects to the database when it starts.
repository.go handles the interaction with MongoDB
Now let's pull the database interaction code out of main.go.
```go
package main

import (...)

const (
	DB_NAME        = "shippy"
	CON_COLLECTION = "consignments"
)

type Repository interface {
	Create(*pb.Consignment) error
	GetAll() ([]*pb.Consignment, error)
	Close()
}

type ConsignmentRepository struct {
	session *mgo.Session
}

func (repo *ConsignmentRepository) Create(c *pb.Consignment) error {
	return repo.collection().Insert(c)
}

func (repo *ConsignmentRepository) GetAll() ([]*pb.Consignment, error) {
	var cons []*pb.Consignment
	// Find(nil) matches every document
	// .All() binds the result set to cons; the corresponding .One() takes only the first row
	err := repo.collection().Find(nil).All(&cons)
	return cons, err
}

// Close closes the session at the end of each query.
// mgo creates a "master" session at startup; Clone() copies a new session
// from it, so each query gets its own session, and each session has its own
// socket to the database and its own error handling, which is both safe and
// efficient. mgo handles concurrent requests well without locks.
// However, after each query you must close the session manually, or you will
// pile up useless connections.
func (repo *ConsignmentRepository) Close() {
	repo.session.Close()
}

func (repo *ConsignmentRepository) collection() *mgo.Collection {
	return repo.session.DB(DB_NAME).C(CON_COLLECTION)
}
```
The slimmed-down main.go
```go
package main

import (...)

const (
	DEFAULT_HOST = "localhost:27017"
)

func main() {
	// read the database address from the container's environment variables
	dbHost := os.Getenv("DB_HOST")
	if dbHost == "" {
		dbHost = DEFAULT_HOST
	}
	session, err := CreateSession(dbHost)
	// close the master session when main() exits
	defer session.Close()
	if err != nil {
		log.Fatalf("create session error: %v\n", err)
	}

	server := micro.NewService(
		// must match the package name in consignment.proto
		micro.Name("go.micro.srv.consignment"),
		micro.Version("latest"),
	)

	// parse command-line flags
	server.Init()

	// act as a client of vessel-service
	vClient := vesselPb.NewVesselServiceClient("go.micro.srv.vessel", server.Client())

	// act as the server of the microservice
	pb.RegisterShippingServiceHandler(server.Server(), &handler{session, vClient})

	if err := server.Run(); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
```
handler.go implements the server-side interface
The code in main.go that implements the microservice's server interface moves into handler.go, which handles the business logic.
```go
package main

import (...)

// handler implements the server interface generated in consignment.pb.go
type handler struct {
	session      *mgo.Session
	vesselClient vesselPb.VesselServiceClient
}

// GetRepo clones a new session from the master session to handle a query
func (h *handler) GetRepo() Repository {
	return &ConsignmentRepository{h.session.Clone()}
}

func (h *handler) CreateConsignment(ctx context.Context, req *pb.Consignment, resp *pb.Response) error {
	defer h.GetRepo().Close()

	// find a vessel with enough capacity for the consignment
	vReq := &vesselPb.Specification{
		Capacity:  int32(len(req.Containers)),
		MaxWeight: req.Weight,
	}
	vResp, err := h.vesselClient.FindAvailable(context.Background(), vReq)
	if err != nil {
		return err
	}

	log.Printf("found vessel: %s\n", vResp.Vessel.Name)
	req.VesselId = vResp.Vessel.Id

	// consignment, err := h.repo.Create(req)
	err = h.GetRepo().Create(req)
	if err != nil {
		return err
	}
	resp.Created = true
	resp.Consignment = req
	return nil
}

func (h *handler) GetConsignments(ctx context.Context, req *pb.GetRequest, resp *pb.Response) error {
	defer h.GetRepo().Close()
	consignments, err := h.GetRepo().GetAll()
	if err != nil {
		return err
	}
	resp.Consignments = consignments
	return nil
}
```
With that, main.go is fully split up, and each file has a clear, refreshing division of labor.
Clone() vs. Copy()
In handler.go's GetRepo() we use Clone() to create a new database session.
Notice that after creating the master session in main.go, we never use it directly again; instead we call session.Clone() to spawn a new session for each query. As the comment on Close() in repository.go explains, if every request queried through the same underlying socket, later requests would block behind earlier ones, losing Go's natural concurrency advantage.
To avoid such blocking, the mgo library provides the Copy() and Clone() functions to create new sessions. They are similar in function but differ in an important detail: a cloned session reuses the master session's socket, avoiding the three-way handshake and resource cost of creating a new one, which suits fast writes well. But for complex queries or large data operations that monopolize the socket, subsequent requests will still block. Copy() creates a brand-new socket for the session, at a correspondingly higher cost.
Choose between the two according to the scenario. The queries in this article are neither complex nor data-heavy, so reusing the master session's socket is fine. Whichever you pick, always remember to Close().
vessel-service refactoring
Having split up consignment-service/main.go, we now refactor vessel-service the same way.
Creating new vessels
We add a new method here for registering a new vessel, changing the protobuf file as follows:
syntax = "proto3"; package go.micro.srv.vessel; VesselService {// FindAvailable (Specification) returns (Response) {} // create VesselService Create(Vessel) returns (Response){} } // ... Message Response {Vessel Vessel = 1; repeated Vessel vessels = 2; bool created = 3; }Copy the code
The new Create() method takes a Vessel and returns a Response. Note the created field added to Response, which indicates whether the new vessel was created successfully. Run make build to regenerate vessel.pb.go.
Separate database operations from business logic processing
Then implement Create() in the corresponding repository.go and handler.go files.
```go
// vessel-service/repository.go
func (repo *VesselRepository) Create(v *pb.Vessel) error {
	return repo.collection().Insert(v)
}
```
```go
// vessel-service/handler.go
func (h *handler) GetRepo() Repository {
	return &VesselRepository{h.session.Clone()}
}

// Create a new vessel
func (h *handler) Create(ctx context.Context, req *pb.Vessel, resp *pb.Response) error {
	defer h.GetRepo().Close()
	if err := h.GetRepo().Create(req); err != nil {
		return err
	}
	resp.Vessel = req
	resp.Created = true
	return nil
}
```
Introducing MongoDB
With both microservices refactored, it's time to bring MongoDB into the containers. Add a datastore entry to docker-compose.yaml:
```yaml
services:
  ...
  datastore:
    image: mongo
    ports:
      - 27017:27017
```
Then set DB_HOST in the two services' environment to "datastore:27017". We use datastore rather than localhost as the hostname because Docker has a powerful built-in DNS mechanism that resolves service names. Reference: how Docker's embedded DNS server works.
The full docker-compose.yaml now looks like this:
```yaml
# docker-compose.yaml
version: '3.1'

services:
  consignment-cli:
    build: ./consignment-cli
    environment:
      MICRO_REGISTRY: "mdns"

  consignment-service:
    build: ./consignment-service
    ports:
      - 50051:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
      DB_HOST: "datastore:27017"

  vessel-service:
    build: ./vessel-service
    ports:
      - 50052:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
      DB_HOST: "datastore:27017"

  datastore:
    image: mongo
    ports:
      - 27017:27017
```
Run docker-compose build --no-cache to rebuild all the images from scratch.
user-service
Introducing Postgres
Now create the third microservice, introducing Postgres in docker-compose.yaml:
```yaml
...
  user-service:
    build: ./user-service
    ports:
      - 50053:50051
    environment:
      MICRO_ADDRESS: ":50051"
      MICRO_REGISTRY: "mdns"
...
  database:
    image: postgres
    ports:
      - 5432:5432
```
Create the user-service directory under the project root and create the following files as the first two services did:
```
handler.go, main.go, repository.go, database.go, Dockerfile, Makefile
```
Define a Protobuf file
Create proto/user/user.proto with the following contents:
```protobuf
// user-service/proto/user/user.proto
syntax = "proto3";

package go.micro.srv.user;

service UserService {
  rpc Create (User) returns (Response) {}
  rpc Get (User) returns (Response) {}
  rpc GetAll (Request) returns (Response) {}
  rpc Auth (User) returns (Token) {}
  rpc ValidateToken (Token) returns (Token) {}
}

// user information
message User {
  string id = 1;
  string name = 2;
  string company = 3;
  string email = 4;
  string password = 5;
}

message Request {}

message Response {
  User user = 1;
  repeated User users = 2;
  repeated Error errors = 3;
}

message Token {
  string token = 1;
  bool valid = 2;
  Error errors = 3;
}

message Error {
  int32 code = 1;
  string description = 2;
}
```
Make sure user-service has a Makefile like the first two microservices, and run make build to generate the gRPC code.
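For reference, a minimal Makefile sketch in the spirit of the earlier services. The exact protoc output path depends on your GOPATH layout, so treat these paths as assumptions rather than the project's canonical build file:

```makefile
build:
	# generate Go-Micro gRPC code from the proto definition (output path assumed)
	protoc -I. --go_out=plugins=micro:. proto/user/user.proto
	# cross-compile a static Linux binary for the Docker image
	GOOS=linux GOARCH=amd64 go build
	# build the service image
	docker build -t user-service .
```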
handler.go implements the business logic
In the server-side code in handler.go, the authentication module is only a placeholder for now; the next section implements it with JWT.
```go
// user-service/handler.go
package main

import (
	"context"

	pb "shippy/user-service/proto/user"
)

type handler struct {
	repo Repository
}

func (h *handler) Create(ctx context.Context, req *pb.User, resp *pb.Response) error {
	if err := h.repo.Create(req); err != nil {
		return err
	}
	resp.User = req
	return nil
}

func (h *handler) Get(ctx context.Context, req *pb.User, resp *pb.Response) error {
	u, err := h.repo.Get(req.Id)
	if err != nil {
		return err
	}
	resp.User = u
	return nil
}

func (h *handler) GetAll(ctx context.Context, req *pb.Request, resp *pb.Response) error {
	users, err := h.repo.GetAll()
	if err != nil {
		return err
	}
	resp.Users = users
	return nil
}

func (h *handler) Auth(ctx context.Context, req *pb.User, resp *pb.Token) error {
	_, err := h.repo.GetByEmailAndPassword(req)
	if err != nil {
		return err
	}
	// placeholder token; JWT comes in the next section
	resp.Token = "`x_2nam"
	return nil
}

func (h *handler) ValidateToken(ctx context.Context, req *pb.Token, resp *pb.Token) error {
	return nil
}
```
repository.go handles the database interaction
```go
package main

import (
	"github.com/jinzhu/gorm"
	pb "shippy/user-service/proto/user"
)

type Repository interface {
	Get(id string) (*pb.User, error)
	GetAll() ([]*pb.User, error)
	Create(*pb.User) error
	GetByEmailAndPassword(*pb.User) (*pb.User, error)
}

type UserRepository struct {
	db *gorm.DB
}

func (repo *UserRepository) Get(id string) (*pb.User, error) {
	u := &pb.User{Id: id}
	if err := repo.db.First(u).Error; err != nil {
		return nil, err
	}
	return u, nil
}

func (repo *UserRepository) GetAll() ([]*pb.User, error) {
	var users []*pb.User
	if err := repo.db.Find(&users).Error; err != nil {
		return nil, err
	}
	return users, nil
}

func (repo *UserRepository) Create(u *pb.User) error {
	if err := repo.db.Create(&u).Error; err != nil {
		return err
	}
	return nil
}

func (repo *UserRepository) GetByEmailAndPassword(u *pb.User) (*pb.User, error) {
	if err := repo.db.Find(&u).Error; err != nil {
		return nil, err
	}
	return u, nil
}
```
Using a UUID
Rather than letting the ORM create an auto-incrementing integer ID, we use a UUID string as the table's primary key, which is safer. MongoDB does something similar out of the box, but with Postgres we have to generate it ourselves using a third-party library. Create an extension.go file under user-service/proto/user:
```go
package go_micro_srv_user

import (
	"github.com/jinzhu/gorm"
	"github.com/labstack/gommon/log"
	uuid "github.com/satori/go.uuid"
)

// BeforeCreate is a GORM callback that fills the Id column with a UUID
// before the row is inserted
func (user *User) BeforeCreate(scope *gorm.Scope) error {
	uuid, err := uuid.NewV4()
	if err != nil {
		log.Fatalf("created uuid error: %v\n", err)
	}
	return scope.SetColumn("Id", uuid.String())
}
```
The BeforeCreate() hook tells the GORM library to use a UUID as the value of the Id column. Reference: the GORM callbacks documentation at doc.gorm.io.
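What uuid.NewV4() produces can be sketched with the standard library alone: 16 random bytes with the version and variant bits set, formatted as five hyphenated hex groups. This is an illustrative stand-in, not a replacement for the library:

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newUUIDv4 sketches a version-4 UUID: 16 random bytes with the
// version (4) and RFC 4122 variant bits set, printed as 8-4-4-4-12 hex.
func newUUIDv4() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version to 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

func main() {
	id, err := newUUIDv4()
	if err != nil {
		panic(err)
	}
	fmt.Println(id) // a 36-character string, random each run
}
```

Random string keys like this avoid the enumerable, guessable IDs that auto-increment integers give you, which is the "safer" property mentioned above.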
GORM
GORM is an easy-to-use, lightweight ORM framework that supports Postgres, MySQL, SQLite, and other databases.
So far the three microservices involve little data and few operations, all of which could be handled with raw SQL, so whether to use an ORM is up to you.
user-cli
Analogous to consignment-cli, which tests consignment-service, we now create a user-cli command-line application to test user-service.
Create the user-cli directory in the root directory of the project and create the cli.go file:
```go
package main

import (
	"log"
	"os"

	pb "shippy/user-service/proto/user"

	"github.com/micro/cli"
	"github.com/micro/go-micro"
	microclient "github.com/micro/go-micro/client"
	"golang.org/x/net/context"
)

func main() {
	// create a client for user-service
	client := pb.NewUserServiceClient("go.micro.srv.user", microclient.DefaultClient)

	// set up the command-line flags
	service := micro.NewService(
		micro.Flags(
			cli.StringFlag{
				Name:  "name",
				Usage: "You full name",
			},
			cli.StringFlag{
				Name:  "email",
				Usage: "Your email",
			},
			cli.StringFlag{
				Name:  "password",
				Usage: "Your password",
			},
			cli.StringFlag{
				Name:  "company",
				Usage: "Your company",
			},
		),
	)

	service.Init(
		micro.Action(func(c *cli.Context) {
			name := c.String("name")
			email := c.String("email")
			password := c.String("password")
			company := c.String("company")

			r, err := client.Create(context.TODO(), &pb.User{
				Name:     name,
				Email:    email,
				Password: password,
				Company:  company,
			})
			if err != nil {
				log.Fatalf("Could not create: %v", err)
			}
			log.Printf("Created: %v", r.User.Id)

			getAll, err := client.GetAll(context.Background(), &pb.Request{})
			if err != nil {
				log.Fatalf("Could not list users: %v", err)
			}
			for _, v := range getAll.Users {
				log.Println(v)
			}
			os.Exit(0)
		}),
	)

	// start the client
	if err := service.Run(); err != nil {
		log.Println(err)
	}
}
```
Testing

A successful run
Before that, pull the Postgres image manually and start it:
```shell
$ docker pull postgres
$ docker run --name postgres -e POSTGRES_PASSWORD=postgres -d -p 5432:5432 postgres
```
User data is created and stored successfully:
Conclusion
So far we have created three microservices — consignment-service, vessel-service, and user-service — all implemented with Go-Micro and Dockerized, with docker-compose managing them in a unified way. We also used the GORM library to interact with the Postgres database, storing data from the command line there.
The user-cli above is for testing only, and storing passwords in plain text is not secure at all. With this section's basic functionality in place, the next section introduces JWT for authentication.