• Writing a Microservice in Rust
  • By Peter Goldsborough
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: nettee
  • Proofreader: HearFishle, Shixi-Li

Allow me to start with a few words about C++ before diving into writing a microservice in Rust. I’ve been a fairly active member of the C++ community for quite some time now: I’ve attended conferences and contributed presentations, followed the development and spread of the language’s more modern features, and of course written a lot of code. C++ gives the user very fine-grained control over every aspect of a program, but at the cost of a steep learning curve and the vast amount of knowledge needed to write effective C++ code. C++ is also a very old language; it was conceived by Bjarne Stroustrup in 1985, and as a result it carries a lot of historical baggage by modern standards. Research on language design has of course continued since C++ was created, leading to interesting new languages like Go, Rust, and Crystal. However, few of these new languages offer more interesting features than modern C++ while still providing the same performance and the same control over memory and hardware. Go was intended to replace C++, but as Rob Pike found out, C++ programmers were not very interested in a language with worse performance and less control. Rust, however, has attracted many C++ enthusiasts. Rust shares many of C++’s design goals, such as zero-cost abstractions and fine-grained control over memory, and it adds many language features that make programs safer, more expressive, and faster to develop. The things I’m most interested in about Rust are (a small standalone example follows the list):

  • Borrow checking, which greatly improves memory safety (no more SEGFAULTs!);
  • Immutability (const) by default;
  • Intuitive syntactic sugar, such as pattern matching;
  • No built-in implicit conversions between (arithmetic) types.
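As a quick taste of two of these points, here is a minimal, self-contained sketch (my own illustration, not code from the service we are about to build) showing immutability by default and pattern matching:

fn describe(n: i64) -> &'static str {
  match n {
    0 => "zero",
    x if x < 0 => "negative",
    _ => "positive",
  }
}

fn main() {
  let x = 3;     // immutable by default; `x = 4;` would not compile
  let mut y = 3; // mutability has to be requested explicitly
  y += 1;
  println!("{} is {}, {} is {}", x, describe(x), y, describe(y));
}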

Enough chatter. The rest of this article will guide you through creating a small but complete microservice, similar to the URL shortener I wrote for my blog. By microservice I mean an application that speaks HTTP, accepts requests, talks to a database, returns a response (possibly serving HTML), is packaged in a Docker container, and can be deployed somewhere in the cloud. In this article I’ll build a simple chat application that lets you store and retrieve messages, and I’ll introduce the relevant crates along the way. You can find the full code for the service on GitHub.

Using HTTP

The first thing our web service needs is a way to speak HTTP: the application (server) has to receive and parse HTTP requests and return HTTP responses. While there are many high-level frameworks like Flask or Django that encapsulate all of this, we will use the slightly lower-level hyper library to handle HTTP. hyper builds on the tokio and futures libraries, which lets us write a clean, asynchronous web server. In addition, we use the log and env_logger crates for logging.

First, set up Cargo.toml and download the crates:

[package]
name = "microservice_rs"
version = "0.1.0"
authors = ["you <you@email>"]

[dependencies]
env_logger = "0.5.3"
futures = "0.1.17"
hyper = "0.11.13"
log = "0.4.1"

Then there’s the actual code. hyper has the concept of a Service: a type that implements the Service trait and provides a call function, which takes a hyper::Request object representing a parsed HTTP request. For an asynchronous service, this function must return a Future. Here is the basic boilerplate we can put directly into main.rs:

extern crate hyper;
extern crate futures;

#[macro_use]
extern crate log;
extern crate env_logger;

use hyper::server::{Request, Response, Service};

use futures::future::Future;

struct Microservice;

impl Service for Microservice {
  type Request = Request;
  type Response = Response;
  type Error = hyper::Error;
  type Future = Box<Future<Item = Self::Response, Error = Self::Error>>;

  fn call(&self, request: Request) -> Self::Future {
    info!("Microservice received a request: {:?}", request);
    Box::new(futures::future::ok(Response::new()))
  }
}

Note that we also have to declare a few associated types for our service. We box the Future type because futures::future::Future itself is only a trait and cannot be returned from a function by value. Inside call(), we currently return the simplest valid value: a boxed future containing an empty response.

To start the server, we bind an IP address to a hyper::server::Http instance and call its run() method:

fn main() {
  env_logger::init();
  let address = "127.0.0.1:8080".parse().unwrap();
  let server = hyper::server::Http::new()
    .bind(&address, || Ok(Microservice {}))
    .unwrap();
  info!("Running microservice at {}", address);
  server.run().unwrap();
}

With the code above, hyper starts listening for HTTP requests on localhost:8080, parses them, and forwards them to our Microservice struct. Note that a new instance is created every time a new request comes in. We can now start the server and send it some requests with curl! Start the server in one terminal:

$ RUST_LOG="microservice=debug" cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/microservice`
INFO 2018-01-21T23:35:05Z: microservice: Running microservice at 127.0.0.1:8080

Then send it some requests from another terminal:

$ curl 'localhost:8080'

In the first terminal, you should see output similar to the following:

$ RUST_LOG="microservice=debug" cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
     Running `target/debug/microservice`
INFO 2018-01-21T23:35:05Z: microservice: Running microservice at 127.0.0.1:8080
INFO 2018-01-21T23:35:06Z: microservice: Microservice received a request: Request { method: Get, uri: "/", version: Http11, remote_addr: Some(V4(127.0.0.1:61667)), headers: {"Host": "localhost:8080", "User-Agent": "curl/7.54.0", "Accept": "*/*"} }

Hooray! We have a basic server written in Rust. Notice that in the command above I prefixed cargo run with RUST_LOG="microservice=debug". env_logger looks for this particular environment variable, so this is how we control its behavior. The first part ("microservice") specifies the root module whose log messages we want to enable, and the second part (after the =) specifies the minimum log level that is visible. By default, only error! messages are logged.

Now let’s make our server actually do something. Since we are building a chat application, the two kinds of requests we want to handle are POST requests (whose form data contains a username and a message) and GET requests (with optional before and after parameters to filter messages by time).

Receiving POST requests

Let’s start with the write path. We accept POST requests to the root path of our service ("/") and expect the form data in the request to contain username and message fields. This information is then passed to a function that writes it to the database. Finally, we return a response.

First, we update the call() method:

fn call(&self, request: Request) -> Self::Future {
      match (request.method(), request.path()) {
        (&Post, "/") => {
          let future = request
            .body()
            .concat2()
            .and_then(parse_form)
            .and_then(write_to_db)
            .then(make_post_response);
          Box::new(future)
        }
        _ => Box::new(futures::future::ok(
          Response::new().with_status(StatusCode::NotFound),
        )),
      }
    }

We differentiate requests by matching on the request’s method and path. In our case the method will be Post or Get, and the only valid path for our service is the root path "/". If the method is &Post and the path matches, we chain together the functions that do the actual work. Note how elegantly we can compose futures using combinators. The and_then combinator calls a function with the value contained in a future if that future resolves successfully (without an error); the called function must itself return a new future. This lets us pass values from one processing stage to the next instead of computing everything on the spot. Finally, we use the then combinator, which runs its callback regardless of whether the future succeeded or failed; accordingly, it receives a Result rather than a plain value.
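If these combinators are new to you, here is a minimal standalone sketch (assuming futures 0.1, the version used above) of the difference between and_then and then:

extern crate futures;

use futures::Future;
use futures::future;

fn main() {
  // `and_then` runs only if the previous future resolved to Ok and receives
  // the unwrapped value; `then` always runs and receives the whole Result.
  let chained = future::ok::<i32, String>(2)
    .and_then(|n| future::ok::<i32, String>(n * 2))
    .then(|result: Result<i32, String>| {
      future::ok::<String, ()>(format!("{:?}", result))
    });
  println!("{}", chained.wait().unwrap()); // prints "Ok(4)"
}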

Here are the functions used above, as stubs for now:

struct NewMessage {
  username: String,
  message: String,
}

fn parse_form(form_chunk: Chunk) -> FutureResult<NewMessage, hyper::Error> {
  futures::future::ok(NewMessage {
    username: String::new(),
    message: String::new(),
  })
}

fn write_to_db(entry: NewMessage) -> FutureResult<i64, hyper::Error> {
  futures::future::ok(0)
}

fn make_post_response(
  result: Result<i64, hyper::Error>,
) -> FutureResult<hyper::Response, hyper::Error> {
  futures::future::ok(Response::new().with_status(StatusCode::NotFound))
}

Our use statements have also changed a bit; note that we additionally pull in the header types and the std::error::Error trait, which the response-building code below needs:

use std::error::Error;

use hyper::{Chunk, StatusCode};
use hyper::Method::{Get, Post};
use hyper::header::{ContentLength, ContentType};
use hyper::server::{Request, Response, Service};

use futures::Stream;
use futures::future::{Future, FutureResult};

Let’s look at parse_form. It receives a Chunk (the body of the request), from which it extracts the username and the message while handling errors appropriately. To parse the form we use the url crate, which you need to add to Cargo.toml and download with cargo.
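The dependency line would look something like the following (the exact version is my assumption from around the time of writing; use whatever is current), and you also need an extern crate url; declaration at the top of main.rs:

url = "1.6"

With that in place, the parsing code looks like this: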

use std::collections::HashMap;
use std::io;

fn parse_form(form_chunk: Chunk) -> FutureResult<NewMessage, hyper::Error> {
  let mut form = url::form_urlencoded::parse(form_chunk.as_ref())
    .into_owned()
    .collect::<HashMap<String, String>>();

  if let Some(message) = form.remove("message") {
    let username = form.remove("username").unwrap_or(String::from("anonymous"));
    futures::future::ok(NewMessage {
      username: username,
      message: message,
    })
  } else {
    futures::future::err(hyper::Error::from(io::Error::new(
      io::ErrorKind::InvalidInput,
      "Missing field 'message'",
    )))
  }
}

After parsing the form into a HashMap, we try to remove the message key. Because this field is mandatory, we return an error if it is missing. Otherwise we grab the username field, falling back to the default value "anonymous" if it does not exist. Finally, we return a successful future containing the assembled NewMessage struct.
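As a quick sanity check, here is a hypothetical unit-test sketch for parse_form (it assumes hyper’s Chunk can be constructed from a string literal via From; if not, you can build it from a Vec<u8> instead):

#[cfg(test)]
mod tests {
  use super::*;
  use futures::Future;
  use hyper::Chunk;

  #[test]
  fn parse_form_defaults_to_anonymous() {
    let chunk = Chunk::from("message=hello");
    // FutureResult resolves immediately, so wait() just hands us the Result.
    let new_message = parse_form(chunk).wait().unwrap();
    assert_eq!(new_message.username, "anonymous");
    assert_eq!(new_message.message, "hello");
  }
}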

I won’t discuss the write_to_db function just yet. Database interaction is complex enough that I will devote a later section to it and to the corresponding function that reads messages from the database. For now, note that write_to_db returns an i64 on success: the timestamp at which the new message was committed to the database.

Let’s first look at how we turn that result into a response for the client:

#[macro_use]
extern crate serde_json;

fn make_post_response(
  result: Result<i64, hyper::Error>,
) -> FutureResult<hyper::Response, hyper::Error> {
  match result {
    Ok(timestamp) => {
      let payload = json!({"timestamp": timestamp}).to_string();
      let response = Response::new()
        .with_header(ContentLength(payload.len() as u64))
        .with_header(ContentType::json())
        .with_body(payload);
      debug!("{:?}", response);
      futures::future::ok(response)
    }
    Err(error) => make_error_response(error.description()),
  }
}

We match on result to see whether the write to the database succeeded. If it did, we create a JSON payload that forms the body of the response we return. For this I use the serde_json crate, which you should add to Cargo.toml. When building the response we also need to set the correct HTTP headers: the Content-Length header to the length of the response body, and the Content-Type header to application/json.
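The corresponding Cargo.toml additions would look something like this (the versions are assumptions; serde and serde_derive will also be needed shortly for deriving Serialize on the database model):

serde = "1.0"
serde_derive = "1.0"
serde_json = "1.0"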

I’ve factored the code that builds the response body in the error case out into a separate function, make_error_response, because we will reuse it later:

fn make_error_response(error_message: &str) -> FutureResult<hyper::Response, hyper::Error> {
  let payload = json!({"error": error_message}).to_string();
  let response = Response::new()
    .with_status(StatusCode::InternalServerError)
    .with_header(ContentLength(payload.len() as u64))
    .with_header(ContentType::json())
    .with_body(payload);
  debug!("{:?}", response);
  futures::future::ok(response)
}

Building this response is quite similar to the previous function, but this time we set the HTTP status of the response to StatusCode::InternalServerError (500). The default status is Ok (200), which is why we didn’t need to set it explicitly before.

Receiving GET requests

Next we turn to GET requests, which are sent to the server to retrieve messages. We allow two query parameters, before and after. Both are timestamps that constrain which messages are retrieved based on each message’s timestamp, and both are optional. If neither before nor after is present, we simply return all stored messages.

Here is the match branch that handles GET requests. It has slightly more logic than the previous code.

(&Get, "/") => {
  let time_range = match request.query() {
    Some(query) => parse_query(query),
    None => Ok(TimeRange {
      before: None,
      after: None,
    }),
  };
  let response = match time_range {
    Ok(time_range) => make_get_response(query_db(time_range)),
    Err(error) => make_error_response(&error),
  };
  Box::new(response)
}

By calling request.query() we get an Option<&str>, because a URI might not have a query string at all. If there is a query, we call parse_query, which parses the query parameters and returns a TimeRange struct, defined as:

struct TimeRange {
  before: Option<i64>,
  after: Option<i64>,
}

Since both before and after are optional, both fields of the TimeRange struct are Options. In addition, a timestamp may be invalid (not a number, for example), so we have to handle the case where parsing its value fails. In that case parse_query returns an error message, which we forward to the make_error_response function we wrote earlier. If parsing succeeds, we proceed to call query_db (to fetch the messages for us) and make_get_response (to create an appropriate Response object to return to the client).

To parse the query string we again use url::form_urlencoded as before, since the query has the same key=value&key=value syntax. We then try to get before and after and convert them to integers (our timestamp type):

fn parse_query(query: &str) -> Result<TimeRange, String> {
  let args = url::form_urlencoded::parse(&query.as_bytes())
    .into_owned()
    .collect::<HashMap<String, String>>();

  let before = args.get("before").map(|value| value.parse::<i64>());
  if let Some(ref result) = before {
    if let Err(ref error) = *result {
      return Err(format!("Error parsing 'before': {}", error));
    }
  }

  let after = args.get("after").map(|value| value.parse::<i64>());
  if let Some(ref result) = after {
    if let Err(ref error) = *result {
      return Err(format!("Error parsing 'after': {}", error));
    }
  }

  Ok(TimeRange {
    before: before.map(|b| b.unwrap()),
    after: after.map(|a| a.unwrap()),
  })
}

Unfortunately the code here is clunky and repetitive, but it’s hard to do better without adding complexity. Essentially, we try to get the before and after fields from the query; if a field exists, we try to parse it as an i64. I would like to merge the nested if let statements, so that we could write:

if let Some(ref result) = before && let Err(ref error) = *result {
  return Err(format!("Error parsing 'before': {}", error));
}

However, you can’t write this in Rust today (you can match on multiple values in a single if let by packing them into a tuple, but not when they depend on each other as they do here).
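One way to reduce the repetition without changing behavior would be to pull the "optional field that must parse as an i64" logic into a small helper. This is just a sketch of an alternative, not code from the original service, and the name parse_i64_field is made up:

fn parse_i64_field(args: &HashMap<String, String>, name: &str) -> Result<Option<i64>, String> {
  match args.get(name) {
    // Field present: it must parse as i64, otherwise build the error message.
    Some(value) => value
      .parse::<i64>()
      .map(Some)
      .map_err(|error| format!("Error parsing '{}': {}", name, error)),
    // Field absent: that is fine, the parameter is optional.
    None => Ok(None),
  }
}

// parse_query would then shrink to roughly:
//   let before = parse_i64_field(&args, "before")?;
//   let after = parse_i64_field(&args, "after")?;
//   Ok(TimeRange { before, after })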

Skipping query_db for a moment, make_get_response looks pretty simple:

fn make_get_response(
    messages: Option<Vec<Message>>,
) -> FutureResult<hyper::Response, hyper::Error> {
  let response = match messages {
    Some(messages) => {
      let body = render_page(messages);
      Response::new()
        .with_header(ContentLength(body.len() as u64))
        .with_body(body)
    }
    None => Response::new().with_status(StatusCode::InternalServerError),
  };
  debug!("{:?}", response);
  futures::future::ok(response)
}

If the messages Option contains a value, we pass the messages to render_page, which returns an HTML page that forms the body of our response, displaying the messages in a simple HTML list. If the Option is empty, an error occurred inside query_db; we log it but don’t expose it to the user, so we just return a response with status code 500. I’ll cover the render_page implementation in the section on templates.

Connecting to the database

Now that our service has a write path and a read path, we need to hook them up to a database. Rust has a very useful and popular object-relational mapping (ORM) library called Diesel, which is pleasant and intuitive to use. Add it to your Cargo.toml and enable the postgres feature, since we will be using a Postgres database for this tutorial:

diesel = { version = "1.0.0", features = ["postgres"] }

Make sure you have Postgres installed on your machine and can log in with psql (as a basic sanity check). Diesel also supports other DBMSs such as MySQL, which you can try out after this tutorial.

Let’s start by creating the database schema for the application. We put it in schemas/messages.sql:

CREATE TABLE messages (
  id SERIAL PRIMARY KEY,
  username VARCHAR(128) NOT NULL,
  message TEXT NOT NULL,
  timestamp BIGINT NOT NULL DEFAULT EXTRACT('epoch' FROM CURRENT_TIMESTAMP)
)

Each row of the table stores one message, with a monotonically increasing id, the author’s username, the message text, and a timestamp. The default value for the timestamp inserts the current number of seconds since the epoch for every new entry. Since the id column is also auto-incremented, we only ever need to insert the username and the message for each new row.

Now we need to integrate this table with Diesel. To do so, install the Diesel CLI via cargo install diesel_cli. Then you can run the following commands:

$ export DATABASE_URL=postgres://<user>:<password>@localhost
$ diesel print-schema | tee src/schema.rs
table! {
  messages (id) {
    id -> Int4,
    username -> Varchar,
    message -> Text,
    timestamp -> Int8,
  }
}

where <user> and <password> are your database username and password. If your database has no password, just give the username. The second command prints a Rust representation of the database schema, which we store in src/schema.rs; the table! macro comes from Diesel. In addition to the schema, Diesel also requires a model, which we have to write ourselves in src/models.rs:

#[derive(Queryable, Serialize, Debug)]
pub struct Message {
  pub id: i32,
  pub username: String,
  pub message: String,
  pub timestamp: i64,
}

This model is the Rust struct that we interact with in our code. For all of this to work, we also need to add a few declarations to the main module:

#[macro_use]
extern crate serde_derive;
#[macro_use]
extern crate diesel;

mod schema;
mod models;

At this point, we’re ready to fill in the write_to_db and query_db functions that we left out earlier.

Writing to the database

Let’s start with write_to_db. This function simply writes an entry to the database and returns the timestamp at which it was created:

use diesel::prelude::*;
use diesel::pg::PgConnection;

fn write_to_db(
  new_message: NewMessage,
  db_connection: &PgConnection,
) -> FutureResult<i64, hyper::Error> {
  use schema::messages;
  let timestamp = diesel::insert_into(messages::table)
    .values(&new_message)
    .returning(messages::timestamp)
    .get_result(db_connection);

  match timestamp {
    Ok(timestamp) => futures::future::ok(timestamp),
    Err(error) => {
      error!("Error writing to database: {}", error.description());
      futures::future::err(hyper::Error::from(
        io::Error::new(io::ErrorKind::Other, "service error"),
      ))
    }
  }
}

It’s that simple! Diesel provides a very intuitive and type-safe query interface that we use to:

  • Specify the table we want to insert into,
  • Specify the values we want to insert (more on that in a moment),
  • Specify the value we want returned, if any, and
  • Call get_result, which actually executes the query.

This gives us back a QueryResult, which we can match on to handle errors as needed. Two things here might surprise you: (1) we can pass our NewMessage struct directly to Diesel, and (2) we use a mysterious db_connection parameter that didn’t exist before. Let’s solve both mysteries. For (1), the code I gave you above will not actually compile as is. To make it compile, we need to move the NewMessage struct into src/models.rs, just below the Message struct. The code then looks like this:

use schema::messages;

#[derive(Queryable, Serialize, Debug)]
pub struct Message {
  pub id: i32,
  pub username: String,
  pub message: String,
  pub timestamp: i64,
}

#[derive(Insertable, Debug)]
#[table_name = "messages"]
pub struct NewMessage {
  pub username: String,
  pub message: String,
}

This way Diesel can directly associate the fields of our struct with the columns of the database table. Simple! Note that for this to work, the table in the database must be called messages, as specified by the table_name attribute.

For the second mystery, we need to modify the code slightly to introduce the database connection. In Service::call(), add the following at the top:

fn call(&self, request: Request) -> Self::Future {
  let db_connection = match connect_to_db() {
    Some(connection) => connection,
    None => {
      return Box::new(futures::future::ok(
        Response::new().with_status(StatusCode::InternalServerError),
      ))
    }
  };

connect_to_db is defined as follows:

use std::env;

const DEFAULT_DATABASE_URL: &'static str = "postgresql://postgres@localhost:5432";

fn connect_to_db() -> Option<PgConnection> {
  let database_url = env::var("DATABASE_URL").unwrap_or(String::from(DEFAULT_DATABASE_URL));
  match PgConnection::establish(&database_url) {
    Ok(connection) => Some(connection),
    Err(error) => {
      error!("Error connecting to database: {}", error.description());
      None
    }
  }
}

This function looks at the DATABASE_URL environment variable to determine the URL of the Postgres database, falling back to the predefined constant otherwise. It then attempts to establish a new database connection and returns it on success. We also need to update the code that handles GET and POST:

(&Post, "/") => {
  let future = request
    .body()
    .concat2()
    .and_then(parse_form)
    .and_then(move |new_message| write_to_db(new_message, &db_connection))
    .then(make_post_response);
  Box::new(future)
}
(&Get, "/") => {
  let time_range = match request.query() {
    Some(query) => parse_query(query),
    None => Ok(TimeRange {
      before: None,
      after: None,
    }),
  };
  let response = match time_range {
    Ok(time_range) => make_get_response(query_db(time_range, &db_connection)),
    Err(error) => make_error_response(&error),
  };
  Box::new(response)
}

With this setup, we create a new database connection every time a request comes in. Depending on your configuration, that may be fine. However, you may also want to look into r2d2 to set up a connection pool, which keeps a number of connections open and hands you one whenever you need it.
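If you want to go that route, a rough sketch with the r2d2 and r2d2_diesel crates might look like the following (crate versions, builder options, and where you store the pool are all up to you; check the crates’ documentation before relying on this):

extern crate r2d2;
extern crate r2d2_diesel;

use r2d2_diesel::ConnectionManager;

fn create_db_pool(database_url: &str) -> r2d2::Pool<ConnectionManager<PgConnection>> {
  let manager = ConnectionManager::<PgConnection>::new(database_url);
  r2d2::Pool::builder()
    .max_size(4) // keep at most four connections open
    .build(manager)
    .expect("failed to create connection pool")
}

// In call() you would then grab a connection from the pool instead of
// calling connect_to_db():
//   let connection = pool.get().expect("no database connection available");
//   write_to_db(new_message, &*connection)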

Querying the database

We can now write new messages to the database, which is great. Now let’s figure out how to read them back out by querying the database properly. Let’s implement query_db:

fn query_db(time_range: TimeRange, db_connection: &PgConnection) -> Option<Vec<Message>> {
  use schema::messages;
  let TimeRange { before, after } = time_range;
  let query_result = match (before, after) {
    (Some(before), Some(after)) => {
      messages::table
        .filter(messages::timestamp.lt(before as i64))
        .filter(messages::timestamp.gt(after as i64))
        .load::<Message>(db_connection)
    }
    (Some(before), _) => {
      messages::table
        .filter(messages::timestamp.lt(before as i64))
        .load::<Message>(db_connection)
    }
    (_, Some(after)) => {
      messages::table
        .filter(messages::timestamp.gt(after as i64))
        .load::<Message>(db_connection)
    }
    _ => messages::table.load::<Message>(db_connection),
  };
  match query_result {
    Ok(result) => Some(result),
    Err(error) => {
      error!("Error querying DB: {}", error);
      None
    }
  }
}

Unfortunately this code is a bit convoluted. That’s because before and after are both Options, and Diesel does not currently offer a simple way to build up a query incrementally. So we match on whether before and after are Some or None and apply zero, one, or two filters accordingly. The queries themselves are nevertheless simple and intuitive: because where is a keyword in Rust, SQL’s WHERE clause is expressed with Diesel’s filter method, and relational operators like > or = become methods on the table’s columns, such as .gt() or .eq().

Rendering HTML templates

We’re so close! All that’s left is to write render_page, which we skipped earlier. For this we use a template library. In the context of a web server, templating is the general concept of creating HTML pages from dynamic data and control flow. Popular template libraries in other languages include Handlebars for JavaScript and Jinja for Python. Although I used a Handlebars port for Rust in my URL-shortener project, I have to say that Rust’s template libraries are not great yet. As in many areas of the Rust ecosystem, there is no "quasi-standard" library the way Jinja is for Python, which makes it hard to pick one: you never know whether it will be abandoned in the next six months.

Nevertheless, in this tutorial we will use a template library called Maud. While Maud is not the most scalable choice for real-world applications, it is fun and powerful enough, and it lets us write HTML templates directly in Rust. Maud leverages Rust’s procedural macros, which means it currently requires a nightly build of Rust to enable the proc_macro feature, although that feature seems to be close to stabilization.

First, add maud to your Cargo.toml:

[dependencies]
maud = "0.17.2"

Then add the following declarations at the top of your main.rs:

#![feature(proc_macro)]
extern crate maud;

Now you can write render_page:

fn render_page(messages: Vec<Message>) -> String {
  (html! {
    head {
      title "microservice"
      style "body { font-family: monospace }"
    }
    body {
      ul {
        @for message in &messages {
          li {
            (message.username) " (" (message.timestamp) "): " (message.message)
          }
        }
      }
    }
  }).into_string()
}

What the hell? Yes, it’s a bit surprising. Think about it. Take a deep breath. We are writing an HTML page with a Rust macro. Damn.

Indeed! Our microservice is done, and it is very micro. Let’s run it:

$ DATABASE_URL="postgresql://goldsborough@localhost" RUST_LOG="microservice=debug" cargo run
   Compiling microservice v0.1.0 (file:///Users/goldsborough/Documents/Rust/microservice)
    Finished dev [unoptimized + debuginfo] target(s) in 12.30 secs
     Running `target/debug/microservice`
INFO 2018-01-22T01:…Z: microservice: Running microservice at 127.0.0.1:8080

Then in the other terminal:

$ curl -X POST -d 'username=peter&message=hi' 'localhost:8080'
{"timestamp":1516584255}
$ curl -X POST -d 'username=mike&message=hi2' 'localhost:8080'
{"timestamp":1516584282}

You should see the debug log immediately:

...
DEBUG 2018-01-22T01:24:14Z: microservice: Request { method: Post, uri: "/", version: Http11, remote_addr: Some(V4(127.0.0.1:64869)), headers: {"Host": "localhost:8080", "User-Agent": "curl/7.54.0", "Accept": "*/*", "Content-Length": "25", "Content-Type": "application/x-www-form-urlencoded"} }
DEBUG 2018-01-22T01:24:14Z: microservice: Response { status: Ok, version: Http11, headers: {"Content-Length": "24", "Content-Type": "application/json"} }
...

Now we use GET to retrieve some messages:

$ curl 'localhost:8080'
<head><title>microservice</title><style>body { font-family: monospace }</style></head><body><ul><li>peter (1516584255): hi</li><li>mike (1516584282): hi2</li></ul></body>

Or you can open http://localhost:8080 in your browser.

You can also try adding ?after=<timestamp>&before=<timestamp> to the URL and verify that you really only get the messages within the specified time range.
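For example, with the two messages posted above, a query like the following (your timestamps will differ) should return only the second message, since only its timestamp is strictly greater than the given value:

$ curl 'localhost:8080?after=1516584255'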

Packaging with Docker

I’ll talk briefly about how to package this application as a Docker container. This has little to do with Rust per se, but it is useful to know how the containers involved fit together.

The Rust developers maintain two official Docker images: one for stable Rust and one for nightly builds. The stable image is rust, and the nightly image is rustlang/rust:nightly. Basing our own container on one of these images is easy; since Maud needs nightly, we build on the nightly image. The Dockerfile looks like this:

FROM rustlang/rust:nightly
MAINTAINER <your@email>

WORKDIR /var/www/microservice/
COPY . .
RUN rustc --version
RUN cargo install

CMD ["microservice"]

Following a typical microservice architecture, we run the Postgres database in a separate Docker container, built from Dockerfile-db:

FROM postgres
MAINTAINER <your@email>

# Create the table on start-up
ADD schemas/messages.sql /docker-entrypoint-initdb.d/

Then we tie them together with docker-compose.yaml:

version: '2'
services:
  server:
    build:
      context: .
      dockerfile: docker/Dockerfile
    networks:
      - network
    ports:
        - "8080:80"
    environment:
      DATABASE_URL: postgresql://postgres:secret@db:5432
      RUST_BACKTRACE: 1
      RUST_LOG: microservice=debug
  db:
    build:
      context: .
      dockerfile: docker/Dockerfile-db
    restart: always
    networks:
      - network
    environment:
      POSTGRES_PASSWORD: secret

networks:
  network:

This file is a little more involved, but once it is written, everything else is simple. Note that I put both Dockerfiles in a docker/ directory. Now just run docker-compose up:

$ docker-compose up
Recreating microservice_db_1 ...
Recreating microservice_server_1 ... done
Attaching to microservice_db_1, microservice_server_1
server_1  | INFO 2018-01-22T01:38:57Z: microservice: Running microservice at 127.0.0.1:8080
db_1      | 2018-01-22 01:38:57.886 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1      | 2018-01-22 01:38:57.886 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1      | 2018-01-22 01:38:57.891 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1      | 2018-01-22 01:38:57.917 UTC [20] LOG:  database system was shut down at 2018-01-22 00:10:07 UTC
db_1      | 2018-01-22 01:38:57.939 UTC [1] LOG:  database system is ready to accept connections

Of course, the output of your first run will look a bit different. Either way, our work here is done. You can push this code to a GitHub repository and deploy it to a (free) AWS or Google Cloud instance to make your service reachable from the outside. Wow!

Conclusion

All told, the code snippets above add up to about 270 lines, which is all it takes to create this entire microservice in Rust. That is probably not as small as the equivalent code in, say, Flask. However, there are web frameworks for Rust that give you more abstraction, such as Rocket. Nonetheless, I believe that following this tutorial and working a little closer to the metal with hyper will give you some good ideas about how to write a safe and high-performance web service in Rust.

I wrote this post to share what I learned while picking up Rust and using that knowledge to write a small URL-shortener web service, which I use to shorten the URLs of my blog (they are pretty long, as you can see in your browser’s address bar). As a result, I feel I now have a good understanding of the features Rust offers, including which of them make Rust more expressive and safer than modern C++, and which make it less expressive (but not less safe).

I suspect Rust’s ecosystem may need a few more years to stabilize before there are stable, well-maintained crates for most major tasks. Nevertheless, the future looks very bright. Facebook is already looking into how Rust can be used to build a new Mercurial server to host its code base, and more and more people see Rust as an interesting alternative for embedded programming. I’ll be keeping an eye on the language, which is to say I’ve subscribed to r/rust on Reddit.

If you find any mistakes in this translation or other places that could be improved, you are welcome to submit corrections to the Nuggets Translation Project as a PR and earn the corresponding reward points. The permanent link at the beginning of this article is the Markdown link to this article on GitHub.


The Nuggets Translation Project is a community that translates high-quality technical articles from English, shared on Nuggets (Juejin). The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and other fields. For more high-quality translations, please follow the Nuggets Translation Project and its official Weibo and Zhihu column.