I. Foreword

1. I first read an article on a WeChat official account and learned that the open-source project ananas (a C++11/golang protobuf RPC framework) implements a high-performance Linux network library and RPC, with a rewrite of C++11 future usage at its core. The WeChat article is at:

mp.weixin.qq.com/s/hurLTscQv…

Ananas was written by Bert Young (github.com/loveyacper) on GitHub.

For discussion of ananas-rpc and Promise/Future, the QQ group number is 784231426.

2. Promise/Future related source code

github.com/loveyacper/… — ananas project source code. Download and unpack it, then rename the folder from ananas-master to ananas before compiling; otherwise the paths will not be found.

github.com/loveyacper/… — the core of the ananas source code: Future

github.com/facebook/fo… — Folly Futures (Folly: Facebook Open-Source Library)

Tencent's Tars also has a Promise/Future implementation:

github.com/TarsCloud/T…

Boost also implements futures:

sourceforge.net/projects/bo… — src

www.boost.org/doc/libs/1_… — doc

/boost_1_68_0/boost/thread/futures/*.*

/boost_1_68_0/boost/thread/future.hpp

github.com/chenshuo/mu… — the ananas network library is similar to muduo: one loop per thread + thread pool

github.com/netty/netty — Netty project source code; the ananas EventLoopGroup follows the Java Netty implementation

github.com/netty/netty…

Netty is an open-source Java network programming framework from JBoss. It mainly rewraps Java's NIO package, offering more powerful and stable features and easier-to-use APIs than native Java NIO. Netty was written by Trustin Lee, who also developed another well-known network programming framework, Mina. The two are similar in many ways and their threading models are basically the same, but the Netty community is much more active than Mina's.

Netty 3.x is the most stable version and is widely used in enterprises; Dubbo, for example, uses 3.x.

Netty 4.x introduced significant features such as memory pooling to reduce GC load; RocketMQ uses 4.x.

Netty 5.x has been abandoned; see github.com/netty/netty…

In concurrent programming we typically use a set of non-blocking primitives: Promise, Future, and Callback. A Future represents the result of an asynchronous task that may not have actually completed yet; Callbacks can be attached to it to act on the task's success or failure; and a Promise is handed to the task performer, who uses it to mark the task as completed or failed. This model is arguably the foundation of many asynchronous non-blocking architectures. Netty 4 provides this Future/Promise asynchronous model. The Netty documentation states that all of Netty's network operations are asynchronous; the Future/Promise model is used extensively in the source code and is defined in Netty as follows:

The Future interface defines isSuccess(), isCancellable(), cause(), and other methods for checking the status of an asynchronous execution (read-only). The Promise interface extends Future and adds setSuccess() and setFailure() methods (writable).
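To make the read-only/writable split concrete, here is a minimal sketch of my own in C++ (the language used in the rest of this article; Netty's actual interfaces are Java and considerably richer, so treat this as an analogy only):

#include <exception>

// read-only view: consumers may only inspect the state of the asynchronous task
struct Future
{
    virtual bool isSuccess() const = 0;
    virtual bool isCancellable() const = 0;
    virtual std::exception_ptr cause() const = 0;
    virtual ~Future() = default;
};

// writable side: only the task performer marks completion or failure
struct Promise : Future
{
    virtual void setSuccess() = 0;
    virtual void setFailure(std::exception_ptr cause) = 0;
};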

Promise/Future is a very important asynchronous programming model that lets us escape the traditional callback trap and write asynchronous code in a more elegant and concise way. Standard C++11 already provides std::future/std::promise, so why did Facebook's Folly offer its own implementation? Because the future provided by the C++ standard is too simple. The biggest improvement in Folly's implementation is the ability to attach callback functions (such as Then) to a future, which makes chained calls easier and the code more elegant and concise; and the improvements don't stop there.

A Future means you need something "in the future" (usually the result of a network request), but you issue the request now and it executes asynchronously. Put another way, you need to perform a request asynchronously in the background.

The Future/Promise pattern is implemented in many languages: the C++11 standard library provides future/promise, JavaScript has Promise (ES2015) and async/await, and Scala has futures built in.

Future and Promise are actually two completely different things:

Future: represents an object that does not yet hold a result; the action that produces the result runs asynchronously;

Promise: a Future object can be created from a Promise object (GetFuture); once created, the value held by the Promise object can be read through the Future object, the two sharing state. You can think of the Promise as providing the means to deliver and synchronize the Future's result;

In a nutshell: together they provide a set of non-blocking parallel operations, although you can of course block and wait for the Future's result.
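To make the relationship above concrete, here is a minimal sketch using only standard C++11 std::promise/std::future (my own example, not ananas code): the producer thread fills the promise, and the consumer blocks on the matching future.

#include <future>
#include <iostream>
#include <thread>

int main()
{
    std::promise<int> prom;                    // held by the producer (the task-doer)
    std::future<int> fut = prom.get_future();  // shares state with prom

    std::thread producer([&prom]() {
        prom.set_value(42);                    // mark the task as completed
    });

    std::cout << fut.get() << std::endl;       // consumer blocks until the value is set
    producer.join();
    return 0;
}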

3. Related knowledge links

Installing CMake 2.8.12.2 on CentOS 7 (see also the CMake practice tutorial)

My Protobuf 3.5.2 practice notes: installation and testing — ananas depends on Protobuf, so install it first

Linux multithreaded server programming: using the muduo C++ network library

New C++ 11 features (STD)

In-depth understanding of C++11(STD)

C++11 concurrent programming (STD)

C++11 multithreaded future/promise introduction (STD, boost)

Future/Promise implementation (STD, Boost)

Facebook brings a robust and powerful library of folly Futures to C++ 11.

Facebook’s C++ 11 component library Folly Futures

Tars framework Future/Promise use

Future/Promise

C++ future proposal by Herb Sutter

 

II. Future is the core of ananas. For more details, see:

github.com/loveyacper/…

Future and coroutine based Redis client

C++11 ananas library series

 

III. The WeChat article is reprinted below:

Ananas is a base library written in C++11 that covers common back-end development needs: UDP/TCP and epoll/kqueue network library encapsulation, Python-style coroutines, easy-to-use timers, a multi-threaded logger, a thread pool, TLS, unit tests, Google protobuf RPC, and the powerful Future/Promise.

1. How ananas came about

I have been working with C++11 for two or three years. Two months ago I decided to clean up the common code from my back-end work and started writing ananas. By coincidence, about ten days later, in mid-December 2016, several colleagues and I teamed up to develop a simple MOBA game. Using frame synchronization, the server only needed to maintain simple room logic and connection management and broadcast frame messages via a timer. Since this was rapid demo development and the client was not going to connect to corporate components, the server did not use TSF4G. It took only half an afternoon to get client communication working with ananas + protobuf, and the game demo was successfully presented to the leadership before the end of the year. I then decided to keep developing and maintaining ananas. This article first introduces the use of ananas Future.

2. Introducing Future

If you use C++11, you will have seen that the standard library already implements promise/future. But look a little deeper and this code seems to have been added just to meet a KPI; it is as weak as std::auto_ptr was back in the day. That's right: you can only poll a future or block waiting on it, which is unacceptable in performance-sensitive code. So Herb Sutter et al. came up with a new future proposal (see the C++ future proposal linked above). ananas future implements all the features of the proposal and more (When-N, and the very important timeout support). The underlying infrastructure is mostly borrowed from folly future, which helped me solve various obscure problems with C++ templates. A later article will explain the source code in detail. For a brief introduction to Folly Futures, see: Introduction to the Facebook Folly Future library.
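To make the "poll or block" complaint concrete, here is a small sketch using only standard C++11 (my own example): std::future offers no way to attach a continuation, so the caller must either spin on wait_for() or block outright on get().

#include <chrono>
#include <future>
#include <iostream>

int main()
{
    std::future<int> fut = std::async(std::launch::async, []() { return 1; });

    // polling: the calling thread keeps checking until the result is ready
    while (fut.wait_for(std::chrono::milliseconds(10)) != std::future_status::ready)
        std::cout << "still waiting...\n";

    // the only alternative is to block on get(); there is no Then()
    std::cout << fut.get() << std::endl;
    return 0;
}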

Here are a few scenarios to show solutions using Ananas Future.

3. Application scenarios

3.1 Making requests to multiple servers in sequence: chained calls

The server needs to pull a player's basic profile from redis1, then request the detailed profile from redis2 based on its content. In old C code we typically carried the context through callbacks, but with C++11 we can use shared_ptr and lambdas to simulate closures that capture the context:

// 1. request the basic profile from redis1
redis_conn1->Get<BasicProfile>("basic_profile_key")
    .Then([redis_conn2](const BasicProfile& data) {
        // 2. request the detail profile; returns another Future
        return redis_conn2->Get<DetailProfile>("detail_profile_key");
    })
    .Then([client_conn](const DetailProfile& data) {
        // 3. SUCC: process the returned details and send them to the client
        client_conn->SendPacket(data);
    })
    .OnTimeout(std::chrono::seconds(3), [client_conn]() {
        std::cout << "request timeout\n";
        // 3. FAIL: notify the client
        client_conn->SendPacket("server timeout error");
    }, &this_event_loop);

The first Get initiates the request and returns immediately; the first Then registers a callback that processes the result and initiates the second Get once the first request returns; when the second request returns, the result is sent to the client. If either redis instance fails to respond within 3 seconds, the timeout callback on this_event_loop notifies the client.

3.2 Sending requests to multiple servers at the same time. After all the requests are returned, the server processes them

The basic profile and the detailed profile are unrelated, so they can be requested at the same time and both sent to the client:

// 1. obtain the basic and detail profiles asynchronously
auto fut1 = redis_conn1->Get<BasicProfile>("basic_profile_key");
auto fut2 = redis_conn2->Get<DetailProfile>("detail_profile_key");

ananas::WhenAll(fut1, fut2)
    .Then([client_conn](std::tuple<BasicProfile, DetailProfile>& results) {
        // 2. SUCC: send both results to the client
        client_conn->SendPacket(std::get<0>(results));
        client_conn->SendPacket(std::get<1>(results));
    })
    .OnTimeout(std::chrono::seconds(3), [client_conn]() {
        std::cout << "request timeout\n";
        // 3. FAIL: notify the client
        client_conn->SendPacket("server timeout error");
    }, &this_event_loop);

WhenAll collects the results of all the futures; only when all of them have been collected is the callback executed.

3.3 Sending requests to multiple servers at the same time. As soon as any request returns, the server processes it

Suppose we have three identical servers, S1, S2, and S3, and we want to run 100 test requests to see which server responds fastest. Here is the scenario using WhenAny:

struct Statics {
    std::atomic<int> completes{0};
    std::vector<int> firsts;

    explicit Statics(int n) : firsts(n) { }
};

auto stat = std::make_shared<Statics>(3);

const int kTests = 100;
// count how many times each server responds first
for (int t = 0; t < kTests; ++ t) {
    std::vector<Future<std::string> > futures;
    for (int i = 0; i < 3; ++ i) {
        auto fut = conn[i]->Get<std::string>("ping");
        futures.emplace_back(std::move(fut));
    }

    auto anyFut = ananas::WhenAny(std::begin(futures), std::end(futures));
    anyFut.Then([stat](std::pair<size_t /* fut index */, std::string>& result) {
        size_t index = result.first;
        stat->firsts[index] ++;

        if (stat->completes.fetch_add(1) == kTests - 1) {
            // all tests are done; find the server that was first most often
            int quickest = 0;
            for (int i = 1; i < 3; ++ i) {
                if (stat->firsts[i] > stat->firsts[quickest])
                    quickest = i;
            }

            printf("The fastest server index is %d\n", quickest);
        }
    });
}

When any of the three requests returns (i.e. from the fastest server), the callback is executed and the counter is updated.

When all the tests are done, the server with the most first-place finishes is essentially the most responsive.

3.4 Sending requests to multiple servers at the same time. When more than half of the requests return, the server starts processing

The typical scenario is Paxos. In the first phase, a proposer tries to prepare with a pre-proposal; the second phase can be initiated once a majority of acceptors accept the prepare request for the proposed value:

// Paxos phase 1: the proposer sends prepare to the acceptors
const paxos::Prepare prepare;
std::vector<Future<paxos::Promise> > futures;
for (const auto& acceptor : acceptors_) {
    auto fut = acceptor.SendPrepare(prepare);
    futures.emplace_back(std::move(fut));
}

const int kMajority = static_cast<int>(futures.size() / 2) + 1;

ananas::WhenN(kMajority, std::begin(futures), std::end(futures))
    .Then([](std::vector<paxos::Promise>& results) {
        printf("Prepare succeeded: a majority of acceptors returned their promise, now start phase two!\n");
        // Paxos phase 2: select a value and send accept to the acceptors
        const auto value = SelectValue(hint_value);
        // for each acceptor: a->SendAccept(ctx_id, value);
    })
    .OnTimeout(std::chrono::seconds(3), []() {
        printf("Prepare timed out or was rejected: increase the proposal number and retry!\n");
        // increase prepareId and continue to send prepare
    }, &this_eventloop);

3.5 Specifying that Then callbacks execute on a particular thread

Herb Sutter's proposal includes the ability to schedule a Then callback on a particular thread. I made up an example like this:

Suppose the server needs to read a large file for which no non-blocking read exists (leaving io_submit aside); the read may take hundreds of milliseconds. Reading synchronously would certainly block the server, so we want another IO thread to do the read and notify us when it has finished. With future, the code looks like this:

// read very_big_file in a separate IO thread
Future<Buffer> ft(ReadFileInSeparateThread(very_big_file));

ft.Then([conn](const Buffer& file_contents) {
      // SUCCESS: process and send the file contents
      conn->SendPacket(file_contents);
  })
  .OnTimeout(std::chrono::seconds(3), [very_big_file]() {
      // FAILED OR TIMEOUT
      printf("Read file %s failed\n", very_big_file);
  }, &this_loop);

Is there a problem with this code? Note that, for a single TCP connection, send generally must not be called from multiple threads. This line in the callback

conn->SendPacket(file_contents);

is executed on the file-reading thread, so there is a risk of send being called concurrently.

So we need to specify that the callback runs on the original thread, which is as simple as changing one line to call another Then overload:

ft.Then(&this_loop, [conn](const Buffer& file_contents) { ...

Note the first argument, this_loop: SendPacket will now run on this thread, with no concurrency errors.

4. Example: Future-based Redis client

After this brief tour of Future usage scenarios, I conclude with a complete example: a Redis client. I chose to implement a Redis client for two reasons: first, Redis is widely used and we are all familiar with it; second, the Redis protocol is simple and guarantees that responses come back in request order, so it is easy to implement and will not distract us.

4.1 Protocol assembly

For assembling the protocol I chose the inline protocol. With C++11 variadic template parameters this is very easy:

// build a redis request from multiple strings, using the inline protocol
template <typename... Args>
std::string BuildRedisRequest(Args&& ...);

template <typename STR>
std::string BuildRedisRequest(STR&& s)
{
    return std::string(std::forward<STR>(s)) + "\r\n";
}

template <typename HEAD, typename... TAIL>
std::string BuildRedisRequest(HEAD&& head, TAIL&&... tails)
{
    std::string h(std::forward<HEAD>(head));
    return h + " " + BuildRedisRequest(std::forward<TAIL>(tails)...);
}
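As a quick sanity check of the templates above (my own example, assuming the definitions are in scope): arguments are joined with single spaces and the request is terminated with \r\n, which is exactly the redis inline protocol.

#include <cassert>
#include <string>

int main()
{
    // "set name bertyoung\r\n": words joined by spaces, terminated by CRLF
    assert(BuildRedisRequest("set", "name", "bertyoung") == "set name bertyoung\r\n");
    assert(BuildRedisRequest("get", "name") == "get name\r\n");
    return 0;
}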

4.2 Protocol sending and context maintenance

Redis supports pipelining, so requests need not be strictly question-and-answer. We therefore need to save a context for each outstanding request. Since requests and replies correspond in strict order, the implementation is somewhat simplified: when issuing a request, we construct a Promise for it. A brief note about Promise: promises and futures correspond one to one; the producer operates on the promise, filling in its value, while the consumer operates on the future, registering callback functions that are executed once the value is available. The API can then return the corresponding Future, and users get a fluent Future interface:

   // set name first, then get name.
    ctx->Set("name", "bertyoung").Then(
            [ctx](const ResponseInfo& rsp) {
                RedisContext::PrintResponse(rsp);
                return ctx->Get("name"); // get name, return another future
            }).Then(
                RedisContext::PrintResponse
            );

Now define the pending request context:

enum ResponseType
{
    None,
    Fine,    // redis returned OK
    Error,   // redis returned an error
    String,  // redis returned a string
};

using ResponseInfo = std::pair<ResponseType, std::string>;

struct Request
{
    std::vector<std::string> request;
    ananas::Promise<ResponseInfo> promise;
};

std::queue<Request> pending_;

We create a Request object for each request and push it onto the pending_ queue; the first-in, first-out nature of the queue matches the redis protocol perfectly:

ananas::Future<ResponseInfo>
RedisContext::Get(const std::string& key)
{
    // Redis inline protocol request
    std::string req_buf = BuildRedisRequest("get", key);
    hostConn_->SendPacket(req_buf.data(), req_buf.size());

    RedisContext::Request req;
    req.request.push_back("get");
    req.request.push_back(key);

    auto fut = req.promise.GetFuture();
    pending_.push(std::move(req));

    return fut;
}

4.3 Processing the response

When a complete redis server response has been parsed, fetch the promise at the head of the pending queue and set its value:

// take the promise for the oldest outstanding request
auto& req = pending_.front();

// setting the value fires the callbacks registered on the matching future
req.promise.SetValue(ResponseInfo(type_, content_));
pending_.pop();

4.4 Invocation example

Issue two requests; when both have returned, print the results:

void WaitMultiRequests(const std::shared_ptr<RedisContext>& ctx)
{
    // issue 2 requests, when they all return, callback
    auto fut1 = ctx->Set("city", "shenzhen");
    auto fut2 = ctx->Set("company", "tencent");

    ananas::WhenAll(fut1, fut2).Then(
                    [](std::tuple<ananas::Try<ResponseInfo>,
                                  ananas::Try<ResponseInfo> >& results) {
                        std::cout << "All requests returned:\n";
                        RedisContext::PrintResponse(std::get<0>(results));
                        RedisContext::PrintResponse(std::get<1>(results));
            }); 
}

5. Conclusion

That concludes this introduction to ananas Future. Follow-up articles will analyze the Future source code and cover the use and implementation of the other modules.