- About Async Iterators in Node.js
- Originally written by Janos
- The Nuggets Translation Project
- Permanent link to this article: github.com/xitu/gold-m…
- Translator: Isildur46
- Proofreader: PassionPenguin
About asynchronous iterators in Node.js
When we do not know the values, or even the end state, of an iteration in advance, we can use asynchronous iterators: instead of returning plain { value: any, done: boolean } objects, they return promises that resolve to them.
Node introduced asynchronous iterators in version 10.0.0, and the feature has been gaining traction in the community lately. In this article we’ll learn what an asynchronous iterator is and explore its use scenarios.
What are asynchronous iterators?
So what are asynchronous iterators? They are essentially the asynchronous counterparts of regular iterators, useful when we don't know the values, or even the end, of the iteration in advance. Instead of returning plain { value: any, done: boolean } objects, they return promises that resolve to such objects. We can also use for-await-of loops to iterate over asynchronous iterables, just as we use for-of loops with synchronous ones.
const asyncIterable = [1, 2, 3];
asyncIterable[Symbol.asyncIterator] = async function*() {
  for (let i = 0; i < asyncIterable.length; i++) {
    yield { value: asyncIterable[i], done: false }
  }
  yield { done: true };
};

(async function() {
  for await (const part of asyncIterable) {
    console.log(part);
  }
})();
A for-await-of loop waits for each promise it receives to resolve before moving on to the next iteration, unlike a regular for-of loop.
Currently, not many constructs support asynchronous iterators out of the box apart from streams, but as this example shows, you can manually add the Symbol.asyncIterator symbol to any iterable.
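To make the "promise that resolves to a { value, done } object" part concrete, here is a minimal, self-contained sketch (not taken from the example above; countToThree is just a hypothetical name) that calls next() on an async generator directly:
async function* countToThree() {
  yield 1;
  yield 2;
  yield 3;
}

const iterator = countToThree();
// next() returns a promise that resolves to a { value, done } result object
iterator.next().then(result => {
  console.log(result); // { value: 1, done: false }
});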
A stream as an asynchronous iterator
Asynchronous iterators are very useful when working with streams. Readable, writable, duplex, and transform streams all expose the Symbol.asyncIterator symbol out of the box.
const fs = require('fs');

async function printFileToConsole(path) {
  try {
    const readStream = fs.createReadStream(path, { encoding: 'utf-8' });

    for await (const chunk of readStream) {
      console.log(chunk);
    }

    console.log('EOF');
  } catch(error) {
    console.log(error);
  }
}
If you write the code this way, you don't have to listen for the data and end events: the for-await-of loop consumes each chunk as it arrives, and the loop terminates on its own when the stream ends.
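For comparison, here is a rough sketch of what the same logic looks like with the classic event-based stream API (printFileToConsoleWithEvents is just a hypothetical name for this sketch); the for-await-of version above replaces all of this listener wiring:
const fs = require('fs');

function printFileToConsoleWithEvents(path) {
  const readStream = fs.createReadStream(path, { encoding: 'utf-8' });

  // We have to register the listeners ourselves and decide when we are done
  readStream.on('data', chunk => console.log(chunk));
  readStream.on('end', () => console.log('EOF'));
  readStream.on('error', error => console.log(error));
}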
Consuming paginated APIs
Asynchronous iterators also make it easy to fetch paginated data from a source. To do this, we need a way to reconstruct the response body from the stream returned by Node's https request method. Since both requests and responses are streams in Node, we can use asynchronous iterators here as well:
const https = require('https');

function homebrewFetch(url) {
  return new Promise(async (resolve, reject) => {
    const req = https.get(url, async function(res) {
      if (res.statusCode >= 400) {
        return reject(new Error(`HTTP Status: ${res.statusCode}`));
      }

      try {
        let body = '';
        // The response is itself a readable stream, so we can consume it
        // chunk by chunk with for await...of
        for await (const chunk of res) {
          body += chunk;
        }

        // Handle no response body
        if (!body) return resolve({});

        // Parse the body
        const result = JSON.parse(body);
        resolve(result);
      } catch(error) {
        reject(error);
      }
    });
    await req;
    req.end();
  });
}
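As a quick sanity check, homebrewFetch can also be used on its own; a minimal usage sketch (the URL is simply the cat API endpoint used in the next example):
homebrewFetch('https://api.thecatapi.com/v1/images/search?limit=1&page=0&order=DESC')
  .then(result => console.log(result))
  .catch(error => console.error(error));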
We will make requests to the cat API to fetch some cat pictures, 10 per page, pausing 7 seconds between requests and fetching at most 5 pages, so as not to overload the cat API, which would be catastrophic.
function fetchCatPics({ limit, page, done }) {
  return homebrewFetch(`https://api.thecatapi.com/v1/images/search?limit=${limit}&page=${page}&order=DESC`)
    .then(body => ({ value: body, done }));
}

function catPics({ limit }) {
  return {
    [Symbol.asyncIterator]: async function*() {
      let currentPage = 0;
      // Stop after 5 pages
      while (currentPage < 5) {
        try {
          const cats = await fetchCatPics({ page: currentPage, limit, done: false });
          console.log(`Fetched ${limit} cats`);
          yield cats;
          currentPage++;
        } catch(error) {
          console.log('There has been an error fetching all the cats!');
          console.log(error);
        }
      }
    }
  };
}

(async function() {
  try {
    for await (let catPicPage of catPics({ limit: 10 })) {
      console.log(catPicPage);
      // Wait 7 seconds between each request
      await new Promise(resolve => setTimeout(resolve, 7000));
    }
  } catch(error) {
    console.log(error);
  }
})()
This way, we automatically fetch a full page of cat pictures every seven seconds and log them to the console.
A more conventional approach to pagination is to implement and expose next and previous methods to control page navigation:
function actualCatPics({ limit }) {
  return {
    [Symbol.asyncIterator]: () => {
      let page = 0;
      return {
        next: function() {
          page++;
          return fetchCatPics({ page, limit, done: false });
        },
        previous: function() {
          if (page > 0) {
            page--;
            return fetchCatPics({ page, limit, done: false });
          }
          return fetchCatPics({ page: 0, limit, done: true });
        }
      }
    }
  };
}

try {
  const someCatPics = actualCatPics({ limit: 5 });
  const { next, previous } = someCatPics[Symbol.asyncIterator]();
  next().then(console.log);
  next().then(console.log);
  previous().then(console.log);
} catch(error) {
  console.log(error);
}
As you can see, asynchronous iterators come in handy when we have lots of paginated data to fetch, or when we want to implement something like infinite scrolling in an application's UI.
If you’re looking for a battle-tested Node.js team to build products on, or want to expand your engineering base, consider using RisingStack’s services: risingstack.com/nodejs-deve…
Browsers have also supported these features for some time: Chrome since version 63, Firefox since version 57, and Safari since version 11.1. However, IE and Edge do not support them at the moment.
Do you have any ideas on how asynchronous iterators can be used? Would you use them in a real application?
Let’s talk in the comments below!
If you find any mistakes in this translation or other areas that could be improved, you are welcome to revise it and submit a PR through the Nuggets Translation Project, for which you can also earn the corresponding reward points. The permanent link at the beginning of this article is the MarkDown link to this article on GitHub.
The Nuggets Translation Project is a community that translates high-quality English technical articles from around the Internet and shares them on Nuggets. The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence and other fields. For more high-quality translations, please follow the Nuggets Translation Project and its official Weibo and Zhihu column.