Preface
Recently I was given a task at my company: the HR department handed me a job-openings spreadsheet and asked me to update the information on our official website. This involved updating existing postings, deleting some, and adding new ones. The work is simple, but tedious and fiddly.
It took me a whole morning to update and check the job list and job detail pages for 30 positions. Most of the effort came from the mismatch between the spreadsheet's data structure and the data structure used in the code, plus verifying how the site rendered after each change.
As a programmer, of course I had to try to optimize the overall process. I knew that Feishu provides an open API, so I started thinking about whether I could generate the content automatically from a Feishu table, so that the HR side would only need to maintain that table.
Development
Getting the data
No sooner said than done. The first thing to understand is how to call Feishu's open API. You can refer directly to the documentation for the corresponding features on the Feishu Open Platform, which is quite clear.
If you want to obtain the data in a multidimensional table (Bitable), the main work is the following three steps:
- Register a Feishu app to get an appId and appSecret
- Call the authentication interface with the appId and appSecret to obtain an application identity credential (tenant_access_token)
- Call the multidimensional table open API, passing the token and the corresponding table information, to get the table data
The open platform provides three different types of access credentials to verify the caller's identity and ensure that the caller has the necessary permissions for an operation:
- app_access_token: application authorization credential. The open platform identifies the calling application by it. With it, the application can access information that belongs to the application itself rather than to a specific enterprise or user, for example obtaining the identity of the user currently logged in to the application.
- tenant_access_token: tenant authorization credential. With this access token, the application performs operations on behalf of an enterprise or team, such as obtaining the information of a user in the address book.
- user_access_token: user authorization credential. With this access token, the application performs operations on behalf of a user, such as creating a cloud document or a calendar event through the API.
Since our site is built with Next.js and rendered on the server, there is no client-side back-and-forth involved, so instead of axios I use the request library directly to send requests. In fact, for security reasons, Feishu's open API only allows server-side calls.
The code is as follows:
const { promisify } = require("util")
const request = promisify(require("request"))

async function updateCareer() {
  // Request to obtain token
  const tokenRes = await request({
    url: "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal",
    body: {
      app_id: APP_ID,
      app_secret: APP_SECRET
    },
    json: true,
    method: "POST",
    headers: {
      "content-type": "application/json",
    }
  })
  // Request table data
  const tableRes = await request({
    url: "https://open.feishu.cn/open-apis/bitable/v1/apps/bascnuEd4NBfjyptIQKl6I75J3e/tables/tbl22HQor1gcwe2A/records",
    method: "GET",
    headers: {
      "Content-Type": "application/json; charset=utf-8",
      "Authorization": "Bearer " + tokenRes.body.tenant_access_token
    },
  })
}
In the code above, I first use the promisify method to turn request into a promise-based API, which makes the asynchronous code clearer, and then call the two interfaces in turn to get the table data.
We first send a request to obtain the tenant_access_token, then send it, prefixed with "Bearer ", as the Authorization request header to the interface that returns the table data. The app token and table ID of the multidimensional table need to be concatenated into the URL of that interface.
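The snippet above stops at the raw response; before formatting, the record list has to be pulled out of it. As a minimal sketch, assuming the Bitable response keeps its records under body.data.items (which matches the record structure shown further below; the exact shape may vary with the API version), it could look like this:
// Minimal sketch: extract the record list from the Bitable response.
// Assumes the records live under body.data.items; adjust if the API version differs.
const tableData = tableRes.body.data.items
console.log(tableData.length, "records fetched")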
Data formatting
After obtaining the table data, we need to format it before generating any content from it. The job list page of our website is shown as follows:
Such a list is generated from a JSON file that contains two lists, one Chinese and one English. The data structure of each card is as follows:
{
  "zh-CN": [{
    "title": "Product Operations Manager",
    "href": "/careers/product-marketing-manager",
    "type": "Technology Center",
    "description": "Study APISEVEN Cloud, the company's SaaS product;"
  }],
  "en-US": [{
    "title": "Product Marketing Manager",
    "href": "/careers/product-marketing-manager",
    "type": "Technology Center",
    "description": "Learn about our SaaS product APISEVEN Cloud;"
  }]
}
The data structure extracted from the table is as follows:
{
  id: 'recwb6xrfK',
  record_id: 'recwb6xrfK',
  fields: {
    'Job Responsibilities': 'xxx',
    'Department': 'Operations department',
    'Last modified date': 1635523200000,
    'Job Requirements': 'xxxx',
    'Job Status': 'OPEN',
    'Job Title': 'Technical Document Intern\nTechnical Writer (Internship)'
  },
  fileName: 'technical-writer-(internship)'
},
Therefore, we need a method to format the data. The implementation here is closely tied to the specific content of the table, so I will not dwell on it. Here is the method that formats the data:
function formatCareerList(tableData) {
  const careerList = {
    "zh-CN": [],
    "en-US": []
  }
  tableData.forEach((item, index) => {
    const [zhTitle, enTitle] = item.fields["Job Title"].split("\n")
    const fileName = (enTitle || zhTitle).replace(/ |\/|\\/ig, "-").toLowerCase()
    tableData[index].fileName = fileName
    const href = "/careers/" + fileName
    const typeMap = {
      "Technology Center": "Technology Center",
      "Operations department": "Operation Department",
      "Functional department": "Functional Department",
      "Sales Department": "Sales Department",
      "Global Team": "Global Team"
    }
    const zhType = item.fields["Department"]
    const enType = typeMap[zhType]
    const zhDesc = item.fields["Job Responsibilities"].split("\n")[0].substr(2)
    const enDesc = item.fields["Job Responsibilities"].split("\n")[0].substr(2)
    careerList["zh-CN"].push({
      title: zhTitle,
      href,
      type: zhType,
      description: zhDesc
    })
    careerList["en-US"].push({
      title: enTitle || "",
      href,
      type: enType,
      description: enDesc
    })
  })
  return careerList
}
The main operations are destructuring assignment and object mapping: each record's fields are mapped into the target shape, producing two arrays of objects, one Chinese and one English. A small wiring sketch follows.
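As a minimal sketch of how this method could be wired to the data fetched earlier (assuming tableData was extracted from the Bitable response as in the earlier sketch):
// Minimal sketch: format the fetched records into the bilingual list.
const careerList = formatCareerList(tableData)
// careerList["zh-CN"] / careerList["en-US"] feed the JSON list,
// and each record in tableData now carries a fileName for its detail page.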
Generate job details
After the job list is generated, you also need to generate the job description details, i.e. the page that the href field in the list points to. Our current job descriptions are generated by rendering Markdown documents in the following format:
---
title: job name
date: xxx
---
### Job responsibilities
xxx
### Job requirements
xxx
### Additional information
xxx
### Contact information
xxx
So I wrote two separate methods, one generating the Chinese content and one generating the English content. They build the md document from the incoming fields using template strings, which conveniently support multi-line text and embedded variables. The English template method is shown below:
// enTemplate.js
module.exports = function enTemplate(fields) {
  const date = new Date()
  return `---
title: ${fields["Job Title"].split("\n")[1]}
date: ${date.getFullYear()}-${date.getMonth() + 1}-${date.getDate()} ${date.getHours()}:${date.getMinutes()}:00
---
### Job Responsibilities
${fields["Job Responsibilities"]}
### Job Requirements
${fields["Job Requirements"]}
### Additional
We're a remote work company with employees in 6 cities across China, and we do all of our collaboration through GitHub, Slack and Google Docs.
### Contact
[[email protected]](mailto:[email protected])
`}
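The Chinese template method is symmetric. Here is a minimal sketch of what zhTemplate might look like, assuming the same field names; the real heading text and fixed copy live in the actual file and are only placeholders here:
// zhTemplate.js -- a sketch of the Chinese counterpart; the fixed sections
// ("Additional information", "Contact information") are placeholders.
module.exports = function zhTemplate(fields) {
  const date = new Date()
  return `---
title: ${fields["Job Title"].split("\n")[0]}
date: ${date.getFullYear()}-${date.getMonth() + 1}-${date.getDate()} ${date.getHours()}:${date.getMinutes()}:00
---
### Job responsibilities
${fields["Job Responsibilities"]}
### Job requirements
${fields["Job Requirements"]}
### Additional information
xxx
### Contact information
xxx
`}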
Once the content is generated, we need to write it to disk, creating the md files and the corresponding JSON list:
const fs = require("fs")
const path = require("path")

fs.writeFileSync(path.join(__dirname, '../data/career.json'), JSON.stringify(careerList, null, 2), 'utf8')
tableData.forEach(item => {
  fs.writeFileSync(path.join(__dirname, '../_posts/careers/en-US/', item.fileName + '.md'), enTemplate(item.fields), 'utf8')
  fs.writeFileSync(path.join(__dirname, '../_posts/careers/zh-CN/', item.fileName + '.md'), zhTemplate(item.fields), 'utf8')
})
In the code above I use the writeFileSync method to create the files. The first argument is the path and name of the file to create (it is overwritten if it already exists), the second argument is the file content, and the third argument is the encoding. By passing each record's fields into the template methods written earlier, we create one md document per position.
When generating the JSON list, I pass a third argument to JSON.stringify for readability: it specifies the string used for indentation and pretty-prints the output. If the argument is a number, it is the number of spaces per level, capped at 10.
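A tiny example of the effect, using nothing beyond standard JSON.stringify behaviour:
const sample = { "zh-CN": [], "en-US": [] }
JSON.stringify(sample)          // '{"zh-CN":[],"en-US":[]}'  -- one long line
JSON.stringify(sample, null, 2) // pretty-printed with two-space indentation:
// {
//   "zh-CN": [],
//   "en-US": []
// }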
The following is the resulting MD document:
Finally, add a startup command for our document generation script by adding a field to scripts in package.json:
//package.json
"scripts": {
  "dev": "next dev",
  "build": "next build",
  "start": "next start --port=80",
  "postbuild": "next-sitemap",
  "career": "node career-api/app.js"
}
From then on, you only need to run npm run career to regenerate the content.
Conclusion
Currently, updating the job information still requires going into the project and running the command. To make it even easier, we could configure automatic execution in a GitHub Actions workflow, or use Next.js itself to expose an endpoint for updating the posts, so that the HR side can trigger an update by calling it directly ~ (a rough sketch of the latter is shown below).
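As a rough sketch of that second idea, and only a sketch: the route name, file location, and the assumption that career-api/app.js exports its updateCareer function are mine, not the project's actual code.
// pages/api/update-career.js -- hypothetical API route
// Assumes career-api/app.js exports updateCareer instead of running it directly.
import { updateCareer } from "../../career-api/app"

export default async function handler(req, res) {
  if (req.method !== "POST") {
    res.status(405).json({ message: "Method not allowed" })
    return
  }
  try {
    // Re-fetch the Feishu table and regenerate the JSON list and md documents
    await updateCareer()
    res.status(200).json({ message: "Career pages updated" })
  } catch (err) {
    res.status(500).json({ message: err.message })
  }
}
The HR side could then trigger an update with a single HTTP request instead of asking an engineer to run the script.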
Feishu's open API makes it possible to automate a great deal of this kind of work, shifting the maintenance of simple content from engineers to non-technical colleagues and reducing development cost. Two hours of development now saves me half a day of editing and checking for every change, and that is the value a programmer brings ~