While browsing a recruitment website, I came across listings for web-crawler engineer positions and found the salaries seriously tempting. So today I decided to crawl the listings myself and analyze them.


First, determine the target website:

https://jobs.51job.com/pachongkaifa

1. Start

Open PyCharm, create a new file, import the required libraries, and add the usual request headers:

# import the requests package
import requests
from lxml import etree

# target page
url = "https://jobs.51job.com/pachongkaifa"

# request headers
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Connection": "keep-alive",
    "Cookie": "guid=7e8a970a750a4e74ce237e74ba72856b; partner=blog_csdn_net",
    "Host": "jobs.51job.com",
    "Sec-Fetch-Dest": "document",
    "Sec-Fetch-Mode": "navigate",
    "Sec-Fetch-Site": "none",
    "Sec-Fetch-User": "?1",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36"
}

2. Analyze the tags on the target page. The fields we want (job title, company name, city, and salary) all sit inside a P tag, shown below:

<p class="info">

3. Start coding

Request the page first. To prevent garbled Chinese characters, set the encoding to GBK (without it the text comes back garbled):

res = requests.get(url=url, headers=headers)
res.encoding = 'gbk'
s = res.text
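As an aside, if you'd rather not hard-code the encoding, requests can guess it from the response bytes. A minimal alternative sketch:

res = requests.get(url=url, headers=headers)
# apparent_encoding is detected from the raw bytes; for this site it
# should come out as a GBK-family codec
res.encoding = res.apparent_encoding
s = res.text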

Then parse the page to extract the content we want:

selector = etree.HTML(s)
for item in selector.xpath('/html/body/div[4]/div[2]/div[1]/div/div'):
    title = item.xpath('.//p/span[@class="title"]/a/text()')
    name = item.xpath('.//p/a/@title')
    location_name = item.xpath('.//p/span[@class="location name"]/text()')
    sary = item.xpath('.//p/span[@class="location"]/text()')
    time = item.xpath('.//p/span[@class="time"]/text()')
    if len(title) > 0:
        print(title)
        print(name)
        print(location_name)
        print(sary)
        print(time)
        print("-----------")
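(As an aside: each item.xpath() call returns a list that may be empty, which is why the code checks len(title) > 0. A small helper, hypothetical and not part of the original script, makes that pattern less repetitive:)

def first(results, default=""):
    # xpath() returns a list; take the first hit, or a default when empty
    return results[0] if results else default

# e.g. inside the loop above:
# title = first(item.xpath('.//p/span[@class="title"]/a/text()'))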

The result is as follows:

4. Save the data to a CSV file

To make the next step of data analysis easier, I store the extracted data in a CSV file.

Import the required libraries:

import csv
import codecs

Create the CSV file and open it in append mode:

f = codecs.open('crawler_engineer_salary.csv', 'a', 'gbk')
writer = csv.writer(f)
writer.writerow(["position", "company", "city", "salary"])

While crawling, write each record to the CSV inside the loop:

writer.writerow([title[0], name[0], location_name[0], sary[0]])
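codecs.open works fine here, but with Python 3 the csv module's documentation recommends the built-in open with newline=''. A minimal sketch of the save step in that style (same file name as above):

import csv

# 'a' appends on each run; use 'w' if you want a fresh file every time
with open('crawler_engineer_salary.csv', 'a', encoding='gbk', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(["position", "company", "city", "salary"])
    # ...then, inside the crawl loop:
    # writer.writerow([title[0], name[0], location_name[0], sary[0]])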

The saved CSV data is as follows:

5. Analyze the data and visualize it

Read the crawled data back from the CSV file:

with open('crawler_engineer_salary.csv', 'r', encoding='gbk') as fp:
    reader = csv.reader(fp)
    for row in reader:
        # job title
        title_list.append(row[0])
        # city: keep the first two characters of the location
        city_list.append(row[2][0:2])
        # salary distribution: take the upper bound of ranges like "1.5-2万/月"
        sary = row[3].split("-")
        if len(sary) == 2:
            try:
                sary = sary[1].replace("/月", "")
                if "万" in sary:
                    # 万 = 10,000 yuan
                    sary_list.append(int(float(sary.replace("万", "")) * 10000))
                elif "千" in sary:
                    # 千 = 1,000 yuan
                    sary_list.append(int(float(sary.replace("千", "")) * 1000))
            except ValueError:
                pass

Three lists store the fields for analysis (job title, city, salary distribution); note they must be defined before the reading loop above:

# job titles
title_list = []
# cities
city_list = []
# salary distribution
sary_list = []

Since salaries are quoted in units of 万 (10,000 yuan/month) and 千 (1,000 yuan/month), e.g. "1.5-2万/月", they need to be converted into plain numbers before analysis.
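A quick sanity check of that conversion on a couple of hand-written samples (synthetic strings, not scraped data):

for raw in ["1.5-2万/月", "8-9千/月"]:
    upper = raw.split("-")[1].replace("/月", "")
    if "万" in upper:
        print(raw, "->", int(float(upper.replace("万", "")) * 10000))  # 20000
    elif "千" in upper:
        print(raw, "->", int(float(upper.replace("千", "")) * 1000))   # 9000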

Start analyzing

5.1. Visualization 1: Common titles for crawler positions

import operator
import matplotlib.pyplot as plt

dict_x = {}
for item in title_list:
    dict_x[item] = title_list.count(item)
sorted_x = sorted(dict_x.items(), key=operator.itemgetter(1), reverse=True)
k_list = []
v_list = []
for k, v in sorted_x[0:11]:
    k_list.append(k)
    v_list.append(v)
plt.axes(aspect=1)
plt.pie(x=v_list, labels=k_list, autopct='%.0f%%')
plt.savefig("common_crawler_position_titles.png", dpi=600)
plt.show()
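The counting loop above calls list.count() once per item, which is O(n²). collections.Counter does the same thing in a single pass, if you prefer:

from collections import Counter

# top 11 job titles by frequency, same result as the sorted dict above
top_titles = Counter(title_list).most_common(11)
k_list = [k for k, _ in top_titles]
v_list = [v for _, v in top_titles]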

As you can see, most companies advertise the position under some variant of "crawler developer".

5.2. Visualization 2: Cities with the most crawler jobs

dict_x = {}
for item in city_list:
    dict_x[item] = city_list.count(item)
sorted_x = sorted(dict_x.items(), key=operator.itemgetter(1), reverse=True)
k_list = []
v_list = []
for k, v in sorted_x[0:11]:
    print(k, v)
    k_list.append(k)
    v_list.append(v)
plt.bar(k_list, v_list, label='crawler job count')
plt.legend()
plt.xlabel('city')
plt.ylabel('number of jobs')
plt.title(u'Cities with the most crawler jobs (Li Yuchen)')
plt.savefig("cities_with_most_crawler_jobs.png", dpi=600)
plt.show()
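One caveat: matplotlib's default font has no CJK glyphs, so Chinese city names may render as empty boxes. A common fix, assuming a CJK font such as SimHei is installed on your machine:

import matplotlib.pyplot as plt

plt.rcParams['font.sans-serif'] = ['SimHei']  # a font that contains CJK glyphs
plt.rcParams['axes.unicode_minus'] = False    # keep the minus sign rendering correctly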

As the chart shows, crawler engineer jobs are concentrated in the big cities (Beijing, Shanghai, Guangzhou, and Shenzhen).

5.3. Visualization 3: Salary distribution

dict_x = {}
for item in sary_list:
    dict_x[item] = sary_list.count(item)
sorted_x = sorted(dict_x.items(), key=operator.itemgetter(1), reverse=True)
k_list = []
v_list = []
for k, v in sorted_x[0:15]:
    print(k, v)
    k_list.append(k)
    v_list.append(v)
plt.axes(aspect=1)
plt.title(u'Salary distribution')
plt.pie(x=v_list, labels=k_list, autopct='%.0f%%')
plt.savefig("salary_distribution.png", dpi=600)
plt.show()

We can see that salaries above 20,000 yuan/month account for about half of the postings, with a cluster right around the 20,000 mark. Crawler jobs really do pay well. Tempted yet? Haha
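Since salaries are continuous values, a histogram may read better than a pie chart of the top-15 counts; a small alternative sketch:

import matplotlib.pyplot as plt

# bin the raw salary numbers directly instead of counting exact values
plt.hist(sary_list, bins=15, edgecolor='black')
plt.xlabel('monthly salary (yuan)')
plt.ylabel('number of postings')
plt.savefig('salary_histogram.png', dpi=600)
plt.show()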

To quantify this, bucket the salaries with pd.cut and build a frequency table:

import pandas as pd

data = pd.DataFrame({"value": sary_list})
cats1 = pd.cut(data['value'].values, bins=[8000, 10000, 20000, 30000, 50000, data['value'].max() + 1])
pinshu = cats1.value_counts()
pinshu_df = pd.DataFrame({'frequency': pinshu})
pinshu_df['relative frequency'] = pinshu_df['frequency'] / pinshu_df['frequency'].sum()
pinshu_df['relative frequency %'] = pinshu_df['relative frequency'].map(lambda x: '%.2f%%' % (x * 100))
pinshu_df['cumulative frequency'] = pinshu_df['relative frequency'].cumsum()
pinshu_df['cumulative frequency %'] = pinshu_df['cumulative frequency'].map(lambda x: '%.4f%%' % (x * 100))
print(pinshu_df)
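If you want to sanity-check the binning itself, run pd.cut on a few synthetic numbers first (hand-written values, not the scraped data):

import pandas as pd

demo = pd.Series([9000, 12000, 18000, 25000, 40000, 60000])
# each value falls into exactly one half-open interval (lo, hi]
print(pd.cut(demo, bins=[8000, 10000, 20000, 30000, 50000, demo.max() + 1]).value_counts())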

Looking at the ranges, most postings fall between 10,000 and 20,000 yuan/month, which is already a very good salary, and a fair number go above 20,000. The temptation is real.

OK, that's it for today's share. See you next time!