The text and images in this article come from the internet and are shared for learning and communication only, with no commercial purpose. If you have any concerns, please contact us and we will handle them.
Author: yangrq1018
The original link: segmentfault.com/a/119000001…
Small projects tend to involve a scattered mix of techniques and skills, so I'm writing this short post as a record to help keep them fresh.
Motivation: I often watch movies on Tencent Video, and its movie library has a “Douban praise” section, which is where I usually pick movies. But there are a lot of movies and no index, so you have to keep scrolling down to make the JavaScript load more entries, and once you have seen the earlier ones it takes a long time to find something new. So I wrote a crawler to pull the “Douban praise” movies into a table, which makes choosing one much easier.
Project address: github.com/yangrq1018/…
Dependencies
The following Python packages are required:
- requests
- bs4 (Beautiful Soup)
- pandas
That's it. No complex crawler framework is needed; these simple, common packages are enough.
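For reference, the snippets in this article assume imports roughly like the following (the original code does not show them explicitly):

```python
from collections import OrderedDict

import bs4
import pandas as pd
import requests
from pandas import DataFrame
```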
Crawl movie information
First, look at the movie channel; you will find that it is loaded asynchronously. Use the Network tab of the developer tools in Firefox (or Chrome) to filter the candidate API requests. It quickly turns out that the API URL has this format:
```python
base_url = 'https://v.qq.com/x/bu/pagesheet/list?_all=1&append=1&channel=movie&listpage=2&offset={offset}&pagesize={page_size}&sort={sort}'
```
Here offset is the starting position of the requested page, pagesize is the number of items per page, and sort is the category. sort=21 is the “Douban praise” category we need. pagesize cannot exceed 30: a larger value still returns only 30 elements, while a smaller value returns exactly the number requested.
```python
# The URL is too long for pandas' default column width; this setting is needed later
pd.set_option('display.max_colwidth', -1)

base_url = 'https://v.qq.com/x/bu/pagesheet/list?_all=1&append=1&channel=movie&listpage=2&offset={offset}&pagesize={page_size}&sort={sort}'

# Douban best type
DOUBAN_BEST_SORT = 21

NUM_PAGE_DOUBAN = 167
```
A quick loop reveals that there are 167 pages of “Douban praise” movies, with 30 elements per page.
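Such a loop might look like the sketch below (an illustration, not the original project code): it keeps requesting pages until one comes back empty.

```python
import bs4
import requests

def count_pages(page_size=30, sort=DOUBAN_BEST_SORT):
    """Request successive pages until an empty one is returned."""
    page = 0
    while True:
        url = base_url.format(offset=page * page_size, page_size=page_size, sort=sort)
        soup = bs4.BeautifulSoup(requests.get(url).content.decode('utf-8'), 'lxml')
        if not soup.find_all('div', class_='list_item'):
            return page
        page += 1

# count_pages() returned 167 when the article was written
```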
We use the requests library to fetch the page. get_soup requests the page at index page_idx, and BeautifulSoup parses response.content into a DOM-like object that makes it easy to find the elements we need. Each movie entry is contained in a div with class list_item, so we also write a function that extracts all such divs.
```python
def get_soup(page_idx, page_size=30, sort=DOUBAN_BEST_SORT):
    url = base_url.format(offset=page_idx * page_size, page_size=page_size, sort=sort)
    res = requests.get(url)
    soup = bs4.BeautifulSoup(res.content.decode('utf-8'), 'lxml')
    return soup

def find_list_items(soup):
    return soup.find_all('div', class_='list_item')
```
We iterate through every page and return a list of all the items parsed by bs4.
```python
def douban_films():
    rel = []
    for p in range(NUM_PAGE_DOUBAN):
        print('Getting page {}'.format(p))
        soup = get_soup(p)
        rel += find_list_items(soup)
    return rel
```
Here’s the HTML code for one of the movies:
```html
<div __wind="" class="list_item">
  <a class="figure" data-float="j3czmhisqin799r" href="https://v.qq.com/x/cover/j3czmhisqin799r.html" tabindex="-1">
    <img alt="Farewell My Concubine" class="figure_pic" onerror="picerr(this,'v')" src="//puui.qpic.cn/vcover_vt_pic/0/j3czmhisqin799rt1444885520.jpg/220"/>
    <img alt="VIP" class="mark_v" onerror="picerr(this)" src="//i.gtimg.cn/qqlive/images/mark/mark_5.png" srcset="//i.gtimg.cn/qqlive/images/mark/mark_5@2x.png 2x"/>
    <div class="figure_caption"></div>
    <div class="figure_score">9.6</div>
  </a>
  <div class="figure_detail figure_detail_two_row">
    <a class="figure_title figure_title_two_row bold" href="https://v.qq.com/x/cover/j3czmhisqin799r.html" target="_blank" title="Farewell My Concubine">Farewell My Concubine</a>
    <div class="figure_desc" title="Cast: Leslie Cheung, Zhang Fengyi, Gong Li, Ge You"></div>
    <div class="figure_count"><svg class="svg_icon svg_icon_play_sm" height="16" viewbox="0 0 16 16" width="16"><use xlink:href="#svg_icon_play_sm"></use></svg>46.71 million</div>
  </div>
</div>
```
It is easy to see that the name, play URL, cover image, score, leading cast, whether VIP membership is required, and the play count of the movie Farewell My Concubine are all inside this div. In an interactive environment like IPython, it is easy to work out how to extract them with bs4. One trick I use is to open a spyder.py file, write the required functions in it, and turn on IPython's autoreload extension; then I debug the code in the console and copy it into the file, and the functions in IPython are updated automatically. This is much more convenient than editing code inside IPython. To turn on autoreload:
```python
%load_ext autoreload
%autoreload 2  # Reload all modules every time before executing Python code
%autoreload 0  # Disable automatic reloading
```
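As a concrete illustration of this workflow (the helper below is a hypothetical example; only the file name spyder.py comes from the article):

```python
# spyder.py -- scratch module holding the extraction helpers being debugged
def parse_title(film):
    """Return the title attribute of one list_item tag (hypothetical helper)."""
    return film.find('a', class_="figure_title")['title']

# In the IPython console:
#   %load_ext autoreload
#   %autoreload 2
#   from spyder import parse_title
#   parse_title(films[0])
# After editing parse_title in spyder.py, calling it again in the console
# picks up the new definition without restarting or re-importing.
```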
The parse_films function extracts the information using two common bs4 methods:
- find
- find_all
Because Douban's API has shut down its search endpoint and scraping its search pages gets caught by anti-crawler measures, the feature of looking up each film's Douban score was dropped.
OrderedDict accepts a list of (key, value) pairs and remembers the order of the keys, which will be useful later when we export to a pandas DataFrame.
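A tiny sketch (illustrative values only) of why the key order matters: the DataFrame columns come out in the same order as the dictionary keys.

```python
from collections import OrderedDict
import pandas as pd

row = OrderedDict([('title', 'Farewell My Concubine'), ('vqq_score', 9.6), ('need_vip', True)])
print(list(pd.DataFrame([row]).columns))  # ['title', 'vqq_score', 'need_vip']
```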
```python
def parse_films(films):
    '''films is a list of `bs4.element.Tag` objects'''
    rel = []
    for i, film in enumerate(films):
        title = film.find('a', class_="figure_title")['title']
        print('Parsing film %d: ' % i, title)
        link = film.find('a', class_="figure")['href']
        img_link = film.find('img', class_="figure_pic")['src']
        # test if need VIP
        need_vip = bool(film.find('img', class_="mark_v"))
        score = getattr(film.find('div', class_='figure_score'), 'text', None)
        if score: score = float(score)
        cast = film.find('div', class_="figure_desc")
        if cast:
            cast = cast.get('title', None)
        play_amt = film.find('div', class_="figure_count").get_text()
        # db_score, db_link = search_douban(title)
        # Store key orders
        dict_item = OrderedDict([
            ('title', title),
            ('vqq_score', score),
            # ('db_score', db_score),
            ('need_vip', need_vip),
            ('cast', cast),
            ('play_amt', play_amt),
            ('vqq_play_link', link),
            # ('db_discuss_link', db_link),
            ('img_link', img_link),
        ])
        rel.append(dict_item)
    return rel
```
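Before crawling all 167 pages, the helpers can be sanity-checked on a single page (a hypothetical usage sketch, not part of the original script):

```python
films = find_list_items(get_soup(0))   # first page only
parsed = parse_films(films)
print(len(parsed), parsed[0]['title'], parsed[0]['vqq_score'])
```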
Export
Finally, we call the functions we have written from the main program.
The parsed result, a list of dictionaries, can be passed directly to the DataFrame constructor. We sort by score with the highest scores first, then wrap the play links in HTML anchor tags, which looks nicer and lets them be opened directly.
Note that a CSV file containing Chinese characters written by pandas with the default encoding is not displayed correctly by Excel. The fix is to use the utf_8_sig encoding, which lets Excel decode it properly.
We also save the DataFrame as a .pkl pickle file so it can be reloaded later. Then we call the DataFrame's to_html method to save an HTML file; make sure escape is set to False, or the hyperlinks cannot be opened directly.
```python
if __name__ == '__main__':
    df = DataFrame(parse_films(douban_films()))
    # Sorted by score
    df.sort_values(by="vqq_score", inplace=True, ascending=False)
    # Format links
    df['vqq_play_link'] = df['vqq_play_link'].apply(lambda x: '<a href="{0}">Film link</a>'.format(x))
    df['img_link'] = df['img_link'].apply(lambda x: '<img src="{0}">'.format(x))
    # Chinese characters in Excel must be encoded with _sig
    df.to_csv('vqq_douban_films.csv', index=False, encoding='utf_8_sig')
    # Pickle
    df.to_pickle('vqq_douban_films.pkl')
    # HTML, render hyperlink
    df.to_html('vqq_douban_films.html', escape=False)
```
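Since the DataFrame is pickled, later analysis can reload it without re-crawling (a small usage sketch):

```python
import pandas as pd

df = pd.read_pickle('vqq_douban_films.pkl')
print(df.head())
```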
Project management
That's the code part. Once the code is written, archive it so it is easy to keep track of; I chose to put it on GitHub.
GitHub provides a command-line tool called hub (not Git itself, but an extension of Git). macOS users can install it like this:
```bash
brew install hub
```
hub offers much more concise syntax than plain git; here we mainly use:
```bash
hub create -d "Create repo for our proj" vqq-douban-film
```
How cool is it to create a repo straight from the command line? No need to open a browser at all. You may then be prompted to register your SSH public key on GitHub (for authentication). If you do not have one, generate it with ssh-keygen and copy the contents of the .pub file into your GitHub settings.
The project directory may contain __pycache__ and .DS_Store files that you do not want to track. Writing a .gitignore by hand is tedious. Is there a tool for it? Python has a package:
```bash
pip install git-ignore
git-ignore python  # Generate a Python template
# .DS_Store was added by hand
```
Everything from the command line, which is just cool.