Preface
While chatting with friends in a group chat, the topic of scraping novels with a crawler came up. Python is the obvious first choice. I vaguely remembered practicing data visualization with Scrapy and Flask for a big-data competition a while back, so why not write a small crawler with Scrapy? Let's get to it.
Preparation
Windows 11
Python 3.7.9
Setting Up the Environment
Install Scrapy first:

pip install Scrapy
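To confirm the installation worked, a quick sanity check from Python (the version number printed depends on your environment):

# Quick check that Scrapy is importable after installation.
import scrapy

print(scrapy.__version__)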
Then create a new Scrapy project:

scrapy startproject novelScrapy
The command generates the following project skeleton:

novelScrapy/
    scrapy.cfg
    novelScrapy/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
Next, from inside the project directory, generate a spider named novel and point it at the target site:

scrapy genspider novel "https://www.xbiquge.la"
The spiders/ directory now contains the generated spider, novel.py:

novelScrapy/
    scrapy.cfg
    novelScrapy/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            novel.py
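The freshly generated novel.py is only a stub; depending on your Scrapy version it looks roughly like this (the exact template text varies between versions):

import scrapy


class NovelSpider(scrapy.Spider):
    name = 'novel'
    allowed_domains = ['www.xbiquge.la']
    start_urls = ['https://www.xbiquge.la/']

    def parse(self, response):
        pass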
Writing the Code
Open the chapter list page in the browser's developer tools and copy the XPath of the element you need; the first chapter link, for example, looks like this:

//*[@id="list"]/dl/dd[1]/a
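If you want to verify an XPath like this without running the whole crawl, you can test it against a snippet of HTML with Scrapy's Selector. A minimal sketch; the markup below is made up and only mimics the structure of the real chapter list:

from scrapy import Selector

# Stand-in markup that imitates the chapter list (not the real page).
html = '''
<div id="list">
  <dl>
    <dd><a href="/example/chapter-1.html">Chapter 1</a></dd>
  </dl>
</div>
'''

sel = Selector(text=html)
first_link = sel.xpath('//*[@id="list"]/dl/dd[1]/a')
print(first_link.xpath('string(.)').get())  # chapter title text
print(first_link.xpath('./@href').get())    # relative chapter URL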
The complete spider, spiders/novel.py:

import scrapy

from novelScrapy.items import NovelcrapyItem


class NovelSpider(scrapy.Spider):
    name = 'novel'
    allowed_domains = ['www.xbiquge.la']
    start_urls = ['https://www.xbiquge.la/xiaoshuodaquan/']
    root_url = 'https://www.xbiquge.la'

    # First, get the list of novels
    def parse(self, response):
        # Get the novel categories
        novel_class_list = response.xpath('//*[@id="main"]/div[@class="novellist"]')
        for i in novel_class_list:
            # Category name
            novel_class = i.xpath('./h2/text()').get()
            # Novel links within this category
            novel_url = i.xpath('./ul/li/a/@href').extract()
            for novel in novel_url:
                yield scrapy.Request(
                    url=novel,
                    meta={'novel_class': novel_class},
                    callback=self.parse_chapter
                )

    # Then get the novel name and its chapter list
    def parse_chapter(self, response):
        # Novel category
        novel_class = response.meta['novel_class']
        # Novel name
        novel_name = response.xpath('//*[@id="info"]/h1/text()').get()
        # Chapter list
        novel_chapter_list = response.xpath('//*[@id="list"]/dl/dd')
        for i in novel_chapter_list:
            # Chapter title
            # novel_chapter = i.xpath('./a/@text()').get()  # does not work: text is not an attribute
            novel_chapter = i.xpath('./a').xpath('string(.)').get()
            # Build the full chapter URL
            link = self.root_url + i.xpath('./a/@href').get()
            yield scrapy.Request(
                url=link,
                meta={'novel_class': novel_class, 'novel_name': novel_name, 'novel_chapter': novel_chapter},
                callback=self.parse_content
            )

    # Finally, get the chapter content
    def parse_content(self, response):
        # Novel category
        novel_class = response.meta['novel_class']
        # Novel name
        novel_name = response.meta['novel_name']
        # Chapter title
        novel_chapter = response.meta['novel_chapter']
        # Chapter text
        novel_content = response.xpath('//*[@id="content"]/text()').extract()

        item = NovelcrapyItem()
        item['novel_class'] = novel_class
        item['novel_chapter'] = novel_chapter
        item['novel_name'] = novel_name
        item['novel_content'] = novel_content
        # Hand the finished item back to the engine
        yield item
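The spider above threads novel_class, novel_name and novel_chapter through response.meta. That works fine; if you are on Scrapy 1.7 or newer, cb_kwargs passes the same values to the callback as ordinary keyword arguments, which can read a little more naturally. A minimal sketch showing only the first hop (the spider name here is hypothetical, just to keep it separate from the real one):

import scrapy


class NovelCbkwargsDemoSpider(scrapy.Spider):
    # Hypothetical demo spider; only illustrates cb_kwargs.
    name = 'novel_cbkwargs_demo'
    allowed_domains = ['www.xbiquge.la']
    start_urls = ['https://www.xbiquge.la/xiaoshuodaquan/']

    def parse(self, response):
        for section in response.xpath('//*[@id="main"]/div[@class="novellist"]'):
            novel_class = section.xpath('./h2/text()').get()
            for novel_url in section.xpath('./ul/li/a/@href').extract():
                # cb_kwargs (Scrapy 1.7+) delivers the values straight
                # into the callback's signature instead of via meta.
                yield scrapy.Request(
                    url=novel_url,
                    cb_kwargs={'novel_class': novel_class},
                    callback=self.parse_chapter,
                )

    def parse_chapter(self, response, novel_class):
        # novel_class arrives here as a normal parameter.
        self.logger.info('category=%s page=%s', novel_class, response.url)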
Declare the item fields in items.py:

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class NovelcrapyItem(scrapy.Item):
    # define the fields for your item here like:
    # Novel category
    novel_class = scrapy.Field()
    # Novel name
    novel_name = scrapy.Field()
    # Chapter title
    novel_chapter = scrapy.Field()
    # Chapter content
    novel_content = scrapy.Field()
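A Scrapy Item behaves like a dict that only accepts the fields declared on it, which is why the spider can fill it with item['novel_name'] = ... and why a typo in a field name fails loudly. A standalone illustration (the fields are copied from items.py above):

import scrapy


class NovelcrapyItem(scrapy.Item):
    # Same fields as declared in items.py
    novel_class = scrapy.Field()
    novel_name = scrapy.Field()
    novel_chapter = scrapy.Field()
    novel_content = scrapy.Field()


item = NovelcrapyItem()
item['novel_name'] = 'Example Novel'   # declared field: behaves like a dict entry
print(dict(item))                      # {'novel_name': 'Example Novel'}

try:
    item['author'] = 'someone'         # not declared above
except KeyError as err:
    print('Undeclared field rejected:', err)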
Before wiring up the pipeline, it helps to picture how Scrapy's components talk to each other. A popular way to explain the data flow is this little dialogue:

Engine: Hi Spider, which website are you going to handle?
Spider: The boss wants me to handle xxxx.com.
Engine: Give me the first URL that needs to be processed.
Spider: Here you go, the first URL is xxxxxxx.com.
Engine: Hi Scheduler, I have a request here; please queue it up for me.
Scheduler: OK, working on it, give me a moment.
Engine: Hi Scheduler, hand me the request you have finished processing.
Scheduler: Here you go, this is the request I've processed.
Engine: Hi Downloader, please download this request according to the boss's downloader-middleware settings.
Downloader: OK! Here you go, this is the downloaded content. (If it fails: sorry, this request failed to download. The engine then tells the scheduler: this request failed, note it down, we'll try it again later.)
Engine: Hi Spider, here is the downloaded content, already processed by the boss's downloader middleware; handle it yourself. (Note: the responses are handed to the parse() method by default.)
Spider: (after processing the data, for the URLs that need following) Hi Engine, I have two results here: these are the URLs I need to follow, and this is the item data I extracted.
Engine: Hi Pipeline, I have an item here, please process it for me! Scheduler, these are the URLs that need following, please handle them. The loop then repeats from step four until all the information the boss wants has been collected.
Pipeline and Scheduler: OK, on it!
The pipeline in pipelines.py writes each chapter to disk:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
import os
import time

from itemadapter import ItemAdapter


class NovelcrapyPipeline:
    def process_item(self, item, spider):
        # Storage path for the novels: <base>\<category>\<novel name>\
        dir = 'D:\\Project\\Python\\小说\\' + item['novel_class'] + '\\' + item['novel_name'] + '\\'
        # Create the directory if it does not exist yet
        if not os.path.exists(dir):
            os.makedirs(dir)
        filename = dir + item['novel_chapter'] + '.txt'
        with open(filename, 'w', encoding='utf-8') as f:
            f.write(''.join(item['novel_content']))
        now_time = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(time.time()))
        print('[%s] %s %s %s downloaded' % (now_time, item['novel_class'], item['novel_name'], item['novel_chapter']))
        return item
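As the template comment says, the pipeline only runs once it is registered under ITEM_PIPELINES in settings.py. Assuming the project package is named novelScrapy (matching the startproject command earlier), the entry looks like this:

# settings.py: enable the pipeline so Scrapy calls process_item() for every item.
# The number is the pipeline's order; lower values run earlier.
ITEM_PIPELINES = {
    'novelScrapy.pipelines.NovelcrapyPipeline': 300,
}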
Start the crawl with job persistence enabled, so the scheduler state is saved under crawls/novel-1:

scrapy crawl novel -s JOBDIR=crawls/novel-1

With JOBDIR set, the crawl can be paused with a single Ctrl+C (let it shut down cleanly) and resumed later by running the same command again.
Optimization
To speed the crawl up, raise the concurrency limits, drop the download delay, and disable cookie handling in settings.py:

DOWNLOAD_DELAY = 0
CONCURRENT_REQUESTS = 100
CONCURRENT_REQUESTS_PER_DOMAIN = 100
CONCURRENT_REQUESTS_PER_IP = 100
COOKIES_ENABLED = False
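These values make the crawler as aggressive as possible, which the target site may not tolerate. If requests start failing or you get banned, Scrapy's AutoThrottle extension is a gentler alternative that adapts the delay to the server's latency; a possible starting point (the numbers below are not tuned for this particular site):

# settings.py: let AutoThrottle adjust the request rate automatically.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1         # initial download delay in seconds
AUTOTHROTTLE_MAX_DELAY = 10          # cap on the delay under high latency
AUTOTHROTTLE_TARGET_CONCURRENCY = 8  # average parallel requests per remote server
AUTOTHROTTLE_DEBUG = False           # set to True to log every throttling decision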
The Finished Result
Unless otherwise stated, 《Python - 手把手教你用Scrapy编写一个爬虫》 is an original post by MoLeft. When reposting, please credit the original link: https://moleft.cn/post-216.html