1. The Scrapy project structure is as follows:
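(The structure screenshot is not reproduced here. For reference, the typical layout generated by scrapy startproject scrapydemo looks like the sketch below; everything except Duba.py comes from Scrapy's default project template.)

scrapydemo/
    scrapy.cfg              # deploy configuration
    scrapydemo/
        __init__.py
        items.py            # item definitions
        pipelines.py        # item pipelines
        settings.py         # project settings
        spiders/
            __init__.py
            Duba.py         # the spider shown below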
2. Open the Duba.py file in the spiders directory. The code is as follows (it scrapes the hot-topic entries from part of the Douban homepage, 6 items in total):
# -*- coding: utf-8 -*-
import scrapy
from scrapydemo.items import ScrapydemoItem


class DubaSpider(scrapy.Spider):
    name = 'Duba'
    allowed_domains = ['www.douban.com']
    start_urls = ['https://www.douban.com/']

    def parse(self, response):
        # Each hot topic sits in an <li class="rec_topics"> element
        for each in response.xpath("//li[@class='rec_topics']"):
            item = ScrapydemoItem()  # create a fresh item for every topic
            name = each.xpath("./a/@href").extract()      # topic link
            level = each.xpath("./a/text()").extract()    # topic title
            info = each.xpath("./span/text()").extract()  # topic stats
            item['name'] = name[0]
            item['level'] = level[0]
            item['info'] = info[0]
            yield item
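The spider imports ScrapydemoItem from scrapydemo/items.py, which the original post does not show. Given the three fields the spider fills in, its definition would have to look roughly like this sketch (field names taken from Duba.py above):

import scrapy


class ScrapydemoItem(scrapy.Item):
    name = scrapy.Field()   # topic link (href)
    level = scrapy.Field()  # topic title text
    info = scrapy.Field()   # topic stats text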
3. Modify the code in pipelines.py as follows:
import json


class ScrapydemoPipeline(object):
    def __init__(self):
        # Append to the output file; open with UTF-8 so Chinese text is written correctly
        self.f = open("pipline.json", 'a', encoding="utf-8")

    def process_item(self, item, spider):
        # Serialize the item as readable JSON and separate records with ",\n"
        content = json.dumps(dict(item), ensure_ascii=False, indent=2) + ",\n"
        self.f.write(content)
        return item

    def close_spider(self, spider):
        self.f.close()
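Note that joining records with ",\n" produces comma-separated JSON objects rather than one valid JSON document. If you later want to load the records back, one option is to strip the trailing comma and wrap the contents in brackets. A minimal sketch, assuming the file was written only by this pipeline:

import json

with open("pipline.json", encoding="utf-8") as f:
    raw = f.read().rstrip().rstrip(",")  # drop trailing whitespace and the final comma
records = json.loads("[" + raw + "]")    # wrap in brackets to form a valid JSON array
print(len(records))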
4. In settings.py, uncomment the following lines (the value 300 is the pipeline's priority; pipelines with lower values run first):
ITEM_PIPELINES = {
    'scrapydemo.pipelines.ScrapydemoPipeline': 300,
}
Then run scrapy crawl Duba in the directory where you want the file generated, and the output file will appear in that directory.
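If you prefer to launch the crawl from a Python script instead of the command line (for example, to debug in an IDE), Scrapy's CrawlerProcess supports this. A minimal sketch, assuming it is run from the project root so settings.py can be found:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Load the project's settings (including ITEM_PIPELINES) and run the spider by name
process = CrawlerProcess(get_project_settings())
process.crawl('Duba')
process.start()  # blocks until the crawl finishes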
Of course, I ran into some problems along the way; after checking, they turned out to be whitespace and spelling mistakes. Especially when working in vim, make sure to import a module before calling it, and watch out for whitespace and typos. After that, it's just a matter of practice.