Python: A Step-by-Step Guide to Crawling Baidu COVID-19 Data with the Scrapy Framework


Posted in Python on November 11, 2021

Preface

Out of boredom I wrote a crawler to fetch Baidu's COVID-19 data. To be clear, this is purely for research. Note that the page is likely to keep adding anti-crawling measures, so the corresponding XPath expressions may need adjusting over time.

GitHub repository address: code repository

This article mainly uses the Scrapy framework.

Environment Setup

Just a couple of quick recommendations here.
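For completeness (not covered in the original post): the walkthrough assumes Scrapy and Selenium are installed, for example via

pip install scrapy selenium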

Recommended Plugin

First, I recommend the Google Chrome extension XPath Helper, which lets you check whether an XPath expression is correct.
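Besides the browser extension, an XPath can also be sanity-checked against a saved copy of the rendered page with lxml. A minimal sketch (page_source.html is a hypothetical local copy of the rendered page; lxml is assumed to be installed):

from lxml import etree

# load a locally saved copy of the rendered page (hypothetical file name)
with open("page_source.html", encoding="utf-8") as f:
    tree = etree.HTML(f.read())

# the same expression the spider uses for the province table rows
rows = tree.xpath("//*[@id='nationTable']/table/tbody/tr")
print(len(rows))  # expect one row per province once 'expand all' has been clicked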


Crawl Target

The page to crawl: Baidu's real-time COVID-19 map (实时更新:新型冠状病毒肺炎疫情地图).

The targets are the nationwide figures as well as the data for each province.


Project Creation

Create the project with the scrapy command:

scrapy startproject yqsj
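For orientation, the command generates the standard Scrapy project skeleton; the spider module itself (e.g. spiders/yqsj.py) is added afterwards:

yqsj/
    scrapy.cfg
    yqsj/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py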

webdriver Setup

I won't go over this again here; see the setup steps in my earlier article: Python 详解通过Scrapy框架实现爬取CSDN全站热榜标题热词流程.
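Once the driver is in place, a quick smoke test confirms Selenium can drive Chrome. A minimal sketch (it assumes chromedriver is reachable on PATH; otherwise pass the same executable path used in the spider below):

from selenium import webdriver

# assumes chromedriver is reachable on PATH
browser = webdriver.Chrome()
browser.get("https://voice.baidu.com/act/newpneumonia/newpneumonia#tab0")
print(browser.title)
browser.quit()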

Project Code

Time to start writing code. First, take a look at the catch with Baidu's province-level data.


The page only shows every province after the "expand all" span is clicked, so when extracting the page source we need to open the page in a simulated browser and click that button first. With that approach in mind, let's go step by step.

Item Definitions

Define two classes, YqsjProvinceItem and YqsjChinaItem, which hold the per-province data and the national data respectively.

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
 
import scrapy
 
 
class YqsjProvinceItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    location = scrapy.Field()  # province name
    new = scrapy.Field()       # new cases
    exist = scrapy.Field()     # existing (active) cases
    total = scrapy.Field()     # cumulative cases
    cure = scrapy.Field()      # cured
    dead = scrapy.Field()      # deaths
 
 
class YqsjChinaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # currently confirmed
    exist_diagnosis = scrapy.Field()
    # asymptomatic
    asymptomatic = scrapy.Field()
    # currently suspected
    exist_suspecte = scrapy.Field()
    # currently severe
    exist_severe = scrapy.Field()
    # cumulative confirmed
    cumulative_diagnosis = scrapy.Field()
    # imported from overseas
    overseas_input = scrapy.Field()
    # cumulative cured
    cumulative_cure = scrapy.Field()
    # cumulative deaths
    cumulative_dead = scrapy.Field()

Middleware Definition

After the page is opened, the middleware needs to click "expand all" once.


Complete code:

# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
from scrapy import signals
 
# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver import ActionChains
import time
 
 
class YqsjSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
 
        # Should return None or raise an exception.
        return None
 
    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
 
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i
 
    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
 
        # Should return either None or an iterable of Request or item objects.
        pass
 
    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
 
        # Must return only requests (not items).
        for r in start_requests:
            yield r
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
 
 
class YqsjDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.
 
    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s
 
    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
 
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        # return None
        try:
            spider.browser.get(request.url)
            spider.browser.maximize_window()
            time.sleep(2)
            spider.browser.find_element_by_xpath("//*[@id='nationTable']/div/span").click()
            # ActionChains(spider.browser).click(searchButtonElement)
            time.sleep(5)
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            # note: this closes the browser window right after the (single) request;
            # a multi-request spider would need to keep it open until the spider closes
            spider.browser.close()
 
    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
 
        # Must either;
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response
 
    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
 
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass
 
    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
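One caveat about the click above: find_element_by_xpath was removed in Selenium 4. With a current Selenium, the line in process_request would need the By-based API instead (a sketch, not part of the original code):

from selenium.webdriver.common.by import By

spider.browser.find_element(By.XPATH, "//*[@id='nationTable']/div/span").click()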

Define the Spider

The spider fetches the national figures and the per-province figures. Complete code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time    : 2021/11/7 22:05
# @Author  : 至尊宝
# @Site    : 
# @File    : baidu_yq.py
 
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
 
from yqsj.items import YqsjChinaItem, YqsjProvinceItem
 
 
class YqsjSpider(scrapy.Spider):
    name = 'yqsj'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://voice.baidu.com/act/newpneumonia/newpneumonia#tab0']
    china_xpath = "//div[contains(@class, 'VirusSummarySix_1-1-317_2ZJJBJ')]/text()"
    province_xpath = "//*[@id='nationTable']/table/tbody/tr[{}]/td/text()"
    province_xpath_1 = "//*[@id='nationTable']/table/tbody/tr[{}]/td/div/span/text()"
 
    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # run Chrome in headless mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)
 
    def parse(self, response, **kwargs):
        country_info = response.xpath(self.china_xpath)
        yq_china = YqsjChinaItem()
        yq_china['exist_diagnosis'] = country_info[0].get()
        yq_china['asymptomatic'] = country_info[1].get()
        yq_china['exist_suspecte'] = country_info[2].get()
        yq_china['exist_severe'] = country_info[3].get()
        yq_china['cumulative_diagnosis'] = country_info[4].get()
        yq_china['overseas_input'] = country_info[5].get()
        yq_china['cumulative_cure'] = country_info[6].get()
        yq_china['cumulative_dead'] = country_info[7].get()
        yield yq_china
        
        # iterate over the 34 province-level regions (table rows 1-34)
        for x in range(1, 35):
            path = self.province_xpath.format(x)
            path1 = self.province_xpath_1.format(x)
            province_info = response.xpath(path)
            province_name = response.xpath(path1)
            yq_province = YqsjProvinceItem()
            yq_province['location'] = province_name.get()
            yq_province['new'] = province_info[0].get()
            yq_province['exist'] = province_info[1].get()
            yq_province['total'] = province_info[2].get()
            yq_province['cure'] = province_info[3].get()
            yq_province['dead'] = province_info[4].get()
            yield yq_province
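A similar note applies to the constructor: the chrome_options= and executable_path= keyword arguments used in __init__ are deprecated/removed in Selenium 4. With a current Selenium, the driver would be created roughly like this (a sketch, reusing the same chromedriver path):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service

chrome_options = Options()
chrome_options.add_argument('--headless')
service = Service("E:\\chromedriver_win32\\chromedriver.exe")
browser = webdriver.Chrome(service=service, options=chrome_options)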

Pipeline: Writing the Results to a Text File

The pipeline writes the scraped items to a text file in a fixed format. Complete code:

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
 
 
# useful for handling different item types with a single interface
from itemadapter import ItemAdapter
 
from yqsj.items import YqsjChinaItem, YqsjProvinceItem
 
 
class YqsjPipeline:
    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')
 
    def process_item(self, item, spider):
        if isinstance(item, YqsjChinaItem):
            self.file.write(
                "国内疫情\n现有确诊\t{}\n无症状\t{}\n现有疑似\t{}\n现有重症\t{}\n累计确诊\t{}\n境外输入\t{}\n累计治愈\t{}\n累计死亡\t{}\n".format(
                    item['exist_diagnosis'],
                    item['asymptomatic'],
                    item['exist_suspecte'],
                    item['exist_severe'],
                    item['cumulative_diagnosis'],
                    item['overseas_input'],
                    item['cumulative_cure'],
                    item['cumulative_dead']))
        if isinstance(item, YqsjProvinceItem):
            self.file.write(
                "省份:{}\t新增:{}\t现有:{}\t累计:{}\t治愈:{}\t死亡:{}\n".format(
                    item['location'],
                    item['new'],
                    item['exist'],
                    item['total'],
                    item['cure'],
                    item['dead']))
        return item
 
    def close_spider(self, spider):
        self.file.close()
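As an aside, instead of a hand-written text pipeline, Scrapy's built-in feed exports can dump the items directly to a structured file, e.g.:

scrapy crawl yqsj -o result.json

The text pipeline above is simply a readability choice for the final report.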

Settings Changes

Use the following as a reference and adjust as needed:

# Scrapy settings for yqsj project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html
 
BOT_NAME = 'yqsj'
 
SPIDER_MODULES = ['yqsj.spiders']
NEWSPIDER_MODULE = 'yqsj.spiders'
 
 
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'yqsj (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'
 
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
 
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
 
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
 
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
 
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
 
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}
 
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   'yqsj.middlewares.YqsjSpiderMiddleware': 543,
}
 
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   'yqsj.middlewares.YqsjDownloaderMiddleware': 543,
}
 
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
 
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
   'yqsj.pipelines.YqsjPipeline': 300,
}
 
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
 
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

Verifying the Results

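To run the crawl, execute the standard Scrapy command from the project root (not shown in the original post, but this is how the spider defined above is launched):

scrapy crawl yqsj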

Take a look at the result file, result.txt: it starts with the national summary, followed by one line per province.


Summary

Emmm, I wrote this for fun while bored, so there isn't much to summarize.

A thought to share:

Cultivating the mind is itself part of the practice. In favorable times build your strength; in adversity temper your mind. Neither can be spared. — Sword of Coming (《剑来》)

If this article was of use to you, don't be stingy with a like. Thank you.
