Introduction to the Python Web-Scraping Library BeautifulSoup, with Simple Usage Examples


Posted in Python on January 25, 2020

1. Introduction

BeautifulSoup is a flexible, convenient library for parsing web pages. It is efficient, supports multiple parsers, and lets you extract information from a page without writing regular expressions.

Commonly used parsers

Parser | Usage | Advantages | Disadvantages
Python standard library | BeautifulSoup(markup, "html.parser") | Built in; decent speed; tolerant of bad markup | Poor tolerance of bad markup in versions before Python 2.7.3 / 3.2.2
lxml HTML parser | BeautifulSoup(markup, "lxml") | Very fast; tolerant of bad markup | Requires the lxml C extension
lxml XML parser | BeautifulSoup(markup, "xml") | Very fast; the only XML parser of the four | Requires the lxml C extension
html5lib | BeautifulSoup(markup, "html5lib") | Best error recovery; parses the way a browser does; produces valid HTML5 | Very slow; depends on an external Python package
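The parser names in the table above can be tried out directly. A minimal sketch (the malformed fragment is illustrative; lxml and html5lib are optional third-party installs, so the sketch guards against their absence):

```python
from bs4 import BeautifulSoup

fragment = "<ul><li>one<li>two"  # deliberately malformed: unclosed tags

# The built-in parser needs no extra installation.
print(BeautifulSoup(fragment, "html.parser"))

# lxml and html5lib are optional third-party parsers
# (install with: pip install lxml html5lib); each repairs
# the broken markup slightly differently.
for parser in ("lxml", "html5lib"):
    try:
        print(parser, "->", BeautifulSoup(fragment, parser))
    except Exception as exc:  # FeatureNotFound if the parser is missing
        print(parser, "not installed:", exc)
```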

2. Quick Start

Given an HTML document, create a BeautifulSoup object:

from bs4 import BeautifulSoup
html_doc = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""
soup = BeautifulSoup(html_doc, 'lxml')

Print the prettified document:

print(soup.prettify())
<html>
 <head>
 <title>
  The Dormouse's story
 </title>
 </head>
 <body>
 <p class="title">
  <b>
  The Dormouse's story
  </b>
 </p>
 <p class="story">
  Once upon a time there were three little sisters; and their names were
  <a class="sister" href="http://example.com/elsie" id="link1">
  Elsie
  </a>
  ,
  <a class="sister" href="http://example.com/lacie" id="link2">
  Lacie
  </a>
  and
  <a class="sister" href="http://example.com/tillie" id="link3">
  Tillie
  </a>
  ;
and they lived at the bottom of a well.
 </p>
 <p class="story">
  ...
 </p>
 </body>
</html>

Navigating the structured data:

print(soup.title)             # the <title> tag and its contents
print(soup.title.name)        # the tag's name
print(soup.title.string)      # the string inside <title>
print(soup.title.parent.name) # the name of <title>'s parent tag (head)
print(soup.p)                 # the first <p> tag
print(soup.p['class'])        # the first <p> tag's class
print(soup.a)                 # the first <a> tag
print(soup.find_all('a'))     # all <a> tags
print(soup.find(id="link3"))  # the first tag with id="link3"
<title>The Dormouse's story</title>
title
The Dormouse's story
head
<p class="title"><b>The Dormouse's story</b></p>
['title']
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>, <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

Extract the links from all <a> tags:

for link in soup.find_all('a'):
  print(link.get('href'))
http://example.com/elsie
http://example.com/lacie
http://example.com/tillie

Get all the text content:

print(soup.get_text())
The Dormouse's story

The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie,
Lacie and
Tillie;
and they lived at the bottom of a well.
...

Auto-completing tags and pretty-printing

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.prettify())      # pretty-print and auto-complete missing tags
print(soup.title.string)    # the contents of the title tag

Tag selectors

Selecting elements

html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.title)        # select the title tag
print(type(soup.title))  # inspect its type
print(soup.head)

Getting the tag name

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.title.name)

Getting tag attributes

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.p.attrs['name'])  # the value of the p tag's name attribute
print(soup.p['name'])        # shorthand for the same thing

Getting tag content

print(soup.p.string)

Nested tag selection

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.head.title.string)

Children and descendants

html = """
<html>
  <head>
    <title>The Dormouse's story</title>
  </head>
  <body>
    <p class="story">
      Once upon a time there were three little sisters; and their names were
      <a href="http://example.com/elsie" class="sister" id="link1">
        <span>Elsie</span>
      </a>
      <a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
      and
      <a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
      and they lived at the bottom of a well.
    </p>
    <p class="story">...</p>
"""


from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.p.contents)  # the tag's direct children, as a list

Another approach, .children:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.p.children)  # an iterator over the tag's direct children
for i, child in enumerate(soup.p.children):  # i is the index, child the node
  print(i, child)

The output is the same as above, with an index added. Note that you must loop to see the children: .children returns only an iterator, not a list.

Getting descendants:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.p.descendants)  # an iterator over all descendants of the tag
for i, child in enumerate(soup.p.descendants):  # i is the index, child the node
  print(i, child)

Parents and ancestors

parent

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(soup.a.parent)  # the tag's direct parent

parents

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(list(enumerate(soup.a.parents)))  # all ancestors of the tag

Siblings

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')  # specify the parser: lxml
print(list(enumerate(soup.a.next_siblings)))      # the siblings after the tag
print(list(enumerate(soup.a.previous_siblings)))  # the siblings before the tag

Standard selectors

find_all(name, attrs, recursive, text, **kwargs)

Searches the document by tag name, attributes, or text content.

name

html='''
<div class="panel">
  <div class="panel-heading">
    <h4>Hello</h4>
  </div>
  <div class="panel-body">
    <ul class="list" id="list-1">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
    <ul class="list list-small" id="list-2">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
    </ul>
  </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all('ul'))           # every ul tag
print(type(soup.find_all('ul')[0]))  # inspect the element type

The next example finds all the li tags inside each ul tag:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.find_all('ul'):
  print(ul.find_all('li'))

attrs (attributes)

Finding elements by attribute:

html='''
<div class="panel">
  <div class="panel-heading">
    <h4>Hello</h4>
  </div>
  <div class="panel-body">
    <ul class="list" id="list-1" name="elements">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
    <ul class="list list-small" id="list-2">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
    </ul>
  </div>
</div>
'''


from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))  # pass a dict of the attributes to match
print(soup.find_all(attrs={'name': 'elements'}))

Both calls find the same content, because both attributes live on the same tag.

Special-cased keyword arguments:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))       # id is special-cased and can be passed directly
print(soup.find_all(class_='element'))  # class is a Python keyword, so use class_

text

Selecting by text content:

html='''
<div class="panel">
  <div class="panel-heading">
    <h4>Hello</h4>
  </div>
  <div class="panel-body">
    <ul class="list" id="list-1">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
    <ul class="list list-small" id="list-2">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
    </ul>
  </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text='Foo'))  # matches the text "Foo", but returns strings, not tags

So text is handy for checking whether content is present, but less convenient when you need the tags that contain it.
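One way around that limitation (a small sketch, not from the original article): text matches are returned as NavigableString objects, so step up to the enclosing tag through .parent when the tag itself is needed. Newer bs4 versions prefer the keyword string= over text=.

```python
from bs4 import BeautifulSoup

html = '<ul><li class="element">Foo</li><li class="element">Bar</li></ul>'
soup = BeautifulSoup(html, 'html.parser')

# find_all(text=...) yields the matching strings themselves...
for s in soup.find_all(text='Foo'):
    # ...so use .parent to recover the tag that contains the text.
    print(s.parent)
```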

Methods

find

find takes the same arguments as find_all, but returns only the first match (or None if nothing matches).

find_parents(), find_parent()

find_parents() returns all ancestors; find_parent() returns the direct parent.

find_next_siblings(), find_next_sibling()

find_next_siblings() returns all following siblings; find_next_sibling() returns the first following sibling.

find_previous_siblings(), find_previous_sibling()

find_previous_siblings() returns all preceding siblings; find_previous_sibling() returns the first preceding sibling.

find_all_next(), find_next()

find_all_next() returns all matching nodes after the tag; find_next() returns the first one.

find_all_previous(), find_previous()

find_all_previous() returns all matching nodes before the tag; find_previous() returns the first one.
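A compact sketch exercising a few of these methods on a stripped-down version of the sisters document (the tag ids follow the earlier examples; html.parser is used so the snippet has no extra dependency):

```python
from bs4 import BeautifulSoup

html = ('<p class="story"><a id="link1">Elsie</a>, '
        '<a id="link2">Lacie</a> and <a id="link3">Tillie</a></p>')
soup = BeautifulSoup(html, 'html.parser')

first_a = soup.find('a')                     # first match only, or None
print(first_a.find_parent('p')['class'])     # enclosing <p> -> ['story']
print(first_a.find_next_sibling('a')['id'])  # next <a> sibling -> link2
print(soup.find(id='link3').find_previous_sibling('a')['id'])  # -> link2
```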

CSS selectors

Passing a CSS selector to select() performs the selection directly:

html='''
<div class="panel">
  <div class="panel-heading">
    <h4>Hello</h4>
  </div>
  <div class="panel-body">
    <ul class="list" id="list-1">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
      <li class="element">Jay</li>
    </ul>
    <ul class="list list-small" id="list-2">
      <li class="element">Foo</li>
      <li class="element">Bar</li>
    </ul>
  </div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.select('.panel .panel-heading'))  # . means class; a space means descendant
print(soup.select('ul li'))                  # li tags inside ul tags
print(soup.select('#list-2 .element'))       # '#' means id: class=element inside id="list-2"
print(type(soup.select('ul')[0]))            # inspect the node type

Nested selection works too:

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
  print(ul.select('li'))

Getting attributes

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
  print(ul['id'])        # [] indexing fetches an attribute
  print(ul.attrs['id'])  # an equivalent spelling

Getting text

from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for li in soup.select('li'):
  print(li.get_text())

The get_text() method returns the text content.

Summary

Prefer the lxml parser; fall back to html.parser when necessary.

Tag selectors are fast but offer weak filtering; prefer find() and find_all() to match single or multiple results.

If you are comfortable with CSS selectors, use select().

Memorize the common methods for getting attributes and text.
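As a closing sketch tying the summary together (the markup is a trimmed copy of the panel example above; html.parser stands in as the fallback parser the summary mentions):

```python
from bs4 import BeautifulSoup

html = ('<ul class="list" id="list-1">'
        '<li class="element">Foo</li><li class="element">Bar</li></ul>')
soup = BeautifulSoup(html, 'html.parser')

# find_all for tag/attribute matching
for li in soup.find_all('li', class_='element'):
    print(li.get_text())          # Foo, then Bar

# select() for CSS-style queries; [] indexing for attributes
print(soup.select('#list-1 .element')[0].get_text())  # Foo
print(soup.ul['id'])                                  # list-1
```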
