Python: scraping Xici proxy IPs with requests, XPath, and multithreading


Posted in Python on March 06, 2020

Without further ado, let's go straight to the code!

import requests,random
from lxml import etree
import threading
import time

angents = [  # pool of User-Agent strings to rotate through
  "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
  "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
  "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
  "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
  "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
  "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
  "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
  "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
  "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
  "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
  "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
  "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
  "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
  "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
  "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
]

def get_all_xici_urls(start_num,stop_num):
  xici_urls = []
  # pages start_num..stop_num inclusive (stop_num is an int, not a sequence)
  for num in range(start_num,stop_num+1):
    xici_http_url = 'http://www.xicidaili.com/wt/'
    xici_http_url += str(num)
    xici_urls.append(xici_http_url)
  print('Finished building the list of Xici page URLs to crawl...')
  return xici_urls
def get_all_http_ip(xici_http_url,headers,proxies_list):
  try:
    all_ip_xpath = '//table//tr/child::*[2]/text()'
    all_port_xpath = '//table//tr/child::*[3]/text()'
    response = requests.get(url=xici_http_url,headers=headers)
    html_tree = etree.HTML(response.text)
    ip_list = html_tree.xpath(all_ip_xpath)
    port_list = html_tree.xpath(all_port_xpath)
    new_proxies_list = []
    # start at index 1 to skip the table's header row
    for index in range(1,len(ip_list)):
      proxies_dict = {}
      proxies_dict['http'] = 'http://{}:{}'.format(ip_list[index],port_list[index])
      new_proxies_list.append(proxies_dict)
    # list += is effectively atomic under CPython's GIL, so the shared list is safe here
    proxies_list += new_proxies_list
    return proxies_list
  except Exception as e:
    print('Error for url',xici_http_url,':',e)

if __name__ == '__main__':
  start_num = int(input('Enter the start page: ').strip())
  stop_num = int(input('Enter the end page: ').strip())
  print('Starting the crawl...')
  t_list = []
  # holds the Xici proxy IPs we are going to use
  proxies_list = []
  # use multiple threads, one per page
  xici_urls = get_all_xici_urls(start_num,stop_num)
  for xici_get_url in xici_urls:
    # pick a random User-Agent
    headers = {'User-Agent': random.choice(angents)}
    t = threading.Thread(target=get_all_http_ip,args=(xici_get_url,headers,proxies_list))
    t.start()
    t_list.append(t)
  for j in t_list:
    j.join()
  print('All of the required proxy IPs have been scraped...')
  print(proxies_list)
  print(len(proxies_list))
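The workers above all append to the shared proxies_list; in CPython the GIL makes the list concatenation effectively atomic, but an explicit threading.Lock is the more defensive choice. A minimal, network-free sketch of that pattern (the fake per-page results are stand-ins for what each worker would scrape):

```python
import threading

lock = threading.Lock()
proxies_list = []

def collect(page_results):
    # each worker hands in the proxy dicts scraped from one page
    with lock:  # guard the shared list against concurrent mutation
        proxies_list.extend(page_results)

threads = []
for page in range(1, 6):
    # fake results standing in for one scraped page
    fake = [{'http': 'http://10.0.0.{}:8080'.format(page)}]
    t = threading.Thread(target=collect, args=(fake,))
    t.start()
    threads.append(t)

for t in threads:
    t.join()

print(len(proxies_list))  # 5
```

The same join-then-read order as the script applies: only inspect proxies_list after every thread has been joined.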

Bonus knowledge: scraping and validating Xici's free proxies with Python (the validation is the key point, explained clearly)

There are plenty of posts online about scraping Xici, but the validation step is rarely explained well, so I'll walk through it carefully.

Here I wrote a proxy class with four methods (my personal style, nothing special): get_user_agent (returns a random User-Agent, the single most important request header), get_proxy (scrapes proxy IPs), test_proxy (checks that a proxy actually works), and store_txt (saves the working proxies to a txt file).

1. Scraping: headers is the request-header dict, choice selects whether to scrape HTTP or HTTPS proxies, and first/end are the start and end page numbers (end is exclusive).

def get_proxy(self, headers, choice='http', first=1, end=2):
    """
    Scrape proxy addresses.
    :param choice: 'http' or 'https'
    :param first: first page to scrape
    :param end: one past the last page to scrape
    :return: a list of 'ip:port' strings
    """

    ip_list = []
    base_url = None

    # pick the section to scrape: one lists HTTP proxies, the other HTTPS
    if choice == 'http':
      base_url = 'http://www.xicidaili.com/wt/'
    elif choice == 'https':
      base_url = 'http://www.xicidaili.com/wn/'

    # walk the page range, pull each IP and port with a regex, join them with ':'
    for n in range(first, end):
      actual_url = base_url + str(n)
      html = requests.get(url=actual_url, headers=headers).text
      # raw string so \d and \s are not treated as invalid escape sequences
      pattern = r'(\d+\.\d+\.\d+\.\d+)</td>\s*<td>(\d+)'
      re_list = re.findall(pattern, html)

      for ip_port in re_list:
        ip_port = ip_port[0] + ':' + ip_port[1]
        ip_list.append(ip_port)
    return ip_list
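To see what the pattern extracts, here it is (as a raw string, which keeps \d from triggering invalid-escape warnings) run against a small hypothetical fragment shaped like Xici's table rows:

```python
import re

# hypothetical HTML shaped like Xici's table: an IP cell followed by a port cell
html = """
<td>121.232.148.167</td>
  <td>9000</td>
<td>61.135.217.7</td>
  <td>80</td>
"""

pattern = r'(\d+\.\d+\.\d+\.\d+)</td>\s*<td>(\d+)'
pairs = re.findall(pattern, html)          # list of (ip, port) tuples
ip_ports = [ip + ':' + port for ip, port in pairs]
print(ip_ports)  # ['121.232.148.167:9000', '61.135.217.7:80']
```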

2. Validation: most posts online just send a request through the proxy and check whether it succeeds, or whether the status code is 200. The problem is that even with a proxy IP configured, the request may succeed without actually going through that proxy; it can fall back to your own public IP instead. (Note that ifconfig usually shows the IP on your local network, i.e. your private IP, not your public one.)

On Linux, any one of these commands will show your public IP:

curl icanhazip.com
curl ifconfig.me
curl curlmyip.com
curl ip.appspot.com
curl ipinfo.io/ip
curl ipecho.net/plain
curl www.trackip.net/i
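The same check is a few lines of Python with requests (assumed available, as elsewhere in this post); the echo services return your IP followed by a newline, so strip it. A small sketch, with the parsing pulled out so it can be exercised on a canned response:

```python
def parse_echoed_ip(text):
    # echo services such as icanhazip.com return the IP plus a trailing newline
    return text.strip()

def get_public_ip(url="http://icanhazip.com"):
    # Python equivalent of `curl icanhazip.com`
    import requests  # imported here so the parsing helper works without it
    return parse_echoed_ip(requests.get(url, timeout=8).text)

# the parsing step on a canned response (no network needed)
print(parse_echoed_ip("203.0.113.7\n"))  # 203.0.113.7
```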

So what do we do? Just like the commands above, use the proxy you scraped to fetch http://icanhazip.com/, which returns the public IP the request arrived from (with a working proxy, that will be the proxy's IP). Then compare the returned text with the proxy IP you configured. If they differ, the proxy is unusable: even though you set a proxy, the request silently fell back to your own public IP, so the "success" proves nothing. If they match, the request really did go through that proxy, so the proxy works.

def test_proxy(self, ip_port, choice='http'):
    """
    Test whether a proxy works.
    :param ip_port: an 'ip:port' string
    :param choice: 'http' or 'https'
    :return: the ip:port if the proxy works, otherwise False
    """
    proxies = None

    # this site echoes the public IP a request arrives from; through a working
    # proxy it returns the proxy's IP (guarding against requests that silently
    # fall back to your own public IP even though a proxy was configured)
    tar_url = "http://icanhazip.com/"

    # get a random User-Agent
    user_agent = self.get_user_agent()

    # put the User-Agent in the request headers
    headers = {'User-Agent': user_agent}

    # validate as an HTTP or an HTTPS proxy
    if choice == 'http':
      proxies = {
        "http": "http://" + ip_port,
      }

    elif choice == 'https':
      proxies = {
        "https": "https://" + ip_port,
      }

    try:
      # split the IP back out of the 'ip:port' string
      thisIP = ip_port.split(":")[0]
      res = requests.get(tar_url, proxies=proxies, headers=headers, timeout=8)

      # the echoed IP; strip() removes the trailing newline
      proxyIP = res.text.strip()

      # three outcomes: the request fails outright -> False; it succeeds but the
      # echoed IP is not the proxy's -> False; the IPs match -> the proxy works
      if proxyIP == thisIP:
        # return the full ip:port so callers keep a usable address
        return ip_port
      else:
        return False
    except Exception:
      return False

Finally, the complete code:

import requests
import re
import random
import codecs
from urllib import parse
 
 
class proxy:
  """
  Proxy helper class.
  """
  def __init__(self):
    pass

  def get_user_agent(self):
    """
    Return a random User-Agent string.
    :return:
    """
    user_agents = [
      "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
      "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
      "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
      "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
      "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
      "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
      "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
      "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
      "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
      "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
      "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
      "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
      "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
      "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
      "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
      "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
      "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E; LBBROWSER)",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 LBBROWSER",
      "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
      "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; QQBrowser/7.0.3698.400)",
      "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
      "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
      "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; QQDownload 732; .NET4.0C; .NET4.0E)",
      "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
      "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.89 Safari/537.1",
      "Mozilla/5.0 (iPad; U; CPU OS 4_2_1 like Mac OS X; zh-cn) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8C148 Safari/6533.18.5",
      "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0b13pre) Gecko/20110307 Firefox/4.0b13pre",
      "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:16.0) Gecko/20100101 Firefox/16.0",
      "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11",
      "Mozilla/5.0 (X11; U; Linux x86_64; zh-CN; rv:1.9.2.10) Gecko/20100922 Ubuntu/10.10 (maverick) Firefox/3.6.10"
    ]
    user_agent = random.choice(user_agents)
    return user_agent
 
 
  def get_proxy(self, headers, choice='http', first=1, end=2):
    """
    Scrape proxy addresses.
    :param choice: 'http' or 'https'
    :param first: first page to scrape
    :param end: one past the last page to scrape
    :return:
    """
 
    ip_list = []
    base_url = None
    if choice == 'http':
      base_url = 'http://www.xicidaili.com/wt/'
    elif choice == 'https':
      base_url = 'http://www.xicidaili.com/wn/'
 
    for n in range(first, end):
      actual_url = base_url + str(n)
      html = requests.get(url=actual_url, headers=headers).text
      pattern = r'(\d+\.\d+\.\d+\.\d+)</td>\s*<td>(\d+)'
      re_list = re.findall(pattern, html)
 
      for ip_port in re_list:
        ip_port = ip_port[0] + ':' + ip_port[1]
        ip_list.append(ip_port)
    return ip_list
 
 
  def test_proxy(self, ip_port, choice='http'):
    """
    Test whether a proxy works.
    :param ip_port:
    :param choice:
    :return: the address if usable, otherwise False
    """
    proxies = None
    # this site echoes the public IP a request arrives from; through a working
    # proxy it returns the proxy's IP (guards against silently falling back to your own IP)
    tar_url = "http://icanhazip.com/"
    user_agent = self.get_user_agent()
    headers = {'User-Agent': user_agent}
    if choice == 'http':
      proxies = {
        "http": "http://"+ip_port,
      }
 
    elif choice == 'https':
      proxies = {
        "https": "https://" + ip_port,
      }
    try:
      # split the IP back out of the 'ip:port' string
      thisIP = ip_port.split(":")[0]
      res = requests.get(tar_url, proxies=proxies, headers=headers, timeout=8)
      # the echoed IP; strip() removes the trailing newline
      proxyIP = res.text.strip()
      if proxyIP == thisIP:
        # return the full ip:port so store_txt saves a usable address
        return ip_port
      else:
        return False
    except Exception:
      return False
 
  def store_txt(self, choice='http', first=1, end=2):
    """
    Save the proxies that pass the test to a txt file.
    :param choice:
    :param first:
    :param end:
    :return:
    """
    user_agent = self.get_user_agent()
    headers = {'User-Agent': user_agent}
    ip_list = self.get_proxy(headers=headers, choice=choice, first=first, end=end)
    with codecs.open("Http_Agent.txt", 'a', 'utf-8') as file:
      for ip_port in ip_list:
        ip_port = self.test_proxy(ip_port, choice=choice)
        print(ip_port)
        if ip_port:
          file.write('\'' + ip_port + "\'\n")
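store_txt wraps each saved address in single quotes, one per line. Assuming each saved line holds an ip:port pair, reading the file back into requests-style proxy dicts takes only a few lines; load_saved_proxies here is a hypothetical helper, not part of the original class:

```python
def load_saved_proxies(lines, scheme='http'):
    # lines come from Http_Agent.txt, each like: '121.232.148.167:9000'
    proxies = []
    for line in lines:
        ip_port = line.strip().strip("'")  # drop the newline and the quotes
        if ip_port:
            proxies.append({scheme: '{}://{}'.format(scheme, ip_port)})
    return proxies

sample = ["'121.232.148.167:9000'\n", "'61.135.217.7:80'\n"]
print(load_saved_proxies(sample))
# [{'http': 'http://121.232.148.167:9000'}, {'http': 'http://61.135.217.7:80'}]
```

In practice you would pass it file.readlines() from the saved file.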

That wraps up this example of scraping Xici proxy IPs with requests, XPath, and multithreading. I hope it gives you a useful reference.
