This article introduces how a Python crawler can use requests to build a proxy pool. Plenty of people run into trouble with this kind of task in practice, so let's walk through how to handle it. Read carefully and you should come away having learned something!
The idea: scrape free proxies from a listing site, verify each one, and append the working ones to a txt file.
import requests
from scrapy import Selector

start_url = 'http://www.89ip.cn/index_1.html'
url = 'http://www.89ip.cn/index_{}.html'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'}

class MyProxy(object):
    def GetPage(self, url):  # fetch the page source
        response = requests.get(url=url, headers=headers)
        return response.text

    def GetInfo(self, text):  # pull IP/port pairs out of the listing table
        selector = Selector(text=text)
        FindTable = selector.xpath('//div[@class="layui-form"]/table/tbody/tr')
        for proxy in FindTable:
            ip = proxy.xpath('.//td[1]/text()').get('').strip()
            port = proxy.xpath('.//td[2]/text()').get('').strip()
            print(ip, port)
            self.TestIP(ip, port)

    def TabPage(self, text):  # read the next page number and build its URL
        selector = Selector(text=text)
        page = selector.xpath('//*[@id="layui-laypage-1"]/a[8]/@data-page').get()
        self.new_url = url.format(page)

    def TestIP(self, ip, port):  # verify a proxy by requesting Baidu through it
        try:
            # requests expects a scheme-keyed mapping; without the "https" key,
            # an https:// request would silently bypass the proxy.
            proxy_addr = 'http://{}:{}'.format(ip, port)
            response = requests.get(url='https://www.baidu.com/', headers=headers,
                                    proxies={'http': proxy_addr, 'https': proxy_addr},
                                    timeout=5)
            print(response.status_code)
            if response.status_code != 200:
                print('request failed')
            else:
                self.file = open('proxy.txt', 'a+')
                self.file.write('{}:{}\n'.format(ip, port))
                self.file.close()
        except Exception:
            print('request failed')

    def close(self):
        self.file.close()

my_proxy = MyProxy()
text = my_proxy.GetPage(start_url)
while True:
    try:
        my_proxy.GetInfo(text)
        my_proxy.TabPage(text)  # was GetPage(text), which never set new_url
        text = my_proxy.GetPage(my_proxy.new_url)
    except Exception:
        print('**' * 10)
        break  # stop instead of looping forever once paging fails
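Once proxy.txt has some entries, another crawler can draw from the pool. Below is a minimal consumer sketch, assuming one ip:port entry per line as written by TestIP above; load_proxies is just an illustrative helper, and http://httpbin.org/ip is used only as an example endpoint that echoes the caller's IP:

import random
import requests

def load_proxies(path='proxy.txt'):
    # one "ip:port" entry per line, as appended by TestIP
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

pool = load_proxies()
address = random.choice(pool)  # rotate by picking a random entry
proxies = {'http': 'http://' + address, 'https': 'http://' + address}
resp = requests.get('http://httpbin.org/ip', proxies=proxies, timeout=5)
print(resp.text)  # should show the proxy's IP, not yours

If the request raises or times out, drop that entry and retry with another one; free proxies go stale quickly, so it pays to re-validate the pool periodically.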
That covers how a Python crawler can use requests to build a proxy pool. Thanks for reading. If you want to learn more about the field, keep an eye on the Yisu Cloud site, where the editors will keep publishing practical, high-quality articles!