How to improve the stability of a Python requests crawler

小樊
2024-12-08 15:46:27

When crawling with Python's requests library, you can improve stability in the following ways:

  1. Set a User-Agent: to mimic normal browsing behavior, put a User-Agent in the request headers so the crawler looks more like a real user. (A rotating User-Agent variant is sketched after this example.)
import requests

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}

url = "https://example.com"
response = requests.get(url, headers=headers)
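
To go a step further, you can rotate through several User-Agent strings instead of always sending the same one. A minimal sketch, assuming you maintain your own small pool of browser User-Agent strings (the entries below are just examples):

import random
import requests

# Example User-Agent pool; replace with strings matching the browsers you want to imitate
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0",
]

url = "https://example.com"
headers = {"User-Agent": random.choice(user_agents)}  # pick a different User-Agent per request
response = requests.get(url, headers=headers)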
  2. Use proxy IPs: routing traffic through proxies helps avoid being blocked for sending too many requests from a single address; free or paid proxy services can be used. (A rotating-proxy sketch follows this example.)
import requests

proxies = {
    "http": "http://your_proxy_ip:port",
    "https": "https://your_proxy_ip:port"
}

url = "https://example.com"
response = requests.get(url, proxies=proxies)
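
Rather than relying on a single proxy, you can pick one at random from a pool on each request so one banned proxy does not stop the crawler. A minimal sketch; the proxy addresses below are placeholders, not real servers:

import random
import requests

# Hypothetical proxy pool; fill in addresses from your proxy provider
proxy_pool = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]

url = "https://example.com"
proxy = random.choice(proxy_pool)
proxies = {"http": proxy, "https": proxy}  # route both schemes through the chosen proxy
response = requests.get(url, proxies=proxies, timeout=10)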
  3. Space out requests: to avoid hitting the target site with a burst of requests in a short time, pause between requests and lower the risk of being detected. (A randomized-interval variant follows this example.)
import time
import requests

url = "https://example.com"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}

for _ in range(10):  # fetch the page 10 times
    response = requests.get(url, headers=headers)
    time.sleep(1)  # wait 1 second between requests
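
A fixed one-second pause is easy for anti-crawling systems to recognize, so a small refinement is to randomize the interval. A minimal sketch using random.uniform for a 1-3 second jitter (the range is an arbitrary choice):

import random
import time
import requests

url = "https://example.com"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}

for _ in range(10):  # fetch the page 10 times
    response = requests.get(url, headers=headers)
    time.sleep(random.uniform(1, 3))  # random 1-3 second pause so the request rhythm is less predictable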
  4. Handle errors: wrap requests in try-except to catch exceptions such as network errors and timeouts, so the crawler keeps running when something goes wrong. (A retry-with-timeout sketch follows this example.)
import requests
from requests.exceptions import RequestException

url = "https://example.com"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}

try:
    response = requests.get(url, headers=headers)
    response.raise_for_status()  # raise an exception for 4xx/5xx status codes
except RequestException as e:
    print(f"请求出错: {e}")
  5. Use multiple threads or processes: sending requests concurrently with threads or processes speeds up crawling, but keep the concurrency modest; too many parallel requests can overload the target server or get the crawler banned.
import requests
from requests.exceptions import RequestException
from concurrent.futures import ThreadPoolExecutor

url = "https://example.com"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3"
}

def fetch(url):
    try:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        return response.text
    except RequestException as e:
        print(f"请求出错: {e}")
        return None

urls = [url] * 10  # assume there are 10 identical URLs to crawl

with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(fetch, urls))

Combining these techniques makes a Python requests crawler considerably more stable.
