There are many ways to deal with anti-scraping mechanisms when writing a Python crawler with the requests library. Here are some suggestions:
1. Set a browser-like User-Agent in the request headers:

```python
import requests

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
}
url = 'https://example.com'
response = requests.get(url, headers=headers)
```
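If the target site flags repeated requests that all carry the same User-Agent, rotating through a small pool can help. A minimal sketch (the strings in `USER_AGENTS` are illustrative examples; in practice you would maintain a larger, up-to-date list):

```python
import random
import requests

# Illustrative pool of browser User-Agent strings
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15',
]

url = 'https://example.com'
# Pick a random User-Agent per request so traffic looks less uniform
headers = {'User-Agent': random.choice(USER_AGENTS)}
response = requests.get(url, headers=headers)
```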
2. Use a proxy so requests come from a different IP address:

```python
import requests

proxies = {
    'http': 'http://proxy.example.com:8080',
    'https': 'https://proxy.example.com:8080'
}
url = 'https://example.com'
response = requests.get(url, proxies=proxies)
```
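A single proxy can still be rate-limited or banned, so rotating across a pool spreads requests over several IPs. A rough sketch, assuming a hypothetical `PROXY_POOL` list of proxies you control:

```python
import random
import requests

# Hypothetical proxy pool; replace with proxies you actually have access to
PROXY_POOL = [
    'http://proxy1.example.com:8080',
    'http://proxy2.example.com:8080',
]

url = 'https://example.com'
proxy = random.choice(PROXY_POOL)
proxies = {'http': proxy, 'https': proxy}
try:
    response = requests.get(url, proxies=proxies, timeout=10)
except requests.RequestException:
    # A dead or blocked proxy raises a connection error;
    # here you would retry with another proxy from the pool
    pass
```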
3. Throttle the request rate so the crawler does not hammer the server:

```python
import time
import requests

url = 'https://example.com'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
}
for _ in range(5):
    response = requests.get(url, headers=headers)
    time.sleep(1)  # wait 1 second between requests
```
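A fixed 1-second interval is itself a detectable pattern; randomizing the delay and backing off when the server answers HTTP 429 (Too Many Requests) is a common refinement. A sketch under those assumptions:

```python
import random
import time
import requests

url = 'https://example.com'
headers = {'User-Agent': 'Mozilla/5.0 ...'}  # same headers as above

for _ in range(5):
    response = requests.get(url, headers=headers)
    if response.status_code == 429:
        # Server says we are going too fast: back off before the next attempt
        time.sleep(30)
        continue
    # A randomized 1-3 second pause looks less mechanical than a fixed interval
    time.sleep(random.uniform(1, 3))
```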
4. Send cookies with the request:

```python
import requests

url = 'https://example.com'
cookies = {
    'cookie_name': 'cookie_value'
}
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3'
}
response = requests.get(url, headers=headers, cookies=cookies)
```
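Rather than copying cookie values by hand, a `requests.Session` keeps cookies across requests automatically. A sketch, where the login endpoint and form fields are hypothetical placeholders:

```python
import requests

# A Session persists cookies across requests, so anything the server
# sets (for example after a login) is sent back automatically
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 ...'})  # same headers as above

# Hypothetical login endpoint and credentials, for illustration only
session.post('https://example.com/login',
             data={'username': 'user', 'password': 'pass'})

# Later requests carry the cookies that the login response set
response = session.get('https://example.com/profile')
```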
5. Use Selenium to drive a real browser, for pages that render their content with JavaScript:

```python
from selenium import webdriver

url = 'https://example.com'
driver = webdriver.Chrome()
driver.get(url)
```
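Since JavaScript-rendered content may load asynchronously, an explicit wait before reading the page is usually safer than reading `driver.page_source` immediately. A sketch in headless mode:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument('--headless')  # run without a visible browser window

driver = webdriver.Chrome(options=options)
try:
    driver.get('https://example.com')
    # Wait up to 10 seconds for the page body to be present before reading it
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.TAG_NAME, 'body')))
    html = driver.page_source  # rendered HTML, after JavaScript has run
finally:
    driver.quit()
```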
Note that different sites use different anti-scraping mechanisms, so in practice you may need to adjust these strategies case by case. Also follow each site's robots.txt rules and respect its crawling policy.
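Python's standard library can check robots.txt programmatically before you crawl. A minimal sketch using `urllib.robotparser` (the `'MyCrawler'` agent name and the page URL are placeholders):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('https://example.com/robots.txt')
rp.read()  # download and parse the robots.txt file

# Check whether our crawler may fetch a given path before requesting it
if rp.can_fetch('MyCrawler', 'https://example.com/some/page'):
    print('allowed to fetch this page')
```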