In the Scrapy framework, Downloader Middlewares are hooks that sit between the engine and the downloader: they let you inspect and modify each outgoing request before it is sent. Three typical applications are shown below.

1. Adding a User-Agent

All middleware code lives in middlewares.py, and a middleware is created by defining a custom class there. The example below uses the Faker library to attach a randomly generated Chrome User-Agent to every request:

```python
from faker import Faker

class UserAgentMiddleware(object):
    def process_request(self, request, spider):
        # Generate a random Chrome User-Agent and set it on the request.
        f = Faker()
        agent = f.chrome()
        request.headers['User-Agent'] = agent
```

Only the process_request method needs to be overridden.
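To confirm the middleware is actually applied, one quick check (a hypothetical spider sketched here, not part of the original project) is to fetch httpbin.org/headers, which echoes back the headers it received:

```python
import scrapy

class HeaderCheckSpider(scrapy.Spider):
    # Hypothetical spider used only to verify the middleware.
    name = 'header_check'
    start_urls = ['https://httpbin.org/headers']

    def parse(self, response):
        # The echoed JSON should contain the randomized Chrome User-Agent.
        self.logger.info(response.text)
```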
2. Adding an IP proxy

Rotating IP proxies follows the same pattern: pick a proxy at random and store it in request.meta so the downloader routes the request through it:

```python
import random

class ProxyMiddleware(object):
    PROXIES = [
        'https://36.249.118.13:9999',
        'https://175.44.108.65:9999',
        'http://117.69.12.82:9999',
    ]

    def process_request(self, request, spider):
        # Choose a proxy from the class-level list for each request.
        proxy = random.choice(self.PROXIES)
        request.meta['proxy'] = proxy
```
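Free proxies die frequently. A possible refinement, sketched here as an assumption rather than something from the original, is to also implement process_exception so that a failing proxy is dropped and the request is retried through another one:

```python
import random

class RotatingProxyMiddleware(object):
    PROXIES = [
        'https://36.249.118.13:9999',
        'https://175.44.108.65:9999',
        'http://117.69.12.82:9999',
    ]

    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(self.PROXIES)

    def process_exception(self, request, exception, spider):
        # Scrapy calls this when the download raises; drop the proxy
        # that just failed, pick another, and return the request so it
        # is rescheduled (dont_filter avoids the duplicate filter).
        bad = request.meta.get('proxy')
        if bad in self.PROXIES and len(self.PROXIES) > 1:
            self.PROXIES.remove(bad)
        request.meta['proxy'] = random.choice(self.PROXIES)
        return request.replace(dont_filter=True)
```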
3. Integrating Selenium

Integrating Selenium into Scrapy extends the crawler to pages that are rendered with JavaScript: the middleware loads each URL in a headless Chrome browser and hands the rendered HTML back to Scrapy as an HtmlResponse:

```python
from selenium import webdriver
from scrapy.http import HtmlResponse

class SeleniumMiddleware(object):
    def __init__(self):
        options = webdriver.ChromeOptions()
        options.add_argument('--headless')
        self.driver = webdriver.Chrome(
            options=options,
            executable_path='C:/Program Files (x86)/Google/Chrome/Application/chromedriver.exe')

    def __del__(self):
        # quit() ends the whole browser session, not just one window.
        self.driver.quit()

    def process_request(self, request, spider):
        # Render the page in Chrome and return the result directly;
        # returning a Response here bypasses Scrapy's own downloader.
        self.driver.get(request.url)
        return HtmlResponse(url=request.url, body=self.driver.page_source,
                            request=request, encoding='utf-8', status=200)
```
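Rendering every request in a real browser is slow. A common refinement, again an assumption rather than part of the original, is to render only requests that opt in through a meta flag and let Scrapy's default downloader handle everything else:

```python
from scrapy.http import HtmlResponse

class SelectiveSeleniumMiddleware(SeleniumMiddleware):
    def process_request(self, request, spider):
        # Returning None lets Scrapy continue with its normal download
        # path; only requests marked meta={'use_selenium': True} are
        # rendered in the browser. The meta key name is hypothetical.
        if not request.meta.get('use_selenium'):
            return None
        self.driver.get(request.url)
        return HtmlResponse(url=request.url, body=self.driver.page_source,
                            request=request, encoding='utf-8', status=200)
```

A spider then opts in per request with scrapy.Request(url, meta={'use_selenium': True}).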
Once the middlewares are defined, they must be enabled in settings.py before Scrapy will use them, for example:

```python
DOWNLOADER_MIDDLEWARES = {
    'hello_world.middlewares.UserAgentMiddleware': 543,
    'hello_world.middlewares.SeleniumMiddleware': 600,
}
```
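The numbers control ordering: middlewares with lower values sit closer to the engine, so their process_request methods run earlier. A built-in middleware can also be switched off by mapping its entry to None, a standard Scrapy convention, as in this hedged example:

```python
DOWNLOADER_MIDDLEWARES = {
    # Disable Scrapy's built-in user-agent handling so only the custom
    # middleware touches the User-Agent header.
    'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
    'hello_world.middlewares.UserAgentMiddleware': 543,
    'hello_world.middlewares.SeleniumMiddleware': 600,
}
```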
Through downloader middlewares we can process and enrich outgoing requests and extend the crawler conveniently; just remember that a middleware does nothing until it has been enabled in settings.py.