Scraping Notes — East Money STAR Market (科創板) Data (Selenium)

The Selenium approach
Pros: no need to study the site to work out where the data actually comes from.
Cons: slower than the requests-based approach.

Site overview
Target page: East Money STAR Market data (http://data.eastmoney.com/kcb/).

Site analysis
The listing page shows many companies at once, and the page markup does little to distinguish the individual tags, so scraping the fields straight off the listing page is complicated. Instead, first collect each company's detail-page link (the href contains a unique code per company), then scrape the wanted fields from each company's detail page.

(Screenshot of the listing page omitted.)

Company detail page
A company's detail page looks like this:

(Screenshot of a detail page omitted.)

Full code
The code is below. Note that in the run function the arguments must match the totals being scraped: there are 50 companies per page and 149 records overall, hence the call parse_detail_url(50, 149).

```python
# East Money STAR Market data scraper - Selenium
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from lxml import etree
import re
import time
import pandas as pd


class EastmoneySpider():
    def __init__(self):
        self.driver = webdriver.Chrome()
        self.url = "http://data.eastmoney.com/kcb/"
        self.page = 1
        self.company_detail_urls = []
        self.kcb_data = []
        self.spider_count = 1

    def run(self):
        # Collect the company detail URLs (149 companies here, 50 per page;
        # the third page has only 49 -- adjust the arguments if the totals change)
        self.parse_detail_url(50, 149)
        # Scrape the detailed data
        self.parse_page()
        # Save the data
        self.save_data()
        print("Crawl finished!")

    # Collect each company's detail link; pagesize is the number of companies
    # per page, datasize is the total number of companies
    def parse_detail_url(self, pagesize, datasize):
        self.driver.get(self.url)
        while True:
            # "Next page" button (Selenium 4 removed find_element_by_xpath,
            # so use find_element(By.XPATH, ...) instead)
            nextpage_btn = self.driver.find_element(
                By.XPATH, '//div[@id="PageCont1"]/a[last()-1]')
            print("Crawling page %d" % self.page)
            # Wait until the pager is present before reading the page source
            WebDriverWait(self.driver, timeout=10).until(
                EC.presence_of_element_located(
                    (By.XPATH, '//div[@id="PageCont1"]/a[last()-1]')))
            source = self.driver.page_source
            detail = re.findall(r'/kcb/detail/(.*?)\.html', source)
            # Each page contains duplicate links; drop the duplicates while
            # preserving the original order
            urls = list(set(detail))
            urls.sort(key=detail.index)
            # Rebuild the full URLs
            detail_urls = list(map(
                lambda x: "http://data.eastmoney.com/kcb/detail/" + x + ".html",
                urls))
            # The page also lists other companies (pre-filing firms), so keep
            # only the first pagesize (50) entries shown on each page
            detail_urls = detail_urls[:pagesize]
            # Accumulate the detail URLs
            self.company_detail_urls.extend(detail_urls)
            print("Page %d done" % self.page)
            if "nolink" in nextpage_btn.get_attribute("class"):
                self.company_detail_urls = self.company_detail_urls[:datasize]
                print("All company detail URLs collected!")
                print("=" * 40)
                break
            else:
                nextpage_btn.click()
                self.page += 1
                time.sleep(1)

    # Scrape each company's detail page
    def parse_page(self):
        print("Start crawling company details")
        for url in self.company_detail_urls:
            self.driver.get(url)
            source = self.driver.page_source
            html = etree.HTML(source)
            company_details = html.xpath('//tr/td/text()')
            company_name = company_details[2]
            company_abbreviation = company_details[6]
            accept_date = company_details[4]
            update_date = company_details[12]
            state = company_details[10]
            registration = company_details[14]
            industry = company_details[16]
            sponsorship_agency = company_details[18]
            law_agency = company_details[26]
            account_agency = company_details[22]
            company = {"公司名稱": company_name,
                       "公司簡稱": company_abbreviation,
                       "公司詳情網址": url,
                       "受理日期": accept_date,
                       "更新日期": update_date,
                       "審核狀態": state,
                       "注冊地": registration,
                       "行業": industry,
                       "保薦機構": sponsorship_agency,
                       "律師事務所": law_agency,
                       "會計師事務所": account_agency}
            self.kcb_data.append(company)
            print("%d records scraped" % self.spider_count)
            self.spider_count += 1

    def save_data(self):
        data = pd.DataFrame(self.kcb_data)
        data = data[["公司名稱", "公司簡稱", "公司詳情網址", "審核狀態", "注冊地",
                     "行業", "保薦機構", "律師事務所", "會計師事務所",
                     "更新日期", "受理日期"]]
        # Recent pandas removed the `encoding` argument of to_excel;
        # the xlsx writer handles Unicode on its own
        data.to_excel('./data/kcb_data_spider_selenium.xlsx', index=False)


def main():
    kcb_data = EastmoneySpider()
    kcb_data.run()


if __name__ == "__main__":
    main()
```
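The link collection in parse_detail_url hinges on a single regular expression: a non-greedy capture of the company code between `/kcb/detail/` and `.html`. A standalone sketch shows the idea; the HTML fragment and company codes below are made up for illustration, not taken from a real East Money page:

```python
import re

# Illustrative HTML fragment: each company link carries a unique code
# in its href, just like the listing page's markup
source = '''
<a href="/kcb/detail/688001.html">Company A</a>
<a href="/kcb/detail/688002.html">Company B</a>
'''

# Non-greedy capture of whatever sits between /kcb/detail/ and .html
codes = re.findall(r'/kcb/detail/(.*?)\.html', source)

# Expand the captured codes back into full detail-page URLs
detail_urls = ["http://data.eastmoney.com/kcb/detail/" + c + ".html"
               for c in codes]

print(codes)            # ['688001', '688002']
print(detail_urls[0])   # http://data.eastmoney.com/kcb/detail/688001.html
```

The `?` in `(.*?)` matters: without it, a page with several links on one line would greedily match from the first `/kcb/detail/` to the last `.html` and swallow everything in between.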
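The de-duplication idiom used in parse_detail_url deserves a note: `set()` removes duplicates but scrambles order, so the code re-sorts by each item's first position in the original list. A minimal demonstration with made-up codes:

```python
# Made-up example data with duplicates, standing in for the scraped codes
detail = ['688001', '688002', '688001', '688003', '688002']

# set() drops duplicates but loses the original ordering...
urls = list(set(detail))
# ...so sort by each item's first index in the original list to restore it
urls.sort(key=detail.index)

print(urls)  # ['688001', '688002', '688003']
```

On Python 3.7+ the same order-preserving de-duplication can be done in one pass with `list(dict.fromkeys(detail))`, which also avoids the O(n²) cost of calling `detail.index` for every element.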