Yasin 2020-03-27
Note: the page-timeout problem is still unsolved, so I'm setting this aside for now; the crawl dies halfway through.
I saw some pictures I liked and couldn't resist downloading them, but saving them one by one is tedious, so it's time to ask a crawler for help.
https://www.ivsky.com/bizhi/code_geass_t1300/
Page analysis:
First, each listing page contains many images.
So we need to find the URL of each page. Right-click and inspect the page.
The page number in the URL increases by one per page, so we now have the URL of every listing page.
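The pagination pattern above can be sketched as a simple loop (the page range here is illustrative; the actual number of pages on the site may differ):

```python
# Build the URL of every listing page; the page number is the only part that changes.
base = "https://www.ivsky.com/bizhi/code_geass_t1300/index_{}.html"
page_urls = [base.format(i) for i in range(1, 6)]  # pages 1..5, adjust as needed
for u in page_urls:
    print(u)
```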
Next, we extract the URLs of the individual image pages on each listing page.
That gives the image-page URL under the corresponding node; open that image page
and right-click to inspect again.
Then we can find the address of the actual image.
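The two XPath lookups can be tried offline on a small HTML stand-in (the snippet below is made up for illustration; only the class names and the `imgis` id come from the real page):

```python
from lxml import etree

# Minimal stand-ins for a listing page and an image page.
listing_html = '''
<ul class="il">
  <li><div class="il_img"><a href="/bizhi/code_geass_v1234.html"></a></div></li>
</ul>'''
image_html = '<img id="imgis" src="//img.example.com/sample.jpg">'

# Step 1: links to the individual image pages.
links = etree.HTML(listing_html).xpath('//ul[@class="il"]/li/div[@class="il_img"]/a/@href')

# Step 2: the image src on an image page; it is protocol-relative, so prepend "https:".
src = etree.HTML(image_html).xpath('//img[@id="imgis"]/@src')[0]
if src.startswith('//'):
    src = 'https:' + src
print(links, src)
```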
Now we can write the code.
```python
import os
import requests
from lxml import etree
from urllib.request import urlopen, Request


class PNimag():
    def __init__(self):
        self.base_url = 'https://www.ivsky.com/bizhi/code_geass_t1300/index_1.html'
        self.headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"}

    def get_html(self, url):
        # headers must be passed as a keyword argument;
        # the second positional argument of requests.get is params, not headers
        response = requests.get(url, headers=self.headers, timeout=5)
        if response.status_code == 200:
            response.encoding = response.apparent_encoding
            return response.text
        return None

    def get_url(self, html):
        # links to the individual image pages on a listing page
        x_html = etree.HTML(html)
        return x_html.xpath('//ul[@class="il"]/li/div[@class="il_img"]/a/@href')

    def get_image(self, html):
        # src of the large image on an image page
        x_html = etree.HTML(html)
        return x_html.xpath('//img[@id="imgis"]/@src')

    def save_image(self, url, name):
        req = Request(url=url, headers=self.headers)
        content = urlopen(req).read()
        save_dir = "C:/Users/25766/AppData/Local/Programs/Python/Python38/imgs/LC/"
        os.makedirs(save_dir, exist_ok=True)  # make sure the folder exists
        with open(save_dir + "LC" + name, 'wb') as f:
            f.write(content)
        print(name, 'finished...')


url = "https://www.ivsky.com/bizhi/code_geass_t1300/index_"
bian = PNimag()
p = 2000
for i in range(20, 28):
    url2 = url + str(i) + '.html'
    html = bian.get_html(url2)
    if html is None:          # request failed, skip this page
        continue
    lis = bian.get_url(html)
    for j in lis:
        s = 'https://www.ivsky.com' + j
        l = bian.get_html(s)
        if l is None:
            continue
        k = bian.get_image(l)
        if not k:             # no image found on this page
            continue
        k[0] = 'https:' + k[0]  # the src is protocol-relative, add the scheme
        print(k[0])
        bian.save_image(k[0], str(p) + '.jpg')
        p += 1
```
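For the timeout problem mentioned at the top, one possible approach (not part of the original code; the parameter values are illustrative) is to let requests retry failed or timed-out connections automatically via urllib3's Retry:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_session(retries=3, backoff=1.0):
    """Return a requests.Session that retries connection errors and 5xx responses."""
    retry = Retry(
        total=retries,            # up to `retries` retry attempts per request
        backoff_factor=backoff,   # exponential backoff between successive retries
        status_forcelist=[500, 502, 503, 504],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session = requests.Session()
    session.mount('https://', adapter)
    session.mount('http://', adapter)
    return session


# usage: session = make_session()
#        session.get(url, headers=..., timeout=5)  # instead of requests.get(...)
```

With this, `get_html` could call `self.session.get(...)` instead of `requests.get(...)`, so one dropped connection no longer kills the whole crawl.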
Run result:
Much more convenient.