Scraping dynamic web pages with Python

fanhuasijin 2019-12-15

Static pages: the content can be fetched directly from the URL, so scraping is straightforward.
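For a static page a few lines are usually enough. The sketch below uses requests plus BeautifulSoup; the URL and the tags it pulls out are placeholders, not from the original post:

import requests
from bs4 import BeautifulSoup

# Minimal static-page sketch; URL and selectors are placeholders.
response = requests.get('http://example.com/page')
soup = BeautifulSoup(response.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get_text(), link.get('href'))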

Dynamic pages fall into two cases. In the first, open the browser's developer tools (F12) and inspect the XHR requests in the Network panel to find the response that contains the content you want; if that request's URL changes with the page number, you can construct the URLs yourself and fetch each page directly. In the second case, the URL of the request that returns the content is fixed or has no relation to the page number; here you can fall back on simulating browser clicks to trigger new requests and then scrape the rendered page (a minimal sketch of this appears after the first code example below). The click-simulation approach is slow and not well suited to scraping many pages. Code for the first case, constructing URLs by page number:

import requests

def parse(self, response):
    print('parser>>>>>>>>>>>>>>>')
    try:
        # Pages 1..100 share the same base URL and differ only in the page number,
        # so build each URL by appending the page number and fetch it directly.
        self.varint = 1
        while self.varint <= 100:
            url = self.ROOT_URL + '*' + str(self.varint)
            responsenow = requests.get(url)
            self.parser_html(responsenow)
            self.varint = self.varint + 1
            # time.sleep(0.1)
        print('success>>>>>>>>>>>>')
    except Exception:
        print('failed>>>>>>>>>')
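For the second case above, where the request URL never changes, a common way to simulate clicks is Selenium. A minimal sketch, assuming Chrome with chromedriver installed; the URL, the pager's link text, and the parse_html function are placeholders rather than part of the original code:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                    # needs chromedriver on PATH
driver.get('http://example.com/list')          # placeholder URL
for _ in range(10):                            # scrape the first 10 pages
    parse_html(driver.page_source)             # placeholder parsing function
    driver.find_element(By.LINK_TEXT, 'Next').click()   # click the pager's "next" link
    time.sleep(1)                              # wait for the new content to render
driver.quit()

Because every page needs a real browser round trip, this runs much slower than constructing the URLs directly, which is why it is only worth using when the URL gives you nothing to work with.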

For dynamic pages, another option is to imitate the client and send the POST request yourself. The code is as follows:

def parse(self, response):
    # Form fields taken from the request seen in the browser's network panel;
    # the empty values and the '*' placeholder are filled in for the real site.
    formdata = {
        'pageNo': '',
        'categoryId': '',
        'code': '',
        'pageSize': 10,
        'source': '',
        'requestUri': '*',
        'requestMethod': 'POST'
    }
    print('parser>>>>>>>>>>>>>>>')
    try:
        self.varint = 1
        formdata['source'] = str(2)
        formdata['pageNo'] = str(self.varint)
        # POST the form once per page, pages 1..46, updating the fields each time.
        while self.varint <= 46:
            responsenow = requests.post(self.ROOT_URL, data=formdata)
            self.parser_html(responsenow)
            self.varint = self.varint + 1
            formdata['source'] = formdata['pageNo']   # carry the previous page number forward
            formdata['pageNo'] = str(self.varint)
        print('success>>>>>>>>>>>>')
    except Exception:
        print('failed>>>>>>>>>')
