WanYuLss 2012-07-24
Scrapy is a fast, high-level screen scraping and web crawling framework written in Python, used to crawl websites and extract structured data from their pages. It has a wide range of applications, including data mining, monitoring, and automated testing. What makes Scrapy attractive is that it is a framework anyone can easily adapt to their own needs. It also provides base classes for several kinds of spiders, such as BaseSpider and sitemap spiders, and the latest version adds support for crawling web 2.0 sites.
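To give a flavor of the framework before installing it, here is a minimal sketch of a BaseSpider subclass in the 0.14-era API; the spider name, domain, and URL are placeholders, not part of this guide's install steps:

from scrapy.spider import BaseSpider

class ExampleSpider(BaseSpider):
    name = "example"                          # unique spider name
    allowed_domains = ["example.com"]         # stay on this domain
    start_urls = ["http://www.example.com/"]  # where crawling begins

    def parse(self, response):
        # Called with each downloaded response; a real spider would
        # extract items or follow links here.
        self.log("Fetched %s (%d bytes)" % (response.url, len(response.body)))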
Preparation
Requirements
Python 2.5, 2.6, 2.7 (3.x is not yet supported)
Twisted 2.5.0, 8.0 or above (Windows users: you’ll need to install Zope.Interface and maybe pywin32 because of a known Twisted bug)
w3lib
lxml or libxml2 (if using libxml2, version 2.6.28 or above is highly recommended)
simplejson (not required if using Python 2.6 or above)
pyopenssl (for HTTPS support. Optional, but highly recommended)
---------------------------------------------
Installing Twisted
sudo apt-get install python-twisted python-libxml2 python-simplejson
After the installation finishes, start python and check that Twisted installed successfully.
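For example, a quick import check like the following confirms Twisted is usable (the version string shown is only illustrative and will vary with your distribution):

$ python
>>> import twisted
>>> twisted.__version__
'10.1.0'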
pyOpenSSL
wget http://pypi.python.org/packages/source/p/pyOpenSSL/pyOpenSSL-0.13.tar.gz#md5=767bca18a71178ca353dff9e10941929
tar -zxvf pyOpenSSL-0.13.tar.gz
cd pyOpenSSL-0.13
sudo python setup.py install
pycrypto
wget http://pypi.python.org/packages/source/p/pycrypto/pycrypto-2.5.tar.gz#md5=783e45d4a1a309e03ab378b00f97b291
tar -zxvf pycrypto-2.5.tar.gz
cd pycrypto-2.5
sudo python setup.py install
Verify the installation:
$ python
>>> import Crypto
>>> import twisted.conch.ssh.transport
>>> print Crypto.PublicKey.RSA
<module 'Crypto.PublicKey.RSA' from '/usr/python/lib/python2.5/site-packages/Crypto/PublicKey/RSA.pyc'>
>>> import OpenSSL
>>> import twisted.internet.ssl
>>> twisted.internet.ssl
<module 'twisted.internet.ssl' from '/usr/python/lib/python2.5/site-packages/Twisted-10.1.0-py2.5-linux-i686.egg/twisted/internet/ssl.pyc'>
If you see output similar to the above, the pyOpenSSL module has been installed successfully; otherwise, recheck the installation steps above (note that the twisted.conch SSH import in this test also requires pycrypto).
w3lib
sudo easy_install -U w3lib
Scrapy
wget http://pypi.python.org/packages/source/S/Scrapy/Scrapy-0.14.3.tar.gz#md5=59f1225f7692f28fa0f78db3d34b3850
tar -zxvf Scrapy-0.14.3.tar.gz
cd Scrapy-0.14.3
sudo python setup.py install
Verifying the Scrapy installation
With the installation and configuration steps above complete, Scrapy is installed. We can verify it from the command line:
$ scrapy
Scrapy 0.14.3 - no active project

Usage:
  scrapy <command> [options] [args]

Available commands:
  fetch         Fetch a URL using the Scrapy downloader
  runspider     Run a self-contained spider (without creating a project)
  settings      Get settings values
  shell         Interactive scraping console
  startproject  Create new project
  version       Print Scrapy version
  view          Open URL in browser, as seen by Scrapy

Use "scrapy <command> -h" to see more info about a command
Among the commands listed above, fetch downloads a specified web page. Start by looking at its help text:
$ scrapy fetch --help
Usage
=====
  scrapy fetch [options] <url>

Fetch a URL using the Scrapy downloader and print its content to stdout. You
may want to use --nolog to disable logging

Options
=======
--help, -h              show this help message and exit
--spider=SPIDER         use this spider
--headers               print response HTTP headers instead of body

Global Options
--------------
--logfile=FILE          log file. if omitted stderr will be used
--loglevel=LEVEL, -L LEVEL
                        log level (default: DEBUG)
--nolog                 disable logging completely
--profile=FILE          write python cProfile stats to FILE
--lsprof=FILE           write lsprof profiling stats to FILE
--pidfile=FILE          write process ID to FILE
--set=NAME=VALUE, -s NAME=VALUE
                        set/override setting (may be repeated)
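For instance, combining --nolog with --headers gives a clean dump of just the response headers (output omitted here):

$ scrapy fetch --nolog --headers http://doc.scrapy.org/en/latest/intro/install.html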
Following the usage above, give fetch a URL. The page body goes to stdout (redirected to a file here), while the log goes to stderr, which is why the log still appears:
$ scrapy fetch http://doc.scrapy.org/en/latest/intro/install.html > install.html
2012-04-28 14:34:35+0800 [scrapy] INFO: Scrapy 0.14.3 started (bot: scrapybot)
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Enabled item pipelines:
2012-04-28 14:34:36+0800 [default] INFO: Spider opened
2012-04-28 14:34:36+0800 [default] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-04-28 14:34:36+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-04-28 14:34:37+0800 [default] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/intro/install.html> (referer: None)
2012-04-28 14:34:37+0800 [default] INFO: Closing spider (finished)
2012-04-28 14:34:37+0800 [default] INFO: Dumping spider stats:
{'downloader/request_bytes': 227,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 21732,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2012, 4, 28, 6, 34, 37, 567161),
'scheduler/memory_enqueued': 1,
'start_time': datetime.datetime(2012, 4, 28, 6, 34, 36, 433983)}
2012-04-28 14:34:37+0800 [default] INFO: Spider closed (finished)
2012-04-28 14:34:37+0800 [scrapy] INFO: Dumping global stats:
{'memusage/max': 26214400, 'memusage/startup': 26214400}
$ ls -l install.html
-rw-rw-r-- 1 zhuoguoqing zhuoguoqing 21462 2012-04-28 14:34 install.html
As the listing shows, we have successfully fetched a web page.
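The runspider command listed earlier is equally handy for experiments. Assuming the spider sketch from the introduction is saved as example_spider.py (a hypothetical file name), it can be run directly, without creating a project:

$ scrapy runspider example_spider.py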
To go further with the Scrapy framework, follow the official tutorial:
Tutorial: http://doc.scrapy.org/en/latest/intro/tutorial.html
PDF manual: http://media.readthedocs.org/pdf/scrapy/0.14/scrapy.pdf
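As a preview, the tutorial's first step creates a project skeleton (the project name tutorial comes from the guide itself):

$ scrapy startproject tutorial

This generates a scrapy.cfg plus a tutorial/ Python package containing items.py, pipelines.py, settings.py, and a spiders/ directory, which the rest of the tutorial fills in.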