LinkExtractor
All parameters of the LinkExtractor constructor have default values; if you construct one without arguments, it extracts every link on the page.
In [1]: from scrapy.linkextractors import LinkExtractor
In [2]: le = LinkExtractor()
In [3]: links = le.extract_links(response)
In [4]: [link.url for link in links]
Out[4]:
['http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206579.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206641.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206735.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206782.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206845.html',
... (omitted) ...
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206923.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207001.html']
The allow parameter of LinkExtractor
Receives a regular expression or a list of regular expressions; only links whose absolute URL matches are extracted. If the parameter is empty, all links are extracted.
In [21]: from scrapy.linkextractors import LinkExtractor
In [22]: le = LinkExtractor(allow=r"/catalogue/page.*\.html$")
In [23]: links = le.extract_links(response)
In [24]: [link.url for link in links]
Out[24]: ['http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207001.html']
The deny parameter of LinkExtractor
Receives a regular expression or a list of regular expressions; links whose absolute URL matches are excluded.
In [25]: from scrapy.linkextractors import LinkExtractor
In [26]: le = LinkExtractor(deny="/catalogue/.*/books/.*")
In [27]: links = le.extract_links(response)
In [28]: [link.url for link in links]
Out[28]:
['http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206579.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206641.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207329.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207344.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_2021080100020734401.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207360.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207376.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207391.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_2021080100020739101.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207407.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207438.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207548.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207579.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207594.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207626.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207673.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207735.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207813.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207829.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207907.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207969.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000206923.html',
'http://www.51sjk.com/Upload/Articles/1/0/310/310292_20210801000207001.html']
The allow_domains and deny_domains parameters of LinkExtractor
allow_domains: receives a domain or a list of domains; only links to those domains are extracted
deny_domains: receives a domain or a list of domains; links to those domains are excluded
# only deny_domains is demonstrated here
In [29]: from scrapy.linkextractors import LinkExtractor
In [30]: le = LinkExtractor(deny_domains="books.toscrape.com")
In [31]: links = le.extract_links(response)
In [32]: [link.url for link in links]
Out[32]: []
The restrict_xpaths and restrict_css parameters of LinkExtractor
restrict_xpaths: receives an XPath expression; only links inside the region it selects are extracted
restrict_css: receives a CSS expression; only links inside the region it selects are extracted
# restrict_xpaths
In [29]: from scrapy.linkextractors import LinkExtractor
In [30]: le = LinkExtractor(restrict_xpaths="//li[@class='next']")
In [31]: links = le.extract_links(response)
The tags and attrs parameters of LinkExtractor
tags: receives a tag or a list of tags; links are extracted from those tags, default ['a', 'area']
attrs: receives an attribute or a list of attributes; links are extracted from those attributes, default ['href']
The process_value parameter of LinkExtractor
Receives a callback function that is applied to every extracted attribute value before it becomes a link; returning a string replaces the value, and returning None drops it. It is typically used to pull real URLs out of JavaScript code.