Link Extractors¶
A link extractor is an object that extracts links from responses. The __init__ method of LxmlLinkExtractor takes settings that determine which links may be extracted. LxmlLinkExtractor.extract_links returns a list of matching Link objects from a Response object.
Link extractors are used in CrawlSpider spiders through a set of Rule objects.
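For instance, a minimal CrawlSpider sketch wiring a link extractor into a Rule (the spider name, start URL, and URL pattern below are hypothetical):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class ItemSpider(CrawlSpider):
    name = "items"                         # hypothetical spider name
    start_urls = ["https://example.com/"]  # hypothetical start URL

    # Follow links whose URLs match the (hypothetical) pattern,
    # and parse each matching page with parse_item.
    rules = (
        Rule(LinkExtractor(allow=r"/items/"), callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        yield {"url": response.url}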
You can also use link extractors in regular spiders. For example, you can instantiate LinkExtractor into a class variable in your spider, and use it from your spider callbacks:

def parse(self, response):
    for link in self.link_extractor.extract_links(response):
        yield Request(link.url, callback=self.parse)
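A fuller version of that snippet as a complete spider (the spider name and start URL are hypothetical):

from scrapy import Request, Spider
from scrapy.linkextractors import LinkExtractor

class FollowAllSpider(Spider):
    name = "follow_all"                    # hypothetical spider name
    start_urls = ["https://example.com/"]  # hypothetical start URL

    # The link extractor kept in a class variable, as described above.
    link_extractor = LinkExtractor()

    def parse(self, response):
        # Schedule a new request for every link found in the response.
        for link in self.link_extractor.extract_links(response):
            yield Request(link.url, callback=self.parse)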
Link extractor reference¶
The link extractor class is scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor. For convenience it can also be imported as scrapy.linkextractors.LinkExtractor:
from scrapy.linkextractors import LinkExtractor
LxmlLinkExtractor¶
- class scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor(allow=(), deny=(), allow_domains=(), deny_domains=(), deny_extensions=None, restrict_xpaths=(), restrict_css=(), tags=('a', 'area'), attrs=('href',), canonicalize=False, unique=True, process_value=None, strip=True)[source]¶
LxmlLinkExtractor is the recommended link extractor with handy filtering options. It is implemented using lxml’s robust HTMLParser.
- Parameters
allow (str or list) – a single regular expression (or list of regular expressions) that the (absolute) urls must match in order to be extracted. If not given (or empty), it will match all links.
deny (str or list) – a single regular expression (or list of regular expressions) that the (absolute) urls must match in order to be excluded (i.e. not extracted). It has precedence over the allow parameter. If not given (or empty), it won’t exclude any links.
allow_domains (str or list) – a single value or a list of strings containing domains which will be considered for extracting the links.
deny_domains (str or list) – a single value or a list of strings containing domains which won’t be considered for extracting the links.
deny_extensions (list) – a single value or list of strings containing extensions that should be ignored when extracting links. If not given, it will default to scrapy.linkextractors.IGNORED_EXTENSIONS.
Changed in version 2.0: IGNORED_EXTENSIONS now includes 7z, 7zip, apk, bz2, cdr, dmg, ico, iso, tar, tar.gz, webm, and xz.
restrict_xpaths (str or list) – an XPath (or list of XPaths) which defines regions inside the response where links should be extracted from. If given, only the text selected by those XPaths will be scanned for links.
restrict_css (str or list) – a CSS selector (or list of selectors) which defines regions inside the response where links should be extracted from. Has the same behaviour as restrict_xpaths.
restrict_text (str or list) – a single regular expression (or list of regular expressions) that the link’s text must match in order to be extracted. If not given (or empty), it will match all links. If a list of regular expressions is given, the link will be extracted if it matches at least one.
tags (str or list) – a tag or a list of tags to consider when extracting links. Defaults to ('a', 'area').
attrs (list) – an attribute or list of attributes which should be considered when looking for links to extract (only for those tags specified in the tags parameter). Defaults to ('href',).
canonicalize (bool) – canonicalize each extracted url (using w3lib.url.canonicalize_url). Defaults to False. Note that canonicalize_url is meant for duplicate checking; it can change the URL visible at server side, so the response can be different for requests with canonicalized and raw URLs. If you’re using LinkExtractor to follow links it is more robust to keep the default canonicalize=False.
unique (bool) – whether duplicate filtering should be applied to extracted links.
process_value (collections.abc.Callable) – a function which receives each value extracted from the scanned tags and attributes, and can modify the value and return a new one, or return None to ignore the link altogether. If not given, process_value defaults to lambda x: x.
For example, to extract links from this code:

<a href="javascript:goToPage('../other/page.html'); return false">Link text</a>

You can use the following function in process_value:

import re

def process_value(value):
    m = re.search(r"javascript:goToPage\('(.*?)'", value)
    if m:
        return m.group(1)
strip (bool) – whether to strip whitespaces from extracted attributes. According to the HTML5 standard, leading and trailing whitespaces must be stripped from href attributes of <a>, <area> and many other elements, the src attribute of <img> and <iframe> elements, etc., so LinkExtractor strips space chars by default. Set strip=False to turn it off (e.g. if you’re extracting urls from elements or attributes which allow leading/trailing whitespaces).
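As a quick illustration, a minimal sketch of an extractor combining several of these options (the page contents, URL patterns, and CSS selector below are all hypothetical):

from scrapy.http import HtmlResponse
from scrapy.linkextractors import LinkExtractor

# Hypothetical page with one matching link, one denied link, and one
# link outside the restricted region.
html = b"""
<div class="content">
  <a href="/category/books.html">Books</a>
  <a href="/category/private/admin.html">Admin</a>
</div>
<a href="/about.html">About</a>
"""
response = HtmlResponse(url="https://example.com/", body=html, encoding="utf-8")

link_extractor = LinkExtractor(
    allow=r"/category/",         # absolute URLs must match this pattern
    deny=r"/category/private/",  # deny takes precedence over allow
    restrict_css="div.content",  # only scan links inside this region
)
for link in link_extractor.extract_links(response):
    print(link.url)  # https://example.com/category/books.html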
Link¶
- class scrapy.link.Link(url: str, text: str = '', fragment: str = '', nofollow: bool = False)[source]¶
Link objects represent a link extracted by the LinkExtractor.
Using the anchor tag sample below to illustrate the parameters:
<a href="https://example.com/nofollow.html#foo" rel="nofollow">Dont follow this one</a>
- Parameters
url – the absolute url being linked to in the anchor tag. From the sample, this is https://example.com/nofollow.html.
text – the text in the anchor tag. From the sample, this is Dont follow this one.
fragment – the part of the url after the hash symbol. From the sample, this is foo.
nofollow – an indication of the presence or absence of a nofollow value in the rel attribute of the anchor tag.
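For illustration, here is a sketch constructing the Link that corresponds to the sample anchor tag above (in practice link extractors build these objects for you):

from scrapy.link import Link

# Each argument mirrors the parameter descriptions above.
link = Link(
    url="https://example.com/nofollow.html",  # absolute url, without the fragment
    text="Dont follow this one",              # the anchor text
    fragment="foo",                           # the part after the hash symbol
    nofollow=True,                            # rel="nofollow" was present
)
print(link.url, link.nofollow)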