When you’re scraping web pages, the most common task you need to perform is to extract data from the HTML source. There are several libraries available to achieve this:
- BeautifulSoup is a very popular screen scraping library among Python programmers. It constructs a Python object based on the structure of the HTML code and deals with bad markup reasonably well, but it has one drawback: it’s slow.
- lxml is an XML parsing library (which also parses HTML) with a pythonic API based on ElementTree (which is not part of the Python standard library).
XPath is a language for selecting nodes in XML documents, which can also be used with HTML. CSS is a language for applying styles to HTML documents. It defines selectors to associate those styles with specific HTML elements.
Scrapy selectors are built over the lxml library, which means they’re very similar in speed and parsing accuracy.
This page explains how selectors work and describes their API, which is very small and simple, unlike the lxml API, which is much bigger because the lxml library can be used for many other tasks besides selecting markup documents.
For a complete reference of the selectors API, see the Selector reference.
Scrapy selectors are instances of the Selector class, constructed by passing a Response object as the first argument; the response’s body is what they’ll be “selecting”:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector

class MySpider(BaseSpider):
    # ...
    def parse(self, response):
        sel = Selector(response)

        # Using XPath query
        print sel.xpath('//p')

        # Using CSS query
        print sel.css('p')

        # Nesting queries
        print sel.xpath('//div[@foo="bar"]').css('span#bold')
To explain how to use the selectors we’ll use the Scrapy shell (which provides interactive testing) and an example page located on the Scrapy documentation server:
Here’s its HTML code:
<html>
 <head>
  <base href='http://example.com/' />
  <title>Example website</title>
 </head>
 <body>
  <div id='images'>
   <a href='image1.html'>Name: My image 1 <br /><img src='image1_thumb.jpg' /></a>
   <a href='image2.html'>Name: My image 2 <br /><img src='image2_thumb.jpg' /></a>
   <a href='image3.html'>Name: My image 3 <br /><img src='image3_thumb.jpg' /></a>
   <a href='image4.html'>Name: My image 4 <br /><img src='image4_thumb.jpg' /></a>
   <a href='image5.html'>Name: My image 5 <br /><img src='image5_thumb.jpg' /></a>
  </div>
 </body>
</html>
First, let’s open the shell:
scrapy shell http://doc.scrapy.org/en/latest/_static/selectors-sample1.html
Then, after the shell loads, you’ll have a selector already instantiated and ready to use in the sel shell variable.
Since we’re dealing with HTML, the selector will automatically use an HTML parser.
So, by looking at the HTML code of that page, let’s construct an XPath (using an HTML selector) for selecting the text inside the <title> tag:
>>> sel.xpath('//title/text()')
[<Selector (text) xpath=//title/text()>]
As you can see, the .xpath() method returns a SelectorList instance, which is a list of new selectors. This API can be used to quickly extract nested data.
To actually extract the textual data, you must call the selector’s .extract() method, as follows:
>>> sel.xpath('//title/text()').extract()
[u'Example website']
Notice that CSS selectors can select text or attribute nodes using CSS3 pseudo-elements:
>>> sel.css('title::text').extract()
[u'Example website']
Now we’re going to get the base URL and some image links:
>>> sel.xpath('//base/@href').extract()
[u'http://example.com/']

>>> sel.css('base::attr(href)').extract()
[u'http://example.com/']

>>> sel.xpath('//a[contains(@href, "image")]/@href').extract()
[u'image1.html', u'image2.html', u'image3.html', u'image4.html', u'image5.html']

>>> sel.css('a[href*=image]::attr(href)').extract()
[u'image1.html', u'image2.html', u'image3.html', u'image4.html', u'image5.html']

>>> sel.xpath('//a[contains(@href, "image")]/img/@src').extract()
[u'image1_thumb.jpg', u'image2_thumb.jpg', u'image3_thumb.jpg', u'image4_thumb.jpg', u'image5_thumb.jpg']

>>> sel.css('a[href*=image] img::attr(src)').extract()
[u'image1_thumb.jpg', u'image2_thumb.jpg', u'image3_thumb.jpg', u'image4_thumb.jpg', u'image5_thumb.jpg']
The selection methods (.xpath() or .css()) return a list of selectors of the same type, so you can call the selection methods on those selectors too. Here’s an example:
>>> links = sel.xpath('//a[contains(@href, "image")]')
>>> links.extract()
[u'<a href="image1.html">Name: My image 1 <br><img src="image1_thumb.jpg"></a>',
 u'<a href="image2.html">Name: My image 2 <br><img src="image2_thumb.jpg"></a>',
 u'<a href="image3.html">Name: My image 3 <br><img src="image3_thumb.jpg"></a>',
 u'<a href="image4.html">Name: My image 4 <br><img src="image4_thumb.jpg"></a>',
 u'<a href="image5.html">Name: My image 5 <br><img src="image5_thumb.jpg"></a>']

>>> for index, link in enumerate(links):
...     args = (index, link.xpath('@href').extract(), link.xpath('img/@src').extract())
...     print 'Link number %d points to url %s and image %s' % args

Link number 0 points to url [u'image1.html'] and image [u'image1_thumb.jpg']
Link number 1 points to url [u'image2.html'] and image [u'image2_thumb.jpg']
Link number 2 points to url [u'image3.html'] and image [u'image3_thumb.jpg']
Link number 3 points to url [u'image4.html'] and image [u'image4_thumb.jpg']
Link number 4 points to url [u'image5.html'] and image [u'image5_thumb.jpg']
Using selectors with regular expressions
Selectors also have a .re() method for extracting data using regular expressions. However, unlike the .xpath() and .css() methods, .re() returns a list of unicode strings, so you can’t construct nested .re() calls.
Here’s an example used to extract image names from the HTML code above:
>>> sel.xpath('//a[contains(@href, "image")]/text()').re(r'Name:\s*(.*)')
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']
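Since .re() returns plain strings rather than selectors, you can use a capture group to keep only the fragment you need. For instance, a small variation on the query above that extracts just the image number:

>>> sel.xpath('//a[contains(@href, "image")]/text()').re(r'Name:\s*My image (\d+)')
[u'1', u'2', u'3', u'4', u'5']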
Working with relative XPaths
Keep in mind that if you are nesting selectors and use an XPath that starts with /, that XPath will be absolute to the document and not relative to the Selector you’re calling it from.
For example, suppose you want to extract all <p> elements inside <div> elements. First, you would get all <div> elements:
>>> divs = sel.xpath('//div')
At first, you may be tempted to use the following approach, which is wrong, as it actually extracts all <p> elements from the document, not only those inside <div> elements:
>>> for p in divs.xpath('//p'):  # this is wrong - gets all <p> from the whole document
...     print p.extract()
This is the proper way to do it (note the dot prefixing the .//p XPath):
>>> for p in divs.xpath('.//p'):  # extracts all <p> inside
...     print p.extract()
Another common case would be to extract all direct <p> children:
>>> for p in divs.xpath('p'):
...     print p.extract()
For more details about relative XPaths see the Location Paths section in the XPath specification.
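To make the difference concrete, here’s a minimal self-contained sketch (the markup is made up for illustration) contrasting the absolute and relative forms:

from scrapy.selector import Selector

# Hypothetical markup: one <p> outside the <div>, one inside
doc = '<html><body><p>outside</p><div><p>inside</p></div></body></html>'
divs = Selector(text=doc).xpath('//div')

print divs.xpath('//p').extract()   # absolute: matches both <p> elements
print divs.xpath('.//p').extract()  # relative: matches only the <p> inside the <div>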
Built-in Selectors reference
- class scrapy.selector.Selector(response=None, text=None, type=None)
An instance of Selector is a wrapper over a response, used to select certain parts of its content.
response is an HtmlResponse or XmlResponse object that will be used for selecting and extracting data.
text is a unicode string or utf-8 encoded text for cases when a response isn’t available. Using text and response together is undefined behavior.
type defines the selector type; it can be "html", "xml" or None (default).
If type is None, the selector automatically chooses the best type based on response type (see below), or defaults to "html" in case it is used together with text.
If type is None and a response is passed, the selector type is inferred from the response type as follows:
- "html" for HtmlResponse type
- "xml" for XmlResponse type
- "html" for anything else
Otherwise, if type is set, the selector type will be forced and no detection will occur.
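For example, here’s a minimal sketch of constructing selectors from plain text, with and without forcing the type (the markup strings are made up for illustration):

>>> from scrapy.selector import Selector
>>> Selector(text='<html><body><span>good</span></body></html>').xpath('//span/text()').extract()
[u'good']
>>> Selector(text='<root><item>1</item></root>', type='xml').xpath('//item/text()').extract()
[u'1']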
- xpath(query)
Apply the given XPath query and return a SelectorList instance.
query is a string containing the XPath query to apply.
- css(query)
Apply the given CSS selector and return a SelectorList instance.
query is a string containing the CSS selector to apply.
In the background, CSS queries are translated into XPath queries using the cssselect library and run with the .xpath() method.
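As a rough illustration of that translation, you can call cssselect directly; the exact XPath string it produces may vary between cssselect versions:

>>> from cssselect import GenericTranslator
>>> print GenericTranslator().css_to_xpath('a[href*=image]')
descendant-or-self::a[@href and contains(@href, 'image')]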
- extract()
Serialize and return the matched nodes as a list of unicode strings. Percent-encoded content is unquoted.
- re(regex)
Apply the given regex and return a list of unicode strings with the matches.
regex can be either a compiled regular expression or a string which will be compiled to a regular expression using re.compile(regex).
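For example, using a pre-compiled pattern to repeat the name extraction from the sample page shown earlier:

>>> import re
>>> name_re = re.compile(r'Name:\s*(.*)')
>>> sel.xpath('//a[contains(@href, "image")]/text()').re(name_re)
[u'My image 1', u'My image 2', u'My image 3', u'My image 4', u'My image 5']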
- register_namespace(prefix, uri)
Register the given namespace to be used in this Selector. Without registering namespaces you can’t select or extract data from non-standard namespaces. See examples below.
- remove_namespaces()
Remove all namespaces, allowing the document to be traversed using namespace-less XPaths. See the example below.
- class scrapy.selector.SelectorList
The SelectorList class is a subclass of the built-in list class, which provides a few additional methods.
- xpath(query)
Call the .xpath() method for each element in this list and return their results flattened as another SelectorList.
query is the same argument as the one in Selector.xpath().
- css(query)
Call the .css() method for each element in this list and return their results flattened as another SelectorList.
query is the same argument as the one in Selector.css().
- extract()
Call the .extract() method for each element in this list and return their results flattened, as a list of unicode strings.
- re(regex)
Call the .re() method for each element in this list and return their results flattened, as a list of unicode strings.
- __nonzero__()
Returns True if the list is not empty, False otherwise.
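Since SelectorList is a list subclass, ordinary list operations mix freely with the methods above. For instance, using the example page from earlier:

>>> links = sel.xpath('//a[contains(@href, "image")]')
>>> len(links)
5
>>> links[0].xpath('@href').extract()
[u'image1.html']
>>> bool(links)  # delegates to __nonzero__
True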
Selector examples on HTML response
These examples assume there is already a Selector instantiated with an HtmlResponse object like this:

sel = Selector(html_response)
Extract the text of all <h1> elements from an HTML response body, returning a list of unicode strings:
sel.xpath("//h1").extract() # this includes the h1 tag sel.xpath("//h1/text()").extract() # this excludes the h1 tag
Iterate over all <p> tags and print their class attribute:
for node in sel.xpath("//p"): ... print node.xpath("@class").extract()
Selector examples on XML response
These examples assume there is already a Selector instantiated with an XmlResponse object like this:

sel = Selector(xml_response)
Extract all prices from a Google Base XML feed which requires registering a namespace:
sel.register_namespace("g", "http://base.google.com/ns/1.0") sel.xpath("//g:price").extract()
When dealing with scraping projects, it is often quite convenient to get rid of namespaces altogether and just work with element names, writing simpler and more convenient XPaths. You can use the Selector.remove_namespaces() method for that.
Let’s show an example that illustrates this with the GitHub blog Atom feed.
First, we open the shell with the URL we want to scrape:
$ scrapy shell https://github.com/blog.atom
Once in the shell we can try selecting all <link> objects and see that it doesn’t work (because the Atom XML namespace is obfuscating those nodes):
>>> sel.xpath("//link") 
But once we call the Selector.remove_namespaces() method, all nodes can be accessed directly by their names:
>>> sel.remove_namespaces()
>>> sel.xpath("//link")
[<Selector xpath='//link' data=u'<link xmlns="http://www.w3.org/2005/Atom'>,
 <Selector xpath='//link' data=u'<link xmlns="http://www.w3.org/2005/Atom'>,
 ...
If you wonder why the namespace removal procedure isn’t called by default, instead of requiring you to call it manually, that’s because of two reasons which, in order of relevance, are:
- Removing namespaces requires iterating over and modifying all nodes in the document, which is a reasonably expensive operation to perform for all documents crawled by Scrapy
- There could be some cases where using namespaces is actually required, e.g. when some element names clash between namespaces. These cases are very rare, though.
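To illustrate the second point, here’s a minimal sketch with hypothetical markup in which the Atom namespace and the Media RSS namespace both define a <title> element; once namespaces are removed, the two become indistinguishable:

from scrapy.selector import Selector

# Hypothetical feed: both namespaces define a <title> element
doc = """
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:media="http://search.yahoo.com/mrss/">
  <title>Feed title</title>
  <media:title>Media title</media:title>
</feed>
"""
sel = Selector(text=doc, type="xml")
sel.remove_namespaces()
print sel.xpath("//title/text()").extract()  # now matches both titles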