Downloader Middleware¶
The downloader middleware is a framework of hooks into Scrapy’s request/response processing. It’s a light, low-level system for globally altering Scrapy’s requests and responses.
Activating a downloader middleware¶
To activate a downloader middleware component, add it to the
DOWNLOADER_MIDDLEWARES
setting, which is a dict whose keys are the
middleware class paths and their values are the middleware orders.
Here’s an example:
DOWNLOADER_MIDDLEWARES = {
'myproject.middlewares.CustomDownloaderMiddleware': 543,
}
The DOWNLOADER_MIDDLEWARES
setting is merged with the
DOWNLOADER_MIDDLEWARES_BASE
setting defined in Scrapy (and not meant to
be overridden) and then sorted by order to get the final sorted list of enabled
middlewares: the first middleware is the one closer to the engine and the last
is the one closer to the downloader.
To decide which order to assign to your middleware see the
DOWNLOADER_MIDDLEWARES_BASE
setting and pick a value according to
where you want to insert the middleware. The order does matter because each
middleware performs a different action and your middleware could depend on some
previous (or subsequent) middleware being applied.
If you want to disable a built-in middleware (the ones defined in
DOWNLOADER_MIDDLEWARES_BASE
and enabled by default) you must define it
in your project’s DOWNLOADER_MIDDLEWARES
setting and assign None
as its value. For example, if you want to disable the user-agent middleware:
DOWNLOADER_MIDDLEWARES = {
'myproject.middlewares.CustomDownloaderMiddleware': 543,
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware': None,
}
Finally, keep in mind that some middlewares may need to be enabled through a particular setting. See each middleware documentation for more info.
Writing your own downloader middleware¶
Each middleware component is a Python class that defines one or more of the following methods (a minimal example middleware is shown after the method reference below):
class scrapy.downloadermiddlewares.DownloaderMiddleware¶
process_request(request, spider)¶
This method is called for each request that goes through the download middleware.
process_request() should either: return None, return a Response object, return a Request object, or raise IgnoreRequest.
If it returns None, Scrapy will continue processing this request, executing all other middlewares until, finally, the appropriate downloader handler is called, the request performed and its response downloaded.
If it returns a Response object, Scrapy won't bother calling any other process_request() or process_exception() methods, or the appropriate download function; it'll return that response. The process_response() methods of installed middleware are always called on every response.
If it returns a Request object, Scrapy will stop calling process_request methods and reschedule the returned request. Once the newly returned request is performed, the appropriate middleware chain will be called on the downloaded response.
If it raises an IgnoreRequest exception, the process_exception() methods of installed downloader middleware will be called. If none of them handle the exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
Parameters:
request (Request object) - the request being processed
spider (Spider object) - the spider for which this request is intended
process_response(request, response, spider)¶
process_response() should either: return a Response object, return a Request object or raise an IgnoreRequest exception.
If it returns a Response (it could be the same given response, or a brand-new one), that response will continue to be processed with the process_response() of the next middleware in the chain.
If it returns a Request object, the middleware chain is halted and the returned request is rescheduled to be downloaded in the future. This is the same behavior as if a request is returned from process_request().
If it raises an IgnoreRequest exception, the errback function of the request (Request.errback) is called. If no code handles the raised exception, it is ignored and not logged (unlike other exceptions).
Parameters:
request (Request object) - the request that originated the response
response (Response object) - the response being processed
spider (Spider object) - the spider for which this response is intended
process_exception(request, exception, spider)¶
Scrapy calls process_exception() when a download handler or a process_request() (from a downloader middleware) raises an exception (including an IgnoreRequest exception).
process_exception() should return: either None, a Response object, or a Request object.
If it returns None, Scrapy will continue processing this exception, executing any other process_exception() methods of installed middleware, until no middleware is left and the default exception handling kicks in.
If it returns a Response object, the process_response() method chain of installed middleware is started, and Scrapy won't bother calling any other process_exception() methods of middleware.
If it returns a Request object, the returned request is rescheduled to be downloaded in the future. This stops the execution of process_exception() methods of the middleware the same as returning a response would.
Parameters:
request (Request object) - the request that generated the exception
exception (Exception object) - the raised exception
spider (Spider object) - the spider for which this request is intended
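Putting these methods together, here is a minimal sketch of a custom downloader middleware. The class name, the header name and the blocked domain are made up for this example and are not part of Scrapy:

from scrapy.exceptions import IgnoreRequest

class CustomDownloaderMiddleware(object):

    def process_request(self, request, spider):
        # Drop requests to a (hypothetical) blocked domain.
        if 'blocked.example.com' in request.url:
            raise IgnoreRequest('blocked domain')
        # Add a custom header and return None so processing continues normally.
        request.headers.setdefault('X-Example-Header', 'some-value')
        return None

    def process_response(self, request, response, spider):
        # Return the response unchanged so the next middleware processes it.
        return response

    def process_exception(self, request, exception, spider):
        # Return None so other middlewares (and default handling) deal with the exception.
        return None

Such a middleware would be enabled through the DOWNLOADER_MIDDLEWARES setting, as shown at the beginning of this page.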
Built-in downloader middleware reference¶
This page describes all downloader middleware components that come with Scrapy. For information on how to use them and how to write your own downloader middleware, see the downloader middleware usage guide.
For a list of the components enabled by default (and their orders) see the
DOWNLOADER_MIDDLEWARES_BASE
setting.
CookiesMiddleware¶
This middleware enables working with sites that require cookies, such as those that use sessions. It keeps track of cookies sent by web servers, and send them back on subsequent requests (from that spider), just like web browsers do.
The following settings can be used to configure the cookie middleware:
Multiple cookie sessions per spider¶
New in version 0.15.
There is support for keeping multiple cookie sessions per spider by using the
cookiejar
Request meta key. By default it uses a single cookie jar
(session), but you can pass an identifier to use different ones.
For example:
for i, url in enumerate(urls):
    yield scrapy.Request("http://www.example.com", meta={'cookiejar': i},
        callback=self.parse_page)
Keep in mind that the cookiejar
meta key is not “sticky”. You need to keep
passing it along on subsequent requests. For example:
def parse_page(self, response):
    # do some processing
    return scrapy.Request("http://www.example.com/otherpage",
        meta={'cookiejar': response.meta['cookiejar']},
        callback=self.parse_other_page)
COOKIES_ENABLED¶
Default: True
Whether to enable the cookies middleware. If disabled, no cookies will be sent to web servers.
COOKIES_DEBUG¶
Default: False
If enabled, Scrapy will log all cookies sent in requests (i.e. the Cookie header) and all cookies received in responses (i.e. the Set-Cookie header).
Here’s an example of a log with COOKIES_DEBUG
enabled:
2011-04-06 14:35:10-0300 [scrapy] INFO: Spider opened
2011-04-06 14:35:10-0300 [scrapy] DEBUG: Sending cookies to: <GET http://www.diningcity.com/netherlands/index.html>
Cookie: clientlanguage_nl=en_EN
2011-04-06 14:35:14-0300 [scrapy] DEBUG: Received cookies from: <200 http://www.diningcity.com/netherlands/index.html>
Set-Cookie: JSESSIONID=B~FA4DC0C496C8762AE4F1A620EAB34F38; Path=/
Set-Cookie: ip_isocode=US
Set-Cookie: clientlanguage_nl=en_EN; Expires=Thu, 07-Apr-2011 21:21:34 GMT; Path=/
2011-04-06 14:49:50-0300 [scrapy] DEBUG: Crawled (200) <GET http://www.diningcity.com/netherlands/index.html> (referer: None)
[...]
DefaultHeadersMiddleware¶
class scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware¶
This middleware sets all default request headers specified in the DEFAULT_REQUEST_HEADERS setting.
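As a rough illustration, the default headers could be customized in the project's settings.py like this (the header values shown are only an example):

# settings.py
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
}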
DownloadTimeoutMiddleware¶
class scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware¶
This middleware sets the download timeout for requests specified in the DOWNLOAD_TIMEOUT setting or the download_timeout spider attribute.
Note
You can also set the download timeout per-request using the download_timeout Request.meta key; this is supported even when DownloadTimeoutMiddleware is disabled.
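For example, inside a spider callback (the URL and the 30-second value are only illustrative):

yield scrapy.Request("http://www.example.com/slow-page",
    meta={'download_timeout': 30},
    callback=self.parse_page)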
HttpAuthMiddleware¶
class scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware¶
This middleware authenticates all requests generated from certain spiders using Basic access authentication (aka. HTTP auth).
To enable HTTP authentication from certain spiders, set the http_user and http_pass attributes of those spiders.
Example:
from scrapy.spiders import CrawlSpider

class SomeIntranetSiteSpider(CrawlSpider):

    http_user = 'someuser'
    http_pass = 'somepass'
    name = 'intranet.example.com'

    # .. rest of the spider code omitted ...
HttpCacheMiddleware¶
class scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware¶
This middleware provides a low-level cache for all HTTP requests and responses. It has to be combined with a cache storage backend as well as a cache policy.
Scrapy ships with the following HTTP cache storage backends:
Filesystem storage backend (default)
DBM storage backend
LevelDB storage backend
You can change the HTTP cache storage backend with the HTTPCACHE_STORAGE setting. Or you can also implement your own storage backend.
Scrapy ships with two HTTP cache policies:
Dummy policy (default)
RFC2616 policy
You can change the HTTP cache policy with the HTTPCACHE_POLICY setting. Or you can also implement your own policy.
You can also avoid caching a response on every policy by setting the dont_cache meta key to True.
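For example, a single request can opt out of caching like this, inside a spider callback (the URL is illustrative):

yield scrapy.Request("http://www.example.com/live-data",
    meta={'dont_cache': True},
    callback=self.parse_page)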
Dummy policy (default)¶
This policy has no awareness of any HTTP Cache-Control directives. Every request and its corresponding response are cached. When the same request is seen again, the response is returned without transferring anything from the Internet.
The Dummy policy is useful for testing spiders faster (without having to wait for downloads every time) and for trying your spider offline, when an Internet connection is not available. The goal is to be able to “replay” a spider run exactly as it ran before.
In order to use this policy, set HTTPCACHE_POLICY to scrapy.extensions.httpcache.DummyPolicy.
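A minimal settings.py sketch for this setup could look like the following (the HTTPCACHE_POLICY line is redundant here, since DummyPolicy is the default, and is shown only for clarity):

# settings.py
HTTPCACHE_ENABLED = True
HTTPCACHE_POLICY = 'scrapy.extensions.httpcache.DummyPolicy'  # the default policy
HTTPCACHE_EXPIRATION_SECS = 0  # cached responses never expire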
RFC2616 policy¶
This policy provides a RFC2616 compliant HTTP cache, i.e. with HTTP Cache-Control awareness, aimed at production and used in continuous runs to avoid downloading unmodified data (to save bandwidth and speed up crawls).
what is implemented:
- Do not attempt to store responses/requests with no-store cache-control directive set
- Do not serve responses from cache if no-cache cache-control directive is set even for fresh responses
- Compute freshness lifetime from max-age cache-control directive
- Compute freshness lifetime from Expires response header
- Compute freshness lifetime from Last-Modified response header (heuristic used by Firefox)
- Compute current age from Age response header
- Compute current age from Date header
- Revalidate stale responses based on Last-Modified response header
- Revalidate stale responses based on ETag response header
- Set Date header for any received response missing it
what is missing:
- Pragma: no-cache support http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.9.1
- Vary header support http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.6
- Invalidation after updates or deletes http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html#sec13.10
- ... probably others ..
In order to use this policy, set HTTPCACHE_POLICY to scrapy.extensions.httpcache.RFC2616Policy.
Filesystem storage backend (default)¶
File system storage backend is available for the HTTP cache middleware.
In order to use this storage backend, set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.FilesystemCacheStorage.
Each request/response pair is stored in a different directory containing the following files:
request_body - the plain request body
request_headers - the request headers (in raw HTTP format)
response_body - the plain response body
response_headers - the response headers (in raw HTTP format)
meta - some metadata of this cache resource in Python repr() format (grep-friendly format)
pickled_meta - the same metadata in meta but pickled for more efficient deserialization
The directory name is made from the request fingerprint (see
scrapy.utils.request.fingerprint
), and one level of subdirectories is
used to avoid creating too many files into the same directory (which is
inefficient in many file systems). An example directory could be:
/path/to/cache/dir/example.com/72/72811f648e718090f041317756c03adb0ada46c7
DBM storage backend¶
New in version 0.13.
A DBM storage backend is also available for the HTTP cache middleware.
By default, it uses the anydbm module, but you can change it with the
HTTPCACHE_DBM_MODULE
setting.
In order to use this storage backend, set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.DbmCacheStorage.
LevelDB storage backend¶
New in version 0.23.
A LevelDB storage backend is also available for the HTTP cache middleware.
This backend is not recommended for development because only one process can access LevelDB databases at the same time, so you can’t run a crawl and open the scrapy shell in parallel for the same spider.
In order to use this storage backend:
- set HTTPCACHE_STORAGE to scrapy.extensions.httpcache.LeveldbCacheStorage
- install LevelDB python bindings like pip install leveldb
HTTPCache middleware settings¶
The HttpCacheMiddleware
can be configured through the following
settings:
HTTPCACHE_ENABLED¶
New in version 0.11.
Default: False
Whether the HTTP cache will be enabled.
Changed in version 0.11: Before 0.11, HTTPCACHE_DIR
was used to enable cache.
HTTPCACHE_EXPIRATION_SECS¶
Default: 0
Expiration time for cached requests, in seconds.
Cached requests older than this time will be re-downloaded. If zero, cached requests will never expire.
Changed in version 0.11: Before 0.11, zero meant cached requests always expire.
HTTPCACHE_DIR¶
Default: 'httpcache'
The directory to use for storing the (low-level) HTTP cache. If empty, the HTTP cache will be disabled. If a relative path is given, is taken relative to the project data dir. For more info see: Default structure of Scrapy projects.
HTTPCACHE_IGNORE_HTTP_CODES¶
New in version 0.10.
Default: []
Don’t cache responses with these HTTP codes.
HTTPCACHE_IGNORE_MISSING¶
Default: False
If enabled, requests not found in the cache will be ignored instead of downloaded.
HTTPCACHE_IGNORE_SCHEMES¶
New in version 0.10.
Default: ['file']
Don’t cache responses with these URI schemes.
HTTPCACHE_STORAGE¶
Default: 'scrapy.extensions.httpcache.FilesystemCacheStorage'
The class which implements the cache storage backend.
HTTPCACHE_DBM_MODULE¶
New in version 0.13.
Default: 'anydbm'
The database module to use in the DBM storage backend. This setting is specific to the DBM backend.
HTTPCACHE_POLICY¶
New in version 0.18.
Default: 'scrapy.extensions.httpcache.DummyPolicy'
The class which implements the cache policy.
HTTPCACHE_GZIP¶
New in version 0.25.
Default: False
If enabled, will compress all cached data with gzip. This setting is specific to the Filesystem backend.
HttpCompressionMiddleware¶
class scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware¶
This middleware allows compressed (gzip, deflate) traffic to be sent/received from web sites.
ChunkedTransferMiddleware¶
class scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware¶
This middleware adds support for chunked transfer encoding.
HttpProxyMiddleware¶
New in version 0.8.
class scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware¶
This middleware sets the HTTP proxy to use for requests, by setting the proxy meta value for Request objects.
Like the Python standard library modules urllib and urllib2, it obeys the following environment variables:
http_proxy
https_proxy
no_proxy
You can also set the meta key proxy per-request, to a value like http://some_proxy_server:port.
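For example, inside a spider callback (the URL and the proxy host/port are placeholders):

yield scrapy.Request("http://www.example.com",
    meta={'proxy': 'http://some_proxy_server:8080'},
    callback=self.parse_page)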
RedirectMiddleware¶
class scrapy.downloadermiddlewares.redirect.RedirectMiddleware¶
This middleware handles redirection of requests based on response status.
The URLs which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.
The RedirectMiddleware can be configured through the following settings (see the settings documentation for more info):
REDIRECT_ENABLED
REDIRECT_MAX_TIMES
If Request.meta
has dont_redirect
key set to True, the request will be ignored by this middleware.
MetaRefreshMiddleware¶
class scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware¶
This middleware handles redirection of requests based on the meta-refresh html tag.
The MetaRefreshMiddleware can be configured through the following settings (see the settings documentation for more info):
METAREFRESH_ENABLED
METAREFRESH_MAXDELAY
This middleware obeys the REDIRECT_MAX_TIMES setting, and the dont_redirect and redirect_urls request meta keys, as described for RedirectMiddleware.
RetryMiddleware¶
class scrapy.downloadermiddlewares.retry.RetryMiddleware¶
A middleware to retry failed requests that are potentially caused by temporary problems such as a connection timeout or HTTP 500 error.
Failed pages are collected on the scraping process and rescheduled at the end, once the spider has finished crawling all regular (non failed) pages. Once there are no more failed pages to retry, this middleware sends a signal (retry_complete), so other extensions could connect to that signal.
The RetryMiddleware can be configured through the following settings (see the settings documentation for more info):
RETRY_ENABLED
RETRY_TIMES
RETRY_HTTP_CODES
About HTTP errors to consider:
You may want to remove 400 from RETRY_HTTP_CODES
, if you stick to the
HTTP protocol. It’s included by default because it’s a common code used
to indicate server overload, which would be something we want to retry.
If Request.meta
has dont_retry
key
set to True, the request will be ignored by this middleware.
RobotsTxtMiddleware¶
class scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware¶
This middleware filters out requests forbidden by the robots.txt exclusion standard.
To make sure Scrapy respects robots.txt, make sure the middleware is enabled and the ROBOTSTXT_OBEY setting is enabled.
Warning
Keep in mind that, if you crawl using multiple concurrent requests per domain, Scrapy could still download some forbidden pages if they were requested before the robots.txt file was downloaded. This is a known limitation of the current robots.txt middleware and will be fixed in the future.
If Request.meta
has
dont_obey_robotstxt
key set to True
the request will be ignored by this middleware even if
ROBOTSTXT_OBEY
is enabled.
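As a minimal example, obeying robots.txt is typically turned on project-wide in settings.py:

# settings.py
ROBOTSTXT_OBEY = True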
DownloaderStats¶
class scrapy.downloadermiddlewares.stats.DownloaderStats¶
Middleware that stores stats of all requests, responses and exceptions that pass through it.
To use this middleware you must enable the
DOWNLOADER_STATS
setting.
UserAgentMiddleware¶
class scrapy.downloadermiddlewares.useragent.UserAgentMiddleware¶
Middleware that allows spiders to override the default user agent.
In order for a spider to override the default user agent, its user_agent attribute must be set.
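A minimal sketch (the spider name and user agent string are made up for this example):

from scrapy.spiders import Spider

class MySpider(Spider):
    name = 'myspider'
    user_agent = 'MyCrawler/1.0 (+http://www.example.com/bot-info)'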
AjaxCrawlMiddleware¶
class scrapy.downloadermiddlewares.ajaxcrawl.AjaxCrawlMiddleware¶
Middleware that finds ‘AJAX crawlable’ page variants based on the meta-fragment html tag. See https://developers.google.com/webmasters/ajax-crawling/docs/getting-started for more info.
Note
Scrapy finds ‘AJAX crawlable’ pages for URLs like 'http://example.com/!#foo=bar' even without this middleware. AjaxCrawlMiddleware is necessary when the URL doesn’t contain '!#'. This is often the case for ‘index’ or ‘main’ website pages.
AjaxCrawlMiddleware Settings¶
AJAXCRAWL_ENABLED¶
New in version 0.21.
Default: False
Whether the AjaxCrawlMiddleware will be enabled. You may want to enable it for broad crawls.