Signals¶
Scrapy uses signals extensively to notify when certain events occur. You can catch some of those signals in your Scrapy project (using an extension, for example) to perform additional tasks or extend Scrapy to add functionality not provided out of the box.
Even though signals provide several arguments, the handlers that catch them don't need to accept all of them: the signal dispatching mechanism only delivers the arguments that a handler's signature accepts.
You can connect to signals (or send your own) through the Signals API.
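For instance, a handler may declare only a subset of a signal's arguments, and a custom signal is just a unique object sent through the crawler's SignalManager. Here is a minimal sketch (the handler, the setup function and my_signal are illustrative, not part of Scrapy):

from scrapy import signals

# item_scraped sends item, response and spider, but a handler may
# declare only the arguments it needs; the dispatcher delivers just those.
def on_item_scraped(item, spider):
    spider.logger.info("Scraped one item with spider %s", spider.name)

# A custom signal is simply a unique object.
my_signal = object()

def setup(crawler):
    # Connect through the Signals API exposed on the crawler.
    crawler.signals.connect(on_item_scraped, signal=signals.item_scraped)
    # Send your own signal; send_catch_log logs handler errors
    # instead of raising them.
    crawler.signals.send_catch_log(my_signal, reason="example")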
Here is a simple example showing how you can catch signals and perform some action:
from scrapy import signals
from scrapy import Spider


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Register spider_closed() to run when the spider_closed
        # signal is sent for this spider.
        crawler.signals.connect(spider.spider_closed, signal=signals.spider_closed)
        return spider

    def spider_closed(self, spider):
        # spider_closed also sends a `reason` argument, but this
        # handler omits it: only the declared arguments are delivered.
        spider.logger.info("Spider closed: %s", spider.name)

    def parse(self, response):
        pass
Deferred signal handlers¶
Some signals support returning Twisted deferreds from their handlers; see the Built-in signals reference below to know which ones.
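For example, here is a minimal sketch of a handler returning a deferred for a signal that supports them, such as spider_closed (the cleanup function is hypothetical):

from twisted.internet import threads

def flush_external_store():
    # Hypothetical blocking cleanup work.
    pass

def close_handler(spider, reason):
    # For signals that support deferreds, Scrapy waits for the
    # returned Deferred to fire before moving on (here, before the
    # spider finishes closing).
    return threads.deferToThread(flush_external_store)

# Connected elsewhere, e.g.:
# crawler.signals.connect(close_handler, signal=signals.spider_closed)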
Built-in signals reference¶
Here’s the list of Scrapy built-in signals and their meaning.
engine_started¶
- scrapy.signals.engine_started()¶
Sent when the Scrapy engine has started crawling.
This signal supports returning deferreds from its handlers.
Note
This signal may be fired after the spider_opened signal, depending on how the spider was started. So don't rely on this signal getting fired before spider_opened.
engine_stopped¶
- scrapy.signals.engine_stopped()¶
Sent when the Scrapy engine is stopped (for example, when a crawling process has finished).
This signal supports returning deferreds from its handlers.
item_scraped¶
- scrapy.signals.item_scraped(item, response, spider)¶
Sent when an item has been scraped, after it has passed all the Item Pipeline stages (without being dropped).
This signal supports returning deferreds from its handlers.
item_dropped¶
- scrapy.signals.item_dropped(item, response, exception, spider)¶
Sent after an item has been dropped from the Item Pipeline when some stage raised a DropItem exception.
This signal supports returning deferreds from its handlers.
- Parameters:
  - item (dict or Item object) – the item dropped from the Item Pipeline
  - spider (Spider object) – the spider which scraped the item
  - response (Response object) – the response from where the item was dropped
  - exception (DropItem exception) – the exception (which must be a DropItem subclass) which caused the item to be dropped
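To illustrate what triggers this signal, here is a sketch of a pipeline stage (the pipeline class and the price field are hypothetical) whose DropItem exception causes item_dropped to be sent:

from scrapy.exceptions import DropItem

class PriceValidationPipeline:
    def process_item(self, item, spider):
        # Raising DropItem makes Scrapy send item_dropped, passing
        # this exception as the `exception` argument.
        if not item.get("price"):
            raise DropItem("missing price")
        return item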
item_error¶
- scrapy.signals.item_error(item, response, spider, failure)¶
Sent when an Item Pipeline generates an error (i.e. raises an exception), except DropItem exceptions.
This signal supports returning deferreds from its handlers.
- Parameters:
  - item (dict or Item object) – the item that caused the error in the Item Pipeline
  - response (Response object) – the response being processed when the exception was raised
  - spider (Spider object) – the spider which raised the exception
  - failure (Failure object) – the exception raised, wrapped as a Twisted Failure
spider_closed¶
- scrapy.signals.spider_closed(spider, reason)¶
Sent after a spider has been closed. This can be used to release per-spider resources reserved on spider_opened.
This signal supports returning deferreds from its handlers.
- Parameters:
  - spider (Spider object) – the spider which has been closed
  - reason (str) – a string which describes the reason why the spider was closed. If it was closed because the spider has completed scraping, the reason is 'finished'. Otherwise, if the spider was manually closed by calling the close_spider engine method, then the reason is the one passed in the reason argument of that method (which defaults to 'cancelled'). If the engine was shut down (for example, by hitting Ctrl-C to stop it) the reason will be 'shutdown'.
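A handler can branch on the reason; for example (a sketch, with hypothetical log messages):

def log_close_reason(spider, reason):
    # reason is 'finished' for a normal end of crawl, 'cancelled' (or a
    # custom string) for a manual close_spider call, and 'shutdown'
    # when the engine is stopped (e.g. Ctrl-C).
    if reason == "finished":
        spider.logger.info("Crawl completed normally")
    else:
        spider.logger.warning("Crawl ended early: %s", reason)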
spider_opened¶
- scrapy.signals.spider_opened(spider)¶
Sent after a spider has been opened for crawling. This is typically used to reserve per-spider resources, but can be used for any task that needs to be performed when a spider is opened.
This signal supports returning deferreds from its handlers.
- Parameters:
  - spider (Spider object) – the spider which has been opened
spider_idle¶
- scrapy.signals.spider_idle(spider)¶
Sent when a spider has gone idle, which means the spider has no further:
- requests waiting to be downloaded
- requests scheduled
- items being processed in the item pipeline
If the idle state persists after all handlers of this signal have finished, the engine starts closing the spider. After the spider has finished closing, the spider_closed signal is sent.
You may raise a DontCloseSpider exception to prevent the spider from being closed (see the sketch after the note below).
This signal does not support returning deferreds from its handlers.
- Parameters:
  - spider (Spider object) – the spider which has gone idle
Note
Scheduling some requests in your spider_idle handler does not guarantee that it can prevent the spider from being closed, although it sometimes can. That's because the spider may still remain idle if all the scheduled requests are rejected by the scheduler (e.g. filtered due to duplication).
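As a sketch of the DontCloseSpider escape hatch mentioned above (the extension class and the keep_alive attribute are illustrative):

from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class KeepAliveExtension:
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.on_idle, signal=signals.spider_idle)
        return ext

    def on_idle(self, spider):
        # Raising DontCloseSpider prevents the engine from closing the
        # idle spider; otherwise closing proceeds as described above.
        if getattr(spider, "keep_alive", False):
            raise DontCloseSpider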
spider_error¶
- scrapy.signals.spider_error(failure, response, spider)¶
Sent when a spider callback generates an error (i.e. raises an exception).
This signal does not support returning deferreds from its handlers.
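The handler receives the exception wrapped in a Twisted Failure; for example (a sketch):

def on_spider_error(failure, response, spider):
    # failure is a twisted.python.failure.Failure wrapping the
    # exception raised in the callback.
    spider.logger.error(
        "Callback failed for %s: %s", response.url, failure.getErrorMessage()
    )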
request_scheduled¶
- scrapy.signals.request_scheduled(request, spider)¶
Sent when the engine schedules a Request, to be downloaded later.
This signal does not support returning deferreds from its handlers.
request_dropped¶
- scrapy.signals.request_dropped(request, spider)¶
Sent when a Request, scheduled by the engine to be downloaded later, is rejected by the scheduler.
This signal does not support returning deferreds from its handlers.
request_reached_downloader¶
- scrapy.signals.request_reached_downloader(request, spider)¶
Sent when a Request reached the downloader.
This signal does not support returning deferreds from its handlers.
response_received¶
- scrapy.signals.response_received(response, request, spider)¶
Sent by the engine when it receives a new Response from the downloader.
This signal does not support returning deferreds from its handlers.
response_downloaded¶
- scrapy.signals.response_downloaded(response, request, spider)¶
Sent by the downloader right after a HTTPResponse is downloaded.
This signal does not support returning deferreds from its handlers.
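For example, a handler could track raw download sizes (a sketch):

def on_response_downloaded(response, request, spider):
    # Runs right after each response body is downloaded.
    spider.logger.debug(
        "Downloaded %d bytes from %s", len(response.body), response.url
    )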