New in version 2.0.
Scrapy has partial support for the coroutine syntax.
The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):

- Request callbacks.
  If you are using any custom or third-party spider middleware, see Mixing synchronous and asynchronous spider middlewares below.
  Changed in version 2.7: Output of async callbacks is now processed asynchronously instead of collecting all of it first.
- The process_item() method of item pipelines.
- The process_request(), process_response() and process_exception() methods of downloader middlewares.
- The process_spider_output() method of spider middlewares.
  It must be defined as an asynchronous generator. The input result parameter is an asynchronous iterable.
  See also Mixing synchronous and asynchronous spider middlewares and Universal spider middlewares.
  New in version 2.7.
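For instance, a Request callback defined as a coroutine might look as follows (a minimal sketch; the spider name and URL are placeholders, and fully asynchronous output processing assumes Scrapy 2.7+):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"  # hypothetical spider name
    start_urls = ["https://example.com"]  # hypothetical URL

    # A callback defined with async def; as an asynchronous generator it
    # can await asynchronous code and still yield items and requests.
    async def parse(self, response):
        yield {"title": response.css("title::text").get()}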
There are several use cases for coroutines in Scrapy.
Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:
from itemadapter import ItemAdapter


class DbPipeline:
    def _update_item(self, data, item):
        adapter = ItemAdapter(item)
        adapter['field'] = data
        return item

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        # db is an external Deferred-based client, not shown here
        dfd = db.get_some_data(adapter['id'])
        dfd.addCallback(self._update_item, item)
        return dfd
becomes:

from itemadapter import ItemAdapter


class DbPipeline:
    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        # Await the Deferred directly instead of attaching callbacks
        adapter['field'] = await db.get_some_data(adapter['id'])
        return item
Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds and functions that return awaitable objects, such as Future. This means you can use many useful Python libraries providing such code:
import aiohttp
import treq
from scrapy import Spider


class MySpiderDeferred(Spider):
    # ...
    async def parse(self, response):
        additional_response = await treq.get('https://additional.url')
        additional_data = await treq.content(additional_response)
        # ... use response and additional_data to yield items and requests


class MySpiderAsyncio(Spider):
    # ...
    async def parse(self, response):
        async with aiohttp.ClientSession() as session:
            async with session.get('https://additional.url') as additional_response:
                additional_data = await additional_response.text()
        # ... use response and additional_data to yield items and requests
Many libraries that use coroutines, such as aio-libs, require the asyncio event loop; to use them you need to enable asyncio support in Scrapy.
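In practice, this means installing the asyncio reactor, for example via the TWISTED_REACTOR setting:

# settings.py
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"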
If you want to
await on Deferreds while using the asyncio reactor,
you need to wrap them.
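A minimal sketch of such wrapping, reusing the DbPipeline example above with scrapy.utils.defer.maybe_deferred_to_future (available since Scrapy 2.6); db remains a stand-in for a Deferred-based client:

from itemadapter import ItemAdapter
from scrapy.utils.defer import maybe_deferred_to_future


class DbPipeline:
    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        # db.get_some_data() returns a Deferred; wrapping it makes it
        # awaitable under the asyncio reactor.
        adapter['field'] = await maybe_deferred_to_future(db.get_some_data(adapter['id']))
        return item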
Common use cases for asynchronous code include:
- requesting data from websites, databases and other services (in callbacks, pipelines and middlewares);
- storing data in databases (in pipelines and middlewares);
- delaying the spider initialization until some external event (in the spider_opened signal handler);
- calling asynchronous Scrapy methods like ExecutionEngine.download() (see the screenshot pipeline example and the sketch below).
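As a sketch of the last point, modeled on the screenshot pipeline example (the Splash URL and item fields are placeholders; in some Scrapy versions download() also takes a spider argument):

from scrapy import Request
from scrapy.utils.defer import maybe_deferred_to_future


class ScreenshotPipeline:
    async def process_item(self, item, spider):
        # Render the item's page to a PNG via a local Splash instance.
        request = Request(f"http://localhost:8050/render.png?url={item['url']}")
        response = await maybe_deferred_to_future(spider.crawler.engine.download(request))
        item['screenshot'] = response.body
        return item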
Mixing synchronous and asynchronous spider middlewares
New in version 2.7.
The output of a Request callback is passed as the result parameter to the process_spider_output() method of the first spider middleware from the list of active spider middlewares. Then the output of that process_spider_output method is passed to the process_spider_output method of the next spider middleware, and so on for every active spider middleware.
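The list of active spider middlewares and their order come from the SPIDER_MIDDLEWARES setting (combined with SPIDER_MIDDLEWARES_BASE); the middleware paths below are hypothetical:

# settings.py
SPIDER_MIDDLEWARES = {
    "myproject.middlewares.FirstSpiderMiddleware": 100,
    "myproject.middlewares.SecondSpiderMiddleware": 200,
}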
Scrapy supports mixing coroutine methods and synchronous methods in this chain of calls.
However, if any of the process_spider_output methods is defined as a synchronous method, and the previous Request callback or process_spider_output method is a coroutine, there are some drawbacks to the asynchronous-to-synchronous conversion that Scrapy does so that the synchronous process_spider_output method gets a synchronous iterable as its result parameter input:
- The whole output of the previous Request callback or process_spider_output method is awaited at this point.
- If an exception is raised while awaiting the output of the previous Request callback or process_spider_output method, none of that output will be processed. This contrasts with the regular behavior, where all items yielded before an exception is raised are processed.
Asynchronous-to-synchronous conversions are supported for backward compatibility, but they are deprecated and will stop working in a future version of Scrapy.
To avoid asynchronous-to-synchronous conversions, when defining Request callbacks as coroutine methods or when using spider middlewares whose process_spider_output method is an asynchronous generator, all active spider middlewares must either have their process_spider_output method defined as an asynchronous generator or define a process_spider_output_async method.
When using third-party spider middlewares that only define a synchronous process_spider_output method, consider making them universal through subclassing, as sketched below.
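A sketch of such a subclass, assuming a hypothetical ThirdPartyMiddleware whose process_spider_output handles each result independently:

# ThirdPartyMiddleware and its import path are hypothetical.
from thirdparty.middlewares import ThirdPartyMiddleware


class UniversalThirdPartyMiddleware(ThirdPartyMiddleware):
    # Scrapy 2.7+ prefers this alternative method for asynchronous
    # execution; each result is routed through the inherited
    # synchronous process_spider_output logic one at a time.
    async def process_spider_output_async(self, response, result, spider):
        async for r in result:
            for processed in super().process_spider_output(response, iter([r]), spider):
                yield processed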
Universal spider middlewares
New in version 2.7.
To allow writing a spider middleware that supports asynchronous execution of its process_spider_output method in Scrapy 2.7 and later (avoiding asynchronous-to-synchronous conversions) while maintaining support for older Scrapy versions, you may define process_spider_output as a synchronous method and define an asynchronous generator version of that method with an alternative name, process_spider_output_async:
class UniversalSpiderMiddleware:
    def process_spider_output(self, response, result, spider):
        for r in result:
            # ... do something with r
            yield r

    async def process_spider_output_async(self, response, result, spider):
        async for r in result:
            # ... do something with r
            yield r
This is an interim measure that allows, for a time, writing code that works in Scrapy 2.7 and later without requiring asynchronous-to-synchronous conversions, and that works in earlier Scrapy versions as well.
In some future version of Scrapy, however, this feature will be deprecated and, eventually, in a later version of Scrapy, this feature will be removed, and all spider middlewares will be expected to define their process_spider_output method as an asynchronous generator.