Coroutines¶
New in version 2.0.
Scrapy has partial support for the coroutine syntax.
Supported callables¶
The following callables may be defined as coroutines using async def, and hence use coroutine syntax (e.g. await, async for, async with):
- Request callbacks. If you are using any custom or third-party spider middleware, see Mixing synchronous and asynchronous spider middlewares.
  Changed in version 2.7: Output of async callbacks is now processed asynchronously instead of collecting all of it first.
- The process_item() method of item pipelines.
- The process_request(), process_response(), and process_exception() methods of downloader middlewares (a process_request() sketch follows this list).
- The process_spider_output() method of spider middlewares. It must be defined as an asynchronous generator. The input result parameter is an asynchronous iterable. See also Mixing synchronous and asynchronous spider middlewares and Universal spider middlewares.
  New in version 2.7.
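For instance, a process_request() downloader middleware method defined as a coroutine might look like the following minimal sketch (AuthHeaderMiddleware and fetch_token are hypothetical names, and the example assumes asyncio support is enabled in Scrapy):

import asyncio


async def fetch_token():
    # Hypothetical stand-in for a real asynchronous lookup (a database, an HTTP API, ...).
    await asyncio.sleep(0)
    return "example-token"


class AuthHeaderMiddleware:
    async def process_request(self, request, spider):
        # process_request() may be defined as a coroutine, so awaiting is allowed here.
        token = await fetch_token()
        request.headers["Authorization"] = f"Bearer {token}"
        # Returning None (implicitly) lets Scrapy continue processing the request.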
General usage¶
There are several use cases for coroutines in Scrapy.
Code that would return Deferreds when written for previous Scrapy versions, such as downloader middlewares and signal handlers, can be rewritten to be shorter and cleaner:
from itemadapter import ItemAdapter


class DbPipeline:
    # db stands for some database client whose methods return Deferreds
    def _update_item(self, data, item):
        adapter = ItemAdapter(item)
        adapter["field"] = data
        return item

    def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        dfd = db.get_some_data(adapter["id"])
        dfd.addCallback(self._update_item, item)
        return dfd
becomes:
from itemadapter import ItemAdapter


class DbPipeline:
    async def process_item(self, item, spider):
        adapter = ItemAdapter(item)
        adapter["field"] = await db.get_some_data(adapter["id"])
        return item
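Either version of the pipeline is enabled the same way; a hypothetical settings.py entry (the module path is an assumption):

ITEM_PIPELINES = {
    "myproject.pipelines.DbPipeline": 300,
}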
Coroutines may be used to call asynchronous code. This includes other coroutines, functions that return Deferreds, and functions that return awaitable objects such as Future. This means you can use many useful Python libraries providing such code:
import aiohttp
import treq
from scrapy import Spider


class MySpiderDeferred(Spider):
    # ...
    async def parse(self, response):
        # treq returns Deferreds, which can be awaited in Scrapy coroutines
        additional_response = await treq.get("https://additional.url")
        additional_data = await treq.content(additional_response)
        # ... use response and additional_data to yield items and requests


class MySpiderAsyncio(Spider):
    # ...
    async def parse(self, response):
        async with aiohttp.ClientSession() as session:
            async with session.get("https://additional.url") as additional_response:
                additional_data = await additional_response.text()
                # ... use response and additional_data to yield items and requests
Note
Many libraries that use coroutines, such as aio-libs, require the asyncio loop, and to use them you need to enable asyncio support in Scrapy.
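Enabling asyncio support amounts to installing the asyncio-based Twisted reactor, for example in settings.py:

TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"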
Note
If you want to await on Deferreds while using the asyncio reactor, you need to wrap them.
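One way to wrap them is scrapy.utils.defer.maybe_deferred_to_future(), also used in the inline-request examples below; a minimal sketch:

from twisted.internet.defer import Deferred

from scrapy.utils.defer import maybe_deferred_to_future


async def await_deferred(dfd: Deferred):
    # With the asyncio reactor installed, the Deferred is wrapped so that it
    # can be awaited; with the default reactor it is returned unchanged.
    return await maybe_deferred_to_future(dfd)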
Common use cases for asynchronous code include:
- requesting data from websites, databases and other services (in callbacks, pipelines and middlewares);
- storing data in databases (in pipelines and middlewares);
- delaying the spider initialization until some external event, as sketched after this list (in the spider_opened handler);
- calling asynchronous Scrapy methods like ExecutionEngine.download() (see the screenshot pipeline example).
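A sketch of the delayed-initialization case, assuming the asyncio reactor is enabled (the spider name is hypothetical, and the one-second sleep stands in for waiting on a real external event):

import asyncio

from scrapy import Spider, signals


class DelayedInitSpider(Spider):
    name = "delayed_init"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.spider_opened, signal=signals.spider_opened)
        return spider

    async def spider_opened(self, spider):
        # spider_opened supports asynchronous handlers, so the crawl does not
        # start until this coroutine finishes.
        await asyncio.sleep(1)  # hypothetical stand-in for an external event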
Inline requests¶
The spider below shows how to send a request and await its response all from within a spider callback:
from scrapy import Spider, Request
from scrapy.utils.defer import maybe_deferred_to_future


class SingleRequestSpider(Spider):
    name = "single"
    start_urls = ["https://example.org/product"]

    async def parse(self, response, **kwargs):
        additional_request = Request("https://example.org/price")
        deferred = self.crawler.engine.download(additional_request)
        additional_response = await maybe_deferred_to_future(deferred)
        yield {
            "h1": response.css("h1").get(),
            "price": additional_response.css("#price").get(),
        }
You can also send multiple requests in parallel:
from scrapy import Spider, Request
from scrapy.utils.defer import maybe_deferred_to_future
from twisted.internet.defer import DeferredList


class MultipleRequestsSpider(Spider):
    name = "multiple"
    start_urls = ["https://example.com/product"]

    async def parse(self, response, **kwargs):
        additional_requests = [
            Request("https://example.com/price"),
            Request("https://example.com/color"),
        ]
        deferreds = []
        for r in additional_requests:
            deferred = self.crawler.engine.download(r)
            deferreds.append(deferred)
        # DeferredList resolves to a list of (success, result) tuples, in the
        # original order, hence the [1] indexing to get each response.
        responses = await maybe_deferred_to_future(DeferredList(deferreds))
        yield {
            "h1": response.css("h1::text").get(),
            "price": responses[0][1].css(".price::text").get(),
            "color": responses[1][1].css(".color::text").get(),
        }
Mixing synchronous and asynchronous spider middlewares¶
New in version 2.7.
The output of a Request callback is passed as the result parameter to the process_spider_output() method of the first spider middleware from the list of active spider middlewares. Then the output of that process_spider_output method is passed to the process_spider_output method of the next spider middleware, and so on for every active spider middleware.
Scrapy supports mixing coroutine methods and synchronous methods in this chain of calls.
However, if any of the process_spider_output methods is defined as a synchronous method, and the previous Request callback or process_spider_output method is a coroutine, there are some drawbacks to the asynchronous-to-synchronous conversion that Scrapy does so that the synchronous process_spider_output method gets a synchronous iterable as its result parameter:
- The whole output of the previous Request callback or process_spider_output method is awaited at this point.
- If an exception is raised while awaiting the output of the previous Request callback or process_spider_output method, none of that output will be processed. This contrasts with the regular behavior, where all items yielded before an exception is raised are processed.
Asynchronous-to-synchronous conversions are supported for backward compatibility, but they are deprecated and will stop working in a future version of Scrapy.
To avoid asynchronous-to-synchronous conversions, when defining Request callbacks as coroutine methods or when using spider middlewares whose process_spider_output method is an asynchronous generator, all active spider middlewares must either have their process_spider_output method defined as an asynchronous generator or define a process_spider_output_async method.
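A minimal sketch of a spider middleware that meets this requirement on Scrapy 2.7 and later (the class name is hypothetical; see Universal spider middlewares below for a version that also supports older Scrapy):

class AsyncOnlySpiderMiddleware:
    async def process_spider_output(self, response, result, spider):
        # Defined as an asynchronous generator, so no
        # asynchronous-to-synchronous conversion is needed.
        async for item_or_request in result:
            yield item_or_request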
Note
When using third-party spider middlewares that only define a synchronous process_spider_output method, consider making them universal through subclassing.
Universal spider middlewares¶
New in version 2.7.
To allow writing a spider middleware that supports asynchronous execution of its process_spider_output method in Scrapy 2.7 and later (avoiding asynchronous-to-synchronous conversions) while maintaining support for older Scrapy versions, you may define process_spider_output as a synchronous method and define an asynchronous generator version of that method with an alternative name: process_spider_output_async.
For example:
class UniversalSpiderMiddleware:
    def process_spider_output(self, response, result, spider):
        for r in result:
            # ... do something with r
            yield r

    async def process_spider_output_async(self, response, result, spider):
        async for r in result:
            # ... do something with r
            yield r
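Such a middleware is enabled like any other spider middleware; a hypothetical settings.py entry:

SPIDER_MIDDLEWARES = {
    "myproject.middlewares.UniversalSpiderMiddleware": 543,
}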
Note
This is an interim measure to allow, for a time, writing code that works in Scrapy 2.7 and later without requiring asynchronous-to-synchronous conversions, and that also works in earlier Scrapy versions. In some future version of Scrapy, however, this feature will be deprecated and, eventually, in a later version, removed; all spider middlewares will then be expected to define their process_spider_output method as an asynchronous generator.