# Deploying code to Scrapy Cloud projects
For information about deploying your code to a project, see:
For information about Scrapy Cloud stacks, see:
For information about installing additional Python packages into stacks, see:
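Deployment to a stack is typically configured through a `scrapinghub.yml` file in the project root. The following is a minimal sketch, not a definitive reference; the project ID, stack name, and requirements file name are placeholders you would replace with your own values:

```yaml
# scrapinghub.yml — minimal stack deployment configuration (sketch).
# 12345 is a placeholder project ID; replace it with your own.
project: 12345
stacks:
  # Pin a stack so deployments are reproducible; the exact stack
  # name and version depend on what Scrapy Cloud currently offers.
  default: scrapy:2.11
requirements:
  # Extra Python packages to install on top of the stack.
  file: requirements.txt
```

With a file like this in place, `shub deploy` uploads the project to Scrapy Cloud and installs the packages listed in `requirements.txt` on top of the chosen stack.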
When using a stack, you can use Scrapy Cloud addons to extend your code, including:
- AutoThrottle, to crawl gently.
- DeltaFetch, to crawl only new pages.
- DotScrapy Persistence, to persist data between jobs.
- Images, to download images into S3 storage.
- Magic Fields, to add item fields.
- Page Storage, to store visited pages.
- Query Cleaner, to clean request URL query parameters.
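Several of the addons above correspond to Scrapy settings that can also be set directly in the project. As an illustration, this sketch enables Scrapy's built-in AutoThrottle extension in `settings.py`; the delay and concurrency values are arbitrary examples, not recommendations:

```python
# settings.py (sketch) — enable AutoThrottle so the crawler backs off
# automatically based on server response times.
AUTOTHROTTLE_ENABLED = True
# Initial download delay, in seconds, before feedback kicks in.
AUTOTHROTTLE_START_DELAY = 5.0
# Average number of requests to send in parallel to each remote site.
AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
```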
## Using Docker images
For information about using Docker images to deploy your code, see:
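When a custom Docker image is used instead of a stack, `scrapinghub.yml` declares that as well. A minimal sketch, assuming the project ID is a placeholder:

```yaml
# scrapinghub.yml — deploy from a custom Docker image (sketch).
project: 12345
# Setting image to true tells shub to build and deploy a Docker
# image (from the project's Dockerfile) instead of using a stack.
image: true
```

With this configuration, the shub CLI's image commands (for example `shub image upload`) build the image from the project's `Dockerfile`, push it, and deploy it to the project.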