Welcome to scrawler’s documentation!

“scrawler” = “scraper” + “crawler”

scrawler provides functionality for automatically collecting website data (web scraping) and for following links to map an entire domain (crawling). It can handle these tasks individually, or process several websites/domains in parallel using asyncio and multithreading.

This project was initially developed while working at the Fraunhofer Institute for Systems and Innovation Research. Many thanks for the opportunity and support!

Installation

You can install scrawler from PyPI:

pip install scrawler

Note

Alternatively, the .whl and .tar.gz files are available on GitHub, attached to each release.

Getting Started

Check out the Getting Started Guide.

Important concepts and classes

Website Object

The basic object holding the information on one website. It is essentially a wrapper around a BeautifulSoup object constructed from the website's HTML text, adding information such as the URL (and its parts) and the HTTP response received when fetching the site.
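
To make this concrete, the snippet below assembles by hand roughly what a Website object bundles, using requests and BeautifulSoup. This is a conceptual sketch, not scrawler's internal code, and the URL is a placeholder:

from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

url = "https://example.com/blog/post"
response = requests.get(url, timeout=10)             # the HTTP response
soup = BeautifulSoup(response.text, "html.parser")   # the parsed HTML
url_parts = urlparse(url)                            # scheme, netloc, path, ...

# A Website object keeps all of this together, so data extractors can read
# the parsed HTML, the URL (and its parts), and the response from one place.
print(url_parts.netloc, response.status_code, soup.title)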

Crawling/Scraping attribute objects

Used for specifying options for the crawling/scraping processes, such as which data to collect, which URLs to include, and where to save the data. The three attribute classes are described below, followed by a combined usage sketch.

SearchAttributes

Specify which data to collect/search for in the website.

ExportAttributes

Specify how and where to export the collected data.

CrawlingAttributes

Specify how to conduct the crawling, including filtering irrelevant URLs or limiting the number of crawled URLs.
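
The three attribute objects are typically created once and handed to a crawler. The sketch below shows the intended wiring; the import paths, constructor parameters, and extractor classes are assumptions made for illustration and may differ from the actual API (see the Getting Started Guide for the real signatures):

# Hedged sketch: parameter names and import paths below are assumptions.
from scrawler import Crawler
from scrawler.attributes import (SearchAttributes, ExportAttributes,
                                 CrawlingAttributes)
from scrawler.data_extractors import TitleExtractor, UrlExtractor

crawler = Crawler(
    "https://example.com",
    search_attributes=SearchAttributes(
        UrlExtractor(), TitleExtractor()    # which data to collect
    ),
    export_attributes=ExportAttributes(
        directory="/path/to/output",        # where to save the data
        fn="example_data"                   # assumed file-name parameter
    ),
    crawling_attributes=CrawlingAttributes(
        filter_foreign_urls="auto",         # assumed: stay on the start domain
        max_no_urls=1000                    # assumed: cap on crawled URLs
    ),
)
data = crawler.run()                        # assumed entry point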

Data Extractors

Data extractors are functions used to retrieve various data points from Website objects.
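
As a minimal illustration of that contract, a custom extractor could be a callable that receives a Website object and returns one data point. The .soup attribute used below is an assumption about how Website exposes its parsed HTML:

def word_count(website) -> int:
    """Hypothetical extractor: count the words in the page's visible text."""
    text = website.soup.get_text()  # assumes Website exposes its BeautifulSoup object as .soup
    return len(text.split())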
