Web Crawler Using WebMagic
Last updated: October 5, 2025
1. Introduction
A web crawler, or spider, is a program that automatically browses and indexes content on the web. Search engines rely on crawlers to scan webpages so that information can be retrieved, updated, and indexed when users perform search queries.
WebMagic is a simple, powerful, and scalable web crawler framework. It draws inspiration from Python’s popular framework, Scrapy. It handles HTTP requests, HTML parsing, task scheduling, and data pipeline processing with minimal boilerplate.
In this tutorial, we’ll explore WebMagic, its architecture, setup, and a basic Hello World example.
2. Architecture
WebMagic is built with a modular and extensible architecture. Let’s take a look at its core components:
2.1. Spider
Spider is the main engine that orchestrates the entire crawling process. It takes the initial URL and invokes the downloader, processor, and pipeline.
2.2. Scheduler
The main job of the Scheduler is to manage the queue of URLs that need to be crawled. It also prevents duplicate crawling by keeping track of visited URLs, and it hands requests one at a time to the Downloader for further processing. WebMagic ships with in-memory, file-based, and Redis-backed schedulers, and we can also plug in a custom one.
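For instance, here's a sketch of swapping in the file-based scheduler from webmagic-extension so the URL queue survives restarts. The class names SchedulerDemo and NoOpProcessor, as well as the cache path, are our own placeholders:

```java
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.processor.PageProcessor;
import us.codecraft.webmagic.scheduler.FileCacheQueueScheduler;

public class SchedulerDemo {

    // A minimal no-op processor, just to build a runnable spider for the demo
    static class NoOpProcessor implements PageProcessor {
        private final Site site = Site.me();

        @Override
        public void process(Page page) {
            // no extraction; this demo only shows scheduler wiring
        }

        @Override
        public Site getSite() {
            return site;
        }
    }

    public static void main(String[] args) {
        // FileCacheQueueScheduler persists the URL queue and the visited-URL
        // set to files under the given directory, so an interrupted crawl
        // can resume where it left off
        Spider spider = Spider.create(new NoOpProcessor())
          .addUrl("https://books.toscrape.com/")
          .setScheduler(new FileCacheQueueScheduler("/tmp/webmagic-cache"));
        // spider.run();  // uncomment to start crawling
    }
}
```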
2.3. Downloader
The Downloader handles the actual HTTP requests and fetches the HTML content from the web. The default implementation uses Apache HttpClient, but we can swap in OkHttp or any other library. Once a page is downloaded, it's passed to the PageProcessor.
2.4. PageProcessor
The PageProcessor is the heart of the crawler logic. As its name suggests, it defines how to extract the target data (like product names or prices) and which new links to crawl from a page. We implement its process() method to parse the response and extract the required information.
Once extracted, the data is sent to the Pipeline, and the new links to crawl are sent back to the Scheduler.
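To illustrate this data flow, here's a sketch of a processor that publishes fields to the Pipeline with page.putField() and feeds discovered links back to the Scheduler with page.addTargetRequests(). The class name BookPageProcessor and the link regex are our own illustrative choices:

```java
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.processor.PageProcessor;

public class BookPageProcessor implements PageProcessor {

    private final Site site = Site.me().setRetryTimes(3).setSleepTime(1000);

    @Override
    public void process(Page page) {
        // Extracted fields are handed to the Pipeline via putField
        page.putField("title", page.getHtml().css("h3 a", "title").get());
        page.putField("price", page.getHtml().css(".price_color", "text").get());
        // Newly discovered links go back to the Scheduler for crawling
        page.addTargetRequests(page.getHtml().links().regex(".*catalogue.*").all());
    }

    @Override
    public Site getSite() {
        return site;
    }
}
```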
2.5. Pipeline
Pipeline handles the post-processing of the extracted data. The most common operations are either saving the extracted data to a database or writing it to a file or console.
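As a sketch, a custom console pipeline only needs to implement the Pipeline interface's process() method; the class name ConsoleBookPipeline and the field names are our own, matching the putField() keys a processor would use:

```java
import us.codecraft.webmagic.ResultItems;
import us.codecraft.webmagic.Task;
import us.codecraft.webmagic.pipeline.Pipeline;

public class ConsoleBookPipeline implements Pipeline {

    @Override
    public void process(ResultItems resultItems, Task task) {
        // Read the fields the PageProcessor published via page.putField(...)
        String title = resultItems.get("title");
        String price = resultItems.get("price");
        System.out.println("Title: " + title + " | Price: " + price);
    }
}
```

In the same spot we could instead write the fields to a database or a file; WebMagic also ships with ready-made implementations such as ConsolePipeline and FilePipeline.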
3. Setup With Maven
WebMagic is published to Maven Central, so it's easiest to manage our project with Maven. Let's add the following dependencies to our pom.xml file:
<dependency>
<groupId>us.codecraft</groupId>
<artifactId>webmagic-core</artifactId>
<version>1.0.3</version>
</dependency>
<dependency>
<groupId>us.codecraft</groupId>
<artifactId>webmagic-extension</artifactId>
<version>1.0.3</version>
</dependency>
Also, WebMagic depends on SLF4J with the slf4j-log4j12 binding. If our project already uses another SLF4J binding (such as Logback), we need to exclude slf4j-log4j12 from both WebMagic dependencies to avoid conflicts:
<exclusions>
<exclusion>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-log4j12</artifactId>
</exclusion>
</exclusions>
4. Hello World Example
Let’s look at an example where we’ll crawl the books.toscrape.com site and print the first 10 book titles and prices on the console.
import us.codecraft.webmagic.Page;
import us.codecraft.webmagic.Site;
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.processor.PageProcessor;

public class BookScraper implements PageProcessor {

    private Site site = Site.me().setRetryTimes(3).setSleepTime(1000);

    @Override
    public void process(Page page) {
        var books = page.getHtml().css("article.product_pod");
        for (int i = 0; i < Math.min(10, books.nodes().size()); i++) {
            var book = books.nodes().get(i);
            String title = book.css("h3 a", "title").get();
            String price = book.css(".price_color", "text").get();
            System.out.println("Title: " + title + " | Price: " + price);
        }
    }

    @Override
    public Site getSite() {
        return site;
    }

    public static void main(String[] args) {
        Spider.create(new BookScraper())
          .addUrl("https://books.toscrape.com/")
          .thread(1)
          .run();
    }
}
In the above example, we defined a class BookScraper that implements PageProcessor. The methods process() and getSite() define how to scrape the page and configure the crawler. The line below configures the crawler to retry failed requests up to three times and wait one second between requests to help avoid being blocked:
private Site site = Site.me().setRetryTimes(3).setSleepTime(1000);
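Beyond retries and sleep time, Site exposes several other commonly used options. A quick sketch, with illustrative values of our own choosing:

```java
import us.codecraft.webmagic.Site;

Site site = Site.me()
  .setRetryTimes(3)             // retry failed downloads up to 3 times
  .setSleepTime(1000)           // wait 1 second between requests
  .setTimeOut(10000)            // HTTP timeout in milliseconds
  .setUserAgent("Mozilla/5.0")  // send a custom User-Agent header
  .setCharset("UTF-8");         // force the response charset
```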
The process() method contains the actual scraping logic. It selects all the HTML article elements on the books.toscrape.com page that carry the product_pod CSS class, then iterates over the first ten books, using CSS selectors to extract and print each title and price.
In the main method, we create a new WebMagic spider using our class, point it at the bookstore's homepage, and run it with a single thread to start crawling.
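The same builder chain can also wire in pipelines and more worker threads. The sketch below shows the pattern rather than a drop-in change: built-in pipelines like ConsolePipeline and FilePipeline only see data published via page.putField(), so our print-based BookScraper would need to put fields instead of printing them for the pipelines to receive anything:

```java
import us.codecraft.webmagic.Spider;
import us.codecraft.webmagic.pipeline.ConsolePipeline;
import us.codecraft.webmagic.pipeline.FilePipeline;

Spider spider = Spider.create(new BookScraper())
  .addUrl("https://books.toscrape.com/")
  .addPipeline(new ConsolePipeline())            // print extracted fields
  .addPipeline(new FilePipeline("/tmp/books/"))  // also write them to files
  .thread(4);                                    // four worker threads
// spider.run();  // start the crawl
```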
Let's take a look at the program's output, showing the title and price of the first 10 books:
17:02:26.460 [main] INFO us.codecraft.webmagic.Spider -- Spider books.toscrape.com started!
Title: A Light in the Attic | Price: £51.77
Title: Tipping the Velvet | Price: £53.74
Title: Soumission | Price: £50.10
Title: Sharp Objects | Price: £47.82
Title: Sapiens: A Brief History of Humankind | Price: £54.23
Title: The Requiem Red | Price: £22.65
Title: The Dirty Little Secrets of Getting Your Dream Job | Price: £33.34
Title: The Coming Woman: A Novel Based on the Life of the Infamous Feminist, Victoria Woodhull | Price: £17.93
Title: The Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics | Price: £22.60
Title: The Black Maria | Price: £52.15
get page: https://books.toscrape.com/
5. Conclusion
In this tutorial, we looked into WebMagic, its architecture, and setup details. WebMagic offers a simple and powerful approach to building web crawlers in Java. Its design allows developers to focus on extracting data rather than writing boilerplate code for HTTP, parsing, and threading.
As seen in the example, with just a few lines of code, we created a working crawler and were able to extract book names and prices.
The code backing this article is available on GitHub.