How to Collect a Corpus of Websites With a Web Crawler
Contribution to a Book
SAGE Research Methods: Doing Research Online
Conducting research on digital cultures often requires some form of reference to online sources, but online sources change constantly: they may be updated or deleted from one minute to the next. This guide introduces web crawlers as one method for gathering a stable, trustworthy collection of online sources. A corpus generated with a web crawler can serve as a detailed snapshot of an online resource as it existed at a particular point in time. The guide begins with an introduction to the theory behind web crawling before moving to ethical concerns and commonly used tools. After addressing each of these foundational areas, the guide concludes with a step-by-step demonstration of web crawling using Wget, a popular open-source, command-line download tool.
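A Wget-based crawl of the kind the guide demonstrates typically centers on a single recursive download command. The sketch below is illustrative only: the target URL is a placeholder, not drawn from the chapter, though the flags are standard GNU Wget options commonly used for archiving.

```shell
# A minimal sketch of a Wget archiving crawl, assuming a placeholder target URL.
#   --mirror          recursive download with timestamping, suited to archiving
#   --convert-links   rewrite links so the local copy can be browsed offline
#   --page-requisites also fetch the images, stylesheets, and scripts each page needs
#   --wait=1          pause one second between requests to limit server load
CMD="wget --mirror --convert-links --page-requisites --wait=1 https://example.com/"
echo "$CMD"   # print the command; invoke it directly to actually run the crawl
```

The one-second wait and the link conversion reflect two recurring concerns in corpus building: courtesy toward the host server and long-term usability of the downloaded snapshot.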
crawling, search engines, software, websites
James A. Hodges, "How to Collect a Corpus of Websites With a Web Crawler," SAGE Research Methods: Doing Research Online (2022), https://doi.org/10.4135/9781529609325