| Developer(s) | DigitalPebble, Ltd. |
| --- | --- |
| Initial release | September 11, 2014 |
| Stable release | 2.8 / March 29, 2023 |
| Written in | Java |
| Type | Web crawler |
| License | Apache License |
| Website | stormcrawler |
StormCrawler is an open-source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.
StormCrawler is modular and consists of a core module, which provides the basic building blocks of a web crawler such as fetching, parsing, and URL filtering. Beyond the core components, the project also provides external resources, such as spouts and bolts for Elasticsearch and Apache Solr, or a ParserBolt that uses Apache Tika to parse various document formats.
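In practice, these spouts and bolts are wired together into a Storm topology. A minimal sketch of such a wiring, using Storm's Flux YAML format, might look like the following; the component class names and stream groupings are illustrative and may differ between StormCrawler versions and deployments:

```yaml
# Hypothetical Flux topology sketch wiring StormCrawler components.
# Class names and groupings are assumptions for illustration only.
name: "crawler"

spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.spout.MemorySpout"
    parallelism: 1

bolts:
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 1
  - id: "parser"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 1

streams:
  # URLs emitted by the spout are fetched, then parsed
  - from: "spout"
    to: "fetcher"
    grouping:
      type: SHUFFLE
  - from: "fetcher"
    to: "parser"
    grouping:
      type: LOCAL_OR_SHUFFLE
```

Because each stage is a separate Storm component, individual steps (fetching, parsing, indexing) can be scaled independently by adjusting their parallelism.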
The project is used by various organisations,[1] notably Common Crawl,[2] which uses it to generate a large and publicly available dataset of news.
Linux.com published a Q&A with the author of StormCrawler in October 2016,[3] and InfoQ ran one in December 2016.[4] A comparative benchmark with Apache Nutch was published on dzone.com in January 2017.[5]
Several research papers have mentioned the use of StormCrawler.
The project wiki contains a list of videos and slides available online.[9]