heimrichhannot / crawler
Crawl all internal links found on a website
Requires
- php: ^7.4|^8.0
- guzzlehttp/guzzle: ^7.2
- guzzlehttp/psr7: ^1.7
- illuminate/collections: ^8.17
- nicmart/tree: ^0.3.0
- spatie/browsershot: ^3.14
- spatie/robots-txt: ^1.0.9
- symfony/dom-crawler: ^5.2
Requires (Dev)
- phpunit/phpunit: ^9.4
README
A fork of spatie/crawler v2 with some adjustments. Only used for an internal project.
Crawl links on a website
This package provides a class to crawl links on a website. Under the hood Guzzle promises are used to crawl multiple urls concurrently.
Because the crawler can execute JavaScript, it can crawl JavaScript-rendered sites. Under the hood, headless Chrome is used to power this feature.
Spatie is a webdesign agency in Antwerp, Belgium. You'll find an overview of all our open source projects on our website.
Installation
This package can be installed via Composer:
composer require heimrichhannot/crawler
Usage
The crawler can be instantiated like this:
Crawler::create()
    ->setCrawlObserver(<implementation of \Spatie\Crawler\CrawlObserver>)
    ->startCrawling($url);
The argument passed to setCrawlObserver must be an object that implements the \Spatie\Crawler\CrawlObserver interface:
/**
 * Called when the crawler will crawl the given url.
 *
 * @param \Spatie\Crawler\Url $url
 */
public function willCrawl(Url $url);

/**
 * Called when the crawler has crawled the given url.
 *
 * @param \Spatie\Crawler\Url $url
 * @param \Psr\Http\Message\ResponseInterface $response
 * @param \Spatie\Crawler\Url $foundOn
 */
public function hasBeenCrawled(Url $url, $response, Url $foundOn = null);

/**
 * Called when the crawl has ended.
 */
public function finishedCrawling();
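For illustration, here is a minimal sketch of an observer that implements the interface above and simply logs every crawled url. The LoggingCrawlObserver class name and the echo-based logging are examples only, not part of the package, and the sketch assumes the Url class can be cast to a string as in upstream spatie/crawler v2:
use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlObserver;
use Spatie\Crawler\Url;

class LoggingCrawlObserver implements CrawlObserver
{
    public function willCrawl(Url $url)
    {
        // called right before the url is requested
    }

    public function hasBeenCrawled(Url $url, $response, Url $foundOn = null)
    {
        // $response is the \Psr\Http\Message\ResponseInterface for the crawled url
        echo "Crawled: {$url}" . ($foundOn ? " (found on {$foundOn})" : '') . PHP_EOL;
    }

    public function finishedCrawling()
    {
        echo "Crawl finished" . PHP_EOL;
    }
}

Crawler::create()
    ->setCrawlObserver(new LoggingCrawlObserver())
    ->startCrawling('https://example.com');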
Executing JavaScript
By default the crawler will not execute JavaScript. This is how you can enable the execution of JavaScript:
Crawler::create()
    ->executeJavaScript()
    ...
Under the hood headless Chrome is used to execute JavaScript. Here are some pointers on how to install it on your system.
The package will make an educated guess as to where Chrome is installed on your system. You can also manually pass the location of the Chrome binary to executeJavaScript():
Crawler::create()
    ->executeJavaScript($pathToChrome)
    ...
Filtering certain urls
You can tell the crawler not to visit certain urls by using the setCrawlProfile function. That function expects an object that implements the Spatie\Crawler\CrawlProfile interface:
/*
 * Determine if the given url should be crawled.
 */
public function shouldCrawl(Url $url): bool;
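You could, for instance, write a profile that skips an entire section of a site. A minimal sketch, assuming (as in the upstream v2 Url class) that Url can be cast to a string; the class name and the "/admin" rule are examples only:
use Spatie\Crawler\CrawlProfile;
use Spatie\Crawler\Url;

class IgnoreAdminUrls implements CrawlProfile
{
    public function shouldCrawl(Url $url): bool
    {
        // skip every url that contains "/admin" anywhere in it
        return strpos((string) $url, '/admin') === false;
    }
}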
This package comes with three CrawlProfiles out of the box:
- CrawlAllUrls: this profile will crawl all urls on all pages, including urls to external sites.
- CrawlInternalUrls: this profile will only crawl the internal urls on the pages of a host.
- CrawlSubdomainUrls: this profile will only crawl the urls on the pages of a host and its subdomains.
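For example, to limit a crawl to a single host you can pass the built-in CrawlInternalUrls profile via setCrawlProfile. The class namespace and the constructor argument below are assumptions based on the upstream spatie/crawler v2 API:
use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlInternalUrls;

Crawler::create()
    ->setCrawlProfile(new CrawlInternalUrls('https://example.com'))
    ->setCrawlObserver(new LoggingCrawlObserver()) // the example observer from above
    ->startCrawling('https://example.com');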
Setting the number of concurrent requests
To improve the speed of the crawl, the package concurrently crawls 10 urls by default. If you want to change that number, you can use the setConcurrency method.
Crawler::create()
    ->setConcurrency(1) // now all urls will be crawled one by one
Setting the maximum crawl count
By default, the crawler continues until it has crawled every page of the supplied URL. If you want to limit the number of urls the crawler should crawl, you can use the setMaximumCrawlCount method.
// stop crawling after 5 urls
Crawler::create()
    ->setMaximumCrawlCount(5)
Setting the maximum crawl depth
By default, the crawler continues until it has crawled every page of the supplied URL. If you want to limit the depth of the crawl, you can use the setMaximumDepth method.
Crawler::create()
    ->setMaximumDepth(2)
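These settings can be combined. A minimal sketch tying the options above together; the url and the concrete limits are arbitrary examples:
use Spatie\Crawler\Crawler;

Crawler::create()
    ->setCrawlObserver(new LoggingCrawlObserver()) // the example observer from above
    ->setConcurrency(5)          // crawl 5 urls at a time
    ->setMaximumCrawlCount(100)  // stop after 100 crawled urls
    ->setMaximumDepth(3)         // do not follow links deeper than 3 levels
    ->startCrawling('https://example.com');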
Using a custom crawl queue
When crawling a site the crawler will put urls to be crawled in a queue. By default, this queue is stored in memory using the built-in CollectionCrawlQueue.
When a site is very large you may want to store that queue elsewhere, for example in a database. In such cases you can write your own crawl queue.
A valid crawl queue is any class that implements the Spatie\Crawler\CrawlQueue\CrawlQueue interface. You can pass your custom crawl queue via the setCrawlQueue method on the crawler.
Crawler::create()
    ->setCrawlQueue(<implementation of \Spatie\Crawler\CrawlQueue\CrawlQueue>)
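For example, assuming you have written a hypothetical RedisCrawlQueue class that implements the interface above, wiring it into the crawler looks like this sketch:
use Spatie\Crawler\Crawler;

Crawler::create()
    ->setCrawlQueue(new RedisCrawlQueue())         // hypothetical custom queue
    ->setCrawlObserver(new LoggingCrawlObserver()) // the example observer from above
    ->startCrawling('https://example.com');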
Changelog
Please see CHANGELOG for more information on what has changed recently.
Contributing
Please see CONTRIBUTING for details.
Testing
To run the tests you'll have to start the included Node-based server first in a separate terminal window.
cd tests/server
npm install
./start_server.sh
With the server running, you can start testing.
vendor/bin/phpunit
Security
If you discover any security related issues, please email freek@spatie.be instead of using the issue tracker.
Postcardware
You're free to use this package, but if it makes it to your production environment we highly appreciate you sending us a postcard from your hometown, mentioning which of our package(s) you are using.
Our address is: Spatie, Samberstraat 69D, 2060 Antwerp, Belgium.
We publish all received postcards on our company website.
Credits
Support us
Spatie is a webdesign agency based in Antwerp, Belgium. You'll find an overview of all our open source projects on our website.
Does your business depend on our contributions? Reach out and support us on Patreon. All pledges will be dedicated to allocating workforce on maintenance and new awesome stuff.
License
The MIT License (MIT). Please see License File for more information.