When a bot visits a website, it reads the HTML of a page and follows the links it finds there. If your website is built using JavaScript (whether the whole site, or only parts of the content and navigation), the main content of the page may not be present in the HTML, but constructed dynamically after the HTML is fetched. In that case, the bot won't see the content or links on the page, and won't be able to discover the pages in your site.
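A minimal sketch of what a non-rendering bot "sees" on a JavaScript-built page. This is not Oncrawl's crawler; the HTML, URLs, and the LinkExtractor class are invented for the illustration:

```python
from html.parser import HTMLParser

# What the server actually returns for a JS-built page: an empty
# container and a script tag. The real content and links only exist
# after the browser executes /static/app.js (paths are made up here).
raw_html = """
<html>
  <body>
    <div id="app"></div>
    <script src="/static/app.js"></script>
  </body>
</html>
"""

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags, like a simple bot would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

parser = LinkExtractor()
parser.feed(raw_html)
print(parser.links)  # [] -- no links found, so no further pages discovered
```

Run against the raw HTML, the extractor finds nothing to follow, which is exactly why a crawl without rendering can come back nearly empty on a JavaScript site.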
You can solve this by pre-rendering the JavaScript. It’s easy to do in Oncrawl!
Top three reasons to render JavaScript for a crawl
Some sites are impossible to crawl at all without JavaScript enabled. That doesn't change the fact that you need information about how the website performs, and a crawl is the only way to get it.
JavaScript can alter content and links by adding information to the page. Rendering JavaScript ensures that you're analyzing the "real" version of your website, and that it behaves as intended.
JavaScript can be used to redirect one URL to another. Crawling with JavaScript helps you make sure you've audited all of your redirected pages.
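To illustrate the redirect case: a page can return HTTP 200 while JavaScript immediately sends visitors elsewhere. A crawler that doesn't render JavaScript records a normal page and misses the hop. The page source and URLs below are invented; the regex is only a rough heuristic, whereas actual rendering observes the redirect directly:

```python
import re

# Hypothetical page source: the server answers 200 OK, but JavaScript
# redirects the visitor as soon as the page loads.
page_source = """
<html>
  <head>
    <script>window.location.href = "https://example.com/new-page";</script>
  </head>
  <body></body>
</html>
"""

# Without rendering, a crawler sees an ordinary 200 page with no body.
# Scanning for location assignments (a crude stand-in for rendering)
# reveals the redirect target.
match = re.search(r'window\.location(?:\.href)?\s*=\s*["\']([^"\']+)', page_source)
print(match.group(1) if match else None)  # https://example.com/new-page
```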
Enable the Crawl JS option
Click Set up a new crawl and choose the crawl profile you want to enable JavaScript rendering for.
Click on the Crawl JS title to expand the section.
Check the Crawl the website as a JavaScript website option.
Now you can launch the crawl as usual.
Crawling and rendering the JavaScript for a URL will consume 3 tokens from your quota, compared to just 1 token for only crawling the URL.
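The quota math is worth doing before you launch. A quick sketch using the costs above (the site size is an arbitrary example):

```python
# Per-URL costs stated above: 3 tokens with JavaScript rendering, 1 without.
COST_WITH_JS = 3
COST_WITHOUT_JS = 1

pages = 50_000  # example site size, not a real quota

tokens_with_js = pages * COST_WITH_JS
tokens_without_js = pages * COST_WITHOUT_JS
print(tokens_with_js)     # 150000
print(tokens_without_js)  # 50000
```

In other words, a JavaScript crawl of the same site costs three times as many tokens, so a quota that comfortably covers a standard crawl may not cover a rendered one.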
Best practices for crawling JS
If your site is built using JavaScript, it's still worth seeing what it looks like with JavaScript disabled. Run your crawl both with and without JavaScript enabled and compare the results.
The JavaScript crawl is not compatible with authentication by username and password. If you're working with a pre-production website built with JavaScript and protected by a login, you may need to plan accordingly.
Check your quotas before running a JavaScript crawl. JavaScript crawls use more resources than standard crawls, so each URL crawled with JavaScript consumes 3 tokens from your quota instead of 1.