Screaming Frog Download for Windows 16.3
Screaming Frog SEO Spider is a website crawler that lets you crawl a website’s URLs and fetch key onsite elements to analyze and audit technical and onsite SEO. Download it for free, or purchase a license for additional advanced features.
The SEO Spider app is a powerful and flexible site crawler, able to crawl both small and very large websites efficiently while allowing you to analyze the results in real-time. It gathers key onsite data to allow SEOs to make informed decisions.
Screaming Frog SEO Spider is a Java application that provides a simple means of gathering SEO information about any given site, generating multiple reports, and exporting the results to disk.
The interface might seem a bit cluttered at first, as it consists of a menu bar and multiple tabbed panes displaying various information. However, a comprehensive User Guide and FAQs are available on the developer’s website, so both novice and power users can find their way around without issues.
What can you do with Screaming Frog SEO Spider Software?
- Find Broken Links: Crawl a website instantly and find broken links (404s) and server errors. Bulk export the errors and source URLs to fix, or send to a developer.
- Audit Redirects: Find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.
- Analyze Page Titles & Meta Data: Analyze page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicated across your site.
- Discover Duplicate Content: Discover exact duplicate URLs with an MD5 algorithmic check, find partially duplicated elements such as page titles, descriptions, or headings, and identify low-content pages.
- Extract Data with XPath: Collect any data from the HTML of a web page using CSS Path, XPath, or regex. This might include social meta tags, additional headings, prices, SKUs, or more!
- Review Robots & Directives: View URLs blocked by robots.txt, meta robots, or X-Robots-Tag directives such as ‘noindex’ or ‘nofollow’, as well as canonicals and rel=“next” and rel=“prev”.
- Generate XML Sitemaps: With Screaming Frog you can quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include last modified, priority, and change frequency.
- Integrate with Google Analytics: Connect to the Google Analytics API and fetch user data, such as sessions or bounce rate and conversions, goals, transactions, and revenue for landing pages against the crawl.
- Crawl JavaScript Websites: Render web pages using the integrated Chromium WRS to crawl dynamic, JavaScript rich websites and frameworks, such as Angular, React, and Vue.js.
- Visualise Site Architecture: Evaluate internal linking and URL structure using interactive crawl and directory force-directed diagrams and tree graph site visualizations.
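The page title and meta description analysis described above boils down to pulling a few elements out of each page’s HTML. As a minimal, illustrative sketch (not Screaming Frog’s own code), the same extraction can be done with Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

# Extract the <title> and meta description from a page's HTML --
# the kind of onsite data the SEO Spider collects for every URL.
class TitleMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta_description = ""
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name", "").lower() == "description":
            self.meta_description = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

# Made-up example HTML for demonstration.
html = """<html><head>
<title>Example Page</title>
<meta name="description" content="A short meta description.">
</head><body></body></html>"""

parser = TitleMetaParser()
parser.feed(html)
print(parser.title)             # Example Page
print(parser.meta_description)  # A short meta description.
```

From here, checks like “title over 65 characters” or “missing description” are simple comparisons on the extracted strings.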
Screaming Frog SEO Spider For PC Features
- Find Broken Links, Errors & Redirects
- Analyse Page Titles & Meta Data
- Review Meta Robots & Directives
- Audit hreflang Attributes
- Discover Duplicate Pages
- Generate XML Sitemaps
- Site Visualisations
- Crawl Limit
- Scheduling
- Crawl Configuration
- Save Crawls & Re-Upload
- Custom Source Code Search
- Custom Extraction
- Google Analytics Integration
- Search Console Integration
- Link Metrics Integration
- Rendering (JavaScript)
- Custom robots.txt
- AMP Crawling & Validation
- Structured Data & Validation
- Store & View Raw & Rendered HTML
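The robots.txt review the Spider automates can be sketched in a few lines with Python’s standard library. The robots.txt body and URLs below are made-up examples:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: everything is crawlable except /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
Allow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check whether a given user-agent may fetch each URL.
print(rp.can_fetch("Googlebot", "https://example.com/page.html"))  # True
print(rp.can_fetch("Googlebot", "https://example.com/private/x"))  # False
```

The Spider applies this kind of check to every discovered URL and reports the blocked ones, rather than silently skipping them.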
Screaming Frog is an SEO auditing tool, built by real SEOs, with thousands of users worldwide. A quick summary of some of the data collected in a crawl includes:
- Errors – Client errors such as broken links & server errors (No responses, 4XX, 5XX).
- Redirects – Permanent, temporary redirects (3XX responses) & JS redirects.
- Blocked URLs – View & audit URLs disallowed by the robots.txt protocol.
- Blocked Resources – View & audit blocked resources in rendering mode.
- External Links – All external links and their status codes.
- Protocol – Whether the URLs are secure (HTTPS) or insecure (HTTP).
- URI Issues – Non-ASCII characters, underscores, uppercase characters, parameters, or long URLs.
- Duplicate Pages – Exact duplicate page detection via an MD5 hash checksum check.
- Page Titles – Missing, duplicate, over 65 characters, short, pixel width truncation, same as h1, or multiple.
- Meta Description – Missing, duplicate, over 156 characters, short, pixel width truncation, or multiple.
- Meta Keywords – Mainly for reference, as they are not used by Google, Bing, or Yahoo.
- File Size – Size of URLs & images.
- Response Time.
- Last-Modified Header.
- Page (Crawl) Depth.
- Word Count.
- H1 – Missing, duplicate, over 70 characters, multiple.
- H2 – Missing, duplicate, over 70 characters, multiple.
- Meta Robots – Index, noindex, follow, nofollow, noarchive, nosnippet, noodp, noydir, etc.
- Meta Refresh – Including target page and time delay.
- Canonical link element & canonical HTTP headers.
- X-Robots-Tag.
- Pagination – rel=“next” and rel=“prev”.
- Follow & Nofollow – At page and link level (true/false).
- Redirect Chains – Discover redirect chains and loops.
- hreflang Attributes – Audit missing confirmation links, inconsistent & incorrect language codes, non-canonical hreflang, and more.
- AJAX – Select to obey Google’s now deprecated AJAX Crawling Scheme.
- Rendering – Crawl JavaScript frameworks like AngularJS and React, by crawling the rendered HTML after JavaScript has executed.
- Inlinks – All pages linking to a URI.
- Outlinks – All pages a URI links out to.
- Anchor Text – All link text. Alt text from images with links.
- Images – All URIs with the image link & all images from a given page. Images over 100kb, missing alt text, alt text over 100 characters.
- User-Agent Switcher – Crawl as Googlebot, Bingbot, Yahoo! Slurp, mobile user-agents or your own custom UA.
- Custom HTTP Headers – Supply any header value in a request, from Accept-Language to cookie.
- Custom Source Code Search – Find anything you want in the source code of a website! Whether that’s Google Analytics code, specific text, or code, etc.
- Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors, or regex.
- Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
- Google Search Console Integration – Connect to the Google Search Analytics API and collect impression, click, and average position data against URLs.
- External Link Metrics – Pull external link metrics from Majestic, Ahrefs, and Moz APIs into a crawl to perform content audits or profile links.
- XML Sitemap Generation – Create an XML sitemap and an image sitemap using the SEO spider.
- Custom robots.txt – Download, edit, and test a site’s robots.txt using the new custom robots.txt.
- Rendered Screen Shots – Fetch, view, and analyze the rendered pages crawled.
- Store & View HTML & Rendered HTML – Essential for analysing the DOM.
- AMP Crawling & Validation – Crawl AMP URLs and validate them, using the official integrated AMP Validator.
- XML Sitemap Analysis – Crawl an XML Sitemap independently or as part of a crawl, to find missing, non-indexable, and orphan pages.
- Visualizations – Analyse the internal linking and URL structure of the website, using the crawl and directory tree force-directed diagrams and tree graphs.
- Structured Data & Validation – Extract & validate structured data against Schema.org specifications and Google search features.
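The exact-duplicate check mentioned above (hashing each page and comparing checksums) can be illustrated with a short stdlib sketch. The URLs and page bodies are invented example data:

```python
import hashlib

# Hash each page's HTML with MD5 and group URLs that share a digest --
# the principle behind the Spider's exact-duplicate detection.
pages = {
    "https://example.com/a": "<html><body>Same content</body></html>",
    "https://example.com/b": "<html><body>Same content</body></html>",
    "https://example.com/c": "<html><body>Other content</body></html>",
}

digests = {}
for url, body in pages.items():
    digest = hashlib.md5(body.encode("utf-8")).hexdigest()
    digests.setdefault(digest, []).append(url)

# Any digest shared by two or more URLs is an exact-duplicate group.
duplicates = [urls for urls in digests.values() if len(urls) > 1]
print(duplicates)  # [['https://example.com/a', 'https://example.com/b']]
```

Note that MD5 only catches byte-for-byte duplicates; near-duplicates (same body, different title) need the element-level comparisons listed above.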
You can check the response time of multiple links, view page titles, their occurrences, length, and pixel width. It is possible to view huge lists with meta keywords and their length, headers, and images.
Graphical representations of certain situations are also available in the main window, along with a folder structure of all SEO elements analyzed, as well as stats pertaining to the depth of the website and average response time.
It is possible to use a proxy server, create a sitemap and save it to disk as an XML file, and generate multiple reports covering the crawl overview, redirect chains, and canonical errors.
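The XML sitemap export follows the standard sitemaps.org format: a `<urlset>` of `<url>` entries with `<loc>`, `<lastmod>`, `<changefreq>`, and `<priority>` children. A minimal sketch of generating one with Python’s standard library, using made-up URLs and values:

```python
import xml.etree.ElementTree as ET

# Example crawl results to turn into a sitemap (invented data).
urls = [
    {"loc": "https://example.com/", "lastmod": "2021-11-01",
     "changefreq": "daily", "priority": "1.0"},
    {"loc": "https://example.com/about", "lastmod": "2021-10-15",
     "changefreq": "monthly", "priority": "0.5"},
]

# Build <urlset><url><loc>...</loc>...</url>...</urlset>.
urlset = ET.Element("urlset",
                    xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for entry in urls:
    url_el = ET.SubElement(urlset, "url")
    for tag, value in entry.items():
        ET.SubElement(url_el, tag).text = value

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

The Spider exposes the same fields (last modified, change frequency, priority) as configuration options when exporting a sitemap.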
System Requirements
Operating System | Windows 7/8/10 (Windows), Mac OS X 10.10 to 10.15 (Mac), Ubuntu (Linux) |
Memory | 1 GB of RAM. |
Additional Requirements | Java |
Official Video Intro Screaming Frog
- Netpeak Spider
- Spotibo
- Screpy
- Pulno
- Hexometer
- SEO PowerSuite
Screaming Frog Overview
Technical Specification
Software Name | Screaming Frog SEO Spider for Windows v16.3 |
File Size | 550 MB |
Languages | English, Italian, French, Spanish, Polish, Chinese, German, Japanese |
License | Free Trial For 30 Days |
Developer | Screaming Frog Ltd |
Conclusion
Screaming Frog SEO Spider is an efficient piece of software for anyone interested in analyzing their website from an SEO standpoint. The interface requires some getting used to, but the response time is good and we did not come across any errors or bugs.
CPU and memory usage is not particularly high, so the computer’s overall performance is unaffected most of the time.