Scrutiny has been able to report insecure content for a while: links to insecure (http) pages from your secure pages, and secure pages which use resources with insecure (http) urls.
Scrutiny has even been able to alert you to these problems at the end of the scan.
The question was "why are these things in different places?" (pages with mixed content were accessed via the SEO results, while 'rogue' links to insecure pages were in the links results).
With version 7.3 this has all been made a little more user-friendly. There's a new results table which shows all of this together in a usable way.
To bring these functions into play, make sure that your starting url is https:// and that you have 'check images' and 'check linked css and js files' switched on in your site's settings.
Also check 'Alert when links to http site are found' in Preferences > Links.
At the end of the scan, if any issues are found, you'll be alerted and asked whether you want to go straight to a table of results to see the issues.
But regardless of whether you have this alert switched on or what you choose when you see it, you'll now see 'Insecure Content' as an option on your Results selection screen:
The information is still in the old places too. You can see the details of links to internal http pages from secure pages in the links tables by choosing 'http: links' from the Filter, and you can see pages with mixed content in the SEO results as before by selecting 'mixed content' from the Filter drop-down button. If any pages contain links to insecure resources, they'll be listed.
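Under the hood, a mixed-content check boils down to finding http:// resource urls in a page that was served over https. As an illustration only (not Scrutiny's actual implementation), here's a minimal sketch using Python's standard html.parser:

```python
# Minimal sketch of a mixed-content check: parse an HTML page that was
# served over https and flag any resources requested via plain http.
# (A real checker covers more cases, e.g. css/js files and iframes.)
from html.parser import HTMLParser

class InsecureResourceFinder(HTMLParser):
    # attributes that cause the browser to fetch a resource
    RESOURCE_ATTRS = {"img": "src", "script": "src", "link": "href"}

    def __init__(self):
        super().__init__()
        self.insecure = []  # http:// resource urls found so far

    def handle_starttag(self, tag, attrs):
        attr_name = self.RESOURCE_ATTRS.get(tag)
        if attr_name is None:
            return
        for name, value in attrs:
            if name == attr_name and value and value.startswith("http://"):
                self.insecure.append(value)

def find_mixed_content(html):
    finder = InsecureResourceFinder()
    finder.feed(html)
    return finder.insecure

page = '<img src="http://example.com/logo.png"><script src="https://example.com/a.js"></script>'
print(find_mixed_content(page))  # only the http:// image is flagged
```

The https:// script passes; only the plain-http image would be reported as mixed content.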
This is all in Scrutiny version 7.3 which has just been released.
Saturday, 29 April 2017
Friday, 21 April 2017
Introductory offer on full release of 404bypass
Problem: you've moved your website. For various reasons the urls of existing pages may have changed.
Solution: a .htaccess file at the root of your old site which redirects old urls to new ones.
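For a handful of pages you could write such a file by hand; a typical one (the urls here are made up) looks something like this:

```apache
# .htaccess at the root of the old site - one Redirect per moved page
Redirect 301 /old-about.html https://www.example.com/about/
Redirect 301 /products/widget.php https://www.example.com/shop/widget/
```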
But there are a large number of such pages. Matching them up and compiling the redirect file is going to take time....
404bypass scans your old and new sites, 'smart matches' the pages and offers you a table of results. You can make any corrections manually before generating a .htaccess file, or a csv (or any other type of file you like, using a template configuration system). There are more details and screenshots here.
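404bypass's actual matching algorithm isn't published, but the general idea of 'smart matching' old urls to new ones can be illustrated with Python's standard difflib (the urls below are made up):

```python
# Illustrative only: pair each old url with the closest-looking new url.
# 404bypass's real matching will be more sophisticated than this.
from difflib import get_close_matches

old_urls = ["/products/red-widget.html", "/about-us.html", "/contact.php"]
new_urls = ["/shop/red-widget/", "/about/", "/contact/"]

redirects = {}
for old in old_urls:
    # n=1: take only the single best candidate above the similarity cutoff
    match = get_close_matches(old, new_urls, n=1, cutoff=0.3)
    if match:
        redirects[old] = match[0]

for old, new in redirects.items():
    print(f"Redirect 301 {old} {new}")
```

In practice you'd then review the pairings by hand, which is exactly what 404bypass's results table lets you do before it generates the file.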
The beta period has come to an end and 404bypass is now on full release. For a limited period, it's available at an introductory price of $5 ($10 when it reverts to full price).
Download and try it here (30 day trial).
Thursday, 20 April 2017
yellowpages to csv - new app!
I've recently been working on some enhancements to WebScraper so that it can handle some of the odd problems encountered while scraping the US Yellowpages.com. With some setting up, it's doing that pretty well now (though scraping is a constantly shifting area).
Users' needs are usually pretty specific. I wondered whether I could make a much simplified version of WebScraper, pre-configured to scrape that particular site.
It turns out that they provide an API, which changes the game. This app can be much more efficient. The trade-off is that there is a 'fair use' limit per API key, but this is very generous and the limit I've built into my app is pretty hard to hit.
This is the interface.
Yes, that's all there is.
In return for such a simple interface, all configuration is fixed. It's locked to yellowpages.com and the output file is pre-configured too.
For more flexibility (or to scrape a different site) please look at WebScraper.
Saturday, 1 April 2017
Finding redirect chains using Scrutiny for Mac
Setting up redirects is important when moving a site, but don't let it get out of hand over time!
John Mueller has said that the maximum number of hops Googlebot will follow in a chain is five.
Scrutiny keeps track of the number of times each request is redirected, and can report any redirect chains it finds.
Here's how:
First you need to scan your site. Add a config in Scrutiny, give it a name and your starting url (home page).
Then press 'Scan now'.
Once you've scanned your site and you're happy that you don't need to tweak your settings for any reason, go to the SEO results.
If there are any urls with a redirect chain, it will be shown in this list:
(Note that at the time of writing, Scrutiny includes pages in this count if they have more than five redirects, but you can see all redirect counts in the Links 'by link' view as described later.)
You can see the pages in question by choosing 'Redirect chain' from the Filter button over on the right:
That will show you the urls in question. (As things stand in the current version, it shows the *final* url. This is appropriate here: the SEO table lists pages, not links, so the url shown is the actual url of the page in question.)
A powerful feature within Scrutiny is the ability to see a trace of the complete journey.
Find the url in the Links results. (You can sort by url, or paste a url into the search box.) Note that as of version 7.2 there is a 'Redirect count' column in the Links 'by link' view. You may need to switch the column on using the selector at the top-left of the table. You can sort by this column to find the worst offenders:
Then double-click to open the link inspector. The button to the right of the redirect field shows the number of redirects, and you can use this button to begin the trace:
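Conceptually, tracing a chain means following each Location header until a request returns 200 (or a hop limit is reached). Here's a minimal sketch of that idea, with a lookup table standing in for live HTTP responses (the urls are made up):

```python
# Sketch of a redirect-chain trace. A table of url -> redirect target
# stands in for live HTTP requests; None means the url returns 200 OK.
REDIRECTS = {
    "http://example.com/a": "http://example.com/b",
    "http://example.com/b": "https://example.com/b",
    "https://example.com/b": "https://example.com/final",
    "https://example.com/final": None,
}

MAX_HOPS = 5  # Googlebot reportedly gives up after five hops

def trace(url):
    chain = [url]
    while REDIRECTS.get(url) is not None:
        if len(chain) > MAX_HOPS:
            raise RuntimeError(f"gave up after {MAX_HOPS} hops: {chain}")
        url = REDIRECTS[url]
        chain.append(url)
    return chain

chain = trace("http://example.com/a")
print(f"{len(chain) - 1} redirects: " + " -> ".join(chain))
```

A real crawler would issue HEAD or GET requests without auto-following redirects and read each Location header, but the hop-counting logic is the same.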
Some of this functionality is new (or improved) in version 7.2. Users of 7.x should update.
There is a very reasonable upgrade path for users of versions earlier than 7.