Wednesday, 17 April 2019

New tool for analysing http responses

Your web page looks great in your browser but a tool such as a link checker or SEO checker reports something unexpected.

It's an age-old story. I'm sure we've all been there a thousand times.

OK, maybe not, but it does happen. The response to an http request can be very different depending on which client sends it. Maybe the server gives a different response depending on the user-agent string or another http request header field. Maybe the page is a 'soft 404'.

What's needed is a tool that allows you to make a custom http request and view exactly what's coming back: the status, the response header fields and the content. And maybe to experiment with different request settings.
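For anyone who wants to poke at a response by hand in the meantime, here's a rough sketch of the idea in Python (standard library only). The url and header values are placeholders, and of course the point of the tool is that you won't need to write anything like this.

# A minimal sketch: send a custom http request and inspect exactly what comes back.
# http.client doesn't follow redirects, so you see the status, header fields and
# body exactly as the server sends them.
import http.client
from urllib.parse import urlsplit

url = "https://example.com/some-page"     # placeholder
parts = urlsplit(url)
path = parts.path or "/"
if parts.query:
    path += "?" + parts.query

conn = http.client.HTTPSConnection(parts.netloc, timeout=10)
conn.request("GET", path, headers={
    "User-Agent": "MyCustomClient/1.0",   # experiment with different values here
    "Accept": "text/html",
})
response = conn.getresponse()

print("Status:", response.status, response.reason)   # a 'soft 404' still reports 200 here
for name, value in response.getheaders():            # the response header fields
    print(f"{name}: {value}")

body = response.read().decode("utf-8", errors="replace")
print(body[:500])                                     # the start of the content
conn.close()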

I've been meaning to build such a tool for a long time, either incorporated into Scrutiny or as a standalone app.

I've also been thinking of making some online tools. The ultimate plan is an online Integrity. The request / response tool is the ideal candidate for an online tool.

For a name, one of my unused domains seemed strangely appropriate. A few minutes with the Wacom and it has a logo.



It's early days and there are more features to add. But feel free to try it. If you see any problems, please let me know.

http://purpleyes.com

Monday, 1 April 2019

Scraping links from Google search results

This tutorial assumes that you want to crawl the first few search results pages and collect the links from the results.

Remember that:

1. The search results that you get from WebScraper will be slightly different from the ones that you see in your browser. Google's search results are personalised. If someone else in a different place does the same search, they will get a different set of results from yours, based on their own browsing history. WebScraper doesn't use cookies and so it doesn't have a personality or browsing history.

2. Google limits the number of pages that you can see within a certain time. If you use this once with these settings, that'll be fine, but if you use it several times, it'll eventually fail. I believe Google allows each person to make 100 requests per hour before showing a CAPTCHA, but I'm not sure about that number. If you run this example a few times, it may stop working. If this happens, press the 'log in' button at the bottom of the scan tab; you should see the CAPTCHA, and if you complete it you should be able to continue. If this is a problem, adapt this tutorial for another search engine which doesn't have this limit.

We're using WebScraper for Mac, which has some limits in the unregistered version.

1. The crawl
Your starting url looks like this: http://www.google.com/search?q=ten+best+cakes. Set the crawl maximum to '1 click from home' because in this example we can reach the first ten pages of search results within one click of the starting url (see Pagination below).
One important point isn't ringed in the screenshot: choose 'IE11 / Windows' for your user-agent string. Google serves different code depending on the browser, and the regex below was written with the user-agent set to IE11 / Windows.
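As a rough illustration of why that setting matters, this is more or less the request being sent: the same search url fetched while identifying as IE11 on Windows. The user-agent string below is a typical IE11 one, not necessarily the exact string WebScraper uses.

# Fetch the search results the way an IE11 / Windows browser would identify itself.
import urllib.request

IE11_UA = "Mozilla/5.0 (Windows NT 10.0; Trident/7.0; rv:11.0) like Gecko"
url = "http://www.google.com/search?q=ten+best+cakes"

req = urllib.request.Request(url, headers={"User-Agent": IE11_UA})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")
print(len(html), "bytes of search-result markup")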

2. Pagination 
We want to follow the links at the bottom of the page for the next pages of search results, and nothing else. (These settings are about the crawl, not about collecting data.) So we set up a rule that says "ignore urls that don't contain &start=" (see above).
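In code terms the rule amounts to nothing more than this kind of filter on the urls found on each page (a toy illustration, not WebScraper's internals):

# Only follow links whose url contains '&start=', i.e. the numbered results pages.
found_urls = [
    "http://www.google.com/search?q=ten+best+cakes&start=10",
    "http://www.google.com/advanced_search",
    "http://www.google.com/search?q=ten+best+cakes&start=20",
]
to_follow = [u for u in found_urls if "&start=" in u]
print(to_follow)   # only the two paginated search urls survive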

3. Output file
Add a column, choose Regex. The expression is <div class="r"><a href="(.*?)"
You can bin the other two default columns if you want to. (I didn't bother and you'll see them in the final results at the bottom of this article.)
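If you want to see what that expression captures, here's the same regex run over a tiny, simplified sample of the markup Google was serving to IE11 at the time (the sample is illustrative only):

import re

# Simplified sample of a results page as served to IE11 (structure only, for illustration).
html = ('<div class="r"><a href="https://example.com/best-cakes"><h3>Ten best cakes</h3></a></div>'
        '<div class="r"><a href="https://example.org/cake-recipes"><h3>Cake recipes</h3></a></div>')

pattern = re.compile(r'<div class="r"><a href="(.*?)"')
for link in pattern.findall(html):
    print(link)
# https://example.com/best-cakes
# https://example.org/cake-recipes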

4. Separating our results
WebScraper is designed to crawl a site and extract a piece of data from each page. Each new row in the output file represents a page from the crawl. Here we want to collect multiple pieces of data from each page, so the scraped urls from each search results page will appear in a single cell (separated by a special character). We therefore ask WebScraper to split these onto separate rows, which it does when it exports.
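Conceptually, the split on export works something like this sketch. The '|' separator is just an assumption for illustration; WebScraper manages the real separator and the export for you.

import csv

# One crawled page per row; the 'links' cell holds several scraped urls joined
# by a separator ('|' here, purely for illustration).
rows = [
    {"page": "https://www.google.com/search?q=ten+best+cakes",
     "links": "https://example.com/a|https://example.org/b"},
]

with open("results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["page", "link"])
    for row in rows:
        for link in row["links"].split("|"):
            writer.writerow([row["page"], link])   # one scraped url per exported row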
5. Run
Press the >Go button and you'll see the results fill up in the Results tab. As mentioned, each row is one page of the crawl; you'll need to export to split the results onto separate rows.
Here's the output in Finder / Quick Look. The csv should open in your favourite spreadsheet app.

Wednesday, 27 March 2019

C64 / 6502 IDE for Mac

This post seems to fit equally well here on the PM blog as it does on the separate blog for my hobby project. To avoid duplication I'll post it there and link to it from here.

https://newstuffforoldstuff.blogspot.com/2019/03/homebrew-game-for-c64-part-5-c64-6502.html


Version 9 of Integrity and Scrutiny underway

As with version 8, the next major release is likely to look very similar on the surface.

The biggest change will be more information of the 'warning' kind.

So far it's been easy to isolate or export 'bad links'. These tend to be 4xx or 5xx server response codes, so you can already see at a glance what's wrong. You may have seen links marked in orange, which usually denotes a redirect, and you can easily filter those and export them if you want.
But there are many other warnings that Integrity and Scrutiny could report.

At this point, this won't mean a full validation of the html (that's another story) but there are certain things that the crawling engine encounters as it parses the page for links. One example is the presence of two canonical tags (it happens more often than you'd think). If they contain different urls, it's unclear what Google would do. There are a number of situations like this that the engine handles, and which the user might like to know about.
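As an illustration of the kind of check involved (this isn't Integrity's actual code), a crawler only needs to collect the canonical link tags while parsing and compare their urls:

# Flag a page that carries more than one canonical tag, especially two with different urls.
from html.parser import HTMLParser

class CanonicalCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and (attrs.get("rel") or "").lower() == "canonical":
            self.canonicals.append(attrs.get("href"))

page_source = '''<head>
<link rel="canonical" href="https://example.com/page">
<link rel="canonical" href="https://example.com/page?ref=old">
</head>'''

collector = CanonicalCollector()
collector.feed(page_source)

if len(collector.canonicals) > 1:
    if len(set(collector.canonicals)) > 1:
        print("Warning: multiple canonical tags with different urls:", collector.canonicals)
    else:
        print("Warning: duplicate canonical tags (same url)")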

There are other types of problem, such as an old perennial: one or more "../" ('up a directory') at the start of a relative url, which technically takes the url above the server's http root. (Some users take the line "if it works in my browser, then it's fine and your software is incorrectly reporting an error", and for that reason there's a preference to make the app as tolerant of these links as browsers tend to be.)
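Here's the case in miniature, using Python's RFC 3986 url resolution, which behaves much as browsers do: the surplus '../' is silently discarded, which is why the link appears to work even though it's technically wrong.

from urllib.parse import urljoin

# A page one directory deep, with a relative link that climbs up two levels.
page = "http://example.com/blog/post.html"
link = "../../images/logo.png"            # one '../' too many

# RFC 3986 resolution (Python 3.5+) clamps the path at the http root,
# which is what browsers do too, so the link 'works' despite the error.
print(urljoin(page, link))                # http://example.com/images/logo.png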

In short, there are various warnings that Integrity and Scrutiny could list. Version 9 will make use of the orange highlighting. There will be tabs within the link inspector so that it can display more information, and the warnings relating to that link (if any) will appear in one of these tabs.

Reporting / exporting is important to users, so there will be another main table (with an export option) alongside the link results, listing each warning by link / page.

This will be the main focus, but there will be other features; better re-checking is on the list. It's early days, so if there's anything else that you'd like to see, please do tell!

Saturday, 23 March 2019

Press release - Slight change of name and full launch for Website Watchman



Press release
Mon 25 Mar 2019


begins---------------

Watchman from PeacockMedia is now fully released, under the new name Website Watchman. The original name clashed with another project, but Website Watchman better sums up the app.

It'll be useful to anyone with a Mac and a website. It does a very impressive trick: it scans a website and checks all of the files that make up each page (html, js, css, images, linked documents) and archives any changes. So the webarchive that it builds is four-dimensional. You can browse the pages, then browse the versions of each page back through time. You view a 'living' version of the page, not just a snapshot.

As the name suggests, you can watch a single page, part of a website or a whole website, or even part of a page, using various filters including a regex, and be alerted to changes.

Thank you for your attention and any exposure that you can give Website Watchman.

https://peacockmedia.software/mac/watchman

ends---------------

Thursday, 21 March 2019

Hidden gems: Scrutiny's site search

Yes, you can use Google to search a particular site. But you can't search the entire source, or show pages which *don't* contain something. For example, you may want to search your website for pages that don't have your Google Analytics code.

Scrutiny's site search has just had some improvements and another option added (version 8.3.3), so it's a good time to show off this easily-overlooked feature.
The usual options that you've set up in your site config apply to the crawl: blacklisting / whitelisting of pages, link and level limits, and lots of other options, including authentication.

Here's where you enter your search term (or search terms - you can search for multiple terms at once; the results will have a column showing which term(s) were found on each page).

There's a 'reverse' button so that you can see pages that *don't* contain your term. Your search term can be a regular expression (regex) if you want to match a pattern. You can search the source or the visible text.
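Done by hand, a 'reverse' search for the Google Analytics example above would look something like this sketch. The page list and the pattern are placeholders; Scrutiny crawls the site and runs the check on every page it finds.

# Report pages whose source does NOT match a pattern (a rough Google Analytics check here).
import re
import urllib.request

pages = [
    "https://example.com/",
    "https://example.com/about",
]
pattern = re.compile(r"gtag\(|GoogleAnalyticsObject|googletagmanager\.com", re.IGNORECASE)

for url in pages:
    try:
        source = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
    except OSError as err:                  # covers URLError / HTTPError
        print("Couldn't fetch", url, "-", err)
        continue
    if not pattern.search(source):          # 'reverse': report pages that DON'T contain it
        print("Missing analytics code:", url)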

Today this panel has gained a case sensitivity button. It's set to 'insensitive' by default because that was the behaviour before.

It's worth noting a couple of 'bewares' here.

1. If you're searching the entire source, make sure that your search term matches the way it appears in the source. Today I was confused when I searched for a sentence on Integrity's French page: "Le vérificateur de liens pour vos sites internet" ("The link checker for your websites"). The accented character appears in the source correctly encoded as &eacute;. (If you search the body text rather than the source, any such entities will be 'unencoded' before checking.)
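The point in miniature: the accented character simply isn't in the raw source, only its entity is, so a source search for the literal character finds nothing until the entities are unescaped.

import html

source_fragment = "Le v&eacute;rificateur de liens pour vos sites internet"

print("vérificateur" in source_fragment)                 # False - searching the raw source
print("vérificateur" in html.unescape(source_fragment))  # True  - searching the 'unencoded' text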

2. There's a global setting for what should be included / excluded from the visible text of a page. You may want to ignore the contents of navs and footers, for example. One of these options is 'only look at the contents of paragraph and heading tags'. If this is switched on, then quite a lot is excluded: visible text may not be in <p> tags but in a span or a div, and anything in an unordered list, for example, may not (perhaps should not) be within paragraph tags.
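To see how much that option can exclude, here's a rough stand-in for that kind of extraction (not Scrutiny's actual logic): only text inside paragraph and heading tags survives, so the list item never reaches the search.

from html.parser import HTMLParser

class ParagraphAndHeadingText(HTMLParser):
    KEEP = {"p", "h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.depth = 0          # how many 'kept' tags we're currently inside
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag in self.KEEP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.KEEP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.text.append(data.strip())

parser = ParagraphAndHeadingText()
parser.feed("<h1>Title</h1><p>Kept.</p><ul><li>Lost, because it's not in a paragraph.</li></ul>")
print(" ".join(t for t in parser.text if t))   # "Title Kept." - the list item is excluded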

The search dialog above now has a friendly warning which informs you if you're searching body text and you've got some of these exclusions switched on.

The download for Scrutiny is here and includes a free 30-day trial.