Monday, 5 November 2018

Webscraper and pagination with case studies

If you're searching for help with WebScraper for MacOS, the chances are that the job involves pagination, because that situation provides some challenges.

Right off, I'll say that for certain sites there's another approach to extracting data in cases like this. It uses a different tool which we haven't made publicly available, but contact me if you're interested.

Here's the problem: the search results are paginated (page 1, 2, 3 etc). In this case, all of the information we want is right there on the search results pages, but it may be that you want WebScraper to follow the pagination, and then follow the links through to the actual product pages (let's call them 'detail pages') and extract the data from those.


1. We obviously want to start WebScraper at the first page of search results. It's easy to grab that url and give it to WebScraper:

2. We aren't interested in WebScraper following any links other than those pagination links. (We'll come to detail pages later.) In this case it's easy to 'whitelist' those pagination pages.

3. The pagination may stop after a certain number of pages. But in this case it seems to go on for ever. One way to limit our crawl is to use these options:

A more precise way to stop the crawl at a certain point in the pagination is to set up more rules:

4. At this point, running the scan proves that WebScraper will follow the search results pages we're interested in, and stop when we want.

5. In this particular case, all of the information we want is right there in the search results lists. So we can use WebScraper's class and regex helpers to set up the output columns.
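
To give a flavour of the kind of expression you might give the regex helper, here's a minimal sketch using some made-up markup (the class name and value are hypothetical, not taken from the site in question). The part inside the parentheses is the capture group, which is what ends up in your output column:

echo '<span class="price">£12.99</span>' | sed -E 's/.*class="price">([^<]+)<.*/\1/'
# prints: £12.99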



Detail pages

In the example above, all of the information we want is there on the search result pages, so the job is done. But what if we have to follow the 'read more' link and then scrape the information from the detail page?

There are a few approaches to this, as well as the different approach that I alluded to at the start. The best way will depend on the site.

1. Two-step process

This method involves using the technique above to crawl the pagination, and collect *only* the urls of the detail pages in a single column of the output file. Then, as a separate project, use that list as your starting point (File > Open list of links) so that WebScraper scrapes data from the pages at those urls, ie your detail pages. This is a good clean method, but it does involve a little more work to run it all. With the two projects set up properly and saved as project files, you can open the first project, run it, export the results, then open the second project, run it and export your final results.
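
If you want to tidy that intermediate list before re-importing it, a quick Terminal one-liner will remove any duplicate urls (this assumes you've exported the single column of detail-page urls to a plain text file; detail_urls.txt is just an example name):

sort -u detail_urls.txt > detail_urls_deduped.txt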

2. Set up the rules necessary to crawl through to the detail pages and scrape the information from only those.

Here are the rules for a recent successful project:

"?cat=259&sort=price_asc&set_page_size=12&page=" is the rule which allows us to crawl the paginated pages.
"?productid="  is the one which identifies our product page.

Notice here that the two rules appear to contradict each other. But when using 'Only follow', the rules are 'OR'd. The 'ignore' rules that we used in the first case study are 'AND'ed, which would give no results at all if you had more than one 'ignore urls that don't contain' rule.

So here we're following pages which are either search results pages or product detail pages.
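
To make the OR / AND distinction concrete, here's a rough sketch of the logic expressed as Terminal commands (using the patterns from the rules above; this is only an illustration, not WebScraper's own code):

url="https://example.com/shop?productid=123"   # any url the crawler encounters
# 'only follow' rules are OR'd - the url is followed if it matches either pattern
echo "$url" | grep -Eq 'cat=259|productid=' && echo "follow"
# two 'ignore urls that don't contain' rules are AND'ed - the url would need to
# contain both patterns to survive, which nothing does, so nothing gets crawled
echo "$url" | grep -q 'cat=259' && echo "$url" | grep -q 'productid=' && echo "follow"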

The third rule is necessary because the product page (in this case) contains links to 'related products' which aren't part of our search but do fit our other rules. We need to ignore those, otherwise we'll end up crawling all products on the entire site.

That would probably work fine, but we'd get irrelevant lines in our output because WebScraper will try to scrape data from the search results pages as well as the detail pages. This is where the Output filter comes into play.

The important one is "scrape data from pages where... URL does contain ?productid".  The other rule probably isn't needed (because we're ignoring those pages during the crawl) but I added it to be doubly sure that we don't get any data from 'related product' pages.
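
Incidentally, if the filter were forgotten, much the same effect could be had after the event by filtering the exported file, assuming your output includes the page url in each row (output.csv is just an example filename):

grep 'productid=' output.csv > detail_rows_only.csv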


Whichever of those methods you try, the next thing is to set up the columns in the output file (ie what data you want to scrape). That's beyond the scope of this article, but the 'helpers' are much improved in recent WebScraper versions, and there's a separate article here about using regex to extract the information you want.

Wednesday, 26 September 2018

http requests - they're not all the same

This is the answer to a question that I was asked yesterday. I thought that the discussion was such an interesting one that I'd post the reply publicly here.

A common perception is that a request for a web page is simply a request. Why might a server give different responses to different clients? To be specific, why might Integrity / Scrutiny receive one response when testing a url, yet a browser sees something different? What are the differences?


user-agent string

This is sent with a request to identify "who's asking". Abuses of the user-agent string by servers range from sending a legitimate-looking response to search engine bots while serving dodgy content to browsers, through to refusing to respond to requests that don't appear to come from a browser. Integrity and Scrutiny are good citizens and by default have their own standards-compliant user-agent string. If it's necessary for testing purposes, this can be changed to that of a browser or even a search engine bot.
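
You can see this effect for yourself from the command line. Here's the same request sent with two different user-agent strings, printing only the status code that comes back (example.com stands in for the site being tested, and both strings are just examples):

curl -s -o /dev/null -w "%{http_code}\n" -A "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" https://example.com/
curl -s -o /dev/null -w "%{http_code}\n" -A "MyCrawler/1.0" https://example.com/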

header fields

A request contains a bunch of header fields. These are specifically designed to allow a server to tailor its content to the client. There are loads of possible ones, and you can invent custom ones; some are mandatory, many optional. By default, Scrutiny includes the ones that the common browsers include, with similar settings. If your own site requires a particular unusual or custom header field / value to be present, you can add it (in Scrutiny's 'Advanced settings').
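
If you want to experiment with the effect of a particular header outside of the app, curl makes it easy to add one to a request (the header names and values here are only examples):

curl -s -o /dev/null -w "%{http_code}\n" -H "Accept-Language: en-GB,en;q=0.9" -H "X-Custom-Token: example123" https://example.com/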

cookies and javascript

Browsers have these things enabled by default; they're just part of our online lives now (though accessibility standards say that sites should be usable without them), but they're options in Scrutiny and deliberately both off by default. I'm discovering more and more sites which test for cookies being enabled in the browser (with a handshake-type thing) and refuse to serve if not. There are a few sites which refuse to work properly without javascript being enabled in the browser. This is a terrible practice but it does happen, thankfully rarely. Switch cookies on in Scrutiny if you need to. But always leave the javascript option *off* unless your site does this when you switch js off in your browser:
[Image: a blank web page showing the message "This site requires Javascript to work"]
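
For what it's worth, the cookie handshake that a browser performs automatically looks something like this when done by hand with curl (a sketch only; the filename and urls are examples):

# first request accepts and stores whatever cookies the server sets
curl -s -c cookies.txt https://example.com/ > /dev/null
# a second request sends those cookies back, which is what some sites test for
curl -s -b cookies.txt https://example.com/page2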


GET and HEAD

There are a couple of other settings under Scrutiny's Preferences > Links > Advanced (and Integrity's Preferences > Advanced): 'Use GET for all connections' and 'Load data for all connections'. Both will probably be off by default.
[Screenshot: Scrutiny's 'Use GET for all connections' and 'Load data for all connections' preferences]

A browser will generally use GET when making a request (unless you're sending a form) and it will probably load all of the data that is returned. For efficiency, a webcrawler can use the HEAD method when testing external links (because it doesn't need the actual content of the page, only the status code). If it does use GET (for internal connections where it does want the content, or if you have 'always use GET' switched on) and it doesn't need the page content, it can cancel the request after getting the status code. This very rarely causes a problem, but I have had one or two cases where a large number of cancelled requests to the same server caused problems.

'Use GET for all connections' is unlikely to make any visible difference when scanning a site. Using the HEAD method (which by all standards should work) may not always work, but if a link returns any kind of error after a HEAD request, Integrity / Scrutiny tests the same url again using GET.
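
To illustrate the difference between the two methods, here they are as curl commands (example.com stands in for the link being tested):

# HEAD - the status code and headers come back, but no body is transferred
curl -s -I -o /dev/null -w "%{http_code}\n" https://example.com/
# GET - the full body comes back too (size_download shows how much)
curl -s -o /dev/null -w "%{http_code} %{size_download}\n" https://example.com/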

Other considerations

Outside of the particulars of the http request itself are a couple of things that may also cause different responses to be returned to a webcrawler and a browser. 

One is the frequency of the requests. Integrity and Scrutiny will send many more requests in a given space of time than a browser, probably many at the same time (depending on your settings). This is one of the factors involved in LinkedIn's infamous 999 response code. 

The other is authentication. A frequently-asked question is why a social media link returns a response code such as 'forbidden' when the link works fine in a browser. Having cookies switched on (see above) may resolve this, but we forget that when we visit social media sites we have logged in at some point in the past and our browser remembers who we are. It may be necessary to be authenticated as a genuine user of a site when viewing a page that may appear 'public'. Scrutiny and WebScraper allow authentication; the Integrity family doesn't.

I love this subject. Comments and discussion are very welcome.

Friday, 21 September 2018

New free flashcard / Visualisation & Association method for MacOS

Vocabagility is more than a flashcard system; it's a method. Cards are selected and shuffled, and one side is shown. Give an answer: did you get it right? Move on. As quick and easy as using a pack of real cards in your pocket.



The system also encourages you to invent an amusing mental image linking the question and answer (Visualisation and Association).

Cards that you're not certain about have a greater probability of being shown.



This is an effective system for learning vocabulary / phrases for any language but could be used for learning other things too.

Download Vocabagility for Mac for free here.

Sunday, 16 September 2018

ScreenSleeves ready to go as a standalone app

In the last post I gave a preview of a new direction for ScreenSleeves and now it's ready to go.


Changes in MacOS Mojave have made it impossible to continue with ScreenSleeves as a true screensaver. Apple have not made it possible (as far as I know at the time of writing) to grant a screensaver plugin the necessary permission to communicate with or control other apps.

Making ScreenSleeves run as a full app (in its own window) has several benefits:

  • Resize the window from tiny to large, and put it into full-screen mode.
  • Choose to keep the window on top of others when it's small, or allow others to move on top of it
  • The new version gives you the option to automate certain things, emulating a screensaver:
    • Switch to full-screen mode with a keypress (cmd-ctrl-F) or after a configurable period of inactivity
    • Switch back from full-screen to the floating window with a wiggle of the mouse or keypress
    • Block system screensaver, screen sleep or computer sleep while in full-screen mode and as long as music is playing
As mentioned, Mojave has much tighter security. The first time you run this app, you'll be asked to allow ScreenSleeves access to several other things. It won't ask for permission for anything which isn't necessary for it to function as intended. You should only be troubled once for each thing that ScreenSleeves needs to communicate with.

The new standalone version (6.0.0) is available for download. It runs for free for a trial period, then costs a small price to continue using. (Previously, the screensaver came in free and 'pro' versions, with extras in the paid version.)

Friday, 7 September 2018

Screensleeves album art screensaver as a standalone app

Screensleeves has been a popular screensaver for a number of years, but the security changes in the new Mojave OS may make its functionality impossible.

Over the years people have suggested that it could be a free-standing app rather than a screensaver. This comes with some advantages - eg you can keep it minimised and floating above other windows in the corner of the screen when it's not in full screen mode.

This may be the only way to keep the screensaver alive. I've been experimenting with the idea, ironing out some issues related to the change, and using it. I have to say that I like it very much.

Here's a very quick peek at what all this means.


Monday, 27 August 2018

Migrating website data when switching from app store version of Integrity Plus / Pro to web download version

There are reasons why you might want to start using the web download version of Integrity Plus or Integrity Pro after buying the App Store version.

(We're happy to provide a key, with evidence of the App Store purchase, as long as it's for the same user.)

The App Store version is necessarily 'sandboxed', a security measure imposed by Apple for apps sold on their Store. However, this kills certain features, such as the ability to crawl a site stored as local files. So the web download version remains un-sandboxed (it pre-dates sandboxing).

The sandboxed and un-sandboxed apps store their data in different places. When switching from the web download version to the app store version, the migration is taken care of by the system (this is the way Apple want you to go, so they make it easy. Invisible, in fact).

The app doesn't (yet) detect and automatically handle the reverse situation. But it's possible to do this manually.

Option 1. Integrity Plus / Pro have the option to export / import your websites. 
This requires you to export while you have the app store version installed, and import after you've replaced it with the web download version.

Option 2. Use these Terminal commands. 

They check for and remove any preference file which will be present if you've already run the web download version, then copy the data from the sandbox 'container' to the location used by the web download version.

(This first set of instructions is for Integrity Plus. For Integrity Pro, scroll down)

First make sure Integrity Plus isn't running.

Then enter this into Terminal:

rm ~/Library/Preferences/com.peacockmedia.Integrity-plus.plist

(if it says there's no file there, don't worry.) Then this:

cp ~/Library/Containers/com.peacockmedia.integrityPlus/Data/Library/Preferences/com.peacockmedia.integrityPlus.plist ~/Library/Preferences/com.peacockmedia.Integrity-plus.plist
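
Incidentally, if you want to confirm that the sandboxed data exists (for example if the cp command complains that there's no such file), you can list the container's Preferences folder, which is the same path used above:

ls -l ~/Library/Containers/com.peacockmedia.integrityPlus/Data/Library/Preferences/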

Important: now log out of the system and log back in. The system does some wicked things with caching these files. It's sometimes possible to make our change 'stick' using another Terminal command, but I've not found that as reliable for these purposes as logging out / in.

Now start the web download Integrity Plus and see whether your data appears.

Here are the corresponding instructions for Integrity Pro

Make sure Integrity Pro isn't running

First enter into Terminal:

rm ~/Library/Preferences/com.peacockmedia.Integrity-pro.plist

(if it says there's no file there, don't worry.) Then this:

cp ~/Library/Containers/com.peacockmedia.Integrity-pro/Data/Library/Preferences/com.peacockmedia.Integrity-pro.plist ~/Library/Preferences/com.peacockmedia.Integrity-pro.plist

Important: log out of the system and back in.

Wednesday, 8 August 2018

Getting started - Hue-topia for Mac

After following these steps, which will only take a couple of minutes, you'll know how to make and use presets, set your lamps to turn on and off on a schedule, and use effects.

(This tutorial was written on 8 August 2018 and supersedes the previous version of the same tutorial.)

The latest version of Hue-topia is available here.

1.  If you’ve not already done so, make sure your Bridge and some bulbs are switched on and start Hue-topia. The first time that you start the app it will try to find your bridge and attempt to log in. Finding the bridge requires an internet connection.

The only thing that you should need to do is to press the button on the bridge when instructed, to pair it with the Hue-topia app. If there are any problems at this stage, see Troubleshooting in the Hue-ser manual.

Make and try two presets

2. Turn the brightness and the whiteness of all of your lamps all the way up and make sure all are on.


3. Click the [+] button (Save preset) and type ‘All white’ for the name of the new preset. OK that.

4. Turn the brightness and also the whiteness of all of your lamps to three quarters of the way up.

5. Click the [+] button (Save preset) and type ‘All warm’ for the name of the new preset. OK that.

6. You now have two presets and can use these from the Presets button in the toolbar and also from the status bar. Try this.

Make a preset that affects only certain lamps

7. Go to 'Manage presets...' from the Presets toolbar button or the Lamps menu.

8. Choose your preset from the window that appears, and press 'Lamps affected'. You'll now see a checkbox alongside each lamp in the main control window. Uncheck some of the lamps, press 'OK'.  Your preset will now only affect the lamps that remained checked.

Set your lamps to turn on and off on schedule

9. Press the Schedules button or ‘Show schedules’ from the View menu (command-2 also shows this window).

10. Press the [+] button at the bottom-left of the Schedules window.

11. Type ‘Daily’ for the name, select ‘On & Off’, select ‘group: all’, type 17:00 for on and 23:00 for off. Leave all days selected. Click somewhere outside of the small window to save and close those settings.

All lamps are now set to switch on at 5pm and off at 11pm. Note that this will work even when your computer and Hue-topia aren't running, because Hue-topia copies its schedules to the bridge.
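
If you're curious, you can verify that the schedules really are stored on the bridge by querying the bridge's REST API directly; something like this should return them as JSON (substitute your bridge's IP address and an authorised API username - Hue-topia does all of this for you, so it's only for peeking):

curl http://<bridge-ip>/api/<username>/schedules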

Make and try an effect

12. Press the Effects toolbar button, and press the [+] button below your list of effects.

13. Type the name 'Pastels', and press the [+] below the timeline strip a couple of times to add a couple more nodes. Space them out equally.


14. Click inside the colour swatch of the first node and choose a nice pastel colour. Do the same for the other two. Adjust the cycle time to a value that you like and make sure 'Loop' is selected. The preview swatch should show the effect animating. When that's working as you like, OK the sheet.


15. Return to the main window. Choose a light or group that you want to apply your effect to. Look for the little 'effect' icon in the control strip (ringed below). Click that and a menu of your effects will pop up. Choose your new Pastels effect and Hue-topia should start animating that effect for the chosen bulb or group. While the effect is running, the little icon will rotate.