Although the interface looks quite simple, the rules behind these boxes may not be so obvious, so in this short tutorial I'd like to help you get the result that you want.
The best way to explain this will be with three examples. The manuals for some of my software are on the peacockmedia domain, and I'll assume that I want to run Integrity or Scrutiny over the site but check those manual pages separately or not at all.
The first thing to say is that you may not need to use these rules. Integrity and Scrutiny have a 'down but not up' policy. So if you start your scan at: https://peacockmedia.software/mac/scrutiny/manual/v9/index.html
then the scan will automatically be limited to urls 'below' https://peacockmedia.software/mac/scrutiny/manual/v9/. For the purposes of this tutorial, I'll show some examples using blacklist / whitelist rules.
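As an aside, that 'down but not up' behaviour amounts to a simple prefix test. Here's a minimal sketch in Python (my own illustration of the idea, not the apps' actual code):

```python
# Sketch of the 'down but not up' scoping rule described above.
# (Hypothetical re-creation; not Integrity's or Scrutiny's actual code.)

def crawl_base(start_url: str) -> str:
    """The directory of the starting url becomes the crawl boundary."""
    return start_url.rsplit("/", 1)[0] + "/"

def in_scope(url: str, base: str) -> bool:
    """Only urls 'below' the base are scanned by default."""
    return url.startswith(base)

base = crawl_base("https://peacockmedia.software/mac/scrutiny/manual/v9/index.html")
# base is "https://peacockmedia.software/mac/scrutiny/manual/v9/", so pages
# under /manual/v9/ are in scope and the rest of the site is not.
```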
1. Blacklisting based on url (Integrity or Scrutiny)
Ignore urls that contain /manual/
All of the manual pages have '/manual/' in the url, so I can type '/manual/' (without quotes). Including the slash ensures that it'll only blacklist directories called 'manual'. If I was confident that no other urls included the word 'manual' there'd be no need for the slashes.
Simply type a keyword or part of the url. (If you like, you can use an asterisk to mean 'any number of any character' and a dollar sign to indicate 'comes at the end')
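That matching behaviour (substring by default, '*' as a wildcard, '$' to anchor at the end) could be sketched like this; again, this is my own illustration of the rules described above, not the apps' implementation:

```python
import re

# Hypothetical re-creation of the matching rules described above;
# not Integrity's or Scrutiny's actual code.

def rule_matches(rule: str, url: str) -> bool:
    """A rule matches if the url contains it; '*' means 'any number of
    any character' and a trailing '$' means 'comes at the end'."""
    anchored = rule.endswith("$")
    if anchored:
        rule = rule[:-1]
    # '*' becomes a wildcard; everything else is matched literally
    pattern = ".*".join(re.escape(part) for part in rule.split("*"))
    if anchored:
        pattern += "$"
    return re.search(pattern, url) is not None

rule_matches("/manual/", "https://peacockmedia.software/mac/scrutiny/manual/v9/index.html")  # True
rule_matches(".html$", "https://peacockmedia.software/index.html")                           # True
rule_matches(".html$", "https://peacockmedia.software/index.html?page=2")                    # False
```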
We have the option of using 'Ignore', 'Do not check..' or 'Do not follow..'. Check means getting the header information and reporting the server response code. Follow means going one step further: collecting the html of the target page and finding the links on it.
So to disregard the manuals but still check the outgoing links to that area, I'll want 'Do not follow..'. If I don't even want those links checked, but would still like to see them listed, then it's 'Do not check..'. If I want to disregard them completely, then it's 'Ignore'.
Another use of the 'Do not check' box is to speed up the crawl by disregarding certain file types or urls that you either don't need to check or can't check properly anyway (such as secure pages, if you're using Integrity, which doesn't allow authentication). For example, you can type .pdf, .mp4 or https:// into that box (or multiple values separated by commas).
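Putting the three levels together, the decision for each discovered link might look something like this (a hypothetical sketch using plain substring matching, not the applications' actual logic):

```python
# Hypothetical sketch of the three rule levels described above;
# not Integrity's or Scrutiny's actual code.

def action_for(url, ignore, do_not_check, do_not_follow):
    """Return what the crawler would do with a discovered link."""
    def hit(rules):
        return any(term in url for term in rules)
    if hit(ignore):
        return "ignore"           # disregarded completely, not even listed
    if hit(do_not_check):
        return "list only"        # shown in the results, but no request made
    if hit(do_not_follow):
        return "check status"     # get the headers, report the response code
    return "check and follow"     # also collect the html and queue its links

# The 'Do not check' box accepts comma-separated values, eg ".pdf, .mp4"
do_not_check = [t.strip() for t in ".pdf, .mp4".split(",")]
action_for("https://example.com/guide.pdf", [], do_not_check, ["/manual/"])
# → "list only"
```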
2. Whitelisting based on url (Integrity or Scrutiny)
Do not follow urls that don't contain /manual/
This time, links which are not whitelisted (ie those that don't contain '/manual/') are checked and seem to be OK, but appear in red because they're not being followed and I'm still highlighting blacklisted links.
3. Blacklisting based on content (Scrutiny)
The result is the same as the screenshot in the first example, but Scrutiny is finding my search term in the page content rather than the url.
In this example, the phrase won't be found in urls because it contains spaces, but for a single keyword, Scrutiny would look for the term in both url and content and blacklist the page if it found it in either.
If the manuals were all on a subdomain, such as manual.peacockmedia.co.uk, it would be possible to blacklist or whitelist using the term "manual." but it would also be possible to use the 'Treat subdomains as internal' checkbox in Preferences. Subdomains are a bigger topic, and one for a tutorial of their own.
Any problems, do get in touch.