Like almost all of my software, I wrote Organise to fill my own needs and still use it heavily myself.
Occasionally someone has contacted me about problems with calculations if their system preferences are set to format numbers differently from here in the UK (for example, if they're in a country that uses a comma as the decimal separator and a dot '.' as the thousands separator).
But I've never gone through the app and made sure that Organise works perfectly regardless of the user's number format. The same thing applies to the sales tax rate (for example, Canada apparently has three different taxes, which Organise hasn't been able to handle).
I had a wake-up call when Organise was put on offer on a popular download site recently. Why people buy software without taking advantage of a free trial I don't know, but they apparently do, and it's very awkward when they find that it doesn't work for them.
It has been a tough task. The code behind Organise is vast now and there are many places where it makes calculations and needs to take account of the user's choice of number format. But I'm just about there.
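For anyone facing the same job, the standard Cocoa route is to let NSNumberFormatter do the locale-specific work rather than parsing strings by hand. A minimal sketch (illustrative only, not Organise's actual code):

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
        [formatter setNumberStyle:NSNumberFormatterDecimalStyle];

        // Display a value using whatever the user has chosen in System Preferences.
        [formatter setLocale:[NSLocale currentLocale]];
        NSLog(@"1234.56 displayed for this user: %@", [formatter stringFromNumber:@1234.56]);

        // The same string means different things in different locales.
        [formatter setLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"de_DE"]];
        NSLog(@"'1.234,56' parsed in a German locale: %@", [formatter numberFromString:@"1.234,56"]); // 1234.56
        [formatter setLocale:[[NSLocale alloc] initWithLocaleIdentifier:@"en_GB"]];
        NSLog(@"'1.234,56' parsed in a UK locale: %@", [formatter numberFromString:@"1.234,56"]); // likely nil: not a valid UK-formatted number
    }
    return 0;
}

The key point is that parsing and display both go through the formatter, so the app never assumes that a dot is the decimal separator.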
And to complete a truly international version (version 6!), I've also tried to add the flexibility that's needed for folks to calculate their sales tax wherever they are. Defaults are in line with the UK rules, but as you can see here it's now as customisable as I think it needs to be.
I hope to have a release candidate ready for download in the new year.
Saturday, 22 December 2012
Monday, 3 December 2012
disappointment with iTunes 11
I'd like to give some link love to Joe of eMac Consulting.
His procedure for 'upgrading' iTunes 11 back to 10 worked perfectly for me.
I've been open about the fact that I enjoy using Snow Leopard for my day-to-day work. I do think it represents the height of Apple's powers, and I don't like the general direction the OS has taken since Lion.
After reading about iTunes 11 I couldn't resist getting hold of the shiny new interface.
I think the lack of coverflow was the last straw, but even before I realised it had gone I'd started to feel disappointed with the interface.
I'm prepared to accept that this may be a reluctance to adopt change, but for the record here are a few things that baffled me:
- Your music list is data - why not be able to change the way you view that data (list, album list, grid or coverflow) easily with buttons at the top of the window, consistent with Finder?
- The box at the top was a good analogy for an LCD display. OK, it no longer has the LCD look (what is it meant to look like now?) but it just doesn't seem right to have controls in there (e.g. the shuffle button).
- The play, next and previous buttons no longer look like physical buttons - there's nothing to tell you that they're 'pressable'. I can't see any reason for this change other than 'for the sake of it', but whatever the reason, why does the volume slider still look like a physical, 3D control?
Monday, 5 November 2012
sniffing out difficult-to-find broken links
I thought it might be good to document one of the more obscure features of Scrutiny.
The Microsoft Content Management System (or at least the system that I've had the misfortune to be on the editorial end of) had an odd way of dealing with broken links.
If it detected a bad link when generating the page, it would replace the expected 'href' attribute within the link with 'zref', like this: <a zref="foo.com">foo.com</a>
This prevented the user from landing on a broken page, but it left them with a link that did nothing when clicked, and it made the broken links impossible to track down.
I don't know whether this thinking is a part of MCMS or part of the system that our providers had built on top of it (and may still be providing to others).
But Scrutiny will find and flag such links.
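For anyone curious about how such links can be sniffed out programmatically, here's a rough Objective-C sketch using NSRegularExpression (10.7 or later). It's only an illustration of the idea, not Scrutiny's actual implementation:

#import <Foundation/Foundation.h>

// Returns the 'zref' values of any anchors whose href has been swapped for the
// non-standard attribute described above.
NSArray *ZrefLinksInHTML(NSString *html) {
    NSMutableArray *found = [NSMutableArray array];
    NSRegularExpression *regex =
        [NSRegularExpression regularExpressionWithPattern:@"<a[^>]*\\bzref\\s*=\\s*\"([^\"]*)\""
                                                  options:NSRegularExpressionCaseInsensitive
                                                    error:NULL];
    [regex enumerateMatchesInString:html
                            options:0
                              range:NSMakeRange(0, [html length])
                         usingBlock:^(NSTextCheckingResult *match, NSMatchingFlags flags, BOOL *stop) {
        [found addObject:[html substringWithRange:[match rangeAtIndex:1]]];
    }];
    return found;
}

// e.g. ZrefLinksInHTML(@"<a zref = \"foo.com\">foo.com</a>") returns @[@"foo.com"]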
Do you use a system that makes broken links hard to find?
Friday, 26 October 2012
Server returning 400 for url with no referer
I've had an interesting support problem this morning and thought that it might be useful to log the answer here.
The problem was Scrutiny not being able to get past the starting url - reporting '400 bad request'. But the same url would return the expected page in a browser.
It seems that this particular server doesn't like requests that arrive without a 'referer' field. Scrutiny does send a referer for all other pages it crawls, filled in with the url of the page that the link appears on. But by definition there is no referer for the starting url, and at present Scrutiny doesn't send one.
Going to Advanced settings and entering 'referer' as the name of the first custom header field, with any valid url (including the 'http://') as the value, made the crawl work.
Sending an empty string or a space for the value doesn't seem to work, so I'm not sure what the browsers do (if anyone knows the answer to this I'd be grateful).
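If you're debugging the same thing outside of Scrutiny, supplying the header at the Cocoa level is a one-liner on the request. A small sketch (the urls are placeholders):

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Placeholder url; substitute the site that's returning the 400.
        NSURL *url = [NSURL URLWithString:@"http://www.example.com/"];
        NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];

        // Any valid url (including the 'http://') as the referer seems to satisfy the fussy server.
        [request setValue:@"http://www.example.com/" forHTTPHeaderField:@"Referer"];

        NSURLResponse *response = nil;
        NSError *error = nil;
        [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSLog(@"Status: %ld", (long)[(NSHTTPURLResponse *)response statusCode]);
    }
    return 0;
}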
Sunday, 21 October 2012
Testing page weight where gzip is being used
Webmasters using Scrutiny and Reactivity can now see the uncompressed and compressed sizes of files, and therefore easily see the benefit of their servers' gzip service.
Following a support request, I've been looking at the way that these apps show file size and page weight where gzip compression is being used.
The idea is simple and effective - the server sends the file compressed and it's re-inflated at the client side. The transfer time is thus reduced.
Scrutiny and Reactivity were giving file size as the uncompressed size.
It was easy to enhance these apps to take this into consideration. They now show both the compressed and uncompressed size of files, and both totals.
The enhancement is now available in Scrutiny 3.1 and ReActivity 1.1.
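For the curious, here's roughly how the two figures can be obtained with plain Cocoa networking. Foundation inflates a gzipped response before handing it over, so the data length is the uncompressed size, while the Content-Length header (where the server provides one) is the on-the-wire size. A sketch only, and not necessarily how Scrutiny and Reactivity do it:

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    @autoreleasepool {
        // Placeholder url for illustration.
        NSMutableURLRequest *request =
            [NSMutableURLRequest requestWithURL:[NSURL URLWithString:@"http://www.example.com/"]];
        [request setValue:@"gzip" forHTTPHeaderField:@"Accept-Encoding"];

        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *body = [NSURLConnection sendSynchronousRequest:request
                                             returningResponse:&response
                                                         error:&error];

        // Foundation has already inflated the body, so this is the uncompressed size.
        NSLog(@"Uncompressed: %lu bytes", (unsigned long)[body length]);

        // If the server sent Content-Length alongside Content-Encoding: gzip, that figure is
        // the compressed size. (Some servers use chunked encoding and omit it.)
        NSDictionary *headers = [(NSHTTPURLResponse *)response allHeaderFields];
        NSLog(@"Content-Encoding: %@, Content-Length: %@",
              [headers objectForKey:@"Content-Encoding"],
              [headers objectForKey:@"Content-Length"]);
    }
    return 0;
}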
Monday, 15 October 2012
Making best use of Scrutiny's SEO and keyword analysis
I've always tried to write for the human reading the page.
Happily this approach seems to be the Panda- and Penguin-proof one. And of course it makes good long-term sense. It's Google's job to give the searcher the best page, rather than a crap one that's used some clever tricks to get a good rank. Google doesn't want to promote that page and the user doesn't want to see it. Going forward they'll get better at their job and good content will be king.
With this in mind, is a keyword strategy the right way to go?
Brian Clark says yes.
In The Business Case for Agile Content Marketing, he says that Google is getting smarter but still needs help. And that it’s still important to gently tweak your content so that Google knows exactly who are the right people to deliver it to. If you don't use the words "green widgets" in the right locations and frequency, then the search engine won't know that's what your page is about.
I'm not going to try to fine-tune Scrutiny to analyse your content in line with Google's latest update, because they don't tell us anyway, it's constantly changing and different search engines will weight things differently. Instead it will help you get the basics in place, get you thinking about the right keywords and synonyms and show you how well (or not) you're using those words.
After crawling your site, Scrutiny's SEO window will show you a list of your pages. Those with a missing title, description or headings can be seen by choosing the appropriate option from the 'Filter' button.
Simply type a keyword or phrase into the search box and the list will be filtered to show only pages containing that phrase. You'll also see a count in various columns to show you the occurrences of the phrase in the url, title, description, headings.
It will also count occurrences in the content, but this is a feature that you have to turn on in Preferences (it's switched off by default, only because it slows the crawl and uses disc space).
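If you're wondering what the counting amounts to under the hood, it boils down to counting occurrences of the phrase within each field (url, title, description, headings and, optionally, content). A little illustrative Objective-C follows; a case-insensitive count is my assumption here, and this isn't Scrutiny's actual analysis code:

#import <Foundation/Foundation.h>

// Counts case-insensitive occurrences of a phrase within one field of a page.
NSUInteger CountOccurrences(NSString *haystack, NSString *needle) {
    NSUInteger count = 0;
    NSRange searchRange = NSMakeRange(0, [haystack length]);
    NSRange found;
    while ((found = [haystack rangeOfString:needle
                                    options:NSCaseInsensitiveSearch
                                      range:searchRange]).location != NSNotFound) {
        count++;
        searchRange.location = NSMaxRange(found);
        searchRange.length = [haystack length] - searchRange.location;
    }
    return count;
}

// e.g. CountOccurrences(@"Green Widgets | buy green widgets online", @"green widgets") returns 2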
Scrutiny is free to use unrestricted for 7 scans, and then only 55 GBP for a lifetime licence. More information and download at Scrutiny's home page.
Saturday, 13 October 2012
Tutorial - making a custom Bin-it theme
When OSX was first released, Apple moved the trash from the desktop to the dock, which upset a lot of people who found the earlier desktop trash more convenient*.
I first made Bin-it in 2006 to add a desktop trash to OSX, and shortly afterwards collaborated with Chris Knight to add a very quick and easy progressive indication of the trash level with a changing icon.
Built-in themes include the standard OSX trash (with added levels of trash) and, for the retro look, pixellated OS7 and OS9 cans.
Here's how to add your own:
1. Prepare between two and six images, 128 x 128 pixels, with a transparent background. Save in a format that preserves the transparency, such as .png or .tiff.
2. Go to Preferences > Themes and click the [+] button
3. Drag-and-drop your images into the image wells. You can also add sound files if you like, again by drag-and-drop.
4. Give your new theme a name and click OK.
5. If you have only two images (as I have here) you can just fill in the first and last wells, although if you drop the empty / full images into wells 2 to 5 you can control the point at which the bin appears full. OK that sheet and use the Threshold slider to fine-tune that point.
More information and the latest version of Bin-it is available at http://peacockmedia.co.uk/bin-it, free for existing licence holders, or £4.95 if not.
* The big inconvenience of the desktop trash was that it was often covered with a window. Note that Bin-it allows you to choose its level: desktop, floating or 'keep on top'.
Friday, 12 October 2012
AuthorRank and Blogger blogs
Have you seen posts about how important a 'rel = author' link could be for your Google page rank?
It does two things:
- Displays your Google+ profile image by your pages in search results, which increases visibility and trust in your page.
- It's also said that pages with a linked author will rank better than pages without.
Google give two ways to achieve this. One is to link your page to your Google profile like this:
<a href="https://plus.google.com/109412257237874861202?rel=author">Google</a>
(obviously using the id number of your profile)
Note that you also need to update your G+ profile to include websites that you author - edit your profile and add your websites to "Contributor to".
Implementing this in my regular websites was a simple copy and paste.
Then I turned to my Blogger blogs. I found that Google already adds the important 'rel=author' code if your name is shown at the bottom of your posts. But it was linking to my Blogger profile (which was out of date and had a distant photo). The answer was simple.
If you go to the Blogger home page and click the little cog near the top-right, there's an option called 'Google+'. Clicking this allowed me to very easily switch from my Blogger profile to my G+ profile. This simple action meant that all of my posts on all of my blogs now contain the rel=author link to my G+ profile.
Google further helps you by asking whether you want to add your blogs to the 'contributor to' list on your profile (if you've not already added all of them in the step above).
Wednesday, 3 October 2012
Scrutiny on Tiger
This is Scrutiny running and looking good on 10.4 (Tiger).
It's looking particularly good to me tonight because I've just spent the best part of the last 24+ hours trying to track down a very elusive bug, only apparent in 10.4 and without the benefit of developer tools on this machine due to lack of free space.
I was vexed to find that I'd introduced said beastie at some point (into both Integrity and Scrutiny) without spotting it until it was reported yesterday.
I'll shortly have fixed versions of both apps uploaded, followed by a good and much longed-for night's sleep.
I've noticed a couple of other minor cosmetic problems, which I will enjoy working on. It's very good to know that people are still using the older-but-still-lovely systems!
Monday, 1 October 2012
The beautiful snow leopard
[post originally written after the release of Lion and copied from a different blog system]
Isn't she gorgeous? I'm now appreciating that the operating system she represents is beautiful too.
I've found plenty of things about Lion [edit: and now Mountain Lion] that I really don't like.
I found a secondhand machine for day-to-day use and now that I'm back on Snow Leopard I love it.
Something else has made me think about users of older systems too. A user of my new product Scrutiny asked about ppc support. The answer is that I have been forced to move to XCode 4 in order to find and fix Lion problems. However, XCode 4 doesn't allow me to build a version which runs on ppc machines. Therefore I released the beta as intel / 10.5 upwards.
It seems that maybe I can use XCode 4 to build for ppc after all, and all of these things have made me think that it would be a good thing to support older Macs and versions of OSX.
Rather than try the XCode 4 conversion (I've found messing with build settings to be hair-tearingly frustrating) I may instead rebuild the products using XCode 3 and work in that.
Either way, I'm aiming to support Integrity and Scrutiny on ppc and intel, 10.4 and above if possible, or 10.5 and above if not.
[update: I'm now able to build my apps as a single version to run on 10.4 upwards and code-sign them for 10.8's Gatekeeper too.]
Tracking down zombies in Objective-C applications
Aren't the worst bugs the ones that cause crashes infrequently and seemingly at random?
Even when you think you've found the problem and run the app successfully 100 times, you still can't be 100% sure you've got it! It's like trying to prove a negative.
Tracking down such a problem can be like the proverbial needle in a haystack. You can get clues from the crash report or debugger console, and commenting out lines can work if the problem is fairly reproducible, but a good analysis tool can save many hours of poring over code and testing and re-testing.
In my experience a problem like this is most likely caused by object alloc / release. In such cases XCode will report 'EXC_BAD_ACCESS' which means (most likely but not always) that you're trying to access an object which has been released.
Instruments with Zombies (XCode>Run with performance tool>Zombies) is a little baffling at first but is well worth it. If the bad access happens, it'll show you the life cycle of the object in question, where and when it was retained and released.
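To make that concrete, here's a contrived over-release of the kind that produces exactly this symptom. It's manual reference counting and nothing to do with Integrity or Scrutiny specifically; run it with NSZombieEnabled (or the Zombies instrument) and the stray message to the deallocated object is reported instead of crashing apparently at random:

#import <Foundation/Foundation.h>

// Manual reference counting; compile without ARC. The crash happens later,
// far away from the real mistake.
static void overReleaseExample(void) {
    NSMutableArray *items = [[NSMutableArray alloc] init];
    NSString *name = [[NSString alloc] initWithFormat:@"item %d", 1];

    [items addObject:name];   // the array retains it
    [name release];           // correct: balances the alloc
    [name release];           // wrong: one release too many, so the array now holds a dangling pointer

    // ... much later, and somewhere else entirely ...
    NSLog(@"%@", [items objectAtIndex:0]);   // EXC_BAD_ACCESS, or a zombie report if enabled
    [items release];
}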
Thursday, 27 September 2012
New 'Links by page view' in Integrity
I must admit that I wasn't really seeing the true value of this new idea until this afternoon when I hooked it up to the 'Bad links only' switch and -
Wow! A list of pages that need attention, each opening up to show you the bad links on that page!
I'm just tidying up some loose ends and testing. The new version, v3.9 will be available very shortly. Still free.
Any thoughts, do let me know.
Wednesday, 26 September 2012
Panda and Penguin in plain English
Thank you to Tekdig for this very easy-to-understand guide to Google's Panda and Penguin updates.
Do read the article, but in short: make sure your content is high-quality - don't fill a page with links or stuff it with keywords, and don't have lots of inbound links using the same keyword or from pages which look like link farms.
Tuesday, 25 September 2012
Tutorial - how to limit the crawl of your website when using Integrity and Scrutiny
Although the interface looks quite simple, the rules behind these boxes may not be quite so obvious and so in this simple tutorial I'd like to help you to get the result that you want.
The best way to explain this will be with three examples. The manuals for some of my software are on the peacockmedia domain and I'll assume that I want to run Integrity or Scrutiny but check those manual pages separately or not at all.
The first thing to say is that you may not need to use these rules. Integrity and Scrutiny have a 'down but not up' policy. So if you start your scan at: https://peacockmedia.software/mac/scrutiny/manual/v9/index.html
then the scan will automatically be limited to urls 'below' https://peacockmedia.software/mac/scrutiny/manual/v9/. For the purposes of this tutorial, I'll show some examples using blacklist / whitelist rules.
1. Blacklisting based on url (Integrity or Scrutiny)
Ignore urls that contain /manual/
All of the manual pages have '/manual/' in the url, so I can type '/manual/' (without quotes). Including the slashes ensures that it'll only blacklist directories called 'manual'. If I was confident that no other urls included the word 'manual' there'd be no need for the slashes.
Simply type a keyword or part of the url. (If you like, you can use an asterisk to mean 'any number of any character' and a dollar sign to indicate 'comes at the end')
We have the option of using 'Ignore', 'Do not check..' or 'Do not follow..'. Check means getting the header information and reporting the server response code. Follow means going one step further: collecting the html of the target page and finding the links on it.
So to disregard the manuals but still check the outgoing links to that area, I'll want 'Do not follow..'. If I don't even want to check those links, but still want to see them listed, then it's 'Do not check..'. If I want to disregard them completely then it's 'Ignore'.
Another use of the 'Do not check' box is to speed up the crawl by disregarding certain file types or urls that you either don't need to check or can't check properly anyway (such as secure pages if you're using Integrity, which doesn't allow authentication). For example you can type .pdf, .mp4 or https:// into that box (or multiple values separated by commas).
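As an aside for the technically minded, a rule like this boils down to a very simple pattern test. Something along these lines would do it (an illustration only, not the apps' real code, and NSRegularExpression needs 10.7 or later):

#import <Foundation/Foundation.h>

// Tests a url against a simple rule where '*' means 'any number of any character'
// and a trailing '$' means 'comes at the end'.
static BOOL URLMatchesRule(NSString *url, NSString *rule) {
    BOOL anchorToEnd = [rule hasSuffix:@"$"];
    if (anchorToEnd) rule = [rule substringToIndex:[rule length] - 1];

    NSString *pattern = [NSRegularExpression escapedPatternForString:rule];
    pattern = [pattern stringByReplacingOccurrencesOfString:@"\\*" withString:@".*"];
    if (anchorToEnd) pattern = [pattern stringByAppendingString:@"$"];

    NSRegularExpression *regex =
        [NSRegularExpression regularExpressionWithPattern:pattern
                                                  options:NSRegularExpressionCaseInsensitive
                                                    error:NULL];
    return [regex numberOfMatchesInString:url options:0 range:NSMakeRange(0, [url length])] > 0;
}

// e.g. URLMatchesRule(@"https://peacockmedia.software/mac/scrutiny/manual/v9/index.html", @"/manual/") returns YES
//      URLMatchesRule(@"https://peacockmedia.software/mac/scrutiny/manual/v9/index.html", @".pdf$") returns NO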
2. Whitelisting based on url (Integrity or Scrutiny)
Do not check urls that don't contain /manual/
This time, links which are not whitelisted (i.e. those that don't contain 'manual') are checked and seem to be ok, but are shown in red because they're not being followed and I'm still highlighting blacklisted links.
3. Blacklisting based on content (Scrutiny)
The result is the same as the screenshot in the first example, but Scrutiny is finding my search term in the page content rather than the url.
In this example, the phrase won't be found in urls because it contains spaces, but for a single keyword, Scrutiny would look for the term in both url and content and blacklist the page if it finds it in either.
If the manuals were all on a subdomain, such as manual.peacockmedia.co.uk, it would be possible to blacklist or whitelist using the term "manual." but it would also be possible to use the 'Treat subdomains as internal' checkbox in Preferences. Subdomains are a bigger topic and one for their own tutorial.
Any problems, do get in touch
Monday, 24 September 2012
Running a website check on schedule, sorting data and mining content for SEO keywords
Scrutiny v3 is finished, tested and released.
Although the new version of the webmaster tool suite comes fairly soon after v2, I've decided to make this a major release rather than a point version because there has been some serious work 'under the hood' making the crawl slightly faster and more memory-efficient, some interface improvements such as sorting on all views, and some important new features such as the ability to include content in the keyword count (pictured) or to run on schedule.
I've made a page detailing the new features:
Once more, the web download will be two or three weeks ahead of the App Store. I'm always challenged about this but it's simply because of the long wait for Apple to check it. And if it's rejected for some reason, then there's more work to do before re-submitting.
Any thoughts or questions, do get in touch