Display Advertising - Stop & Think!

So, in random news, I recently decided to become a supporter of the Guardian. The BBC’s continuous pandering to the royals has been pissing me off more than usual lately (newsflash: I’m anti-royal-family) and I’ve found that the Guardian has actually replaced the BBC as my go-to “let’s check in on how screwed up the world is today” destination.

I noticed the message on the site today asking me to sign up (not that I’ve not seen it before, but still) and after discovering that it now costs less than an annual PlayStation Plus subscription I thought hey, why the hell not. They accept PayPal as well, which is always nice as it saves me having to hunt down my card from wherever the pugs have randomly dragged my purse today, so that’s one conversion barrier neatly bypassed. In fact the whole signup was pretty slick and smooth, on par with an Amazon checkout but without the hassle.

Of course after something so profoundly adult-ish on a Saturday afternoon I went back to writing fanfiction like a normal lunatic, which for various reasons led me to the always-invaluable www.fantasynamegenerators.com. This site is on my adblocking whitelist because a) it is genuinely useful so I don’t mind the creator earning from it and b) its advertising is discreet and doesn’t make the site entirely unusable in the process.

Now given that I had, no more than one quarter of an hour earlier, literally just counted myself as an onsite conversion (and from organic, no less) for The Guardian, I was rather surprised to be greeted by this (on the Viking Name Generator page, if you must know, but that’s not really the point):

Guardian Display Advertising Fail

Um. Open another tab, check email… nope, there’s the subscription confirmation, right in my inbox.

I wish I could say this is the first time I’ve seen this sort of thing, or even that it was the first time I’ve had it happen so painfully obviously to me. I can’t, of course. Not even close.

All together now, boys and girls: don’t target your converters just after they convert.

Aside from the fact that this is a wasted bid effort, not to mention inventory that could have far more usefully gone to someone else, this kind of “haunting” effect is the sort of thing that can really alienate today’s web-savvy consumers from a brand. Now I work in digital marketing so I can just sigh, roll my eyes, rant on my blog and move on with my life without any real change in my sentiments towards the Guardian as a journalistic entity, but the average web user does not (gasp) work in this field.

On the whole, consumers don’t like ads. They tolerate them. It’s just a fact of marketing life. But tolerance levels go sharply down and irritation goes sharply up when ads are irrelevant, too persistent, interfere with whatever the user was trying to do in the first place (I’m looking at you, YouTube pre-roll) or are otherwise just poorly targeted.

Say, for example, by trying to sell a user something they bought less than fifteen minutes ago.

Now remarketing, properly handled, is an immensely powerful tool. So, frankly, is display prospecting when done correctly. I would never in a million years suggest that a brand abandon its display/RTB efforts provided the appropriate KPIs look good and brand safety is assured (check your whitelists and blacklists, folks – is NakedFurriesRUs.com really where you want your high-end luxury name appearing?), so no need to start setting fires just yet.

But please, please turn your brain on and think when setting up your targeting options. I’m by no means claiming this is always as simple as finding a decent Viking name generator in the riches of the internet – Mona Elesseily’s excellent article about GDN targeting and layering is a great read, if nothing else to show how complex just this single network can be to sort out properly – but since when was something only worth doing if it was easy?

Rakuten Marketing recently ran some surveys around online advertising and amongst the other interesting (and worrying) things they discovered was that a whole bunch of people around the world associate ads with “other negative online experiences like fake news.”

Awkward. Especially for this particular example!

While the immediate “last click” ramifications of such an advertising faux pas may seem insignificant to the point of ridiculousness, in today’s ever-more-crowded online marketplace it is critical to realise that brand preference and good, old-fashioned, squishy feelings play more of a role than ever before. Yes, you got the conversion right now, but if you continue to stalk your new customer with crappily-targeted display creatives you’re not only going to piss them off but alienate them from your brand and very potentially lose them as a customer in the near – if not immediate – future.

Given how much it probably cost you, considering all touchpoints and appropriate channel attribution, to acquire that customer in the first place, doing something that actively drives down their lifetime value to your business is so far beyond stupid that it can barely see stupid in the distance.

I may forgive the Guardian its little whoops moment because I know the struggle (and really enjoyed their recent article about tardigrades), but the vast majority of users will be far less generous!

Besides, every time you advertise unnecessarily to a recent converter a kitten gets brutally shot by a floating hand.

Don't Kill Kittens

Isn’t that reason enough to sort out your targeting strategy?

Edit 16th August 2017 – someone just sent me the below message on Facebook in response to my sharing of this post. Case in point, much? 🙁

Bad Retargeting

Aaand someone else sent me this over Slack as well (the booking he made was for next week, and they’re already trying to get him to book again). Shocking.

More Bad Retargeting

Anyone else have any classic retargeting failures they’d like to share – especially ones that alienated them from the brand in question?

Photo Credit: Simon Launay

How To Future Proof RIGHT NOW For the IoT

IoT Future Proofing

This post was originally made on the 4Ps Marketing blog.

This big malarkey about “the Internet of Things” (IoT) just won’t go away, will it? From radiators you control with your phone to fridges with internal cameras so you can check the contents on the move, it seems like everything is connected these days. Alexa and Google Home are all set to start battling it out on the home voice search stage, and it seems like only a matter of time before you’ll be shouting at your loo to get more toilet paper or hollering at the oven to order pizza.

One of the biggest challenges marketers currently face is providing some kind of actionable answer to the big questions everyone is asking – what does this mean for my brand, how can we leverage it, and how can we start future proofing our digital assets?

I could spend the blog equivalent of War & Peace giving my best shot at answering all of those questions (drop me a line for a cuppa and we can talk about it if you’re interested) but what I’m going to focus on right now is the third one – specifically, what you can do right now on your website that will form the first steps of future proofing it against the rise of the voice-driven Internet of Things.

You’ll need a friendly developer (or, failing that, an unfriendly developer and something to bribe them with) and ideally a tech-fluent SEO on hand to get this done. It is worth it though, as an immediate and sometimes surprisingly simple-to-implement form of future-proofing that doesn’t require a multi-million pound technology investment.

That’s right, I’m going to tell you to mark up your website with schema.org again.

Schema? Again?

Right, now the groaning noises have stopped, let me tell you why you should do this – and specifically do it with JSON-LD script injections rather than microdata. Well, other than the previously covered reasons from when I updated my recommendation last year.

Google Home & Alexa Skills Use JSON

The first image is a screencap of an Action for Google Home (from here) and the second is a Skill for Alexa (from here). Notice anything about both of these?

That’s right, they’re both powered by JSON.

We’ve already seen the early beginnings of JSON driving actions on pages as well as simply structuring data to be machine readable – the most obvious example is the sitelink search box markup which allows users to directly interface with your website’s search bar from the Google results page, saving a click. In a future without a conventional results “page” – say, the Internet of Things or a voice search heavy technology ecosystem – it’s easy to see how these sorts of interactions can evolve. What precisely this looks like is still to be determined, but all the signs point to it being written in JSON.
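
For reference, this is roughly what that sitelink search box markup looks like when deployed as JSON-LD (the domain and search URL pattern below are placeholders – swap in your own):

```
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://www.example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://www.example.com/search?q={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
```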

Schema.org already has a whole mess of options available for Actions as well as Objects. A lot of them, especially as given in the examples, are rather pedantic and not necessarily of immediate use from a marketer’s perspective, but the point is that the vocabulary is there. It’s quite accessible as well – even I can write JSON scripts, and I haven’t done any formal coding since the FORTRAN 90 module in my undergraduate degree.

So if you’re a brand, get marked up with JSON rather than microdata, and start using this to signpost key actions on your site – from Order Brochure to Add To Basket, or whatever else you can implement. I recommend inline markup where you can; while it is perfectly possible to deploy schema.org using Google Tag Manager and similar systems, there seems to be a marked delay in pickup by crawlers and there’s every possibility that non-Google entities won’t even realise the stuff is there.
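
To make that a little more concrete, here is a sketch of what signposting an action could look like – the product, URLs and SKU parameter are entirely made up, and this is simply the schema.org OrderAction vocabulary rather than any officially supported Google feature:

```
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Sofa",
  "url": "https://www.example.com/sofas/example-sofa",
  "potentialAction": {
    "@type": "OrderAction",
    "target": "https://www.example.com/basket/add?sku=SOFA-001"
  }
}
</script>
```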

If you’re a marketer, and doing anything in the vague region of technical SEO or web development, try and get at least basic reading fluency in JSON scripting. W3Schools is a good starting point, and I would personally recommend Codecademy if you want something more structured towards progressive learning.

Photo Credit: Gian Prosdocimo

Guide To An SEO-Safe Domain Migration


This post was originally made on the 4Ps Marketing Knowledge Base.

There are lots of reasons why a business may need to change website domains during its lifetime. A rebrand is the most common, although there are others – going from a .co.uk to a .com to facilitate a single-domain international expansion, for example. At any rate, a domain change is an enormously risky event for a website – the mass change of URLs causes a great deal of upheaval and re-indexing, which can send organic visibility haywire.

It certainly isn’t something to be taken lightly. A mishandled domain migration can destroy a website’s visibility, and organic traffic can take months or even years to recover after a botched domain change. Fortunately we’ve got a checklist at 4Ps that can help your domain switch go smoothly.

PREP YOUR REDIRECTIONS

Before you do anything else, make sure you set up your 301 redirects to go from old URL to new URL. Just pushing everything on the old domain to the homepage of the new domain will be devastating to visibility – authority needs to be passed on a page-by-page basis in order to preserve it properly and avoid losses.

Ensure that these redirects are tested thoroughly in a development environment – the last thing you want when the big day comes is a pile of redirect chains slamming your site’s speed, or a misconfiguration that means a URL gets missed.
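
If the URL paths are staying identical on the new domain, the page-by-page mapping can often be handled with a single path-preserving rule. As a rough Apache sketch (the domains are placeholders, and if your URL structure is changing you will still need explicit per-URL rules or a rewrite map):

```
# Illustrative only – 301 every request to its equivalent path on the new domain
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.co\.uk$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]
```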

UPDATE ALL INTERNAL LINKING

Again in your development environment, ensure that all links and associated elements are properly updated to reflect the new domain. If you’re using relative rather than absolute internal links, that saves a job, but it is a good idea to run a crawl just in case, as some sites can end up with a mix.

As well as general hyperlinks, ensure that all linking elements are properly updated to reflect the new domain. XML sitemaps, rel alternate hreflang annotations, rel canonical tags, rel amphtml links, Open Graph, Twitter Cards, structured data – all these sorts of elements must be checked and updated to reflect the final absolute URLs that will be on the site after the domain changes. Not doing so will result in broken links and can cause other difficulties with organic visibility if any markup ends up invalid.
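
As a quick illustration, the head of a typical page after the move might contain elements like these – every URL here is a placeholder, but the point is that all of them should reference the new domain in absolute form:

```
<link rel="canonical" href="https://www.new-domain.com/category/product-page/">
<link rel="alternate" hreflang="fr-fr" href="https://www.new-domain.com/fr/category/product-page/">
<link rel="amphtml" href="https://www.new-domain.com/category/product-page/amp/">
<meta property="og:url" content="https://www.new-domain.com/category/product-page/">
```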

PREP MEASUREMENT TOOLS

Set up query visibility measurement well before the domain switch is due to take place, so you have some kind of benchmark for the old domain before the change. Ensure your analytics is set up appropriately so that all traffic can be tracked and attributed correctly.

Another excellent measurement tool is Google Search Console (and Bing Webmaster Tools, plus any international variants). The best way to handle this setup is to have both the old and new domains validated at the same time, ready for the switch. You can do this relatively easily in a variety of ways, and having both domains verified will allow you to monitor impression levels for both.

Naturally the optimal result will be organic impressions for the old domain decreasing at about the same rate that they increase for the new one, indicating a smooth transition. Ensuring these tools are enabled and verified in advance will ensure that the right data is available to diagnose any issues as they arise – missed redirects, dropped URLs and other such problems are all much more easily corrected if they can be clearly identified.

THE BIG DAY

Update all analytics and PPC (and any other external tools or platforms) as soon as possible after the domain switch, if it is not appropriate to do so beforehand. When making the DNS switch, it is worth reducing your TTL (Time To Live) to speed up the migration and help propagate the new domain as rapidly as possible.

Once the new domain is live, run a full redirect check and then an internal crawl to check internal links. Ensure any problems are corrected as soon as possible – there’s no such thing as “too soon” for fixing issues. The next thing you should do is register the change of address in Google Search Console (and all other webmaster tools accounts). Once this is done, resubmit your XML sitemap (or sitemaps, or sitemap index file). I also like to manually submit the homepage to the index, just as an extra nudge to get the crawlers going.

MONITOR AND WAIT

Expect at least six weeks of ranking and visibility fluctuations while the new domain settles in. Sometimes this can last up to two months or even longer, depending on how well the migration is handled and how smoothly the redirects go in.

It is also worth noting that in some circumstances, when migrations go well, it is entirely possible for a site to experience few to no fluctuations at all – but for purposes of managing expectations internally it is generally recommended to prepare for the worst and be pleasantly surprised if the fluctuations never arise!

FURTHER ADVICE

If you’ve got a domain change or similar high SEO risk event on the horizon for your brand, I’ve got tons of experience in website migrations of all shapes and sizes. Give me a shout or take a look at some of the related guides on my agency’s website.

Recommendation Update: From Microdata To JSON


This post was originally made on the 4Ps Marketing blog.

I’m all about the structured data markup over here. In a world increasingly driven not just by users searching for immediate answers, but by artificial intelligences like Siri and Cortana performing searches on behalf of their users, the ability to have rich information easily machine readable and machine processable is of increasingly vital importance.

Although the official line from Google (and others) continues to be that structured markup has no direct “ranking impact” or is not used as a “ranking factor” (although rumours continue to circulate that it will be part of the algorithm one day), evidence continues to pile up in favour of its implementation as an SEO consideration. Studies like this 2014 one from SearchMetrics consistently show a correlation between the use of structured data and high organic website performance, and Google themselves say that use of the markup helps increase the chance that you’ll appear favourably in rich snippets and enhanced listing types in organic SERPs – things like Knowledge Graph cards and rich answers. As these result types often pip conventional results to the top of the SERP, that’s a pretty powerful message for potential exposure of your brand.

What Is Structured Markup?

In its simplest terms, structured or semantic markup turns a webpage from a bunch of text and images into a set of things, each with their own properties. Rather than relying on search robots and algorithms understanding the concept of a pair of shoes as a product for sale (rather than a webpage that contains lots of text about shoes and a £ symbol), for example, structured data lets us explicitly state this in the form of additional markup in the source code of the page. This means that machines reading the page (including things like search robots and other “parsing” things like Siri/Cortana et al) don’t need to work as hard to understand what it is about – they can see all the information and attributes laid out in a way they can nicely understand.

As we all (should) know, a search bot that doesn’t need to work as hard is a happy bot. Happy bots generally mean visibility boosts, one way or another! So think of structured markup as the bot equivalent of tasty chocolate…

Happy Robot

Implementing Structured Markup

In the past, the generally accepted and most widely used way of implementing structured markup has been microdata. This is usually simple to implement into existing HTML templates as it essentially just adds a bunch more attributes (in the form of itemscopes and itemprops) to your pages.

Microdata Markup Example
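
The original example above was an image, so here is an illustrative equivalent – a simple product marked up with microdata (the product, price and paths are made up):

```
<div itemscope itemtype="http://schema.org/Product">
  <h1 itemprop="name">Leather Walking Boots</h1>
  <img itemprop="image" src="/images/boots.jpg" alt="Leather walking boots">
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="priceCurrency" content="GBP">£</span>
    <span itemprop="price" content="89.00">89.00</span>
    <link itemprop="availability" href="http://schema.org/InStock">In stock
  </div>
</div>
```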

This approach can be tricky depending on how complex your templates are – a bit of jiggery pokery is often required to get things to nest correctly, and you may need to include a bunch of meta tags to make sure all the needed attributes are present and in the correct format. It works fine most of the time though, and there are often plugins available for the big, mainstream content management systems which make life easier.

Switching To JSON-LD

JSON-LD, or JavaScript Object Notation for Linked Data (which is what its mother calls it when she’s angry with it), is essentially a way of encoding structured data – including schema.org markup – using the JSON data interchange format. That’s a posh way of saying, in very simple terms, that it lets you put your structured data bits and bobs into a script element that sits independently to the existing HTML template of your website.

JSON Markup Example
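
Again, the original example was an image; for illustration, the same made-up product expressed as JSON-LD sits in a single self-contained script element:

```
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Leather Walking Boots",
  "image": "https://www.example.com/images/boots.jpg",
  "offers": {
    "@type": "Offer",
    "priceCurrency": "GBP",
    "price": "89.00",
    "availability": "http://schema.org/InStock"
  }
}
</script>
```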

There are a few advantages to implementing structured data with JSON-LD rather than microdata:

  • It keeps the structured markup independent of your template layout, so if your site’s page templates change you won’t need to redo the markup each time because your nesting breaks or the entire thing starts producing errors
  • It is much easier and more efficient to mark up complex nested objects and concepts in JSON than it is microdata, so you can implement more comprehensive markup to take advantage of more potential opportunities
  • You can pull fields and properties directly from your content management system without needing to play about with meta itemprops and other formatting pain
  • You can even (with the right analytics consultant to help) link up your JSON deployment with your data layer to start tracking properties as custom dimensions in one neat package

Most importantly though – and this is the number one reason I’ve formally switched to recommending that all clients and prospects use the JSON-LD markup method from now on – Google itself has moved from an ambiguous “we support any schema.org format” position to specifically recommending the use of JSON-LD above the other markup techniques.

Google JSON Recommendation

We’ve already seen evidence in the past that Google has been more inclined to properly pick up and parse JSON implementations of things like the sitelink search markup, so this isn’t particularly shocking, but with the new advice on things like Rich Cards and the change to Google’s developer documentation on the subject – not to mention the pain of having to rebuild the microdata spec every time a template is tweaked slightly – I decided it was time for a formal change of recommendation.

There’s no evidence (thus far) that Google is going to stop supporting microdata implementations of structured markup, but this is one of the rare cases where the search giant has provided not only general guidelines but a clear preference for a particular implementation. So in the name of future proofing, not to mention an easier life, I’d suggest the following:

  1. If you haven’t implemented any structured data on your site yet (and why not?) then make sure you get this into your technical roadmap soon, and use JSON-LD when you do.
  2. If you’ve got structured data implemented already using JSON-LD, run it through the revamped structured data testing tool to be safe – Google has tweaked and expanded some of its own specifications, such as for Articles and Recipes, in ways that diverge slightly from the core specs on schema.org, so it is worth checking.
  3. If you’ve got structured data implemented on your site using microdata or RDFa, don’t panic! Google does still support these, but you’ll probably want to look at getting a revamp in JSON-LD into your technical roadmap within the next 12-18 months to be safe (and you can use this as a good time to overhaul the markup to make sure you’re taking advantage of all your opportunities, too).
  4. If you’ve got structured data implemented on your site using some other vocabulary than schema.org, such as microformats, you’re running a risk of both losing any benefits you still have (as Google has firmly thrown itself into the schema.org camp) and are probably missing opportunities too, so make the switch to schema.org (with JSON-LD implementation) as soon as you can.

To find out more about the benefits of schema.org markup for your search marketing and brand promotion efforts, take a look at the newly refreshed Google documentation which is a good read for both developers and less technical marketing types.

SEO In 2017: The Crystal Ball Predictions


This post was originally made on the 4Ps Marketing blog.

After the ever-saintly Ben Davis featured my thoughts on AMP in the usual seasonal Econsultancy roundup it seemed like a good idea to get the rest of my thoughts down on e-paper before getting swamped in keeping the pugs from eating all the tinsel and getting the cat away from the wrapping paper. As is normal, of course, I’ve only just had a chance to sit down and actually start thinking about the next (and last) twelve months with an appropriate level of depth, so here we go…

Real Time Penguin

Google finally launching real time Penguin was a huge highlight of 2016 for me – better late than never, and so on. I’m also very pleased about Google’s refreshed approach to this – ignoring rather than immediately penalising bad links – as this very much diminishes the nastiness potential of “negative SEO” and seems like an adoption of a “more carrot than stick” kind of attitude towards webmasters on Google’s side.

Voice Searching

The mainstream launches of the “home PA” systems like Amazon Alexa and Google Home are very interesting and seem like they might finally propel voice search into the mind of mainstream consumers and brands. This idea of bringing search into an always-on state is a natural evolution of device proliferation, but we’ll be watching very curiously to see how it starts to shake up user interaction with digital, especially buying patterns. It’ll also be interesting to see how paid advertising starts to rear its head on voice platforms without fundamentally damaging the user experience on them.

The implications for longer and more specific searches, which tend to occur more naturally in voice queries, are also going to have knock-on effects on everything from content structure to search market research. Alex Smith, on the Food & Leisure team at my agency, commented:

“What I’d like to see is the evolution of a keyword planner-esque tool that can work on phrase match or uses machine learning to handle the longer tail, more semantic and context-driven queries.”

Whoever can start providing these sorts of datasets, given the increasingly vicious throttle that Google is putting on its own search data and the lack of granularity in AdWords tools for things like device split and media search method (voice vs type vs image etc), is likely to make a lot of forward-thinking marketers very happy people. Platform providers of the world, take note!

UX Integration

Something I’ve started to notice with quite a few clients this year is the (very welcome!) development of marketing teams starting to take a real interest in their site’s performance in terms of user experience rather than just bottom line. Site speed, especially, has far too often in the past been written off as “a problem for the IT/web guys” so I’m bloody glad to start seeing some decline in this silo-ised thinking. Analytics, data, marketing and customer journeys seem to be finally starting to get joined up in brand thinking, so although there’s still a long way to go for most businesses this is a great step forward, and here’s hoping it continues to see deeper adoption in 2017 and beyond.

VR, Bots And New Search Touchpoints

There’s been a lot of buzz around the tech and digital industries about virtual reality and the rise of chatbots as interaction tools for brands. VR is a big unknown for search at the moment as it is (of course) more experientially focused, but Nick Shread, my fellow Kent resident and colleague at 4Ps who heads up the third sector team, notes:

“I’m wondering how VR will impact search. Searches performed from within games or experience playbacks perhaps? Maybe a “find me something related to this” sort of prompted discovery angle that ties into results across other devices?”

There’s a wide open field for experimentation here of course – watch this space! Chatbots and similar machine learning or AI-driven tools are already starting to make waves in the search space though, especially on mobile. Google set the precedent with their own machine learning driven RankBrain, of course, but there’s an increasing trend of AI-type entities making searches on behalf of human users, rather than the human user undertaking the search themselves.

That’s all the average chatbot does if you dig deep enough under the bonnet of the technology (at the risk of oversimplifying a very cool and complex field), and you only need to look at the typical behaviour of digital assistants like Cortana and Alexa to get some cool ideas of where this could be going. Matt Stannard, my usual partner in predictive crime, and I predicted this back in 2015, so we’re applying for our licensed digital sector psychic badges this year. Matt (who serves as the Innovation Director at 4Ps when he isn’t building mad-scientist type analytics gadgets) also comments that he thinks different search methods than “words” are going to start rising soon too as things like image and sound recognition keep developing.

“What about searching by proxy? Cortana, find me something that looks like this, with “this” being an image, or phrase, or sound, or smell…or even a feel. Haptic interfaces are going to start showing up sooner or later!”

On the Google front, for 2017 the only thing I’m reasonably confident of myself is that AMP is going to get bigger before it goes away, despite some signs of rising controversy in its potential user benefits and current implementation form. Google is pushing it immensely hard and it seems to be only a matter of time before it extends to full capability deployment in new verticals like eCommerce. This will be particularly interesting as and when the mobile-first organic index gets rolled out, as despite Google’s claims that they’re aiming for a “low delta” I suspect that non-responsive sites are going to see some big shifts in visibility if they don’t get their content and markup synced up.

How do you see the organic search landscape shifting in 2017 and beyond? How will new technology and potential touchpoints start disrupting the way brands need to present their content to users? How will measurement, analytics and data struggle or stride ahead to keep up? I’m always up for a coffee and a geek out so drop a line to 4Ps for a chat or hit me up direct and let’s talk.

A Robots.txt Guide For SEOs


This post was originally made on the 4Ps Marketing Knowledge Base.

Every SEO should know their way around the core principles of a robots.txt file. It is the first thing a crawler looks for when it hits a subdomain so getting the basics (and the not-so-basics) spot on is important to ensure you don’t end up with pages showing ineffectually in search results or just dropping out of them altogether.

Robots.txt Location

Your robots.txt file must sit at the root of your subdomain. No negotiation here. What actually happens is that the crawler strips out the path from the URL (everything after the first forward slash following the host) and requests /robots.txt there, but in practical terms this means your robots.txt should sit at the root:

  • http://www.website1.com/robots.txt
  • http://website2.com/robots.txt
  • http://place.website3.com/robots.txt

Put it anywhere else, and crawlers won’t find it, which means you effectively have no robots.txt file on your site. That means, incidentally, that bots will assume they can access everything and so will just go berserk and crawl every inch of the site they can get to. This might be perfectly fine if you have a smaller website – but it can be very risky SEO-wise on a large catalogue or enterprise site where you want to more carefully control crawler behaviour to make sure things are indexed to best effect.

The Basics

You can create a robots.txt file in any basic text editor, up to and including Notepad. A very basic robots.txt file will look something like this:
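
The example here was originally an image; reconstructed from the description below, it reads something like this (the sitemap URL is a placeholder):

```
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap_index.xml
```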

The first line uses a wildcard * to mean “any user agent” (so “any robot”), the disallow being blank means nothing on the site is disallowed from crawling, and the sitemap line specifies the location of the XML sitemap (or sitemap index file) for the website so the bot can hop onto it and start indexing from that list. Keeps things nice and efficient!

If you want to stop all bots from crawling content within certain folders – say, an area only accessible to logged-in users – that’s pretty simple to do.
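
The original example image is missing here too, so as a sketch (the folder names are made up):

```
User-agent: *
Disallow: /members/
Disallow: /account/
```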

You can also keep robots away from a single page or file if you want.
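
Again, with purely illustrative paths:

```
User-agent: *
Disallow: /downloads/price-list.pdf
Disallow: /thank-you.html
```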

Important Notes On Robots.txt Blocking

It is important to note that blocking things in robots.txt does not prevent them from appearing in search engine results pages altogether. What you may end up seeing in a SERP might well be something like this:

Robots Blocked SERP

Now for most things this may actually be fine. User areas or invoice templates and so forth – you’re probably not too worried about outlier cases where they show up like this, as long as their full content isn’t being indexed and ranked organically.

In some cases, however, brands may be more sensitive to certain URLs or files and want to ensure they will never show up in a search engine in any shape or form. If this is the case, it is vitally important to ensure that these files are not blocked in robots.txt – the bot will need to crawl the asset thoroughly, not just “ping the URL,” so it can see the robots meta noindex tag or x-robots noindex HTTP header.

It is also critical not to block assets in robots.txt that are needed to render pages in a browser. In the past many developers would mass block things like scripts or CSS folders, but doing this now will result in a grumpy message from Google in Search Console and can have a direct negative impact on your organic visibility levels (Google announced this change in 2014).

Other Important Notes

There are plenty of other elements you might need to know about a robots.txt file. Keep an eye out for some of the following:

  • Crawl delays. These were used back in the day to throttle robot access. There’s no reason to have them in a modern setup – and Google ignores crawl delay rules anyway.
  • Pattern matching. Both Google and Bing robots will honour rules that make use of * (a wildcard, meaning “any sequence of characters”) and/or $ (which matches the end of a URL).
  • The robots.txt file is case sensitive in all senses – don’t call it robots.TXT, for example, and make sure any rules you put in are case matched to the URLs required.
  • Only one URL rule can go per line. Three file or folder disallows, for example, must go on three lines.
  • Processing order for rules is important! Google and Bing robots both make use of the “most specific rule first” principle, while standard processing order is top to bottom. If in doubt, put any Allows above any Disallows (for example, Allow a file in a directory before you Disallow the entire directory to achieve a “disallow everything in this directory except this file” effect) – see the example after this list.
  • Avoid blocking files in robots.txt when you should be using other techniques. Some of the most common problems we see include blocking mobile websites from non-mobile bots or using robots.txt to block duplication caused by internal architecture problems. Make sure you address situations like this with search engine recommended solutions, not just by throwing robots.txt rules in!
  • You can add comments (human but not machine-readable notes) to robots.txt files by using # at the beginning of a line.
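
Pulling a few of those points together, a pattern-matching block might look something like this – all the paths are made up, but note the Allow sitting above the Disallow for its directory, and the # comment lines:

```
# Keep one PDF crawlable while blocking the rest of its directory
User-agent: *
Allow: /downloads/brochure.pdf
Disallow: /downloads/
# Wildcard match: block any URL containing a query string
Disallow: /*?
# End-of-URL match: block URLs ending in .xls
Disallow: /*.xls$
```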

Remember that while the robots.txt standard is a directive, it is not enforceable. Naughty and malicious bots and crawlers will generally ignore it altogether in favour of whatever they want from your site. Be aware too that the robots.txt file is always public – anyone can see it by going to the /robots.txt URL on your site! Definitely don’t rely on the robots.txt file to keep secure areas of your site “hidden” or “safe” – use the appropriate encryption and login protocols.

Now Is The Time To Migrate To HTTPS


This post was originally made on the 4Ps Marketing blog.

I’m really not a fan of jumping on bandwagons at the drop of a hat. In the fast moving digital world, flavours of the month come and go far more rapidly than they can be realistically evaluated for genuine business impacts. While early adoption can have its merits, diving feet-first into potentially high risk scenarios without a clear picture of the payoffs is not advisable for brands operating in a massively competitive space.

That’s why I was cautious about the newly announced HTTPS “SEO signal” back in 2014 that sent parts of the search industry into uproar. It was a very minor SEO positive, but the transition was immensely risky as it (then) involved a full change-of-address, site-move-style transition. Sites that handled the migration wrong ran horrible SEO risks, and more than one ended up severely damaging their organic visibility in a way that far outweighed the minor benefit of having all their URLs on HTTPS.

I was therefore pretty “meh” about the whole thing.

Over the last two years, this has gradually refined into a more judicious “if you’re doing a big migration or relaunch anyway, let’s get onto HTTPS at the same time” with caveats around implementation and site speed implications. The SEO signal aspect of HTTPS remains minor at best, but Google is pushing hard to get the whole web secured and other factors started to come into serious consideration.

This is why, as of now, I’m officially revising our recommendation. It has been two years since the introduction and Google shows no signs of letting up on HTTPS. There’s been confirmation that as of January 2017, the Chrome browser will start flagging ordinary HTTP pages as “not secure” if they collect any passwords or card details. Google actually say that this is part of “a long-term plan to mark all HTTP sites as non-secure” in a way that is very obvious to users.

Eventual HTTP Treatment In Chrome

Next steps could be even more drastic – some current (unsubstantiated) rumours include Chrome not loading pages with mixed protocols, or even that the browser will stop rendering non-secure sites altogether. With Chrome usage accounting for comfortably over 55% of monthly browser market share, this is no longer something that brands can sit back on.

To help, I’ve pulled together a checklist for migrating to HTTPS with minimal organic visibility impact. If you’ve got concerns or want help managing this process, give me a shout here or via 4Ps.

How To (Safely) Migrate Your Website To HTTPS


This post was originally made on the 4Ps Marketing Knowledge Base.

If you weren’t sure about the pros outweighing the cons of migrating to HTTPS when Google first announced their SEO boost to secure sites back in 2014 then nobody would have blamed you for being cautious. Now, however, I’ve officially updated my recommendation to advise all brands to migrate to HTTPS for reasons that go far beyond just SEO considerations.

HTTPS Migration Checklist

  • Get your certificate, install and ensure it is configured properly by using a test server first.
  • Using the test server, ensure that all HTTP to HTTPS 301 redirects are implemented correctly.
  • This should be a server/CDN level rule.
  • Do not make use of canonicals to indicate the HTTP version – the HTTPS URLs should be the only ones that are accessible.
  • Do not just 301 your entire HTTP site to the new HTTPS homepage – this can cause drastic organic visibility drops.
  • Ensure server settings are forcing HTTPS URLs using a 301 (see the example after this checklist) – don’t play about with things like trying to block HTTP URL versions or anything similarly complex.
  • Update/check all ad codes, analytics, social sharing and other external plugins to ensure they are compliant and will support your new HTTPS URLs.
  • Update all internal links, XML sitemaps, canonicals, structured markup and your internal site search to the HTTPS versions of URLs.
  • Everything check out okay? Right, you’re ready to deploy from the test server to your live environment!
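
For the “force HTTPS with a 301” point above, a server-level rule can be as simple as the following Apache sketch – nginx and CDN equivalents exist, so adapt this to your own stack rather than copying it verbatim:

```
# Illustrative only – 301 any HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```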

Next you need to monitor and measure carefully.

  • Validate your new HTTPS site version in Google Search Console/Bing WMT but also keep the old HTTP version validated so that you can monitor impressions/clicks dropoff and transition.
  • Consider updating your disavow file if you have one (this is less critical since Penguin 4 but depending on past “activity” you may want to keep your disavows on record).
  • Note there is now no need to do a change of address in Search Console/WMT – in fact if you try it you’ll get stuck, because Google doesn’t consider HTTP to HTTPS to be a change of address and you won’t be able to select the necessary URLs.
  • Keep an eye out for any rogue 404s or broken backlinks in Search Console and other tools – large volumes can indicate a systemic problem with your HTTP to HTTPS 301 implementation, but the odd cheeky one can still crop up and should be mopped up as per normal housekeeping.
  • Monitor both analytics traffic and any organic visibility tools like rank samplers carefully so you can pounce on any issues.
  • Keep a close eye on your site speed – one of the main issues we’ve seen when migrating to HTTPS is post-launch problems with response times and corresponding performance drops, so consider getting in a specialist monitoring tool or just keep an eagle eye on site speed sampling in Google Analytics.

Congratulations! You’ve now future-proofed your site against Google’s push to the secure web.

A Guide To Alt & Title Attributes For SEO


This post was originally made on the 4Ps Marketing Knowledge Base.

If there’s ever a set of fundamental technical guidelines to get SEOs up in arms, it is the use of alt and title attributes on images and links. Some say they’re essential, others say they’re a nice to have, still others are split down the middle depending on the type of website and the purpose of the image or link in question.

Of course until Google and Bing both issue a set of one hundred percent, no-holds-barred guidelines about how to use these two attributes, we’re reduced to what we can measure in terms of visibility effects and good old-fashioned experience. Here’s how I recommend handling alts and titles for images, links and any combination of the two.

Image Alts

Probably the most straightforward. An alt attribute is meant to be an alternative information source for those people who have chosen to disable images in their browsers and those user agents that are simply unable to “see” the images (including search robots and screen readers for the blind). It should describe what the image is about in context so that the user understands what they would be seeing if they could see the image.

The in context part is what often trips people up here. Just describing the image is often not enough. Let’s take an example from a famous accountancy firm, shown below.

Alts In Context

In this instance the basic accessibility rule would mean you gave the image on the right an alt attribute like “offshore wind farm.” However while this is technically accurate, it doesn’t quite explain properly what the image is designed to illustrate, which is the article on renewable energy tenders that it is attached to. A better alt could be something like “Offshore Wind Farm Example Of Renewable Energy Tender.” You could even just hyphenate something like “Offshore Wind Farm – Renewable Energy Tenders.” There’s no one answer, but the really important thing is to cover both angles – describing the image as if to someone who can’t see it, but also giving the context of what it is there to illustrate.
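
In plain HTML that might look like this (the file path and wording are illustrative):

```
<img src="/images/offshore-wind-farm.jpg"
     alt="Offshore wind farm – renewable energy tenders example">
```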

Of course in reality you won’t be able to get this spot on for every image – especially on large enterprise or catalogue sites, the need to autogenerate alt attributes, rather than leaving them empty, will make tailoring them this way impractical – but it is important to try to get high quality alts onto the site as often as possible!

The SEO weight of an image alt is high. It is (still) one of the primary methods that search engines use to “see” images on the page. Creating a good, descriptive alt attribute that still supports the topical query targeting for a piece of page content is a very worthwhile skill and worth cultivating.

Image Titles

An image title (the attribute name rather speaks for itself here) is designed to provide additional information. Good title attributes follow the same rules as for any other content title; relevant, short, catchy, and concise. When used on images, titles often show as popup “tooltip” style text in browsers – but don’t be tempted to use them this way by filling them with terms like “click here” or “read more!”

Title Attribute Example

Officially a title “offers advisory information about the element for which it is set” which is a rather fancy way of rephrasing the above. Especially on big sites, image titles are often autopopulated to be identical to the alts, which isn’t technically correct, but I’ve never seen it do any harm either.

The SEO weight of image titles is debatable. I generally deem them a “nice to have” but certainly not a deal-breaker in the same way an image alt would be.

Link Titles

You can have title attributes on hyperlinks, too (actually, you can put titles on any HTML element if you really want to, as they’re global attributes, but generally they’re mainly used on links and images).

Whether or not you should have a title on a link depends on the link itself. If it is obvious from both the link anchor (that’s the text the link is on) and its surrounding copy/context where the link will lead, then a link title will actually reduce usability by being one more thing users have to look at. So in cases like that, leave it off.

On the other hand if it is not clear where the link is going, a title is pretty essential for accessibility. This could be because of poor anchor text – “click here” and “read more” are common offenders! Alternatively it could just be a bit ambiguous where the link goes. For example, a link on an author’s name at the bottom of a blog post could lead to more posts by the author or it could go to the author’s biography, but there would be no way to know which unless you clicked on it. That’s where a link title comes in.
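
As a quick sketch (the author, URL and wording are made up), the title simply disambiguates where the link leads:

```
<a href="/authors/jane-smith/" title="Read more posts by Jane Smith">Jane Smith</a>
```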

The SEO weight of a link title is debatable, but I’ve seen good results from including them on internal links where the anchor text is poor so generally consider them a nice to have.

Combining Images And Links

Of course a lot of sites do this – from basic buttons to big, impressive clickable visuals. The principle of including a title on the link here still holds (think of any image like a “read more” anchor that needs more definition), and of course you still need an image alt, but I’d recommend against including a title on the image as well – it is redundant since the link will enclose the image.
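
So a clickable image might be marked up along these lines – title on the link, alt on the image, and no title on the image itself (paths and wording are illustrative):

```
<a href="/brochure/" title="Order our product brochure">
  <img src="/images/brochure-cover.jpg" alt="Front cover of the product brochure">
</a>
```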

The reality of this situation is of course that if handling a large catalogue site, the chances are you’ll be autogenerating the majority of your titles and alts regardless of how their parent tags are combined. That’s unlikely to cause any significant issues, but ideally you’ll be able to tailor the title to the link and the alt to the image.

At the end of the day as long as you remember to include appropriate alts on your images you’re unlikely to be missing out on any big opportunities as far as onsite optimisation goes. But remember SEO is often a game of nuances, so why not take advantage of every possible opportunity you have on your website to get a little extra edge? That’s exactly where getting the right titles on your links and images can come in.

Google To Launch Separate Mobile Index In “Months”


This post was originally made on the 4Ps Marketing blog.

Earlier this month, Gary Illyes from Google announced during his keynote address at Pubcon that the search engine will be moving forward with an entirely separate mobile web index. The idea has been mentioned previously, but only in relatively offhand terms as an experiment being run – now, however, Illyes has said that the separate mobile index will happen within “months”, which brings it sharply into the reality of search marketing.

A Separate What?

At present Google only keeps one index of pages and content for the entire web. When the index is queried (i.e. when someone searches for something) the standard ranking algorithm runs to produce the output results in the determined order, but with some “modifiers” by device. “Mobile friendly” and site speed measurements, for example, are applied as modifying “filters” to mobile search results at present.

Nonetheless most SERPs end up looking very similar, with only minor disparities between mobile and desktop sample rankings and visibility. In fact Google have previously said that while they do compare mobile and desktop pages to make sure nothing fishy is going on, most of the time a mobile page will be “ranked” according to the content and other signals associated with the desktop version of the page. As a result, a lot of sites choose to trim down content or omit rich markups like structured data from their mobile-only pages in order to save costs and improve speed.

Needless to say, this is something that may well need revisiting. Update 4/11/2016: Google’s official Webmaster Blog has confirmed in their discussion of mobile index experimentation that brands should be ensuring structured markup is present on mobile-only pages. No explicit statements on content (yet) but we can certainly infer that this suggestion would carry over too.

The New Mobile-Led Google World

Google has been making “mobile first” noises for a long time but these have been mere whispers compared to the “omnichannel” and “unified user experience” shouting they do, so the apparent decision to entirely separate desktop and mobile indexing is a slightly odd one that (at least in my view) doesn’t quite marry up with Google’s overall “users first” channel-independent experience message.

Nonetheless, here we are (or will be). From one desktop-centric index to two entirely separate indexes for mobile and desktop, with mobile becoming the “lead” one that will be more up to date. There are lots – and I mean lots – of questions churning around the industry about this almost offhand revelation from Google. Just some of the most frequently raised are:

  • Will the two indexes really be entirely separate and independent?
  • What about tablets or “phablet”-type devices that straddle the desktop/mobile divide?
  • Will the mobile index only contain “mobile friendly” content?
  • Just how out of date will the desktop index end up being?
  • As both offsite links and onsite content tend to be scarcer within mobile environments, especially for transactional sites, how drastically will ranking signals change for the mobile index?

Lots more questions than answers at this point, it seems – as more information starts to circulate, everyone in the organic space is going to be pouncing on it. It would be nice if Google issued a firm release as they did with the mobile UX update last year, but of course we can’t rely on this level of transparency being forthcoming.

Edit 9/11/2016: SEL has put together a useful FAQ article about the mobile-first index which includes information on things like rollups/compressed content, Google’s aim to minimise ranking position changes and the nature of the experimental rollout. Very worthwhile reading.

So, what now?

Getting Ready For The New Indexes

With very limited information on the index split and any potentially drastic differences in organic visibility signals around at present, the to-do list for preparation isn’t as comprehensive as I’d like, but let’s see what we can extrapolate.

  • Make sure you have a mobile-friendly website. This should be a no-brainer in 2016, really!
  • Move to fully responsive if you can. Google has shown clear preferences for responsive websites in the past and this will potentially save a lot of hassle rather than having separate HTML and/or URL sets for your mobile content.
  • Ensure rich markup is present across both site versions. Soon to be gone are the days when you can omit structured markup elements from your mobile-only pages and still get their benefits.
  • Not practical to go fully responsive? Make sure you have the right markup and HTTP headers in place to clearly indicate the relationship between your mobile and desktop page versions (see the snippet after this list). We don’t know how (or if) the two indexes may interact yet, but better safe than sorry.
  • If you aren’t already monitoring visibility of key terms on both mobile and desktop SERPs, start now. Knowing the differences in visibility and how they change after the dual index launch will be critical to dissecting future opportunities and upgrading your organic strategy.
  • Review your mobile site for core optimisation opportunities. Start with the basics like heading tags, code hierarchy and meta data. Move on to structured data and supporting content. Now’s the time to brush up your mobile site so it really can stand on its own two feet without leaning on your more expansive desktop content in the background for support.
  • Boost your site speed. Again you should be doing this anyway but with a mobile-only index on the way, mobile-specific ranking signals are very likely to get beefed up. That means a slick, tap-friendly UX and super-fast speed options. Look into getting a “from the edge” CDN in to supercharge your infrastructure. Optimise your multimedia assets. Talk to your developers about clever techniques like RESS to improve cross device delivery. Get serious about AMP, even if you’ve got a mere blog rather than a full-fledged publishing platform.

Above all, make sure mobile is an integrated and mature part of your overall digital strategy. Google is officially going mobile first, and the days of relying on desktop websites to boost the visibility of poor or “just-an-afterthought” mobile sites are officially numbered.

You have been warned! #DramaticMusic

Dramatic Pug