4 Key Lessons Content Marketers Can Take From Data Journalists

Posted by matt_gillespie

There’s an oft-cited statistic in the world of technology professionals, from marketers to startup founders to data scientists: 90% of the world’s data has been created in the last two years.

This instantly tweetable snippet was referenced in Forbes in 2018, mentioned by MediaPost in 2016, and covered on Science Daily in 2013. A casual observer could be forgiven for asking: How could the same statistic be true in three different years?

To us at Fractl, this statistic makes perfect sense: the global amount of digital information is growing exponentially over time.

From Seagate

This means that the “90 percent of all data…” statistic was true in 2013, 2016, and 2018, and it will continue to be true for the foreseeable future. As our culture continues to become more internet-integrated and mobile, we continue to produce massive amounts of data year over year while also becoming more comfortable with understanding large quantities of information.

This is hugely important to anyone who creates content on the web: Stats about how much data we create are great, but the stories buried in that data are what really matter. In the opening manifesto for FiveThirtyEight, one of the first sites on the web specifically devoted to data journalism, Editor-in-Chief Nate Silver wrote:

“Almost everything from our sporting events to our love lives now leaves behind a data trail.” 

This type of data has always been of interest to marketers doing consumer research, but the rise of data journalism shows us that there is both consumer demand and almost infinite potential for great storytelling rooted in numbers.

In this post, I’ll highlight four key insights from data science and journalism and how content marketers can leverage them to create truly newsworthy content that stands out from the pack:

  • The numbers drive the narrative
  • Plotted points are more trustworthy than written words (especially by brands!)
  • Great data content is both beautiful and easy-to-interpret
  • Every company has a (data) story to tell

By the time you’re done, you’ll have a better understanding of how data visualization, from simple charts to complex interactive graphics, can help you tell a story and achieve wide visibility for your clients.

The numbers drive the narrative

Try Googling “infographics are dead,” and your top hit will be a 2015 think piece asserting that the medium has been dead for years, followed by many responses that the medium isn’t anywhere close to “dead.” These more optimistic articles tend to focus on the key aspects of infographics that have transformed since their popularity initially grew:

  • Data visualization (and the public’s appetite for it) is evolving, and
  • A bad data viz in an oversaturated market won’t cut it with overloaded consumers.

For content marketers, the advent of infographics was a dream come true: Anyone with even basic skills in Excel and a good graphic designer could whip up some charts, beautify them, and use them to share stories. But Infographics 1.0 quickly fizzled because they failed to deliver anything interesting — they were just a different way to share the same boring stories.

Data journalists do something very different. Take the groundbreaking work from Reuters on the Rohingya Muslim refugee camps in southern Bangladesh, which was awarded the Global Editors Network Award for Best Data Visualization in 2018. This piece starts with a story—an enormous refugee crisis taking place far away from the West—and uses interactive maps, stacked bar charts, and simple statistics visualizations to contextualize and amplify a heartbreaking narrative.

The Reuters piece isn’t only effective because of its innovative data viz techniques; rather, the piece begins with an extremely newsworthy human story and uses numbers to make sure it’s told in the most emotionally resonant way possible. Content marketers, who are absolutely inundated with advice on how storytelling is essential to their work, need to see data journalism as a way to drive their narratives forward, rather than thinking of data visualization simply as a way to pique interest or enhance credibility.

Plotted points are more trustworthy than written words

This is especially true when it comes to brands.

In the era of #FakeNews, content marketers are struggling more than ever to make sure their content is seen as precise, newsworthy, and trustworthy. The job of a content marketer is to produce work for a brand that can go out and reasonably compete for visibility against nonprofits, think tanks, universities, and mainstream media outlets simultaneously. While some brands are quite trusted by Americans, content marketers may find themselves working with lesser-known clients seeking to build up both awareness and trust through great content.

One of the best ways to do both is to follow the lead of data journalists by letting visual data content convey your story for you.

“Numbers don’t lie” vs. brand trustworthiness

In the buildup to the 2012 election, Nate Silver’s previous iteration of FiveThirtyEight drew both massive traffic to the New York Times and criticism from traditional political pundits, who argued that no “computer” could possibly predict election outcomes better than traditional journalists who had worked in politics for decades (an argument fairly similar to the one faced by the protagonists in Moneyball). In the end, Silver’s “computer” (actually a sophisticated model that FiveThirtyEight explains in great depth and open-sources) predicted every state correctly in 2012.

Silver and his team made the model broadly accessible to show off just how non-partisan it really was. It ingested a huge amount of historical election data, used probabilities and weights to figure out which knowledge was most important, and spit out a prediction as to what the most likely outcomes were. By showing how it all worked, Silver and FiveThirtyEight went a long way toward improving the public confidence in data—and, by extension, data journalism.

But the use of data to increase trustworthiness is nothing new. A less cynical take is simply that people are more likely to believe and endorse things when they’re spelled out visually. We know, famously, that users only read about 20-28 percent of the content on the page, and it’s also known that including images vastly increases likes and retweets on Twitter.

So, in the era of endless hot takes and the “everyone’s-a-journalist-now” mentality, content marketers looking to establish brand authority, credibility, and trust can learn an enormous amount from the proven success of data journalists — just stick to the numbers.

Find the nexus of simple and beautiful

Our team at Fractl has a tricky task on our hands: We root our content in data journalism with the ultimate goal of creating great stories that achieve wide visibility. But different stakeholders on our team (not to mention our clients) often want to achieve those ends by slightly different means.

Our creatives—the ones working with data—may want to build something enormously complex that crams as much data as possible into the smallest space they can. Our media relations team—experts in knowing the nuances of the press and what will or won’t appeal to journalists—may want something that communicates data simply and beautifully and can be summed up in one or two sentences, like the transcendent work of Mona Chalabi for the Guardian. A client, too, will often have specific expectations for how a piece should look and what should be included, and these factors need to be considered as well.

Striking the balance

With so many ways to present any given set of numbers, we at Fractl have found success by making data visualizations as complex as they need to be while always aiming for the nexus of simple and beautiful. In other words: Take raw numbers that will be interesting to people, think of a focused way to clearly visualize them, and then create designs that fit the overall sentiment of the piece.

On a campaign for Porch.com, we asked 1,000 Americans several questions about food, focusing on things that were light and humorous conversation starters. For example, “Is a hot dog a sandwich?” and “What do you put on a hot dog?” As a native Chicagoan who believes there is only one way to make a hot dog, this is exactly the type of debate that would make me take notice and share the content with friends on social media.

In response to those two questions, we got numbers that looked like this:

Using Tableau Public, a free data visualization tool that is one of the go-to solutions for rapid building at Fractl, the tables above were transformed into rough cuts of a final visualization:

With the building blocks in place, we then gave extensive notes to our design team on how to make something that’s just as simple but much, much more attractive. Given the fun nature of this campaign, a more lighthearted design made sense, and our graphics team delivered. The entire campaign is worth checking out for the project manager’s innovative and expert ability to use simple numbers in a way that is beautiful, easy-to-approach, and instantly compelling.

All three of the visualizations above are reporting the exact same data, but only one of them is instantly shareable and keeps a narrative in mind: by creatively showing the food items themselves, our team turned the simple table of percentages in the first figure into a visualization that could be shared on social media or used by a journalist covering the story.

In other cases, such as if the topic is more serious, simple visualizations can be used to devastating effect. In work for a brand in the addiction and recovery space, we did an extensive analysis of open data hosted by the Centers for Disease Control and Prevention. The dramatic increase in drug overdose deaths in the United States is an emotional story fraught with powerful statistics. In creating a piece on the rise in mortality rate, we wanted to make sure we preserved the gravity of the topic and allowed the numbers to speak for themselves:

A key part of this visualization was adding one additional layer of complexity—age brackets—to tell a more contextualized and human story. Rather than simply presenting a single statistic, our team chose to highlight the fact that the increase in overdose deaths is something affecting Americans across the entire lifespan, and the effect of plotting six different lines on a single chart makes the visual point that addiction is getting worse for all Americans.

Every brand’s data has a story to tell

Spotify has more than 200 million global users, nearly half of whom pay a monthly fee to use the service (the other half generate revenue by listening to intermittent ads). As an organization, Spotify has data on how a sizeable portion of the world listens to its music and the actual characteristics of that music.

Data like this is what makes Spotify such a valuable brand from a dollars and cents standpoint, but a team of data journalists at The New York Times also saw an incredible story about how American music taste has changed in the last 30 years buried in Spotify’s data. The resulting piece, Why Songs of Summer Sound the Same, is a landmark work of data-driven, interactive journalism, and one that should set a content marketer’s head spinning with ideas.

Of course, firms will always be protective of their data, whether it’s Netflix famously not releasing its ratings, Apple deciding to stop its reporting of unit sales, or Stanford University halting its reporting of admissions data. Add to the equation a public that is increasingly wary of data privacy and susceptibility to major data breaches, and clients are often justifiably nervous to share data for the purpose of content production.

Deciding when to share

That said, a firm’s data is often central to its story, and when it’s properly anonymized and cleared of personally identifiable information (PII), the newsworthiness of a brand reporting insights from its own internal numbers can be massive.

For example, GoodRx, a platform that reports pricing data from more than 70,000 U.S. pharmacies, released a white paper and blog post comparing its internal data on prescription fills with US Census data on income and poverty. While census data is free, only GoodRx had the dataset on pharmacy fills—it’s their own proprietary data. Data like this is obviously key to the company’s overall valuation, but the way it was reported here told a deeply interesting story about income and access to medication without giving away anything that could hurt the firm competitively. The report was picked up by the New York Times, undoubtedly boosting GoodRx’s visibility in organic search.

The Times’ pieces on Spotify and GoodRx both highlight the fourth key insight on the effective use of data for content marketers: Every brand’s data has a story to tell. These pieces could only have come from their exact sources, because only those brands had access to the data; that makes the findings unique to the brand and gives it a key competitive advantage in the content landscape. While working with internal data comes with its own pitfalls and challenges, collaborating with a client to select meaningful internal data, and directing its use for content and narrative, should be at the forefront of a content marketer’s mind.

Blurring lines and breaking boundaries

A fascinating recent piece on Recode sought to slightly reframe the high-publicity challenges facing journalists, stating:

“The plight of journalists might not be that bad if you’re willing to consider a broader view of ‘journalism.’” 

The piece detailed that while job postings for journalists are off more than 10 percent since 2004, jobs broadly related to “content” have nearly quadrupled over the same time period. Creatives will always flock to the options that allow them to make what they love, and with organic search largely viewed as a meritocracy of content, the opportunities for brands and content marketers to utilize the data journalism toolkit have never been greater.

What’s more, much of the best data journalism out there uses only a handful of visualizations to get its point across. It was also recently reported that the median number of data sources for pieces created by The New York Times and The Washington Post was two. It’s also worth noting that more than 60 percent of data journalism stories in both the Times and the Post during a recent period (January–June 2017) relied solely on government data.

Ultimately, the ease of running large surveys via a platform like Prolific Research, Qualtrics, or Amazon Mechanical Turk, coupled with the ever-increasing number of free and open datasets provided by the US government and by sites like Kaggle or data.world, means that there is no shortage of numbers out there for content marketers to dig into and use to drive storytelling. The trick is in using the right blend of hard data and more ethereal emotional appeal to create a narrative that is truly compelling.

Wrapping up

As brands increasingly invest in content as a means to propel organic search and educate the public, content marketers should seriously consider putting these key elements of data journalism into practice. In a world of endless spin and the increasing importance of showing your work, it’s best to remember the famous quote written by longtime Guardian editor C.P. Scott in 1921: “Comment is free, but facts are sacred.”

What do you think? How do you and your team leverage data journalism in your content marketing efforts?

Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!

10 Basic SEO Tips to Index + Rank New Content Faster – Whiteboard Friday

Posted by Cyrus-Shepard

In SEO, speed is a competitive advantage.

When you publish new content, you want users to find it ranking in search results as fast as possible. Fortunately, there are a number of tips and tricks in the SEO toolbox to help you accomplish this goal. Sit back, turn up your volume, and let Cyrus Shepard show you exactly how in this week’s Whiteboard Friday.

[Note: #3 isn’t covered in the video, but we’ve included it in the post below. Enjoy!]

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans. Welcome to another edition of Whiteboard Friday. I’m Cyrus Shepard, back in front of the whiteboard. So excited to be here today. We’re talking about ten tips to index and rank new content faster.

You publish some new content on your blog, on your website, and you sit around and you wait. You wait for it to be in Google’s index. You wait for it to rank. It’s a frustrating process that can take weeks or months to see those rankings increase. There are a few simple things we can do to help nudge Google along, to help them index it and rank it faster. Some very basic things and some more advanced things too. We’re going to dive right in.

Indexing

1. URL Inspection / Fetch & Render

So basically, indexing content is not that hard in Google. Google provides us with a number of tools. The simplest and fastest is probably the URL Inspection tool. It’s in the new Search Console, and it replaces Fetch and Render from the old one. As of this filming, both tools still exist, but Google is deprecating Fetch and Render. The new URL Inspection tool allows you to submit a URL and tell Google to crawl it. When you do that, they put it in their priority crawl queue. That simply means Google has a list of URLs to crawl, and yours goes into the priority list, so it’s going to get crawled faster and indexed faster.

2. Sitemaps!

Another common technique is simply using sitemaps. If you’re not using sitemaps, they’re one of the easiest, quickest ways to get your URLs indexed. When you have URLs in your sitemap, you want to let Google know that they’re actually there. There are a number of different techniques that can optimize this process a little bit more.

The first and the most basic one that everybody talks about is simply putting it in your robots.txt file. In your robots.txt, you have a list of directives, and at the end of your robots.txt, you simply say sitemap and you tell Google where your sitemaps are. You can do that for sitemap index files. You can list multiple sitemaps. It’s really easy.

Sitemap in robots.txt
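
For reference, the relevant lines at the end of a robots.txt file might look something like this (example.com and the sitemap filenames are placeholders):

User-agent: *
Disallow:

Sitemap: https://example.com/sitemap_index.xml
Sitemap: https://example.com/blog-sitemap.xml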

You can also do it using the Sitemaps report in the new Search Console. You can go in there and submit sitemaps, remove sitemaps, and validate them. You can also do this via the Search Console API.

But a really cool way of informing Google of your sitemaps, one that a lot of people don’t use, is simply pinging Google. You can do this right in your browser’s URL bar: you type in google.com/ping and append your sitemap URL as a parameter. You can try this out right now with your current sitemaps. Type it into the browser bar, and Google will instantly queue that sitemap for crawling, and all the URLs in there should get indexed quickly if they meet Google’s quality standards.

Example: https://www.google.com/ping?sitemap=https://example.com/sitemap.xml

3. Google Indexing API

(BONUS: This wasn’t in the video, but we wanted to include it because it’s pretty awesome)

Within the past few months, both Google and Bing have introduced new APIs to help speed up and automate the crawling and indexing of URLs.

Both of these solutions open up the potential to massively speed up indexing by submitting hundreds or thousands of URLs via an API.

While the Bing API is intended for any new/updated URL, Google states that their API is specifically for “either job posting or livestream structured data.” That said, many SEOs like David Sottimano have experimented with Google’s API and found it to work with a variety of content types.

If you want to use these indexing APIs yourself, you have a number of potential options:

Yoast announced they will soon support live indexing across both Google and Bing within their SEO WordPress plugin
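
If you’d rather call Google’s Indexing API directly, the rough sketch below notifies Google that a URL has been added or updated. It assumes you’ve already set up a service account with the Indexing API enabled and obtained an OAuth 2.0 access token (ACCESS_TOKEN and the example URL are placeholders), and it runs on Node.js 18+, where fetch is available globally:

// Minimal sketch: tell Google's Indexing API that a URL was added or updated.
const ACCESS_TOKEN = "ya29.your-oauth2-access-token"; // placeholder token

async function notifyGoogle(url) {
  const response = await fetch("https://indexing.googleapis.com/v3/urlNotifications:publish", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${ACCESS_TOKEN}`,
    },
    // type can be "URL_UPDATED" or "URL_DELETED"
    body: JSON.stringify({ url: url, type: "URL_UPDATED" }),
  });
  console.log(await response.json());
}

notifyGoogle("https://example.com/new-job-posting/");

Bing’s URL submission API works along similar lines, but it authenticates with an API key from Bing Webmaster Tools rather than OAuth.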

Indexing & ranking

That’s talking about indexing. Now there are some other ways that you can get your content indexed faster and help it to rank a little higher at the same time.

4. Links from important pages

When you publish new content, the most basic step, if you do nothing else, is to make sure that you are linking to it from important pages. Important pages might be your homepage, your blog, or your resources page; add links from those pages to the new content. This is a basic step that you want to take. You don’t want to orphan those new pages on your site with no incoming links.

Adding the links tells Google two things. It says we need to crawl this link sometime in the future, and it gets put in the regular crawling queue. But it also makes the new page more important in Google’s eyes. Google can say, “Well, we have important pages linking to this. We have some quality signals to help us determine how to rank it.” So link from important pages.

5. Update old content 

But a step that people oftentimes forget is to not only link from your important pages, but to go back to your older content and find relevant places to put those links. A lot of people add a link on their homepage or link out to older articles, but they forget the step of going back to the older articles on their site and adding links to the new content.

Now what pages should you add links from? One of my favorite techniques is to use this search operator, where you type in the keywords that your content is about and then add site:example.com. This lets you find relevant pages on your site that are about your target keywords, and those older pages make really good candidates for linking to your new content.
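
For example, if your new post is about slow cooker recipes, a query like this (with your own domain in place of example.com) surfaces older pages worth linking from:

slow cooker recipes site:example.com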

6. Share socially

Really obvious step: sharing socially. When you have new content, share it socially; there’s a high correlation between social shares and content ranking. Especially when you share on content aggregators like Reddit, those shares create actual links for Google to crawl. Google can see those signals and that social activity, and on sites like Reddit and Hacker News, where actual links get added, it does the same thing as adding links from your own content, except it’s even a little better because they’re external links, external signals.

7. Generate traffic to the URL

This is kind of an advanced technique, which is a little controversial in terms of its effectiveness, but we see it anecdotally working time and time again. That’s simply generating traffic to the new content. 

Now there is some debate whether traffic is a ranking signal. There are some old Google patents that talk about measuring traffic, and Google can certainly measure traffic using Chrome. They can see where those sites are coming from. But as an example, Facebook ads, you launch some new content and you drive a massive amount of traffic to it via Facebook ads. You’re paying for that traffic, but in theory Google can see that traffic because they’re measuring things using the Chrome browser. 

When they see all that traffic going to a page, they can say, “Hey, maybe this is a page that we need to have in our index and maybe we need to rank it appropriately.”

Ranking

Once we get our content indexed, let’s talk about a few ideas for maybe ranking your content faster.

8. Generate search clicks

Along with generating traffic to the URL, you can actually generate search clicks.

Now what do I mean by that? Imagine you share a URL on Twitter. Instead of linking directly to the URL, you share a link to a Google search result for the keywords you’re trying to rank for. People click that link, land on the search result, and then click through to your page.

You see television commercials do this, like in a Super Bowl commercial they’ll say, “Go to Google and search for Toyota cars 2019.” What this does is Google can see that searcher behavior. Instead of going directly to the page, they’re seeing people click on Google and choosing your result.

  1. Instead of this: https://moz.com/link-explorer
  2. Share this: https://www.google.com/search?q=link+tool+moz

This does a couple of things. It helps increase your click-through rate, which may or may not be a ranking signal. But it also helps you rank for auto-suggest queries. So when Google sees people search for “best cars 2019 Toyota,” that might appear in the suggest bar, which also helps you to rank if you’re ranking for those terms. So generating search clicks instead of linking directly to your URL is one of those advanced techniques that some SEOs use.
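
If you want to build these shareable search URLs programmatically, a quick sketch like the one below works in the browser console or in Node; the query itself is just an illustration:

// Build a Google search URL to share instead of the raw page URL.
const query = "link tool moz"; // the query you want people to search and click on
const shareUrl = "https://www.google.com/search?q=" + encodeURIComponent(query);
console.log(shareUrl); // https://www.google.com/search?q=link%20tool%20moz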

9. Target query deserves freshness

When you’re creating the new content, you can help it to rank sooner if you pick terms that Google thinks deserve freshness. It’s best maybe if I just use a couple of examples here.

Consider a user searching for the term “cafes open Christmas 2019.” That’s a query Google wants to deliver a very fresh result for. You want the freshest news about cafes and restaurants that are going to be open on Christmas 2019. Google is going to give preference to pages that were created more recently. So when you target those queries, you can maybe rank a little faster.

Compare that to a query like “history of the Bible.” If you Google that right now, you’ll probably find a lot of very old pages, Wikipedia pages. Those results don’t update much, and that’s going to be harder for you to crack into those SERPs with newer content.

The way to tell this is to simply type in the queries that you’re trying to rank for and see how old the most recent results are. That will give you an indication of how much freshness Google thinks this query deserves. Choose queries that deserve a little more freshness and you might be able to get in a little sooner.

10. Leverage URL structure

Finally, last tip, this is something a lot of sites do and a lot of sites don’t do because they’re simply not aware of it. Leverage URL structure. When Google sees a new URL, a new page to index, they don’t have all the signals yet to rank it. They have a lot of algorithms that try to guess where they should rank it. They’ve indicated in the past that they leverage the URL structure to determine some of that.

Consider how The New York Times puts all its book reviews under the same URL structure, newyorktimes.com/book-reviews. They have a lot of established ranking signals for all of these URLs. When a new URL is published using the same structure, Google can assign it some temporary signals to rank it appropriately.

If you have URLs that are high authority, maybe it’s your blog, maybe it’s your resources on your site, and you’re leveraging an existing URL structure, new content published using the same structure might have a little bit of a ranking advantage, at least in the short run, until Google can figure these things out.

These are only a few of the ways to get your content indexed and ranking quicker. It is by no means a comprehensive list. There are a lot of other ways. We’d love to hear some of your ideas and tips. Please let us know in the comments below. If you like this video, please share it for me. Thanks, everybody.

Video transcription by Speechpad.com


How to explore a SERP feature strategy with STAT

Posted by TheMozTeam

Your organic result game is on point, but you’ve been hearing a lot of chatter about SERP features and are curious if they can help grow your site’s visibility — how do you find out? Our SERP Features dashboard will be your one-stop shop for everything feature-related.

    If it’s the features in your space that you’re after, you’ll have ’em. The number of keywords producing each feature? You’ll have that, too. The share of voice they’re driving and how much you’re owning? Of course, and more.

    Here’s a step-by-step guide on how you can use the dashboard to suss out a SERP feature strategy that’s right for your site.

    1. Establish viable sites and segments

    For context, let’s say that we’re working for a large supermarket chain with locations across the globe. Once in the dashboard, we’ll immediately look to the Overview module, which will give us a strong indication of whether a SERP feature strategy is viable for any of our keyword segments. We may just find that organic is the road best travelled.

    Clicking through our segments, we stumble across one that’s driving a huge amount of share of voice — an estimated 309.8 million views, which is actually up by 33.4 million over the 30-day average.

    SERP Features tab Overview module

    At this point, regardless of what the deal is with SERP features, we know that we’re looking at a powerful set of keywords. But, because we’re on a mission, we need to know how much of that share of voice is compliments of SERP features.

    Since the green section of the chart represents organic share of voice and the grey represents SERP feature share of voice, right away we can see that features are creating a huge amount of visibility. Surprisingly, even more than regular ol’ organic results.

    By hovering over each segment of the chart, we can see their exact breakdowns. SERP features are driving a whopping 188.2 million eyeballs, up by 18 million over the 30-day average, while organic results are driving only 121.6 million, having also gained share of voice along the way.

    We’re confident that a SERP feature strategy is worth exploring for this segment.

    2. Get a lay of the SERP feature landscape 

    Next, we want to know what the SERP features appearing in our space are, and whether they make sense for us to tackle.

    As a supermarket chain, not only do we sell fresh eats from our brick-and-mortar stores, but our site also has a regularly updated blog with delectable recipes, so we’ve got a few SERP features already in mind (can anyone say places and recipe results?).

    But, if for some strange reason our SERPs are full of flights and jobs, maybe we’ll move onto a segment that we can have more impact on, and check in on this one another time.

    Daily snapshot

    To see what we’re working with, we head to the [Current Day] SERP Features chart, make sure every feature is enabled in the legend, and select SoV: Total from the dropdown, which will show us the total share of voice generated by each feature appearing on our SERPs.

    Right away we know that the top two share of voice earners are directly in our wheelhouse: places and recipe results. What are the odds!

    Carousels and knowledge graphs — features that we have little or no control over — might be next on the list, but the ones trailing them aren’t far behind and are winnable. So, we’ll pick our favourite five — places, recipes, list snippets, “People also ask” boxes, and paragraph snippets — to build strategies around, and make sure only they appear on our chart.

    Since food and food-related activities tend to be heavy on the visuals, it wouldn’t be wise for us to neglect images and videos entirely, so we’ll also enable them just to creep on. (We’ll think of recipes and AMP recipes as one, and make a mental note to look into an overall AMP strategy at some point.)

    Our [Current Day] SERP Features chart now shows how our chosen features stack up against each other in terms of share of voice. Apparently, videos have such a small impact that they don’t even warrant a bar on the chart.

    Over time

    But, before we ride off into the sunset with our SERP features just yet, we still need to do a little more research to see whether they’re a long-term relationship option or a mere flash in the pan.

    To do this, we look to the SERP Features Over Time chart, take the SoV: Total metric with us, and select a date-range wide enough to give us a good idea of their past behavior. Ideally, we’d love to see that they’re making continual progress.

    At the very least, they appear to have a pretty stable presence — no questionable dips to be seen — which means that we’ve got ourselves some dependable features. Cool.

    3. Know how many keywords you’re working with 

    Now that we know which SERP features will help boost our site visibility, it’s time to see how many keywords each feature’s strategy will revolve around.

    So, back to the [Current Day] SERP Features chart we head, switching our metric to Count: Total to get the exact number of keywords that produce each result type.

    This changes our view rather drastically — video and image results now take top billing. Of course, we’ll remember that despite their apparent popularity on our SERPs, they have very little sway.

    As far as the result types that we care about go, “People also ask” boxes and places appear for most of our keywords, and more keywords to optimize for means more time and effort.

    We’re absolutely tickled pink to see that a relatively small number of keywords are responsible for producing all that recipes share of voice — this is the feature we’ll probably want to start with.

    To get these groups of keywords, we’ll simply click the SERP feature icons along the bottom of the chart and voila! We’ll see a filtered view of them appear in the Keywords tab, allowing us to create individual tags for them. This way, we can monitor them more closely.

    Now we can perform some SEO magic.

    4. Chart your daily progress against general trends 

    As we optimize for our various SERP features, not only can we track our progress, but we can keep an eye on the general happenings of features on our SERPs.

    We’ll use modules in the Share of Voice: SERP Features panel for these quick health-checks, customizing them to show only our chosen SERP features, which will make unearthing these insights even easier.

    SERP trends

    The Top Increases/Decreases module shows us that places, PAAs, and paragraph snippets have gained the most share of voice on our SERPs. The metric for each feature tells us exactly how much movement has been made between the current day and the segment’s 30-day average.

    In other words, the features we’ve thrown our lot in with are in good overall health. And snagging one of them could mean more share of voice than we’d originally anticipated.

    Only videos have taken a slight hit, but since we’re not interested in them, we’ll breathe a sigh of relief and pat ourselves on the back for putting them off to the side.

    We’ll keep an eye here to make sure that our features continue to trend up on the SERPs.

    Personal gains

    But how are we doing?

    The Your Top Gains/Losses module tells us that our hard work is paying off for places packs. Not only has this result type grown in influence on the SERPs in general, but we’ve managed to increase our share. Woo!

    And while we’ve only made a smidgen of improvement with recipes, it’s still better than the none we had before.

    Unfortunately, we appear to have lost some ground with our featured snippets. Did we fall out of a few? Did they get bumped down the SERPs because of other, more relevant features? Are snippets just super volatile in our space? We’d be smart to do some investigating.

    And finally, since our biggest growing SERP feature for the day isn’t necessarily what drives most of our site visibility, we’ll take a quick peek at the Your Primary Source of SoV module to see who our SERP feature superstar is.

    As it happens, out of all the SERP features that we own, places are giving us the most visibility as well.

    We’ll watch the needle to see if we keep making gains — we’re currently only owning an estimated 1.7 million views out of an available 60.5 million — or see whether another SERP feature appears here, usurping places as our top earner.

    5. Keep track of ownership over the long-haul 

    Daily progress reports are great, but we’ll also need a running tally of our successes (and failures) to help us zero-in on when and why things were (or weren’t) working for us.

    To do this, we’ll go to the SERP Features Over Time chart, set our metric to Count: Owned and our date-range to whenever we’re curious about, and see how the number of keywords with features that we own has been trending during that period.

    Looking over our first month of optimizing — we were doing a great job of increasing our appearance in paragraph and list snippets until recently. We’ll have to look back at what we were up to on September 14 and see if we can replicate our success that day in order to dig ourselves out of our current hole.

    Our spot in places results has at least held steady.

    Go get ’em, tigers! 

    Now that you know how to explore a SERP feature strategy, what are you waiting for?

    Want more info or a personalized walk-through of what you saw here? Say hello and request a demo.

      What SERP feature strategies are you keen on exploring? Tell us below in the comments!


      The New Moz Local Is on Its Way!

      Posted by MiriamEllis

      Exciting secrets can be so hard to keep. Finally, all of us at Moz have the green light to share with all of you a first glimpse of something we’ve been working on for months behind the scenes. Big inhale, big exhale…

      Announcing: the new and improved Moz Local, to be rolled out beginning June 12!

      Why is Moz updating the Moz Local platform?

      Local search has evolved from caterpillar to butterfly in the seven years since we launched Moz Local. I think we’ve spent the time well, intensively studying both Google’s trajectory and the feedback of enterprise, marketing agency, and SMB customers.

      Your generosity in telling us what you need as marketers has inspired us to action. Over the coming months, you’ll be seeing what Moz has learned reflected in a series of rollouts. Stage by stage, you’ll see that we’re planning to give our software the wings it needs to help you fully navigate the dynamic local search landscape and, in turn, grow your business.

      We hope you’ll keep gathering together with us to watch Moz Local take full flight — changes will only become more robust as we move forward.

      What can I expect from this upgrade?

      Beginning June 12th, Moz Local customers will experience a fresh look and feel in the Moz Local interface, plus these added capabilities:

      • New distribution partners to ensure your data is shared on the platforms that matter most in the evolving local search ecosystem
      • Listing status and real-time updates to know the precise status of your location data
      • Automated detection and permanent duplicate closure, taking the manual work out of the process and saving you significant time
      • Integrations with Google and Facebook to gain deeper insights, reporting, and management for your location’s profiles
      • An even better data clean-up process to ensure valid data is formatted properly for distribution
      • A new activity feed to alert you to any changes to your location’s listings
      • A suggestion engine to provide recommendations to increase accuracy, completeness, and consistency of your location data

      Additional features available include:

      • Managing reviews of your locations to keep your finger on the pulse of what customers are saying
      • Social posting to engage with consumers and alert them to news, offers, and other updates
      • Store locator and landing pages to share location data easily with both customers and search engines (available for Moz Local customers with 100 or more locations)

      Remember, this is just the beginning. There’s more to come in 2019, and you can expect ongoing communications from us as further new feature sets emerge!

      When is it happening?

      We’ll be rolling out all the new changes beginning on June 12th. As with some large changes, this update will take a few days to complete, so some people will see the changes immediately while for others it may take up to a week. By June 21st, everyone should be able to explore the new Moz Local experience!

      Don’t worry — we’ll have several more communications between now and then to help you prepare. Keep an eye out for our webinar and training materials to help ensure a smooth transition to the new Moz Local.

      Are any metrics/scores changing?

      Some of our reporting metrics will look different in the new Moz Local. We’ll be sharing more information on these metrics and how to use them soon, but for now, here’s a quick overview of changes you can expect:

      • Profile Completeness: Listing Score will be replaced by the improved Profile Completeness metric. This new feature will give you a better measurement of how complete your data is, what’s missing from it, and clear prompts to fill in any lacking information.
      • Improved listing status reporting: Partner Accuracy Score will be replaced by improved reporting on listing status with all of our partners, including continuous information about the data they’ve received from us. You’ll be able to access an overview of your distribution network, so that you can see which sites your business is listed on. Plus, you’ll be able to go straight to the live listing with a single click.
      • Visibility Index: Though they have similar names, Visibility Score is being replaced by something slightly different with the new and improved Visibility Index, which indicates how the data you’ve provided us about a location matches (or mismatches) the information on your live listings.
      • New ways to measure and act on listing reach: Reach Score will be leaving us in favor of even more relevant measurement via the Visibility Index and Profile Completeness metrics. The new Moz Local will include more actionable information to ensure your listings are accurate and complete.

      Other FAQs

      You’ll likely have questions if you’re a current Moz Local customer or are considering becoming one. Please check out our resource center for further details, and feel free to leave us a question down in the comments — we’ll be on point to respond to any wonderings or concerns you might have!

      Head to the FAQs

      Where is Moz heading with this?

      As a veteran local SEO, I’m finding the developments taking place with our software particularly exciting because, like you, I see how local search and local search marketing have matured over the past decade.

      I’ve closely watched the best minds in our industry moving toward a holistic vision of how authenticity, customer engagement, data, analysis, and other factors underpin local business success. And we’ve all witnessed Google’s increasingly sophisticated presentation of local business information evolve and grow. It’s been quite a ride!

      At every level of local commerce, owners and marketers deserve tools that bring order out of what can seem like chaos. We believe you deserve software that yields strategy. As our CEO, Sarah Bird, recently said of Moz,

      “We are big believers in the power of local SEO.”

      So the secret is finally out, and you can see where Moz is heading with the local side of our product lineup. It’s our serious plan to devote everything we’ve got into putting the power of local SEO into your hands.


      How Often Does Google Update Its Algorithm?

      Posted by Dr-Pete

      In 2018, Google reported an incredible 3,234 improvements to search. That’s more than 8 times the number of updates they reported in 2009 — less than a decade ago — and an average of almost 9 per day. How have algorithm updates evolved over the past decade, and how can we possibly keep tabs on all of them? Should we even try?

      To kick this off, here’s a list of every confirmed count we have (sources at end of post):

      • 2018 – 3,234 “improvements”
      • 2017 – 2,453 “changes”
      • 2016 – 1,653 “improvements”
      • 2013 – 890 “improvements”
      • 2012 – 665 “launches”
      • 2011 – 538 “launches”
      • 2010 – 516 “changes”
      • 2009 – 350–400 “changes”

      Unfortunately, we don’t have confirmed data for 2014-2015 (if you know differently, please let me know in the comments).

      A brief history of update counts

      Our first peek into this data came in spring of 2010, when Google’s Matt Cutts revealed that “on average, [Google] tends to roll out 350–400 things per year.” It wasn’t an exact number, but given that SEOs at the time (and to this day) were tracking at most dozens of algorithm changes, the idea of roughly one change per day was eye-opening.

      In fall of 2011, Eric Schmidt was called to testify before Congress, and revealed our first precise update count and an even more shocking scope of testing and changes:

      “To give you a sense of the scale of the changes that Google considers, in 2010 we conducted 13,311 precision evaluations to see whether proposed algorithm changes improved the quality of its search results, 8,157 side-by-side experiments where it presented two sets of search results to a panel of human testers and had the evaluators rank which set of results was better, and 2,800 click evaluations to see how a small sample of real-life Google users responded to the change. Ultimately, the process resulted in 516 changes that were determined to be useful to users based on the data and, therefore, were made to Google’s algorithm.”

      Later, Google would reveal similar data in an online feature called “How Search Works.” Unfortunately, some of the earlier years are only available via the Internet Archive, but here’s a screenshot from 2012:

      Note that Google uses “launches” and “improvements” somewhat interchangeably. This diagram provided a fascinating peek into Google’s process, and also revealed a startling jump from 13,311 precision evaluations (changes that were shown to human evaluators) to 118,812 in just two years.

      Is the Google algorithm heating up?

      Since MozCast has kept the same keyword set since almost the beginning of data collection, we’re able to make some long-term comparisons. The graph below represents five years of temperatures. Note that the system was originally tuned (in early 2012) to an average temperature of 70°F. The redder the bar, the hotter the temperature …

      Click to open a high-resolution version in a new tab

      You’ll notice that the temperature ranges aren’t fixed — instead, I’ve split the label into eight roughly equal buckets (i.e. they represent the same number of days). This gives us a little more sensitivity in the more common ranges.

      The trend is pretty clear. The latter half of this 5-year timeframe has clearly been hotter than the first half. While a warming trend is evident, though, it’s not a steady increase over time like Google’s update counts might suggest. Instead, we see a stark shift in the fall of 2016 and a very hot summer of 2017. More recently, we’ve actually seen signs of cooling. Below are the means and medians for each year (note that 2014 and 2019 are partial years):

      • 2019 – 83.7° / 82.0°
      • 2018 – 89.9° / 88.0°
      • 2017 – 94.0° / 93.7°
      • 2016 – 75.1° / 73.7°
      • 2015 – 62.9° / 60.3°
      • 2014 – 65.8° / 65.9°

      Note that search engine rankings are naturally noisy, and our error measurements tend to be large (making day-to-day changes hard to interpret). The difference from 2015 to 2017, however, is clearly significant.

      Are there really 9 updates per day?

      No, there are only 8.86 – feel better? Ok, that’s probably not what you meant. Even back in 2009, Matt Cutts said something pretty interesting that seems to have been lost in the mists of time…

      “We might batch [algorithm changes] up and go to a meeting once a week where we talk about 8 or 10 or 12 or 6 different things that we would want to launch, but then after those get approved … those will roll out as we can get them into production.”

      In 2016, I did a study of algorithm flux that demonstrated a weekly pattern evident during clearer episodes of ranking changes. From a software engineering standpoint, this just makes sense — updates have to be approved and tend to be rolled out in batches. So, while measuring a daily average may help illustrate the rate of change, it probably has very little basis in the reality of how Google handles algorithm updates.

      Do all of these algo updates matter?

      Some changes are small. Many improvements are likely not even things we in the SEO industry would consider “algorithm updates” — they could be new features, for example, or UI changes.

      As SERP verticals and features evolve, and new elements are added, there are also more moving parts subject to being fixed and improved. Local SEO, for example, has clearly seen an accelerated rate of change over the past 2-3 years. So, we’d naturally expect the overall rate of change to increase.

      A lot of this is also in the eye of the beholder. Let’s say Google makes an update to how they handle misspelled words in Korean. For most of us in the United States, that change isn’t going to be actionable. If you’re a Korean brand trying to rank for a commonly misspelled, high-volume term, this change could be huge. Some changes also are vertical-specific, representing radical change for one industry and little or no impact outside that niche.

      On the other hand, you’ll hear comments in the industry along the lines of “There are 3,000 changes per year; stop worrying about it!” To me that’s like saying “The weather changes every day; stop worrying about it!” Yes, not every weather report is interesting, but I still want to know when it’s going to snow or if there’s a tornado coming my way. Recognizing that most updates won’t affect you is fine, but it’s a fallacy to stretch that into saying that no updates matter or that SEOs shouldn’t care about algorithm changes.

      Ultimately, I believe it helps to know when major changes happen, if only to understand whether rankings shifted due to something we did or something Google did. It’s also clear that the rate of change has accelerated, no matter how you measure it, and there’s no evidence to suggest that Google is slowing down.


      Appendix A: Update count sources

      2009 – Google’s Matt Cutts, video (Search Engine Land)
      2010 – Google’s Eric Schmidt, testifying before Congress (Search Engine Land)
      2012 – Google’s “How Search Works” page (Internet Archive)
      2013 – Google’s Amit Singhal, Google+ (Search Engine Land)
      2016 – Google’s “How Search Works” page (Internet Archive)
      2017 – Unnamed Google employees (CNBC)
      2018 – Google’s “How Search Works” page (Google.com)


      SEO & Progressive Web Apps: Looking to the Future

      Posted by tombennet

      Practitioners of SEO have always been mistrustful of JavaScript.

      This is partly based on experience; the ability of search engines to discover, crawl, and accurately index content which is heavily reliant on JavaScript has historically been poor. But it’s also habitual, born of a general wariness towards JavaScript in all its forms that isn’t based on understanding or experience. This manifests itself as dependence on traditional SEO techniques that have not been relevant for years, and a conviction that to be good at technical SEO does not require an understanding of modern web development.

      As Mike King wrote in his post The Technical SEO Renaissance, these attitudes are contributing to “an ever-growing technical knowledge gap within SEO as a marketing field, making it difficult for many SEOs to solve our new problems”. They also put SEO practitioners at risk of being left behind, since too many of us refuse to explore – let alone embrace – technologies such as Progressive Web Apps (PWAs), modern JavaScript frameworks, and other such advancements which are increasingly being seen as the future of the web.

      In this article, I’ll be taking a fresh look at PWAs. As well as exploring implications for both SEO and usability, I’ll be showcasing some modern frameworks and build tools which you may not have heard of, and suggesting ways in which we need to adapt if we’re to put ourselves at the technological forefront of the web.

      1. Recap: PWAs, SPAs, and service workers

      Progressive Web Apps are essentially websites which provide a user experience akin to that of a native app. Features like push notifications enable easy re-engagement with your audience, while users can add their favorite sites to their home screen without the complication of app stores. PWAs can continue to function offline or on low-quality networks, and they allow a top-level, full-screen experience on mobile devices which is closer to that offered by native iOS and Android apps.

      Best of all, PWAs do this while retaining – and even enhancing – the fundamentally open and accessible nature of the web. As suggested by the name they are progressive and responsive, designed to function for every user regardless of their choice of browser or device. They can also be kept up-to-date automatically and — as we shall see — are discoverable and linkable like traditional websites. Finally, it’s not all or nothing: existing websites can deploy a limited subset of these technologies (using a simple service worker) and start reaping the benefits immediately.

      The spec is still fairly young, and naturally, there are areas which need work, but that doesn’t stop them from being one of the biggest advancements in the capabilities of the web in a decade. Adoption of PWAs is growing rapidly, and organizations are discovering the myriad of real-world business goals they can impact.

      You can read more about the features and requirements of PWAs over on Google Developers, but two of the key technologies which make PWAs possible are:

      • App Shell Architecture: Commonly achieved using a JavaScript framework like React or Angular, this refers to a way of building single page apps (SPAs) which separates logic from the actual content. Think of the app shell as the minimal HTML, CSS, and JS your app needs to function; a skeleton of your UI which can be cached.
      • Service Workers: A special script that your browser runs in the background, separate from your page. It essentially acts as a proxy, intercepting and handling network requests from your page programmatically.

      Note that these technologies are not mutually exclusive; the single page app model (brought to maturity with AngularJS in 2010) obviously predates service workers and PWAs by some time. As we shall see, it’s also entirely possible to create a PWA which isn’t built as a single page app. For the purposes of this article, however, we’re going to be focusing on the ‘typical’ approach to developing modern PWAs, exploring the SEO implications — and opportunities — faced by teams that choose to join the rapidly-growing number of organizations that make use of the two technologies described above.
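
      To make the service worker concept a little more concrete, here’s a minimal sketch of how a page might register one. The /sw.js path is a placeholder, and the feature check keeps unsupported browsers from throwing errors:

      // Register a service worker if the browser supports it.
      if ("serviceWorker" in navigator) {
        navigator.serviceWorker.register("/sw.js")
          .then((registration) => {
            console.log("Service worker registered with scope:", registration.scope);
          })
          .catch((error) => {
            console.error("Service worker registration failed:", error);
          });
      }
      // Inside sw.js itself, a "fetch" event listener is what lets the worker act as
      // a proxy, e.g. serving cached responses when the network is unavailable.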

      We’ll start with the app shell architecture and the rendering implications of the single page app model.

      2. The app shell architecture

      URLs

      In a nutshell, the app shell architecture involves aggressively caching static assets (the bare minimum of UI and functionality) and then loading the actual content dynamically, using JavaScript. Most modern JavaScript SPA frameworks encourage something resembling this approach, and the separation of logic and content in this way benefits both speed and usability. Interactions feel instantaneous, much like those on a native app, and data usage can be highly economical.

      Credit to https://developers.google.com/web/fundamentals/architecture/app-shell

      As I alluded to in the introduction, a heavy reliance on client-side JavaScript is a problem for SEO. Historically, many of these issues centered around the fact that while search crawlers require unique URLs to discover and index content, single page apps don’t need to change the URL for each state of the application or website (hence the phrase ‘single page’). The reliance on fragment identifiers — which aren’t sent as part of an HTTP request — to dynamically manipulate content without reloading the page was a major headache for SEO. Legacy solutions involved replacing the hash with a so-called hashbang (#!) and the _escaped_fragment_ parameter, a hack which has long-since been deprecated and which we won’t be exploring today.

      Thanks to the HTML5 history API and pushState method, we now have a better solution. The browser’s URL bar can be changed using JavaScript without reloading the page, thereby keeping it in sync with the state of your application or site and allowing the user to make effective use of the browser’s ‘back’ button. While this solution isn’t a magic bullet — your server must be configured to respond to requests for these deep URLs by loading the app in its correct initial state — it does provide us with the tools to solve the problem of URLs in SPAs.

      // Run this in your console to modify the URL in your 
      // browser - note that the page doesn't actually reload. 
      history.pushState(null, "Page 2", "/page2.html");
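Keeping the back button working means also listening for the browser’s popstate event and re-rendering the appropriate state. Here is a complementary sketch, where renderRoute is a hypothetical stand-in for whatever your application uses to render a given path:

// A hypothetical render function: in a real SPA this would update the view.
function renderRoute(path, state) {
  console.log('Rendering route:', path, state);
}

// Handle the browser's back/forward buttons so pushState-driven navigation
// stays in sync with the UI.
window.addEventListener('popstate', function (event) {
  // event.state holds whatever state object was passed to pushState
  renderRoute(window.location.pathname, event.state);
});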

      The bigger problem facing SEO today is actually much easier to understand: rendering content, namely when and how it gets done.

      Rendering content

      Note that when I refer to rendering here, I’m referring to the process of constructing the HTML. We’re focusing on how the actual content gets to the browser, not the process of drawing pixels to the screen.

In the early days of the web, things were simpler on this front. The server would typically return all the HTML that was necessary to render a page. Nowadays, however, many sites which utilize a single page app framework deliver only minimal HTML from the server and delegate the heavy lifting to the client (be that a user or a bot). Given the scale of the web, this requires a lot of time and computational resources, and as Google made clear at its I/O conference in 2018, this poses a major problem for search engines:

      “The rendering of JavaScript-powered websites in Google Search is deferred until Googlebot has resources available to process that content.”

      On larger sites, this second wave of indexation can sometimes be delayed for several days. On top of this, you are likely to encounter a myriad of problems with crucial information like canonical tags and metadata being missed completely. I would highly recommend watching the video of Google’s excellent talk on this subject for a rundown of some of the challenges faced by modern search crawlers.

      Google is one of the very few search engines that renders JavaScript at all. What’s more, it does so using a web rendering service that until very recently was based on Chrome 41 (released in 2015). Obviously, this has implications outside of just single page apps, and the wider subject of JavaScript SEO is a fascinating area right now. Rachel Costello’s recent white paper on JavaScript SEO is the best resource I’ve read on the subject, and it includes contributions from other experts like Bartosz Góralewicz, Alexis Sanders, Addy Osmani, and a great many more.

      For the purposes of this article, the key takeaway here is that in 2019 you cannot rely on search engines to accurately crawl and render your JavaScript-dependent web app. If your content is rendered client-side, it will be resource-intensive for Google to crawl, and your site will underperform in search. No matter what you’ve heard to the contrary, if organic search is a valuable channel for your website, you need to make provisions for server-side rendering.

      But server-side rendering is a concept which is frequently misunderstood…

      “Implement server-side rendering”

      This is a common SEO audit recommendation which I often hear thrown around as if it were a self-contained, easily-actioned solution. At best it’s an oversimplification of an enormous technical undertaking, and at worst it’s a misunderstanding of what’s possible/necessary/beneficial for the website in question. Server-side rendering is an outcome of many possible setups and can be achieved in many different ways; ultimately, though, we’re concerned with getting our server to return static HTML.

So, what are our options? Let’s break the concept of server-side rendered content down a little. These are the high-level approaches which Google outlined at the aforementioned I/O conference:

• Dynamic Rendering — Here, normal browsers get the ‘standard’ web app which requires client-side rendering, while bots (such as Googlebot and social media services) are served with static snapshots. This involves adding an additional step onto your server infrastructure, namely a service which fetches your web app, renders the content, then returns that static HTML to bots based on their user agent (i.e. UA sniffing). Historically this was done with a service like PhantomJS (now deprecated and no longer developed), while today Puppeteer (headless Chrome) can perform a similar function. The main advantage is that it can often be bolted onto your existing infrastructure; a minimal sketch of the approach follows this list.
• Hybrid Rendering — This is Google’s long-term recommendation, and it’s absolutely the way to go for newer site builds. In short, everyone — bots and humans — gets the initial view served as fully-rendered static HTML. Crawlers can continue to request URLs in this way and will get static content each time, while on normal browsers, JavaScript takes over after the initial page load. This is a great solution in theory, and it comes with many other advantages for speed and usability too; more on that soon.
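By way of illustration, here is a minimal sketch of the dynamic rendering approach using Express and Puppeteer; the bot-detection regex, port, and app origin are illustrative, and a real implementation would need snapshot caching, timeouts, and a more careful user agent check.

// A minimal sketch of dynamic rendering with Express and Puppeteer.
// Illustrative only: the regex, port, and app origin are placeholders.
const express = require('express');
const puppeteer = require('puppeteer');

const BOT_UA = /googlebot|bingbot|twitterbot|facebookexternalhit/i;
const app = express();

app.get('*', async (req, res, next) => {
  if (!BOT_UA.test(req.headers['user-agent'] || '')) {
    return next(); // Normal browsers get the client-rendered app as usual
  }
  // Bots get a static snapshot rendered by headless Chrome
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(`http://localhost:8080${req.originalUrl}`, {
    waitUntil: 'networkidle0',
  });
  const html = await page.content();
  await browser.close();
  res.send(html);
});

app.listen(3000);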

The latter is cleaner and doesn’t involve UA sniffing. It’s also worth clarifying that ‘hybrid rendering’ is not a single solution — it’s an outcome of many possible approaches to making static prerendered content available server-side. Let’s break down a couple of ways in which such an outcome can be achieved.

      Isomorphic/universal apps

      This is one way in which you might achieve a ‘hybrid rendering’ setup. Isomorphic applications use JavaScript which runs on both the server and the client. This is made possible thanks to the advent of Node.js, which – among many other things – allows developers to write code which can run on the backend as well as in the browser.

      Typically you’ll configure your framework (React, Angular Universal, whatever) to run on a Node server, prerendering some or all of the HTML before it’s sent to the client. Your server must, therefore, be configured to respond to deep URLs by rendering HTML for the appropriate page. In normal browsers, this is the point at which the client-side application will seamlessly take over. The server-rendered static HTML for the initial view is ‘rehydrated’ (brilliant term) by the browser, turning it back into a single page app and executing subsequent navigation events with JavaScript.
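As a rough sketch of the server half of such a setup, using Express and React’s server renderer, something like the following is involved; the App component, HTML template, and bundle path are hypothetical stand-ins for your own, and a real setup would also handle routing, data fetching, and serializing initial state.

// A minimal sketch of server-side rendering with Express and React.
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');

// A trivial stand-in for your real root component.
const App = (props) => React.createElement('h1', null, `Hello from ${props.url}`);

const app = express();

app.get('*', (req, res) => {
  // Render the initial view to static HTML on the server...
  const html = renderToString(React.createElement(App, { url: req.url }));
  res.send(`<!DOCTYPE html>
<html>
  <head><title>Isomorphic sketch</title></head>
  <body>
    <div id="root">${html}</div>
    <!-- ...then the client bundle rehydrates it into a running SPA, e.g. -->
    <!-- ReactDOM.hydrate(<App />, document.getElementById('root')) -->
    <script src="/bundle.js"></script>
  </body>
</html>`);
});

app.listen(3000);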

      Done well, this setup can be fantastic since it offers the usability benefits of client-side rendering, the SEO advantages of server-side rendering, and a rapid first paint (even if Time to Interactive is often negatively impacted by the rehydration as JS kicks in). For fear of oversimplifying the task, I won’t go into too much more detail here, but the key point is that while isomorphic JavaScript / true server-side rendering can be a powerful solution, it is often enormously complex to set up.

      So, what other options are there? If you can’t justify the time or expense of a full isomorphic setup, or if it’s simply overkill for what you’re trying to achieve, are there any other ways you can reap the benefits of the single page app model — and hybrid rendering setup — without sabotaging your SEO?

      Prerendering/JAMstack

      Having rendered content available server-side doesn’t necessarily mean that the rendering process itself needs to happen on the server. All we need is for rendered HTML to be there, ready to serve to the client; the rendering process itself can happen anywhere you like. With a JAMstack approach, rendering of your content into HTML happens as part of your build process.

      I’ve written about the JAMstack approach before. By way of a quick primer, the term stands for JavaScript, APIs, and markup, and it describes a way of building complex websites without server-side software. The process of assembling a site from front-end component parts — a task a traditional site might achieve with WordPress and PHP — is executed as part of the build process, while interactivity is handled client-side using JavaScript and APIs.

      Think of it this way: everything lives in your Git repository. Your content is stored as plain text markdown files (editable via a headless CMS or other API-based solution) and your page templates and assembly logic are written in Go, JavaScript, Ruby, or whatever language your preferred site generator happens to use. Your site can be built into static HTML on any computer with the appropriate set of command line tools before it’s hosted anywhere. The resulting set of easily-cached static files can often be securely hosted on a CDN for next to nothing.

      I honestly think static site generators – or rather the principles and technologies which underpin them — are the future. There’s every chance I’m wrong about this, but the power and flexibility of the approach should be clear to anyone who’s used modern npm-based automation software like Gulp or Webpack to author their CSS or JavaScript. I’d challenge anyone to test the deep Git integration offered by specialist webhost Netlify in a real-world project and still think that the JAMstack approach is a fad.

      The popularity of static site generators on GitHub, generated using https://stars.przemeknowak.com

      The significance of a JAMstack setup to our discussion of single page apps and prerendering should be fairly obvious. If our static site generator can assemble HTML based on templates written in Liquid or Handlebars, why can’t it do the same with JavaScript?

      There is a new breed of static site generator which does just this. Frequently powered by React or Vue.js, these programs allow developers to build websites using cutting-edge JavaScript frameworks and can easily be configured to output SEO-friendly, static HTML for each page (or ‘route’). Each of these HTML files is fully rendered content, ready for consumption by humans and bots, and serves as an entry point into a complete client-side application (i.e. a single page app). This is a perfect execution of what Google termed “hybrid rendering”, though the precise nature of the pre-rendering process sets it quite apart from an isomorphic setup.

      A great example is GatsbyJS, which is built in React and GraphQL. I won’t go into too much detail, but I would encourage everyone who’s read this far to check out their homepage and excellent documentation. It’s a well-supported tool with a reasonable learning curve, an active community (a feature-packed v2.0 was released in September), an extensible plugin-based architecture, rich integrations with many CMSs, and it allows developers to utilize modern frameworks like React without sabotaging their SEO. There’s also Gridsome, based on VueJS, and React Static which — you guessed it — uses React.
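To give a flavor of how this works in Gatsby’s case, pages can be generated from data at build time via the createPages API in gatsby-node.js. The GraphQL query and template path below are illustrative rather than taken from any particular project:

// gatsby-node.js: a simplified sketch of generating static pages at
// build time. The markdown query and template path are illustrative.
exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;
  const result = await graphql(`
    {
      allMarkdownRemark {
        edges {
          node {
            fields { slug }
          }
        }
      }
    }
  `);

  result.data.allMarkdownRemark.edges.forEach(({ node }) => {
    // Each route becomes a fully rendered HTML file at build time
    createPage({
      path: node.fields.slug,
      component: require.resolve('./src/templates/post.js'),
      context: { slug: node.fields.slug },
    });
  });
};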

      Nike’s recent Just Do It campaign, which utilized the React-powered static site generator GatsbyJS and is hosted on Netlify.

      Enterprise-level adoption of these platforms looks set to grow; GatsbyJS was used by Nike for their Just Do It campaign, Airbnb for their engineering site airbnb.io, and Braun have even used it to power a major e-commerce site. Finally, our friends at SEOmonitor used it to power their new website.

      But that’s enough about single page apps and JavaScript rendering for now. It’s time we explored the second of our two key technologies underpinning PWAs. Promise you’ll stay with me to the end (haha, nerd joke), because it’s time to explore Service Workers.

      3. Service Workers

      First of all, I should clarify that the two technologies we’re exploring — SPAs and service workers — are not mutually exclusive. Together they underpin what we commonly refer to as a Progressive Web App, yes, but it’s also possible to have a PWA which isn’t an SPA. You could also integrate a service worker into a traditional static website (i.e. one without any client-side rendered content), which is something I believe we’ll see happening a lot more in the near future. Finally, service workers operate in tandem with other technologies like the Web App Manifest, something that my colleague Maria recently explored in more detail in her excellent guide to PWAs and SEO.

      Ultimately, though, it is service workers which make the most exciting features of PWAs possible. They’re one of the most significant changes to the web platform in its history, and everyone whose job involves building, maintaining, or auditing a website needs to be aware of this powerful new set of technologies. If, like me, you’ve been eagerly checking Jake Archibald’s Is Service Worker Ready page for the last couple of years and watching as adoption by browser vendors has grown, you’ll know that the time to start building with service workers is now.

      We’re going to explore what they are, what they can do, how to implement them, and what the implications are for SEO.

      What can service workers do?

      A service worker is a special kind of JavaScript file which runs outside of the main browser thread. It sits in-between the browser and the network, and its powers include:

      • Intercepting network requests and deciding what to do with them programmatically. The worker might go to network as normal, or it might rely solely on the cache. It could even fabricate an entirely new response from a variety of sources. That includes constructing HTML.
• Preloading files during service worker installation. For SPAs this commonly includes the ‘app shell’ we discussed earlier, while simple static websites might opt to preload all HTML, CSS, and JavaScript, ensuring basic functionality is maintained while offline (see the sketch after this list).
      • Handling push notifications, similar to a native app. This means websites can get permission from users to deliver notifications, then rely on the service worker to receive messages and execute them even when the browser is closed.
      • Executing background sync, deferring network operations until connectivity has improved. This might be an ‘outbox’ for a webmail service or a photo upload facility. No more “request failed, please try again later” – the service worker will handle it for you at an appropriate time.
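By way of illustration, precaching during installation can be as simple as the following sketch; the cache name and file list are illustrative:

// sw.js: a minimal sketch of precaching during installation.
const CACHE_NAME = 'static-v1';
const PRECACHE_URLS = ['/', '/styles/main.css', '/scripts/app.js', '/offline.html'];

self.addEventListener('install', (event) => {
  // Delay 'installed' status until the listed assets are in the cache
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});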

      The benefits of these kinds of features go beyond the obvious usability perks. As well as driving adoption of HTTPS across the web (all the major browsers will only register service workers on the secure protocol), service workers are transformative when it comes to speed and performance. They underpin new approaches and ideas like Google’s PRPL Pattern, since we can maximize caching efficiency and minimize reliance on the network. In this way, service workers will play a key role in making the web fast and accessible for the next billion web users.

      So yeah, they’re an absolute powerhouse.

      Implementing a service worker

      Rather than doing a bad job of writing a basic tutorial here, I’m instead going to link to some key resources. After all, you are in the best position to know how deep your understanding of service workers needs to be.

      The MDN Docs are a good place to learn more about service workers and their capabilities. If you’re already confident with the essentials of web development and enjoy a learn-by-doing approach, I’d highly recommend completing Google’s PWA training course. It includes a whole practical exercise on service workers, which is a great way to familiarize yourself with the basics. If ES6 and promises aren’t yet a part of your JavaScript repertoire, prepare for a baptism of fire.

The key thing to understand — and which you’ll realize very quickly once you start experimenting — is that service workers hand over an incredible level of control to developers. Unlike previous attempts to solve the connectivity conundrum (such as the ill-fated AppCache), service workers don’t enforce any specific patterns on your work; they’re a set of tools for you to write your own solutions to the problems you’re facing.

      One consequence of this is that they can be very complex. Registering and installing a service worker is not a simple exercise, and any attempts to cobble one together by copy-pasting from StackExchange are doomed to failure (seriously, don’t do this). There’s no such thing as a ready-made service worker for your site — if you’re to author a suitable worker, you need to understand the infrastructure, architecture, and usage patterns of your website. Uncle Ben, ever the web development guru, said it best: with great power comes great responsibility.
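That said, the entry point itself is small: registering a worker from a page takes only a few lines, and all of the real complexity lives in the worker script. A minimal registration sketch, with an illustrative file path and scope, looks like this:

// Register a service worker from your page. The path and scope are
// illustrative; the real complexity lives inside sw.js itself.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/sw.js', { scope: '/' })
      .then((registration) => {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch((error) => {
        console.error('Service worker registration failed:', error);
      });
  });
}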

      One last thing: you’ll probably be surprised how many sites you visit are already using a service worker. Head to chrome://serviceworker-internals/ in Chrome or about:debugging#workers in Firefox to see a list.

      Service workers and SEO

      In terms of SEO implications, the most relevant thing about service workers is probably their ability to hijack requests and modify or fabricate responses using the Fetch API. What you see in ‘View Source’ and even on the Network tab is not necessarily a representation of what was returned from the server. It might be a cached response or something constructed by the service worker from a variety of different sources.

      Credit: https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API

      Here’s a practical example:

      • Head to the GatsbyJS homepage
      • Hit the link to the ‘Docs’ page.
      • Right-click – View Source

      No content, right? Just some inline scripts and styles and empty HTML elements — a classic client-side JavaScript app built in React. Even if you open the Network tab and refresh the page, the Preview and Response tabs will tell the same story. The actual content only appears in the Element inspector, because the DOM is being assembled with JavaScript.

      Now run a curl request for the same URL (https://www.gatsbyjs.org/docs/), or fetch the page using Screaming Frog. All the content is there, along with proper title tags, canonicals, and everything else you might expect from a page rendered server-side. This is what a crawler like Googlebot will see too.

      This is because the website uses hybrid rendering and a service worker — installed in your browser — is handling subsequent navigation events. There is no need for it to fetch the raw HTML for the Docs page from the server because the client-side application is already up-and-running – thus, View Source shows you what the service worker returned to the application, not what the network returned. Additionally, these pages can be reloaded while you’re offline thanks to the service worker’s effective use of the cache.

You can easily spot which responses came from the service worker using the Network tab — note the ‘(from ServiceWorker)’ label in the Size column.

      On the Application tab, you can see the service worker which is running on the current page along with the various caches it has created. You can disable or bypass the worker and test any of the more advanced functionality it might be using. Learning how to use these tools is an extremely valuable exercise; I won’t go into details here, but I’d recommend studying Google’s Web Fundamentals tutorial on debugging service workers.

I’ve made a conscious effort to keep code snippets to a bare minimum in this article, but grant me this one: a simple service worker can use the Fetch API to handle requests programmatically, and the degree of control we’re afforded is remarkable.
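What follows is a minimal, non-production sketch of such a handler; the cache name and the /offline.html fallback path are illustrative:

// sw.js: a minimal, non-production sketch of a fetch handler.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      if (cachedResponse) {
        return cachedResponse; // 1. Serve from the cache if we can
      }
      // 2. Otherwise go to the network as normal...
      return fetch(event.request).catch(() => {
        // 3. ...and if that fails (e.g. offline), serve a cached fallback page
        return caches.match('/offline.html');
      });
    })
  );
});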


      I hope that this (hugely simplified and non-production ready) example illustrates a key point, namely that we have extremely granular control over how resource requests are handled. In the example above we’ve opted for a simple try-cache-first, fall-back-to-network, fall-back-to-custom-page pattern, but the possibilities are endless. Developers are free to dictate how requests should be handled based on hostnames, directories, file types, request methods, cache freshness, and loads more. Responses – including entire pages – can be fabricated by the service worker. Jake Archibald explores some common methods and approaches in his Offline Cookbook.

      The time to learn about the capabilities of service workers is now. The skillset required for modern technical SEO has a fair degree of overlap with that of a web developer, and today, a deep understanding of the dev tools in all major browsers – including service worker debugging – should be regarded as a prerequisite.

      4. Wrapping Up

      SEOs need to adapt

      Until recently, it’s been too easy to get away with not understanding the consequences and opportunities posed by PWAs and service workers.

      These were cutting-edge features which sat on the periphery of what was relevant to search marketing, and the aforementioned wariness of many SEOs towards JavaScript did nothing to encourage experimentation. But PWAs are rapidly on their way to becoming a norm, and it will soon be impossible to do an effective job without understanding the mechanics of how they function. To stay relevant as a technical SEO (or SEO Engineer, to borrow another term from Mike King), you should put yourself at the forefront of these kinds of paradigm-shifting developments. The technical SEO who is illiterate in web development is already an anachronism, and I believe that further divergence between the technical and content-driven aspects of search marketing is no bad thing. Specialize!

Upon learning that a development team is adopting a new JavaScript framework for a new site build, it’s not uncommon for SEOs to react with a degree of cynicism. I’m certainly guilty of joking about developers being attracted to the latest shiny technology or framework, and about how rapidly the world of JavaScript development seems to evolve, with layer upon layer of abstraction and automation added to what — from the outside — can often seem like a leaning tower of a development stack. But it’s worth taking the time to understand why frameworks are chosen, when technologies are likely to start being used in production, and how these decisions will impact SEO.

Instead of criticizing a single page app framework’s 404 handling or internal linking, for example, it would be far better to be able to offer meaningful recommendations which are grounded in an understanding of how the framework actually works. As Jono Alderson observed in his talk on the Democratization of SEO, contributions to open source projects are more valuable in spreading appreciation and awareness of SEO than repeatedly fixing the same problems on an ad-hoc basis.

      Beyond SEO

      One last thing I’d like to mention: PWAs are such a transformative set of technologies that they obviously have consequences which reach far beyond just SEO. Other areas of digital marketing are directly impacted too, and from my standpoint, one of the most interesting is analytics.

If your website is partially or fully functional while offline, have you adapted your analytics setup to account for this? If push notification subscriptions are a KPI for your website, are you tracking this as a goal? Since service workers do not have access to the Window object, tracking these events is not possible with ‘normal’ tracking code. Instead, it’s necessary to configure your service worker to build hits using the Measurement Protocol, queue them if necessary, and send them directly to the Google Analytics servers.
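As a rough sketch of what that can look like (the tracking ID, client ID handling, and event values below are placeholders, and a real implementation would also queue hits while offline):

// Inside a service worker: send a hit to Google Analytics via the
// Measurement Protocol. Values are placeholders for illustration.
function trackEvent(category, action) {
  const params = new URLSearchParams({
    v: '1',                     // Protocol version
    tid: 'UA-XXXXXXXX-1',       // Your tracking ID (placeholder)
    cid: 'anonymous-client-id', // A client ID you generate and persist
    t: 'event',                 // Hit type
    ec: category,               // Event category
    ea: action,                 // Event action
  });
  return fetch('https://www.google-analytics.com/collect', {
    method: 'POST',
    body: params.toString(),
  });
}

self.addEventListener('push', (event) => {
  event.waitUntil(trackEvent('Push notifications', 'received'));
});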

      This is a fascinating area that I’ve been exploring a lot lately, and you can read the first post in my series of articles on PWA analytics over on the Builtvisible blog.

      That’s all from me for now! Thanks for reading. If you have any questions or comments, please leave a message below or drop me a line on Twitter @tomcbennet.

      Many thanks to Oliver Mason and Will Nye for their feedback on an early draft of this article.


      Visualizing Speed Metrics to Improve SEO, UX, & Revenue – Whiteboard Friday

      Posted by sam.marsden

      We know how important page speed is to Google, but why is that, exactly? With increasing benefits to SEO, UX, and customer loyalty that inevitably translates to revenue, there are more reasons than ever to both focus on site speed and become adept at communicating its value to devs and stakeholders. In today’s Whiteboard Friday, Sam Marsden takes us point-by-point through how Google understands speed metrics, the best ways to access and visualize that data, and why it all matters.

      Click on the whiteboard image above to open a high-resolution version in a new tab!

      Video Transcription

      Hi, Moz fans, and welcome to another Whiteboard Friday. My name is Sam Marsden, and I work as an SEO at web crawling platform DeepCrawl. Today we’re going to be talking about how Google understands speed and also how we can visualize some of the performance metrics that they provide to benefit things like SEO, to improve user experience, and to ultimately generate more revenue from your site.

      Google & speed

Let’s start by taking a look at how Google actually understands speed. We all know that a faster site generally results in a better user experience, but Google didn’t actually incorporate that directly into their algorithms until recently. It wasn’t until the mobile speed update, back in July, that Google really started looking at speed, and even now it’s likely only a secondary ranking signal, because relevance is always going to be much more important than how quickly the page actually loads.

      But the interesting thing with this update was that Google has actually confirmed some of the details about how they understand speed. We know that it’s a mix of lab and field data. They’re bringing in lab data from Lighthouse, from the Chrome dev tools and mixing that with data from anonymized Chrome users. So this is available in the Chrome User Experience Report, otherwise known as CrUX.

      CrUX metrics

Now this is a publicly available database, and it includes five different metrics. You’ve got first paint, which is when anything loads on the page. You’ve then got first contentful paint, which is when some text or an image loads. Then you’ve got DOM content loaded, which is, as the name suggests, once the DOM is loaded. You’ve also got onload, which is when any additional scripts have loaded. That’s kind of like the full page load. The fifth and final metric is first input delay, and that’s the time between when a user first interacts with your site and when the browser is actually able to respond to that interaction.

      These are the metrics that make up the CrUX database, and you can actually access this CrUX data in a number of different ways. 

      Where is CrUX data?

      1. PageSpeed Insights

The first and easiest way is to go to PageSpeed Insights. You just plug in whatever page you’re interested in, and it’s going to return some of the CrUX metrics along with Lighthouse data and a bunch of recommendations about how you can actually improve the performance of your site. That’s really useful, but it just provides a snapshot; it’s not really suited to ongoing monitoring as such.

      2. CrUX dashboard

      Another way that you can access CrUX data is through the CrUX dashboard, and this provides all of the five different metrics from the CrUX database. What it does is it looks at the percentage of page loads, splitting them out into slow, average, and fast loads. This also trends it from month to month so you can see how you’re tracking, whether you’re getting better or worse over time. So that’s really good. But the problem with this is you can’t actually manipulate the visualization of that data all that much.

      3. Accessing the raw data

To do that and get the most out of the CrUX database, you need to query the raw data. Because it’s a freely available database, you can write a SQL query and run it in BigQuery against the CrUX dataset. You can then export the results into Google Sheets, and that can be pulled into Data Studio so you can create all of these amazing graphs to visualize the performance of your site over time.
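To give a rough idea of what that looks like in practice, here is a small sketch using the Node.js BigQuery client; the dataset month, origin, and metric are illustrative, and the templated queries mentioned below go into far more depth.

// A rough sketch of querying the CrUX dataset from Node.js with the
// BigQuery client. Dataset month, origin, and metric are illustrative.
const { BigQuery } = require('@google-cloud/bigquery');

async function getFcpDistribution() {
  const bigquery = new BigQuery();
  const query = `
    SELECT bin.start AS bin_start, SUM(bin.density) AS density
    FROM \`chrome-ux-report.all.201901\`,
      UNNEST(first_contentful_paint.histogram.bin) AS bin
    WHERE origin = 'https://www.example.com'
    GROUP BY bin_start
    ORDER BY bin_start`;

  const [rows] = await bigquery.query({ query });
  return rows; // Export these to Sheets or Data Studio for visualization
}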

      

      It might sound like a bit of a complicated process, but there are a load of great guides out there. So you’ve got Paul Calvano, who has a number of video tutorials for getting started with this process. There’s also Rick Viscomi, who’s got a CrUX Cookbook, and what this is, is a number of templated SQL queries, where you just need to plug in the domains that you’re interested in and then you can put this straight into BigQuery.

      Also, if you wanted to automate this process, rather than exporting it into Google Sheets, you could pull this into Google Cloud Storage and also update the SQL query so this pulls in on a monthly basis. That’s where you kind of want to get to with that.

      Why visualize?

      Once you’ve got to this stage and you’re able to visualize the data, what should you actually do with it? Well, I’ve got a few different use cases here.

      1. Get buy-in

The first is you can get buy-in from management, from clients, whoever you report into, for various optimization work. If you can show that you’re lagging behind competitors, for example, that might be a good basis for getting some optimization initiatives rolling. You can also use the Revenue Impact Calculator, which is a really simple Google tool which allows you to put in various details about your site, and then it shows you how much more money you could be making if your site was X% faster.

      2. Inform devs

      Once you’ve got the buy-in, you can use the CrUX visualizations to inform developers. What you want to do here is show exactly the areas that your site is falling down. Where are these problem areas? It might be, for example, that first contentful paint is suffering. You can go to the developers and say, “Hey, look, we need to fix this.” If they come back and say, “Well, our independent tests show that the site is performing fine,” you can point to the fact that it’s from real users. This is how people are actually experiencing your site.

      3. Communicate impact

Thirdly and finally, once you’ve got these optimization initiatives going, you can communicate the impact that they’re actually having on performance and also on business metrics. You could trend these various performance metrics from month to month and then overlay various business metrics. You might want to look at conversion rates, bounce rates, etc., showing those side-by-side so that you can see whether they’re improving as the performance of the site improves.

      Faster site = better UX, better customer loyalty, and growing SEO benefit

      These are different ways that you can visualize the CrUX database, and it’s really worthwhile, because if you have a faster site, then it’s going to result in better user experience. It’s going to result in better customer loyalty, because if you’re providing your users with a great experience, then they’re actually more likely to come back to you rather than going to one of your competitors.

      There’s also a growing SEO benefit. We don’t know how Google is going to change their algorithms going forward, but I wouldn’t be surprised if speed is coming in more and more as a ranking signal.

      This is how Google understands page speed, some ways that you can visualize the data from the CrUX database, and some of the reasons why you would want to do that.

      I hope that’s been helpful. It’s been a pleasure doing this. Until the next time, thank you very much.

      Video transcription by Speechpad.com
