
Revisiting Measurement Strategy with the Advent of GA4

Are your measurement strategy and tagging implementation aligned? It’s OK, you’re in a safe space here—we know that keeping technology, tactics, and strategy in 100% alignment is nearly impossible in practice. Fortunately, the advent of Google Analytics 4 (or “GA4,” formerly Google Analytics for App + Web) is an ideal time to approach a strategic measurement review.

Which came first, your tags or your measurement strategy?

Which came first, the chicken or the egg? Wikipedia refers to this question as a “causality dilemma”—we can’t decide which event is the cause, and which is the effect.

Which came first, your tags or your measurement strategy?

Do any of these options sound familiar?

  • There is no strategy
  • The strategy and tagging bear no relation
  • The strategy is retrofitted to match the organically grown, free-range tag management

There is no shame in accepting that the strategy might not be up to date with the current tagging implementation. Tactical measurement is more volatile, for sure. Tag management is meant to help you move fast! However, lack of a strategy, significant disconnect between strategy and tagging, or strategy adapted to fit the tags (as opposed to the right way around) are not acceptable and must be addressed.

“Some people spend their entire lives waiting for the time to be right to make an improvement.”

James Clear, “Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones”

An opportunity presents itself

The advent of GA4 (formerly Google Analytics for App + Web) is an ideal time to approach a strategic measurement review. Don’t think this means you’re going to throw away your existing Universal Analytics (UA) implementation and start again. Far from it. An existing reference point to work from is a valuable asset.

To decide on the correct tactical and strategic alignment, you need to consider the following in your current tagging:

  • what currently works and aligns with strategy
  • what’s currently broken and is misaligned
  • what’s missing from tagging and/or the strategy
  • what’s bloat and simply needs to be removed

Fix the broken stuff, fill in the gaps, and ditch the unnecessary to trim down and align your tagging and measurement strategy.

Connect measurement strategy and implementation

As a quick refresher, let us recall what is meant by a “measurement strategy”:

  • Goals
  • Audiences
  • KPIs

A measurement strategy is a formalisation of what is measured, why, and what success criteria look like. The lack of an objective set of measurements is a key cause of digital marketing failure.

Accepting that the current measurement implementation and strategy need to be reviewed and adjusted raises a number of questions:

  • How did we end up here?
  • How do you fix it?
  • Why do you fix it? What’s the value?
  • How often do you realign strategy and measurement?

In the absence of any formalised process for tactical and strategic data alignment, measurement tactics will naturally diverge from the ideal mandated by the organisational aims.

A good starting cadence for a process to address this issue is quarterly. This will be driven by the pace of change in your tag management, rather than your organisational strategy.

Start now.

Industry guru Avinash Kaushik has already written what needs to be written on measurement strategy, so I won’t repeat it here.

The golden opportunity at hand is to reflect on the legacy measurement, consider what is possible with GA4, and ensure that the next generation of digital analytics instrumentation is as aligned with your global strategy as possible. Go beyond “fit for purpose” and strive for “OMG, this is digital marketing performance visibility I never thought possible!”

Priceless advice—don’t get this bit wrong

When you embark on this process, be aware that UA tag types no longer exist. There is only one tag: an event. GA4 is event-driven and user-centric. GA4’s core measurement is based on the concept of the event, which means event name choice is critical to success.

Use the GA4 event name to convey the meaning of the event. This needs strategic alignment, of course, but it is important to use the GA4 automatic, enhanced, and recommended events wherever possible before committing to a new custom event. This ensures the right reports are available for your data out of the box; custom event names might not populate all standard reports. The sketch below illustrates the difference.
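For instance, here is a minimal gtag.js sketch (values are hypothetical): “purchase” is a GA4 recommended event name, so the standard monetization reports populate automatically, whereas an equivalent custom name still collects data but won’t map to those reports.

```javascript
// GA4 recommended event: "purchase" feeds the standard
// monetization reports out of the box. Values are illustrative.
gtag('event', 'purchase', {
  transaction_id: 'T-12345',
  currency: 'GBP',
  value: 42.0,
  items: [{ item_id: 'SKU-001', item_name: 'Example Widget', quantity: 1 }]
});

// A custom event name is valid GA4, but the built-in ecommerce
// reports can't be derived from it.
gtag('event', 'order_confirmed', { order_value: 42.0 });
```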

[Image: flow chart]

In conclusion

To not have a strategically aligned measurement approach is to court disaster. Recognising that Google Analytics is changing, and in so many ways, for the better, is to embrace a fabulously valuable opportunity to address strategic alignment and remedy tactical issues in one swoop.

Learn about GA4, and use it to plan the migration from UA. Build a measurement roadmap that complements the digital marketing plan. Be proactive, rather than reactive in measurement and strategy. Draw these components into a repeatable process, and ensure tagging remains aligned with strategy.

Server-Side Google Tag Manager Deep Impact


Before we dive into server-side Google Tag Manager (GTM), I’ll prefix the meat of this post with a caveat: always respect user privacy.

Any data collection techniques discussed here must be applied righteously and not as a workaround to circumvent data collection consent regulation.

10,000 Foot View

Here’s a familiar situation – Google Tag Manager as we’ve known it for years.

Your container is loaded on all pages or screens in your site/app and, based on trigger events, data is sent to first- and third-party endpoints.

It works, it’s fine, but it’s not perfect. Tracking blockers, JavaScript failures, a high volume of requests to endpoints, and inefficient JavaScript are all risks and potential performance problems that can lead to data quality issues.

Server-side GTM moves the tag vendor request from the client to a server: one on Google Cloud Platform, living on a subdomain of your site. The container loaded in the browser/app still has tags and still sends requests, but it has far less code, sends fewer requests, isn’t necessarily affected by anti-tracking software, and doesn’t send the user’s IP address to third-party tag vendors. First-party cookies are correctly set, in an ITP-compliant manner.

Out of the Box – What’s Cool?

There’s a lot to be excited about with server-side GTM in that, on the client side, it’s all very familiar, but way better! The “traditional” digital marketer can still set up their Facebook tag(s) with the same triggers, and deploy Floodlights as required. Same, same… but different.

As mentioned earlier, rather than sending data to the tag vendor endpoint, it’s sent to a subdomain. For example, if you’re on www.mysite.com, server-side GTM will send data to tracking.mysite.com, a subdomain you configure.
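On the client, pointing collection at that subdomain is a one-line configuration change. A minimal sketch, assuming a Universal Analytics tag via gtag.js (in a GTM web container, the equivalent is the transport URL field on the GA tag); the property ID and subdomain are placeholders:

```javascript
// Send Google Analytics hits to your own first-party endpoint
// (the server-side GTM container) instead of google-analytics.com.
gtag('config', 'UA-XXXXXXX-1', {
  transport_url: 'https://tracking.mysite.com'
});
```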

And that’s great because…?

  • It respects user privacy: The user’s IP address isn’t sent to a third party.
  • It preserves data quality: Tracking prevention doesn’t happen on requests to your own domain.
  • It lightens code bloat from the client side: The tags require less work on the browser, shifting the workload to the server instead. This means what remains in GTM on the browser does less, so the site runs faster.
  • It consolidates requests from the client side: You can send multiple requests from the server based on one request from the client (see the conceptual sketch below).
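To illustrate that last point, here is a conceptual Node.js sketch (not the actual server-side GTM runtime, which handles this through clients and tags configured in the server container): one inbound hit fans out to several vendor endpoints server-to-server. Vendor URLs and parameter names are illustrative.

```javascript
const http = require('http');

// One request in from the browser...
http.createServer(async (req, res) => {
  if (req.url.startsWith('/collect')) {
    const hit = new URL(req.url, 'https://tracking.mysite.com').searchParams;

    // ...several server-to-server requests out to tag vendors.
    // Requires Node 18+ for the global fetch.
    await Promise.all([
      fetch('https://www.google-analytics.com/collect?' + hit.toString()),
      fetch('https://vendor-a.example.com/pixel?event=' + (hit.get('ea') || '')),
      fetch('https://vendor-b.example.com/conv?event=' + (hit.get('ea') || '')),
    ]);
  }
  res.writeHead(204);
  res.end();
}).listen(8080);
```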

At MightyHive, we strongly advocate for focusing on what’s best for the user, not the ability to foil or circumvent anti-tracking software. Reminder: act righteously, not selfishly. As it stands now, data is collected, not captured. In the future, data will be exchanged… Think about that for a minute.

Deeper Impact

Have you noticed that tracking requests are sent to your domain and not a third-party domain? The data collection workload is moved to your infrastructure.

Does that feel like just going back to web server logging? How different is this from web server logging?  

Very. 

Analytics data is formatted (sessionized), cleaned (PII removed), integrated (joined with data from Google Ads, Search Ads 360, and Display & Video 360), and presented ready to perform its function: analysis and optimization of all aspects of the online business, which, let’s face it, is all about better marketing.

Web server logs don’t collect all behavioral data. Typically, log-level data isn’t integrated with marketing channel data, meaning there’s no feedback loop for activation of the data. 

But! There are similarities between server-side GTM and web server logging. The web server receives a request, typically for a page, builds the page content and responds, possibly setting first-party cookies along with the response. The server-side GTM endpoint also receives requests, and responds, potentially with cookies (but with less content).

Now… the web server knows what page it’s returning.

It knows what data to render on the data layer to record a transaction (for example). 
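As a minimal sketch of that step, here is what the web server might render into the page (field names follow the classic GTM ecommerce convention; values are hypothetical):

```javascript
// Rendered server-side into the confirmation page so the
// transaction details land on the data layer for GTM to pick up.
window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  event: 'transaction',
  transactionId: 'T-12345',
  transactionTotal: 42.0
});
```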

The data layer is picked up by a tag firing in the browser and then sent back to the tracking endpoint. 

The endpoint then takes the same data and fires it off to Google Analytics (GA) to complete the round trip and get your analytics data recorded.

Phew!

Wait one minute. If the web server knows it’s rendering a “thank you” confirmation page, and it knows what data to render on the data layer, why bother sending this to the browser just for the browser to send it back to the tracking endpoint and then on to GA?

Why not remove some steps for efficiency? The web server knows it is rendering a confirmation page, so it can build the exact same request the browser was going to and send the GA transaction data straight to the tracking endpoint. Cut out the client round trip.

It’s quite normal to fire off conversion tags, Floodlights, FB pixels, Adnxs, TTD, and so on to record transactions. Don’t send those to the client to handle. As the web server responds with the confirmation page, send those requests straight to the tracking endpoint. The endpoint responds with the details of the cookies to set, and the web server sends those with the confirmation page content in the response to the client.
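A conceptual sketch of that flow (Node 18+; the endpoint URL, payload fields, and handler are all hypothetical): the web server records the conversion server-to-server while rendering the confirmation page, then forwards any cookies the tracking endpoint wants set.

```javascript
// Called when the web server renders the order confirmation page.
async function renderConfirmationPage(order, res) {
  // Send the conversion straight to the tracking endpoint;
  // no client round trip required.
  const tracking = await fetch('https://tracking.mysite.com/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      event: 'purchase',
      transaction_id: order.id,
      value: order.total,
    }),
  });

  // Forward the endpoint's cookies with the page response
  // (Headers.getSetCookie requires a recent Node version).
  const cookies = tracking.headers.getSetCookie();
  if (cookies.length) res.setHeader('Set-Cookie', cookies);

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>Thank you for your order!</h1>');
}
```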

Think how many marketing tags and tracking pixels fire on page-level events. How many tags actually need to fire on the client? How many tags don’t even need to be exposed to the browser? What if, just maybe, you only had page-level, event-triggered tags? If you’ve removed all of your data bloat, maybe you only need page-level tracking? Then you don’t need to CNAME the tracking subdomain; you can restrict access to your tracking endpoint so only your web server can reach it via HTTPS (think IP range restriction). That’s a lot less complexity and a fair number of moving parts removed from the solution.

Simpler is better. As the saying goes, the best code is no code at all.

In Conclusion

Server-side GTM offers a sound and correct approach to digital analytics measurement. It’s good because data quality can be improved, user privacy is further protected, and, significantly, it’s a step towards doing less work in the browser, meaning sites and apps get faster.

Thinking about the possible solutions the technology offers, with the right motivation in mind, demonstrates how versatile the solution is, how much power is available and what avenues are still to be explored to leverage first-party data.


Download the Slides and Video for Comparing Log-level Data to Ads Data Hub


High Fidelity: Log Files vs. Ads Data Hub

Recently I had the pleasure of being a guest on “Live with MightyHive” to talk about how Google Data Transfer Files (DT Files) compare to the data inside Ads Data Hub (ADH).

As a refresher, DT Files are the log-level data marketers can access from Google Marketing Platform (GMP). Google announced in 2018 that it would begin redacting user-level IDs in order to protect user privacy and comply with regulations like GDPR (and now CCPA, as we cover in this episode). Ads Data Hub, on the other hand, is the “data clean room” for GMP: a privacy-preserving application for analyzing campaigns and audiences without user-level data needing to leave Google.

What has happened is that the detailed user-level data offered by DT Files in many cases ends up heavily redacted (for both privacy and business reasons), whereas Ads Data Hub keeps user-level data 100% private and is consequently not nearly as redacted. Marketers NEED to understand this trade-off. This episode covers what you find when you compare the two, including:

  • The key differences between DT Files and Ads Data Hub
  • How to check data parity between DT Files and Ads Data Hub before making other comparisons
  • How to compare the effects of user ID redactions, Safari ITP, and CCPA between log-level data and ADH

We’ve packaged everything up and you can register to download these materials below:

  • The full episode video
  • The slide deck
  • Sample SQL queries for both DT Files (BigQuery) and Ads Data Hub

If you want to develop a better understanding of how Ads Data Hub differs from legacy analytics and log-level data, then this is a great set of materials. I hope you find them useful!


Preview the slides


A few sample slides from “High Fidelity: Log Files vs. Ads Data Hub” and the appendix of SQL queries for BigQuery and Ads Data Hub.



Press Release: MightyHive Launches Global Data Practice

SAN FRANCISCO, July 16, 2020 (GLOBE NEWSWIRE) — Leading data and digital media consultancy MightyHive today announced the launch of its global data practice to help organizations better understand their customers and make informed decisions across their businesses with machine-learning (ML) technologies that drive the effective and sustainable use of connected first-party data and analytics.

Eighty-seven percent of marketers consider data their organizations’ most underutilized asset, yet according to Forrester, more than half of marketers still feel overwhelmed by the incoming data. In order to make data-driven marketing a reality, marketers need data experts to guide their strategy while effectively navigating a changing privacy landscape. This includes new regulations and technological changes that affect browser cookies and mobile advertiser IDs. The risks of not using or misusing data can be detrimental to a business, as 52% of Americans have decided not to use a product or service because of privacy concerns.

“Data is the vital connective tissue that drives personalization at scale and makes good creative even more powerful,” said Sir Martin Sorrell, executive chairperson of S4Capital. “As the leading tech-led new age marketing services company, launching the MightyHive data practice is a natural step in our evolution.”

MightyHive has already sustained positive results from privacy-first data strategies for international brands like Electrolux, Pandora Jewelry and Mondelēz International. A key example of this was MightyHive’s work in helping Mondelēz International create a Media Data Spine, a scaled repository to analyze media data for Mondelēz marketing teams across the globe, which has spearheaded advanced personalization-at-scale and closed-loop sales measurement with retailers such as Target.

“MightyHive has been instrumental in helping Mondelēz lead a digital renaissance in the CPG industry,” said Jon Halvorson, VP global media, digital & data at Mondelēz International. “The MightyHive data team has been by my side as we’ve architected advanced data environments and future-proofed our measurement systems.”

The MightyHive data practice has achieved certifications for Google, Amazon and Salesforce marketing clouds. MightyHive is the only global company that has both Google Marketing Platform (GMP) and Google Cloud Platform (GCP) certifications with specializations across Cloud Identity and Security Expertise.

The MightyHive data practice, globally helmed by Tyler Pietz, senior vice president, global data, is a natural addition to its media consulting services. MightyHive has been steadily building upon its existing data and analytics expertise, including its deep understanding of ML and artificial intelligence (AI) through strategic mergers and calculated hires. Within the past year, the company has merged with Digodat, ConversionWorks, and MightyHive Korea, which has grown its regional presence in Latin America, Europe, Asia and Australia. The MightyHive data practice is globally available to clients with teams and support available locally and across regions.

In addition to welcoming the merged companies’ teams, noteworthy recently appointed talent includes Julien Coquet, Sayf Sharif and Toby McAra. The global data practice team has deep competency in analytics, as well as the change-maker mindset required to drive digital transformation and fill the data and analytics services gap.

“Data impacts every aspect of the customer experience – not just marketing – but it can still be overwhelming and confusing,” said Pete Kim, CEO and co-founder of MightyHive. “With the launch of the global data practice, MightyHive helps brands break down data silos within their company to more effectively intertwine data with media and creative for more compelling customer experiences and maximum returns.”

To learn more about the MightyHive global data practice, please visit https://mightyhive.com/data-practice/.

About MightyHive
MightyHive is the leading data and digital media consultancy that helps marketers take control. MightyHive delivers sustained results from the ground up through advisory for business transformation, privacy-first data strategy, and digital media services.

The company is headquartered in San Francisco, with a team of consultants, platform experts, data scientists, and marketing engineers in 19 countries and 24 cities around the world. In 2018, MightyHive merged with S4Capital plc (SFOR.L), a tech-led new age/new era digital advertising and marketing services company established by Sir Martin Sorrell.

Contact
Blast PR for MightyHive
mightyhive@blastpr.com

Read Press Release on GlobeNewswire

Making Google Optimize More Effective with Double Blind Testing

Doug Hall
Senior Director of Analytics
Allison Hannah Simpson
Marketing Associate

The Opportunity for Better Testing

We’re big fans of Google Optimize, a premium testing and personalisation tool. We’re also big fans of Double Blind Testing. Double Blind Testing weeds out the bias that can diminish the effectiveness of your data analysis. This article proposes integrating Double Blind Testing with Google Optimize to further validate your marketing research, thus helping your marketing dollars go further.

What is Double Blind Testing? A handy definition:

A double blind test is an experiment in which neither the subject nor the observer knows who is receiving the treatment and who is receiving the control. Double blind testing is referred to as the gold standard of testing. Double blind tests are used in science experiments in medicine and psychology, including theoretical and practical testing.

See how that’s different from Optimize testing? With Optimize, the analyst often sets up the test, runs the test, analyses the test, calls the winner (if there is one) and shares learnings. That last part of the process, sharing the learnings from the test, is the most important piece.

Caution: Unconscious Bias Ahead

One person wearing multiple hats during testing produces what is known as a single blind test, and that comes with consequences: a single blind test risks being influenced by unconscious bias. In a single blind test, the test participant is the only individual unaware of the details of the experiment they’re being subjected to. The person running the test knows which is the control and which is the experiment.

There’s the rub. It’s quite possible for an analyst to draw a conclusion around results based on their knowledge of the test participant – not necessarily to the extent of creating manufactured data, but unconscious bias creeps in subtly.

For example, the analyst may be presented with an aggregated view of the data that shows the experiment outperformed the control. At first, this sounds like a success! However, confirmation bias could make the analyst less likely to dig deeper and explore the data under a more critical lens. With the results in the bag, the analyst moves on and misses important insights into the effects of the experiment.

The opposite is also possible: the control wins, so the experiment tanks. This cannot be true! So the analyst, misled by their cognitive bias, wastes time digging for signals that validate their hypothesis.

Change the Methodology to Remove Bias

Testing is a team effort. Divide [effort across the team] and conquer [cognitive bias] to achieve proper double blind tests. Let’s take a look at how a simple A/B test approach might change:

First, the test owner develops the hypothesis, which must remain private to the test owner.

Based on data [a] and feedback [b], we believe that doing [c] for audience [d] will make [e] happen.

We will know this to be true when we observe data [f] and get feedback [g].

We expect a change in metric [h] of magnitude [i] within [j] business cycles.

The test owner will then work with the designer and front-end engineer to develop and implement the test experiment. It’s important for the test owner to keep the hypothesis a secret and only share what the test experiment is with other team members. The reason for the test should not be revealed.

The test is executed and now an analyst is introduced to the test data. The analyst has no previous knowledge of the hypothesis, experiment design or testing process at all. They only know this is a test in which they must perform a calculation on the presented data to determine the test experiment performance and significance.

Setting Up a Double Blind Test

Tell your data engineers to get the Google Analytics add-on for Google Sheets. Share the audience definition piece of the hypothesis with the data engineer, who will set up a query as shown below to produce two reports accurately representing the audience defined in the hypothesis. One report contains the daily sessions, conversion volume, and conversion rate for the test experiment; the other is for the control. Name the reports something anonymous, like Population A and Population B.

[Screenshot: Google Analytics add-on report configuration]
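For illustration, the add-on’s report configuration boils down to something like this (a sketch; the view ID and segment are placeholders, and Population B is the same query with the control segment):

```
Report Name:         Population A
View (Profile) ID:   ga:XXXXXXXX
Start Date:          30daysAgo
End Date:            yesterday
Metrics:             ga:sessions, ga:transactions, ga:transactionsPerSession
Dimensions:          ga:date
Segments:            <segment matching the experiment audience>
```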

Now schedule the report to refresh daily. Having done this, hide the config sheet and make sure the test analyst is not given editor rights on the data, so they cannot see the report configuration:

[Screenshot: hidden sheet message in Google Sheets]

Now the analyst has blind access to the data with no awareness of what the experiments are for, what the hypothesised effect is, who the audiences are, or which population corresponds with which experiment. Due to the blind data access, there is no cognitive bias in the analysis.

[Screenshot: report data in Google Sheets]

The analyst reports on the performance of each population as they see it in the numbers. Each member of the team is sufficiently removed from the hypothesis and test detail so as to achieve a double blind test production and analysis. Insert happy-dance here!

Integrate into Your Data Team’s Workflow

Google Optimize just leveled up by going one step further with Double Blind Testing. The double blind testing strategy has the power to improve your data analysis. It can open opportunities for you and your team by decreasing unconscious bias and increasing the effectiveness of your media spend. Choose to take more control of your marketing dollars by integrating this testing process into your daily workflow.

To learn more about Google Optimize and Double Blind Testing, reach out to us at questions@mightyhive.com.

Apple, Google, Privacy, and Bad Tech Journalism


Wait, did they just say Safari now blocks Google Analytics?

(Spoiler alert: it doesn’t)

At the 2020 edition of the Apple Worldwide Developers Conference (WWDC), Apple announced that the new version of MacOS (nicknamed Big Sur) would ship with version 14 of the Safari web browser, promising Safari would be more privacy friendly. This is a great move, in line with the regulatory and digital marketing landscapes.

However, based on fuzzy, out-of-context screenshots shown during the announcement, some digital marketing publications started asserting that the new Safari would block Google Analytics.

[Narrator’s voice: it didn’t]


Within minutes, that poorly researched bit of fake news was all over social media.

So what really happened? Should you worry?

Cooler heads always prevail, so let’s take a step back and look closely at what really happened.

What is ITP and why does it matter?

The WWDC is generally the occasion for Apple to announce new features and key developments in their tech ecosystem from desktop and mobile operating systems to SDKs, APIs, and all that good technical stuff.

In recent years, Apple has used the WWDC to announce changes to the way they handle privacy in web and mobile apps, namely with initiatives such as ITP (Intelligent Tracking Prevention), which is used in Safari, Apple’s WebKit-based browser on Macs, iPhones, and iPads.

In a nutshell, ITP restricts the creation and the lifetime of cookies, which are used to persist and measure someone’s visit on one site (first party, a.k.a. 1P) or across multiple websites (third party, a.k.a. 3P). ITP makes things more difficult for digital marketers because users become harder to track and target.

If we use Google Analytics as a comparison, ITP can “reset” a known visitor to a new visitor after only a couple of days, instead of the usual 2 years – assuming users don’t change devices or clear their cookies.

If we look at ITP with our privacy hat on, even collecting user consent will not stop ITP from neutralizing cookies.

ITP arrives at the right moment, just as online privacy finally starts to take root with pieces of legislation such as GDPR and ePrivacy in Europe, CCPA in California, LGPD in Brazil, APA/NDB in Australia, APP in Japan, PIPA in Korea, and a lot more being drafted into bills and/or written into law.

Arguably, the above pieces of legislation provide for collecting user consent prior to any data collection. So we should not really be worrying about Safari blocking the collection of information that users consented to share, right?

That was not even a consideration in the aforementioned pieces on “Safari blocks Google Analytics.”

Does the new Safari really block Google Analytics?

(Second spoiler alert: it still doesn’t)

The most obvious way to show you is with a test. Luckily, I had the MacOS Big Sur beta installed, so I took a look under the hood, especially on the sites that published that “Safari blocks Google Analytics” story. Let’s fire up Safari and turn on developer mode.

Sure enough, Google Analytics sends a tracking call that makes it home to Google collection servers. Safari does not block Google Analytics.

Now let’s take another look at that new privacy report: it shows “22 trackers prevented.”

Wait, the list shows google-analytics.com?! Didn’t we just establish that Google Analytics tracking went through?

Let’s clarify: what the panel below shows are the domain names of resources loaded by the page that are flagged in the ITP lists as potential tracking vectors using third-party cookies.

Other than that, ITP plays its role in drastically reducing the Google Analytics cookie’s lifetime to just a week as shown below.

Let’s drive this point home again if needed: Safari 14 does not block Google Analytics.

ITP is enforced as per the spec by blocking third-party cookies and limiting cookies to a lifetime of a week at most.
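You can see the cap for yourself in Safari’s JavaScript console (a quick sketch, assuming a Safari version with ITP 2.1 or later, which caps the lifetime of script-written cookies):

```javascript
// Ask for a two-year cookie...
document.cookie = '_ga_test=1; max-age=' + 2 * 365 * 24 * 60 * 60 + '; path=/';
// ...then inspect it in Safari's storage inspector: ITP caps the
// expiry of this script-written cookie at 7 days, whatever max-age
// was requested.
```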

So what’s the big impact?

As mentioned, ITP is primarily going to reduce the time during which a visitor is identified. After a week, ITP deletes/resets the user cookie and the visitor is “reborn”. Not a great way to study user groups or cohorts, right?

If you’re worrying about the impact of ITP on your data collection, may I suggest reading this awesome piece on ITP simulation by my colleague Doug Hall.

What is important to remember is that Apple is using ITP block lists built in partnership with DuckDuckGo, a search engine that has made a name for itself as a privacy-friendly alternative (read: anti-Google). I, for one, have yet to see what their business model is, but that’s a story for another post.

At any rate, ITP lists are meant to block cookies for specific domain names.

Even if Apple did decide to block Google Analytics altogether, how big a deal are we talking about? According to StatCounter, Safari accounts for roughly 18% of browser market share (as of June 2020). Let’s round this up to a neat 20%. That’s an awful lot of data to lose.

Arguably, Google Analytics wouldn’t be the only tracking solution that could be impacted. Let’s not forget about Adobe, Criteo, Amazon, Facebook, Comscore, Oracle—to name a few.

So if you keep implementing digital analytics according to the state of the art, by respecting privacy and tracking exclusively first-party data, you’ll be a winner!

Is it really just bad tech journalism?

Let’s get real for a moment. If the tech journalists posting the story about Safari blocking Google Analytics had known about ITP, they wouldn’t have published the story, or would at the very least have run it with a less sensational headline. Even John Wilander, the lead WebKit engineer behind ITP, spoke out against the misconceptions behind this “Safari blocks GA” piece.

This is unfortunately a case of bad tech journalism, where half-truths and clickbait titles drive page views. Pitting tech giants Apple and Google against each other is just sensationalism and does not highlight the real story from WWDC: privacy matters, and Apple is addressing it as they should.

In this, I echo my esteemed colleague Simo Ahava in that this kind of journalism is poorly researched at best, intentionally misleading at worst.

Most of the articles on this particular topic backtracked and offered “updates,” but they got caught with their hand in the cookie jar.

To be fair, it is also Apple’s fault for using misleading labeling.

But is it so bad, considering we’re talking about a beta version of a web browser? If anything, Apple now has a few months ahead of them to make adjustments before Big Sur and Safari 14 ship.

Beyond the fear, uncertainty and doubt, this kind of publication is symptomatic of an industry that is scared by the effect that privacy regulation is having on their business.

How is MightyHive addressing this?

While we at MightyHive have long been preparing for the death of the cookie and a digital ecosystem focused on first-party data, we can appreciate that initiatives such as ITP can make a digital marketer’s life very complicated.

We strongly believe that the future of digital marketing lies in first-party data, consent, and data quality.

Cookies are on their way out, but this does not mean the end of the world.

Need help navigating the ever-changing digital marketing landscape? Contact us for guidance!

Identifying Significance in Your Analytics Data

A few weeks ago I had the chance to help launch our new “Live with MightyHive” series. My colleague Myles Younger and I chatted about how to distill significance and causality from analytics data, and then how to act on those types of insights. You can watch the full episode and access the slide deck below.

If you want to chat further with MightyHive about advanced analytics solutions like these, please reach out to us at questions@mightyhive.com.

What is significance?

Making decisions based on data requires the support of a robust measure of confidence in that data.

Off the back of an event of some sort (a campaign starting, a new app feature, a global pandemic), if we observe a change in our data, we need to be confident that the “thing” that happened was actually responsible for the change, not just correlated with it. We need to be able to demonstrate that had this thing not happened, the data wouldn’t have changed.

Then we can infer a causal relationship between the event and the change in the data. Remember: it’s still a probability. We can never prove causality in a categorical sense, but we can be highly confident (and it’s way better than guessing!). We can remove emotion and unconscious bias from decision-making. We don’t eyeball data or use our gut; mathematics informs the decision-making process.

Here’s the full chat and slides from last week’s “Live with MightyHive” episode (scroll to the end for the slides):

How does it work?

The technology behind the Google CausalImpact R package that was demonstrated in the episode constructs a Bayesian structural time-series model and then tries to predict the counterfactual.

Simply put, the mathematical model uses data prior to the event to predict what the data would look like had the event not happened. Important: the prediction is actually a probabilistic range of values. If the historic data is noisy, the prediction range will be wider. See the screenshot below from the demo walk-through linked above. In the image below, the blue shaded area is the prediction (the synthetic control estimator) from the model. If the observed data falls outside the blue region, we have significance!

The blue region gets bigger with noisier data. The broader the blue region, the more extreme the observation will need to be in order to achieve a significant signal.
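In notation (ours, for illustration, not the package’s): if y_t is the observed series and ŷ_t is the model’s counterfactual prediction over the post-event period, CausalImpact reports the pointwise and cumulative effects, each with a credible interval (the blue region):

```latex
\hat{\tau}_t = y_t - \hat{y}_t
\qquad\qquad
\hat{\tau}_{\mathrm{cum}} = \sum_{t \in \mathrm{post}} \left( y_t - \hat{y}_t \right)
```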

Using Google CausalImpact

You can use the CausalImpact package with as little as three lines of R. RStudio is open source, or you could try it out using rstudio.cloud.

Be advised: if you install the CausalImpact package locally, due to dependencies you’ll need at least v3.5 of R. I updated Linux on my Chromebook to get the latest versions of R and RStudio via this very useful article, and the package installation was very straightforward.

There’s another option thanks to Mark Edmondson from IIH Nordic. Mark wrote a great Shiny app front end for CausalImpact that’s free to use, so you can explore significance in your own GA data.

Using significance to establish causality and take action

We used the package to analyse client data to confidently answer key business questions that arose regarding KPI changes since the UK was locked down.

As well as considering YTD data (setting the ‘event’ as Jan 1), we used pre- and post-lockdown (Mar 9) date periods. The data shows clear patterns in purchase behaviour for retail sites. Media sites appear to exhibit explosive growth. However, the specifics regarding which content areas are growing are highly informative—not what you’d expect to see by just eyeballing the data from afar.

For retail and media clients, the ability to identify current and future growth areas with confidence is a highly valuable tactic. At a strategic level, the forecast output from CausalImpact is highly actionable in driving campaign content, budgets, and timing.

While tactics for the current global situation include “managing,” there is a clear need for preparation as well. Making decisions on current data and using forecasts with confidence is proving valuable for our clients.

Additional Resources

Thank you for reading! The slides from the episode can be accessed here:

Watch the CausalImpact R package introductory video here (mandatory viewing!):
youtube.com/watch?v=GTgZfCltMm8

Remember to sign up here for future Live with MightyHive episodes:
livewithmightyhive.splashthat.com

Press Release: MightyHive Offers Clients Free Trial of MightyDesk Ad Trafficking Tool

SAN FRANCISCO, May 07, 2020 (GLOBE NEWSWIRE) — Leading media consultancy MightyHive today announced it is offering its global clients a free trial of MightyDesk, the tool suite that enables media buyers to effectively scale their programmatic operations. It connects to leading programmatic platforms and allows for the management of campaigns, automation of workflows, and generation of cross-platform reports, alerts and insights.

The COVID-19 global pandemic has disrupted international economies and many industries are expected to face financial challenges throughout Q2. To help its clients save time and increase advertising return on investment (ROI) during this difficult time, MightyHive is offering the MightyDesk trafficking tool within the MightyDesk Automation module at no cost through the end of Q2 2020.

Ad trafficking is the process of creating and setting up programmatic advertising campaigns in an efficient and effective way. It is one of the most critical, yet time- and labor-intensive aspects of digital marketing.

In this current climate, advertisers are in dire need of cost-effective and impactful solutions. MightyDesk was originally built to solve pain points MightyHive teams encountered when using programmatic platforms and is the same tool they use every day to service thousands of clients. MightyDesk Automation features helped one client save over 300 hours a month on ad trafficking tasks.

“Trust and transparency across the marketing supply chain is more vital than ever,” said Pete Kim, CEO and founder of MightyHive. “We built MightyHive on these strong moral values, so we are proud to do our part in helping our industry survive and accelerating the digital transformation that is necessary for advertisers around the world. We’re pleased to offer our clients the same tool we rely on to run our business at a global level, MightyDesk, at no cost during this difficult time.”

MightyHive clients can reach out directly to their account managers to schedule a MightyDesk demo and training session. Marketers who are not yet MightyHive clients can reach out to mightydesk@mightyhive.com.

For more information on MightyDesk, please visit https://mightyhive.com/mightydesk.

About MightyHive
MightyHive is a new breed of media consultancy that partners with global brands and agencies seeking transformative marketing results in a time of massive disruption and opportunity. Recognized as a global leader in advanced marketing and advertising technologies, MightyHive provides consulting and services in the areas of media operations and training, data strategy and analytics.

The company is headquartered in San Francisco, with offices in Auckland, Chicago, Hong Kong, Jakarta, London, Melbourne, Milan, Montreal, Mumbai, New York, Paris, São Paulo, Seoul, Singapore, Stockholm, Sydney, Tokyo, Toronto and Vancouver. In 2018, MightyHive merged with S4Capital plc (SFOR.L), a new age/new era digital advertising and marketing services company established by Sir Martin Sorrell in 2018.

Contact
Blast PR for MightyHive
mightyhive@blastpr.com

Read full press release on Globe Newswire

Managing COVID-19 Brand Safety in The Trade Desk


Introduction

In the third part of our series about how advertisers can ensure brand safety while still supporting publishers during the global pandemic, we are sharing best practices for several major programmatic buying platforms. As one of the largest independent media buying platforms, with a focus on transparency, openness, global scale, and advanced TV, The Trade Desk is an important part of the strategy for any advertiser looking to reach consumers globally.

Through a three-step process, you can confidently set up campaigns that avoid sites peddling the misinformation and sensationalist content that can appear during global crises, while still supporting reputable news sites.

Step 1: Site/Category Blacklists

Working with trusted partners to ensure suitable brand adjacency is more important than ever. The Trade Desk allows advertisers to block specific websites/apps as well as specific content categories.

There is no limit to the number of sites that can be added to the blacklist, and these lists can be shared across an entire advertiser for use with every campaign.

[Screenshot: The Trade Desk site/app and category block list]

Step 2: Pre-Bid Technology

The next precaution advertisers can take is to enable pre-bid targeting and only bid on traffic that falls within certain parameters. For example: block fraudulent traffic, bid only on ads likely to be in a user’s view, or target pages related to relevant contextual categories. Note: this is not currently supported for connected TV campaigns. 

The following pre-bid targeting vendors are available:

  • DoubleVerify
  • Grapeshot
  • Integral Ad Science
  • Moat
  • Peer39

Each vendor offers a unique list of category exclusions including:

  • Site content
  • Content rating
  • Keyword targeting
  • Invalid traffic/fraud

[Screenshot: The Trade Desk pre-bid ad targeting settings]

Step 3: The Trade Desk + Quality Alliance™️

In addition to the aforementioned vendors, The Trade Desk also offers Quality Alliance™️ as a viewability targeting solution in desktop web and mobile web environments, for both display and video. Where viewability can be measured, this solution will also work in-app.

The Trade Desk includes enriched reporting metrics to allow for more sophisticated analysis of inventory that was categorized as fraudulent (e.g., Adware/Malware Impressions, Sophisticated Nonhuman Data Center Traffic, etc.). This helps advertisers further optimize media spend by giving them a better understanding of risky inventory.

In Conclusion

We know brand safety is highly subjective and mission critical when the direction of global recovery changes on a daily basis. We hope this guide provides a quick review of steps you can take, or ensure your partners are taking, to make sure your ad dollars are reaching the users and supporting the content you deem appropriate. 

MightyHive clients can reach out to their account teams for more guidance on how to implement these tactics within campaigns. If you’re not already a MightyHive client, contact us. We’d love to talk.

Check out our full brand safety series here:

Managing COVID-19 Brand Safety in Amazon DSP


Introduction

In the fourth part of our series about how advertisers can ensure brand safety while still supporting reputable publishers during the global pandemic, we are sharing best practices for several major programmatic buying platforms. 

As one of the fastest-growing DSPs, Amazon DSP specializes in exclusive first-party audience data sets and unique owned and operated web and advanced TV properties, alongside robust brand safety safeguards. Through a quick two-step process, advertisers can ensure their campaigns avoid risky sites and content that can be particularly hard to avoid during a global crisis, when the volume of breaking news and misinformation challenges even the savviest advertisers. 

Step 1: Site/App Whitelist in Amazon DSP

Advertisers with a list of trusted partners and properties should turn to Amazon’s whitelist features first. In order to positively target domains/apps, advertisers must input a minimum of 50 domains. 

It’s also worth noting that Amazon requires 50+ domains in consideration of its private marketplace (PMP) offering, which is an alternative/parallel option for open exchange whitelists. PMPs provide higher priority access to premium inventory and are a viable alternative to secure placements with premium publishers.

[Screenshot: Amazon DSP domain targeting]

Step 2: Brand Safety Targeting

Pre-bid filtering is another powerful option advertisers have in Amazon DSP to keep ads dollars away from content they deem unsuitable for their brand. Once an advertiser has selected the properties on which ads will appear, pre-bid filtering provides another layer of security around the specific content categories the brand will appear next to. Amazon has a robust mix of third-party brand safety integrations including:

[Screenshot: Amazon DSP third-party pre-bid integrations]

DoubleVerify

Web content category exclusion; authentic brand safety (custom brand safety profiles that can be ingested via audience ID through DoubleVerify’s platform)


Oracle Data Cloud

Custom Grapeshot segments determined by brand safety controls in the Oracle Data Cloud Context UI


Integral Ad Science

Web content category exclusion


NOTE: All third-party verification is currently free for advertisers, which is a unique benefit in comparison to other demand-side platforms.

Over-The-Top (OTT) targeting

Content ratings and genres: With its advanced TV offering, Amazon DSP allows advertisers to select the ratings and genres of the content their ads will be placed against, a level of granularity that can protect spend and improve control.

[Screenshot: OTT content rating and genre targeting]

In addition to these capabilities, Amazon enables its own safeguards to monitor and review third-party sites and apps for unsafe content. If a third-party site or app is identified as unsafe, it is blocked from advertising.

In Conclusion

We know brand safety is highly subjective and mission critical at a time like this. We hope this guide can provide a quick review of steps you can take, or ensure your partners are taking, to make sure your ad dollars are reaching users and supporting the content you deem appropriate. 

MightyHive clients can reach out to their account teams for more guidance on how to implement these tactics within campaigns. If you’re not already a MightyHive client, contact us. We’d love to talk. If you’re not familiar with Amazon DSP yet, read our explainer here.

Check out our full brand safety series here: