Tag: data & analytics

Revisiting Measurement Strategy with the Advent of GA4

Are your measurement strategy and tagging implementation aligned? It’s OK, you’re in a safe space here—we know that keeping technology, tactics, and strategy in 100% alignment is nearly impossible in practice. Fortunately, the advent of Google Analytics 4 (or “GA4,” formerly Google Analytics for App + Web) is an ideal time to approach a strategic measurement review.

Which came first, your tags or your measurement strategy?

Which came first, the chicken or the egg? Wikipedia refers to this question as a “causality dilemma”—we can’t decide which event is the cause, and which is the effect.

Which came first, your tags or your measurement strategy?

Do any of these options sound familiar?

  • There is no strategy
  • The strategy and tagging bear no relation
  • The strategy is retrofitted to match the organically grown, free-range tag management

There is no shame in accepting that the strategy might not be up to date with the current tagging implementation. Tactical measurement is more volatile, for sure. Tag management is meant to help you move fast! However, lack of a strategy, significant disconnect between strategy and tagging, or strategy adapted to fit the tags (as opposed to the right way around) are not acceptable and must be addressed.

“Some people spend their entire lives waiting for the time to be right to make an improvement.”

James Clear, “Atomic Habits: An Easy & Proven Way to Build Good Habits & Break Bad Ones”

An opportunity presents itself

The advent of GA4 (formerly Google Analytics for App + Web) is an ideal time to approach a strategic measurement review. Don’t think this means you’re going to throw away your existing Universal Analytics (UA) implementation and start again. Far from it. An existing reference point to work from is a valuable asset.

To decide on the correct tactical and strategic alignment, you need to consider the following in your current tagging:

  • what currently works and aligns with strategy
  • what’s currently broken and is misaligned
  • what’s missing from tagging and/or the strategy
  • what’s bloat and simply needs to be removed

Fix the broken stuff, fill in the gaps, and ditch the unnecessary to trim down and align your tagging and measurement strategy.

Connect measurement strategy and implementation

As a quick refresher, let us recall what is meant by a “measurement strategy”:

  • Goals
  • Audiences
  • KPIs

A measurement strategy is a formalisation of what is measured, why, and what success criteria look like. The lack of an objective set of measurements is a key cause of digital marketing failure.

Accepting that the current measurement implementation and strategy need review and adjustment provokes a number of questions:

  • How did we end up here?
  • How do you fix it?
  • Why do you fix it? What’s the value?
  • How often do you realign strategy and measurement?

In the absence of any formalised process for tactical and strategic data alignment, measurement tactics will naturally diverge from the ideal mandated by the organisational aims.

A good starting cadence for a process to address this issue is quarterly, driven by the pace of change in your tag management rather than your organisational strategy.

Start now.

Industry guru Avinash Kaushik has already written what needs to be written on measurement strategy, so I won’t repeat it here.

The golden opportunity at hand is to reflect on the legacy measurement, consider what is possible with GA4 and ensure that the next generation of digital analytics instrumentation is as aligned with your global strategy as possible. Go beyond “fit for purpose” and strive for “OMG, this is digital marketing performance visibility I never thought possible!”

Priceless advice—don’t get this bit wrong

When you embark on this process, be aware that UA tag types no longer exist. There is only one tag: an event. GA4 is event-driven and user-centric. Core measurement in GA4 is based on the concept of the event, which means event name choice is critical to success.

Use the GA4 event name to convey the meaning of the event. This needs strategic alignment, of course, but, as much as possible, it is important to use GA4’s automatically collected, enhanced measurement, and recommended events before committing to a new custom event. This ensures the right reports are available for your data out of the box; custom event names might not populate all reports.
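
To make that concrete, here’s a minimal Python sketch of the naming decision (the recommended list below is deliberately partial and purely illustrative; GA4’s documentation is the source of truth):

```python
# A partial list of GA4 recommended event names; Google's GA4
# documentation is the authoritative source for the full list.
RECOMMENDED_EVENTS = {
    "purchase", "refund", "add_to_cart", "begin_checkout",
    "login", "sign_up", "search", "share", "generate_lead",
}

def choose_event_name(proposed: str) -> str:
    """Normalise a proposed name and flag it if it isn't a recommended event."""
    name = proposed.strip().lower().replace(" ", "_")
    if name not in RECOMMENDED_EVENTS:
        # Custom events are allowed, but some out-of-the-box reports
        # will not populate from them.
        print(f"'{name}' is custom; check the recommended list first.")
    return name

print(choose_event_name("Sign Up"))            # sign_up: recommended
print(choose_event_name("brochure_download"))  # custom: flagged
```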


In conclusion

To not have a strategically aligned measurement approach is to court disaster. Recognising that Google Analytics is changing, in so many ways for the better, is to embrace a fabulously valuable opportunity to address strategic alignment and remedy tactical issues in one fell swoop.

Learn about GA4, and use it to plan the migration from UA. Build a measurement roadmap that complements the digital marketing plan. Be proactive, rather than reactive in measurement and strategy. Draw these components into a repeatable process, and ensure tagging remains aligned with strategy.

WEBINAR: Slides and Video for ‘The Essential Role of Data Taxonomy in Bayer Digital Marketing’

 

For the August 25 Live with MightyHive episode, I was pleased to be joined by special guest Jeff Rasp, VP Digital Platforms, Channels & Capabilities at Bayer. He and I discussed the many benefits of an airtight marketing data taxonomy, and Rasp explained how consistent campaign naming conventions are the foundation of a Bayer in-house team that earned the AdExchanger award for “Best In-House Media Operation” in 2020.

According to a recent report from Datorama, 57% of marketers spend a week out of each month manually integrating data. As part of the MightyHive and Bayer partnership, Bayer established a sound taxonomy early in the in-housing process to essentially automate that task. And in “The Essential Role of Data Taxonomy in Bayer Digital Marketing,” Rasp explains how he distilled the KPIs of multiple departments into a unified naming convention capable of great versatility.

Today, the Bayer marketing team generates insights in minutes, not days. And in this episode, Rasp outlines how his team pivots to accommodate the diverse reporting needs across the company. Watch the video to see Rasp and me discuss:

  • How to create a data framework and hierarchy
  • The power of a well-planned media taxonomy
  • Data governance and the importance of consistency

WATCH “THE ESSENTIAL ROLE OF DATA TAXONOMY IN BAYER DIGITAL MARKETING.”

 

VIEW THE SLIDES FOR “THE ESSENTIAL ROLE OF DATA TAXONOMY IN BAYER DIGITAL MARKETING.”

Thanks for watching! If you have questions about the role of marketing data taxonomy in your organization, please contact us.

 

 

Award-Winning Data & Analytics Consultancy Brightblue to Merge with MightyHive

Today Brightblue, an award-winning UK-based data analytics and measurement consultancy, announced it will merge with MightyHive. The merger builds upon MightyHive’s recently launched data practice, adding deep probabilistic attribution expertise to its experienced and rapidly growing team.

With COVID-19 accelerating existing digital transformation trends, analytics will be more important than ever as marketers seek to fast-track digital maturity plans. This merger addresses an immediate need for marketers in the region and globally, over 65% of whom reported plans to increase analytics investments in the next year (2020 Global State of Enterprise Analytics).

Brightblue was founded in 2012 by Michael Cross after more than a decade of consulting on predictive modeling and marketing ROI effectiveness, and is supported by directors Jamie Gascoigne, David Walsh, Ruan van de Venter, and Stephen Hilton. Today, the company helps clients grow their sales and profits, using a range of mathematical and econometric techniques to give marketers greater insight into their digital audiences. Clients include HSBC, The Co-op, LV, reed.co.uk, Three, and Royal Mail.


Since its inception, Brightblue’s outstanding work has earned the consultancy a number of prestigious awards, including a Cannes Lion, two honors from The Institute of Practitioners in Advertising (IPA), Financial Times 1000 Fastest Growing Companies in Europe, Start Ups 100 for the UK, Goldman Sachs 10k Small Business Program, and the Mayor’s International Business Program. Additionally, both Brightblue and MightyHive have been named Best Places to Work. Brightblue is also an official effectiveness partner of the Incorporated Society of British Advertisers (ISBA).   

As a platform-agnostic, independent consultancy, Brightblue is a trusted advisor for clients looking for understandable, actionable insights from their analytics data. Brightblue and MightyHive both believe in delivering transparent solutions that will drive the greatest ROI for clients, leveraging the most appropriate platforms, media partners, and modeling strategies for each client’s individual needs. 

Continued Momentum in Data and Analytics

Brightblue’s merger with MightyHive enables both companies to help marketers meet the rapidly growing need to move away from reliance on cookies. The econometric marketing mix models that Brightblue champions provide marketers with a view of performance and sales attribution across channels without relying on user-level data. Combining probabilistic models with privacy-safe, customized, deterministic attribution models will give marketers unprecedented visibility and control over their investments. 

Brightblue’s team of statisticians and mathematicians provides unparalleled expertise in designing marketing mix models and optimization strategies, and uses proprietary automated workflows to achieve maximum efficiency for clients. This merger positions Brightblue to offer a larger suite of global solutions to its clients, as well as integrated marketing services through S4Capital.

 

MightyHive and S4Capital: Leading the Way for Marketers

As companies seek to make the most of their investments and meet new consumer expectations as a result of the COVID-19 pandemic, smart marketers realize that it’s imperative to make data-driven decisions. Together, MightyHive and Brightblue will help companies more effectively analyze, understand, and mobilize their data for the best possible performance, revenue growth, and return on investment. 

MightyHive, reinforced by mergers in the UK, South Korea, Latin America, and Australia, and now joined by the stellar Brightblue team, is well-positioned to help marketers deliver the insights they need to find efficiencies, optimize their marketing spend, and drive more profitable growth. As marketers accelerate digital transformation plans, MightyHive and S4Capital strive to help them quickly and efficiently stand up digital solutions that will leave them stronger and more prepared than their competitors.

In December 2018, MightyHive merged with S4Capital, a new age/new era advertising and marketing services company established by Sir Martin Sorrell. S4Capital’s strategy is to create a purely digital advertising and marketing services offering by integrating leading businesses in three areas: first-party data, digital content, digital media planning and buying.

To learn more, contact us.

Server-Side Google Tag Manager Deep Impact

 

Before we dive into server-side Google Tag Manager (GTM), I’ll prefix the meat of this post with a caveat: always respect user privacy.

Any data collection techniques discussed here must be applied righteously and not as a workaround to circumvent data collection consent regulation.

10,000-Foot View

Here’s a familiar situation – Google Tag Manager as we’ve known it for years.

Your container is loaded on all pages or screens in your site/app and, based on trigger events, data is sent to first- and third-party endpoints.

It works, it’s fine, but it’s not perfect. Tracking blockers, JavaScript failures, many, many requests to endpoints, and inefficient JavaScript are all risks and potential performance problems that can lead to data quality issues.

Server-side GTM moves the tag vendor request from the client to a server: a server on Google Cloud Platform, living on a subdomain of your site. The container loaded in the browser/app still has tags and still sends a request, but it has far less code, sends fewer requests, isn’t necessarily affected by anti-tracking software, and doesn’t send the user’s IP address to third-party tag vendors, while first-party cookies are correctly set in an ITP-compliant manner.

Out of the Box – What’s Cool?

There’s a lot to be excited about with server-side GTM in that, on the client side, it’s all very familiar, but way better! The “traditional” digital marketer can still set up their Facebook tag(s) with the same triggers, and deploy Floodlights as required. Same, same… but different.

As mentioned earlier, rather than sending data to the tag vendor endpoint, it’s sent to a subdomain. For example, if you’re on www.mysite.com, server-side GTM will send data to tracking.mysite.com, a subdomain you configure.

And that’s great because…?

  • It respects user privacy: The user’s IP address isn’t sent to a third party.
  • It preserves data quality: Tracking prevention doesn’t happen on requests to your own domain.
  • It lightens code bloat from the client side: The tags require less work on the browser, shifting the workload to the server instead. This means what remains in GTM on the browser does less, so the site runs faster.
  • It consolidates requests from the client side: You can send multiple requests from the server based on one request from the client.

At MightyHive, we strongly advocate for focusing on what’s best for the user, not the ability to foil or circumvent anti-tracking software. Reminder: act righteously, not selfishly. As it stands now, data is collected, not captured. In the future data will be exchanged… Think about that for a minute.

Deeper Impact

Have you noticed that tracking requests are sent to your domain and not a third-party domain? The data collection workload is moved to your infrastructure.

Does that feel like just going back to web server logging? How different is this from web server logging?  

Very. 

Analytics data is formatted (sessionized), cleaned (PII removed), integrated (joined with data from Google Ads, Search Ads 360, and Display & Video 360) and presented ready to perform its function: analysis and optimization of all aspects of the online business, which, let’s face it, is all about better marketing.

Web server logs don’t collect all behavioral data. Typically, log-level data isn’t integrated with marketing channel data, meaning there’s no feedback loop for activation of the data. 

But! There are similarities between server-side GTM and web server logging. The web server receives a request, typically for a page, builds the page content and responds, possibly setting first-party cookies along with the response. The server-side GTM endpoint also receives requests, and responds, potentially with cookies (but with less content).

Now… the web server knows what page it’s returning.

It knows what data to render on the data layer to record a transaction (for example). 

The data layer is picked up by a tag firing in the browser and then sent back to the tracking endpoint. 

The endpoint then takes the same data and fires it off to Google Analytics (GA) to complete the round trip and get your analytics data recorded.

Phew!

Wait one minute. If the web server knows it’s rendering a “thank you” confirmation page, and it knows what data to render on the data layer, why bother sending this to the browser just for the browser to send it back to the tracking endpoint and then on to GA?

Why not remove some steps for efficiency? The web server knows it is rendering a confirmation page, so it can build the exact same request the browser was going to send, and send the GA transaction data straight to the tracking endpoint. Cut out the client round trip.

It’s quite normal to fire off conversion tags, Floodlights, FB pixels, Adnxs, TTD, and so on to record transactions. Don’t send those to the client to handle. As the web server responds with the confirmation page, send those requests straight to the tracking endpoint. The endpoint responds with the details of the cookies to set, and the web server sends those with the confirmation page content in the response to the client.
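
Here’s a minimal Python sketch of that shortcut, assuming your server container exposes a Measurement Protocol-style endpoint (the subdomain, measurement ID, and API secret below are placeholders):

```python
import requests

# Placeholder values: substitute your own sGTM endpoint and GA4 credentials.
SGTM_ENDPOINT = "https://tracking.mysite.com/mp/collect"
PARAMS = {"measurement_id": "G-XXXXXXXXXX", "api_secret": "YOUR_API_SECRET"}

def record_purchase(client_id: str, transaction_id: str, value: float) -> None:
    """Fire the purchase event server-to-server, cutting out the browser."""
    payload = {
        "client_id": client_id,  # read from the user's first-party cookie
        "events": [{
            "name": "purchase",
            "params": {
                "transaction_id": transaction_id,
                "value": value,
                "currency": "GBP",
            },
        }],
    }
    response = requests.post(SGTM_ENDPOINT, params=PARAMS, json=payload, timeout=5)
    response.raise_for_status()
```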

Think how many marketing tags and tracking pixels fire on page-level events. How many tags actually need to fire on the client? How many tags don’t even need to be exposed to the browser? What if, once you’ve removed all of your data bloat, you only had page-level, event-triggered tags? Then you don’t need to CNAME the tracking subdomain; you can restrict access to your tracking endpoint so that only your web server can reach it via HTTPS (think IP range restriction). That’s a lot less complexity and a fair number of moving parts removed from the solution.

Simpler is better. No code is better than no code, as the saying goes.

In Conclusion

The server-side GTM solution offers a sound, correct approach to digital analytics measurement. It’s good because data quality can be improved, user privacy is further protected and, significantly, it’s a step towards doing less work in the browser, meaning sites and apps get faster.

Thinking about what the technology makes possible, with the right motivation in mind, demonstrates how versatile the solution is, how much power is available, and which avenues are still to be explored to leverage first-party data.

 

WEBINAR: A Discussion with Ace Hardware About Offline Data and Online Marketing

 

**Scroll down to watch the video**

How Ace Hardware is Using Offline Data to Measure its Digital Marketing

A recent Google report titled “How Consumers Solve Their Needs in the Moment” found that 76% of people who search for something nearby on their smartphone visit a related business within a day, and 28% of those searches result in a purchase. “That’s significant,” said Mark Lowe, Director of Digital Marketing at Ace Hardware. “So it is critical for us to have the [online customer experience] be as helpful as possible.”

Lowe is a seasoned digital marketer, and on August 4 he provided a glimpse into how Ace Hardware is adjusting to shifting consumer habits during an eMarketer “Tech Talk” webinar titled “How Ace Hardware is Using Offline Data to Measure Digital Marketing.” Together with Myles Younger, Senior Director of Marketing at MightyHive, he discussed how Ace Hardware balances nationwide digital trends with in-store buying at the retail level.

Ace Hardware generates a significant amount of first-party data from its website and its stores. By partnering with MightyHive, they reimagined ways for online and offline data to inform each other. “That means a very pure approach to data since many data sources need to talk to each other,” explained Jack Pace, Project Lead at MightyHive. “Offline data from the store, online data from the site and app, franchisee sales data, manufacturer data from the thousands of SKUs carried throughout the chain — each store is different.”

Lowe is excited by the possibilities. “We are getting good insight into how many people are going into the store to pick up their online orders as well as the attachment sales when they are making their pickup,” explained Lowe. “And we are really leveraging several tools such as Google store visits and store sales direct from Google to connect the dots and understand the impact across all channels.”

With 4,500 retail locations, a robust online presence, and thousands of SKUs, Ace Hardware had a unique challenge with its data, but the efforts are paying off. “As data-driven marketers, we really want the same level of precision that we have with our online measurement with our offline,” explained Lowe. “There is going to be a certain level of extrapolation but it’s all about getting to the point where you can make actionable business decisions.”

In this eMarketer “Tech Talk” webinar you will learn about:

  • The Ace Hardware digital marketing tech stack
  • Successes, opportunities, and challenges in measuring the offline impact of digital campaigns
  • The role web UX can play in collecting and growing first-party data

FILL OUT THE FORM TO WATCH THE eMARKETER WEBINAR: “HOW ACE HARDWARE IS USING OFFLINE DATA TO MEASURE DIGITAL MARKETING”

Digital Hygiene: Fighting Data Bloat

 

Some years ago, as digital storage grew more affordable, the attitude towards data by many companies was to “store everything.” Every. Single. Data. Point. 

Next came “big data” and cloud computing, which brought even more data, more computing power, and ostensibly more opportunity and insights.  As a result, data consumption skyrocketed, driven by the Internet, social networks, and digital services.

To paraphrase my guru Avinash Kaushik, we now have more data than God ever intended anyone to have. 

The instinct to store everything is understandable. Why throw away data? But there have been a few unforeseen effects:

  • It increases the workload associated with data quality assurance
  • It increases data processing times
  • It makes data sets more complex and more difficult to work with
  • Most of the data is irrelevant to business analysis

The decision to keep all the data was an easy one; discerning which data points should be considered is difficult. This consideration phase happens either as companies specify a data project (BEFORE) or as they introduce a new release of their digital assets (AFTER).

For mature audiences only

Imagine you’re building the specification for your project and figuring out how to measure project success. You will most likely consider the following KPIs:

  • Key feature usage rate (conversion rate)
  • Marketing effectiveness (budget, cost per acquisition)
  • Vanity metrics (volume, users)

Sounds too basic? Fair enough. And yet that’s a great base to work from! 

Important Tip: Your project must be in sync with your organization’s maturity level.

First, you need to make sure the basic data you intend to collect from your site or app resonates with your product managers, your marketing team, or your analysts. They need to understand how these basic numbers can help shape your product or marketing strategies. 

Then, a specification document must be established. A Data Collection Bible of sorts. Call it a tagging plan, a data collection blueprint, a solution design document… get creative! That document will not be set in stone. It will evolve with your company as you enrich your data set to meet your measurement requirements. Make sure to include significant stakeholders in that process, or else…

Only after you’ve gone through a thorough data specification phase can you consider enriching your data during subsequent development cycles. Data enrichment will either be:

  • Vertical: more metrics to measure specific user events
  • Horizontal: more dimensions/attributes to give metrics more context

Keep enriching your data to assess the KPIs that support the measurement of your business objectives. Give them as much context as you can so the analysis is as relevant and actionable as possible.
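
To make the distinction concrete, here is a tiny illustrative sketch in Python (the event and field names are invented for the example):

```python
# Baseline event: one metric, no context.
event = {"name": "file_download", "downloads": 1}

# Vertical enrichment: a new metric measuring a specific user event.
event["download_time_ms"] = 420

# Horizontal enrichment: new dimensions that give the metrics context.
event["file_type"] = "pdf"
event["campaign"] = "spring_sale"
```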

Does your data spark joy?

All this talk about enriching your data sounds great, but you may be at a stage where you’ve collected way too much data already. Arguably, getting a ton of data means getting the fuel to power machine learning, artificial intelligence, or any reasonably advanced data processing.

Having said that, too much unidentified/non-cataloged data will ultimately yield confusion and storage/processing costs. For instance, if you have a contract with a digital analytics vendor (say Adobe or Google), it is very likely you’re paying a monthly/yearly subscription fee based on the number of hits your system collects and processes into reports, cubes, and miscellaneous datasets. Additionally, digital marketing teams are not known for questioning the status quo when it comes to data and tracking in particular.

If you combine both facets of data cleanup, we’re looking at an optimization campaign that turns into a cost-saving effort. This is where you as a company should start asking yourself: “do I really need that data? Can my team function without measuring metric X and attribute Y?”

To borrow from Marie Kondo’s KonMari method, you should keep only data points that speak to the heart. Identify metrics/attributes that no longer “spark joy,” thank them for their service, then brutally dispose of them with a firm and satisfying press of the DELETE button.

How can you tell whether you should discard a specific data point?

This requires a bit of investigation that can be done in your data repository by looking at your data structure (column names and values for instance). If you cannot make up your mind, ask yourself whether one particular data point really “sparks joy,” or in our case, drives analysis and can be used as a factor in machine learning. In fact, this is a great occasion to actually use machine learning to find out! 

Feed your data set into R or Python (insert your favorite machine learning package here) and look at the results.
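
For instance, here’s a sketch with Python and scikit-learn, assuming a flat CSV export with hypothetical column names:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Placeholder export and columns: substitute your own data set.
df = pd.read_csv("analytics_export.csv")
features = ["sessions", "pageviews", "device_category", "campaign"]
X = pd.get_dummies(df[features])   # one-hot encode categorical attributes
y = df["conversions"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False))
# Attributes with near-zero importance are candidates for the DELETE button.
```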

You could also look at factor analysis another way and see where a specific factor really contributes to performance, metric by metric.
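
Again as a sketch, on the same hypothetical export:

```python
import pandas as pd

df = pd.read_csv("analytics_export.csv")  # same hypothetical export as above
factor = (df["device_category"] == "mobile").astype(int)

for metric in ["conversions", "revenue", "bounce_rate"]:
    print(metric, round(df[metric].corr(factor), 3))
# A factor that barely moves any KPI probably doesn't spark joy.
```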

Once you’re done analyzing which data points still belong in your data architecture, it’s time for pruning. If you have made the decision to delete existing data, this can be as simple as deleting a column or a set of entries in a database, data lake, or data repository. But that’s only for data you already collected. What about data collection moving forward? 

If you want to change the way data is collected, you need to go KonMari on your digital assets: web site tracking, mobile SDKs, OTT devices. Using a tag management system (TMS), you can start by deactivating/pausing tags you no longer need before safely deleting them from future container versions.

From a management perspective, stakeholders need to make themselves known and express clear data requirements that can easily be retrieved. That way, when you prune/retire data that is deemed to no longer spark joy, you’re not inadvertently sabotaging your colleagues’ reports.

And this is why you needed that Data Collection Bible in the first place!

Which data stage are you at? Before or after? Basic or complex? Don’t hesitate to contact MightyHive for a data maturity audit or a digital analytics health check!

 

 

Download the Slides and Video for Comparing Log-level Data to Ads Data Hub

**Scroll down to download everything**

High Fidelity: Log Files vs. Ads Data Hub

Recently I had the pleasure of being a guest on “Live with MightyHive” to talk about how Google Data Transfer Files (DT Files) compare to the data inside Ads Data Hub (ADH).

As a refresher, DT Files are the log-level data marketers can access from Google Marketing Platform (GMP). Google announced in 2018 that it would begin redacting user-level IDs in order to protect user privacy and comply with regulations like GDPR (and now CCPA, as we cover in this episode). Ads Data Hub, on the other hand, is the “data clean room” for GMP: a privacy-preserving application for analyzing campaigns and audiences without user-level data needing to leave Google.

In practice, the detailed user-level data offered by DT Files often ends up heavily redacted (for both privacy and business reasons), whereas Ads Data Hub keeps user-level data 100% private and is consequently far less redacted. Marketers NEED to understand this trade-off. This episode covers what you find when you compare the two, including:

  • The key differences between DT Files and Ads Data Hub
  • How to check data parity between DT Files and Ads Data Hub before making other comparisons (a check sketched below)
  • How to compare the effects of user ID redactions, Safari ITP, and CCPA between log-level data and ADH
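
To give a flavor of that parity check, here’s a sketch using BigQuery from Python. The table and column names are illustrative (your DT file load will differ), and it assumes redacted user IDs arrive as zeroes:

```python
from google.cloud import bigquery

client = bigquery.Client()  # assumes default GCP credentials

# Illustrative table/column names: adjust to your own DT file load.
query = """
SELECT
  DATE(event_time) AS day,
  COUNT(*) AS impressions,
  COUNTIF(user_id = '0') AS redacted_user_ids
FROM `my_project.dt_files.impressions`
GROUP BY day
ORDER BY day
"""
for row in client.query(query).result():
    print(row.day, row.impressions, row.redacted_user_ids)
# Compare these daily impression totals against the equivalent ADH output.
```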

We’ve packaged everything up and you can register to download these materials below:

  • The full episode video
  • The slide deck
  • Sample SQL queries for both DT Files (BigQuery) and Ads Data Hub

If you want to develop a better understanding of how Ads Data Hub differs from legacy analytics and log-level data, then this is a great set of materials. I hope you find them useful!

 

Preview the slides


A few sample slides from “High Fidelity: Log Files vs. Ads Data Hub” and the appendix of SQL queries for BigQuery and Ads Data Hub.

 

DOWNLOAD THE VIDEO, SLIDE DECK, AND SAMPLE SQL QUERIES

MightyHive Achieves Google Cloud Platform Global Marketing Analytics Specialization Certification

We’re excited to announce that MightyHive has been certified as a global sales and service partner for Google Cloud Platform (GCP) with a Marketing Analytics Specialization. The Marketing Analytics Specialization enables MightyHive to provide data advisory services that help clients advance their marketing analytics strategies, using GCP for data infrastructure modernization and machine-learning automation. This enables marketers to measure the customer journey at every step of an interaction and tailor future campaigns at scale. MightyHive has been certified as a Google Cloud Platform Partner since 2018 and has recently been featured for its advanced cloud work helping to personalize digital customer experiences using Google Cloud. Read the full Mondelēz Case Study here.

MightyHive: Committed to Client Success

MightyHive’s continued investment in its data practice, inclusive of the Google Cloud Platform, demonstrates its commitment to the evolving needs of marketers seeking to make better use of their first-party data and navigate data privacy compliance and governance. As a certified Google Cloud Platform partner, MightyHive helps advertisers employ the innovative technology and workload tools needed to overcome the challenges of unifying critical data from their marketing strategies, customer touchpoints, and sales transactions; generating new business intelligence and customized insights related to advertising performance; and activating those insights across marketing and customer-facing channels.

“MightyHive has been instrumental in helping Mondelēz lead a digital renaissance in the CPG industry. The MightyHive data team has been by my side as we’ve architected advanced data environments and future-proofed our measurement systems.”

Jon Halvorson, VP Global Media, Digital & Data, Mondelēz International

The combined expertise and certification in Google Marketing Platform and Google Cloud Platform puts MightyHive into a class with few other companies, where the interplay between cloud technology, marketing technology, and advertising technology will continue to be leveraged by data-driven marketers of the present and future. As a leading marketing services provider, MightyHive focuses on the needs of brands under pressure to show that marketing strategies contribute to profits and to develop closer connections with their customers at scale, a critical aspect of digital-readiness during this time of rapid digital transformation. Leveraging technology partnerships with both Google Marketing Platform and Google Cloud Platform, MightyHive helps clients take a more holistic approach to developing, and even modernizing, their data and marketing strategies to create a more unified and powerful set of customer experiences.

A Marketing Analytics Partner for the Future

Through recently completed mergers with Digodat, ConversionWorks, and MightyHive Korea and the pending merger with Lens10, MightyHive continues to rapidly expand its services capabilities and global presence, with new certifications and specializations in data lake modernization, AI, and Google Cloud Identity & Security. With increased global service operations, many multinational clients trust MightyHive to deliver globally consistent marketing services that meet their unique in-region business needs, driving their marketing analytics with the most advanced marketing and cloud technology to deliver and measure the business outcomes that guide future investments.

Press Release: MightyHive Launches Global Data Practice

SAN FRANCISCO, July 16, 2020 (GLOBE NEWSWIRE) — Leading data and digital media consultancy MightyHive today announced the launch of its global data practice to help organizations better understand their customers and make informed decisions across their businesses with machine-learning (ML) technologies that drive the effective and sustainable use of connected first-party data and analytics.

Eighty-seven percent of marketers consider data their organizations’ most underutilized asset, yet according to Forrester, more than half of marketers still feel overwhelmed by the incoming data. In order to make data-driven marketing a reality, marketers need data experts to guide their strategy while effectively navigating a changing privacy landscape. This includes new regulations and technological changes that affect browser cookies and mobile advertiser IDs. The risks of not using or misusing data can be detrimental to a business, as 52% of Americans have decided not to use a product or service because of privacy concerns.

“Data is the vital connective tissue that drives personalization at scale and makes good creative even more powerful,” said Sir Martin Sorrell, executive chairperson of S4Capital. “As the leading tech-led new age marketing services company, launching the MightyHive data practice is a natural step in our evolution.”

MightyHive has already sustained positive results from privacy-first data strategies for international brands like Electrolux, Pandora Jewelry and Mondelēz International. A key example of this was MightyHive’s work in helping Mondelēz International create a Media Data Spine, a scaled repository to analyze media data for Mondelēz marketing teams across the globe, which has spearheaded advanced personalization-at-scale and closed-loop sales measurement with retailers such as Target.

“MightyHive has been instrumental in helping Mondelēz lead a digital renaissance in the CPG industry,” said Jon Halvorson, VP global media, digital & data at Mondelēz International. “The MightyHive data team has been by my side as we’ve architected advanced data environments and future-proofed our measurement systems.”

The MightyHive data practice has achieved certifications for Google, Amazon and Salesforce marketing clouds. MightyHive is the only global company that has both Google Marketing Platform (GMP) and Google Cloud Platform (GCP) certifications with specializations across Cloud Identity and Security Expertise.

The MightyHive data practice, globally helmed by Tyler Pietz, senior vice president, global data, is a natural addition to its media consulting services. MightyHive has been steadily building upon its existing data and analytics expertise, including its deep understanding of ML and artificial intelligence (AI) through strategic mergers and calculated hires. Within the past year, the company has merged with Digodat, ConversionWorks, and MightyHive Korea, which has grown its regional presence in Latin America, Europe, Asia and Australia. The MightyHive data practice is globally available to clients with teams and support available locally and across regions.

In addition to welcoming the merged companies’ teams, noteworthy recently appointed talent includes Julien Coquet, Sayf Sharif and Toby McAra. The global data practice team has deep competency in analytics, as well as the change-maker mindset required to drive digital transformation and fill the data and analytics services gap.

“Data impacts every aspect of the customer experience – not just marketing – but it can still be overwhelming and confusing,” said Pete Kim, CEO and co-founder of MightyHive. “With the launch of the global data practice, MightyHive helps brands break down data silos within their company to more effectively intertwine data with media and creative for more compelling customer experiences and maximum returns.”

To learn more about the MightyHive global data practice, please visit https://mightyhive.com/data-practice/.

About MightyHive
MightyHive is the leading data and digital media consultancy that helps marketers take control. MightyHive delivers sustained results from the ground up through advisory for business transformation, privacy-first data strategy, and digital media services.

The company is headquartered in San Francisco, with a team of consultants, platform experts, data scientists, and marketing engineers in 19 countries and 24 cities around the world. In 2018, MightyHive merged with S4Capital plc (SFOR.L), a tech-led new age/new era digital advertising and marketing services company established by Sir Martin Sorrell.

Contact
Blast PR for MightyHive
mightyhive@blastpr.com

Read Press Release on GlobeNewswire

Making Google Optimize More Effective with Double Blind Testing

Doug Hall, Senior Director of Analytics
Allison Hannah Simpson, Marketing Associate

The Opportunity for Better Testing

We’re big fans of Google Optimize, a premium testing and personalisation tool. We’re also big fans of Double Blind Testing. Double Blind Testing weeds out the bias that can diminish the effectiveness of your data analysis. This article proposes integrating Double Blind Testing with Google Optimize to further validate your marketing research, thus helping your marketing dollars go further.

What is Double Blind Testing? A handy definition:

A double blind test is an experiment in which neither the subject nor the observer knows which condition is the control and which is the experiment. Double blind testing is referred to as the gold standard of testing. Double blind tests are used in experiments across science, medicine, and psychology, including theoretical and practical testing.

See how that’s different from Optimize testing? With Optimize, the analyst often sets up the test, runs the test, analyses the test, calls the winner (if there is one) and shares learnings. That last part of the process, sharing the learnings from the test, is the most important piece.

Caution: Unconscious Bias Ahead

One person wearing multiple hats during testing produces what is known as a single blind test, and that comes with consequences: a single blind test risks being influenced by unconscious bias. In a single blind test, the test participant is the only individual unaware of the experiment they’re being subjected to. The person running the test knows which is the control and which is the experiment.

There’s the rub. It’s quite possible for an analyst to draw a conclusion around results based on their knowledge of the test participant – not necessarily to the extent of creating manufactured data, but unconscious bias creeps in subtly.

For example, the analyst may be presented with an aggregated view of the data that shows the experiment outperformed the control. At first, this sounds like a success! However, confirmation bias could make the analyst less likely to dig deeper and explore the data under a more critical lens. With the results in the bag, the analyst moves on and misses important insights into the effects of the experiment.

The opposite is also possible: the control wins, so the experiment tanks. This cannot be true! So the analyst, misled by their cognitive bias, wastes time digging for signals that validate their hypothesis.

Change the Methodology to Remove Bias

Testing is a team effort. Divide [effort across the team] and conquer [cognitive bias] to achieve proper double blind tests. Let’s take a look at how a simple A/B test approach might change:

First, the test owner develops the hypothesis, which must remain private to the test owner.

Based on data [a] and feedback [b], we believe that doing [c] for audience [d] will make [e] happen.

We will know this to be true when we observe data [f] and get feedback [g].

We expect a change in metric [h] of magnitude [i] within [j] business cycles.

The test owner will then work with the designer and front end engineer to develop and implement the test experiment. It’s important for the test owner to keep the hypothesis a secret and only share what the test experiment is with other team members. The reason for the test should not be revealed.

The test is executed and now an analyst is introduced to the test data. The analyst has no previous knowledge of the hypothesis, experiment design or testing process at all. They only know this is a test in which they must perform a calculation on the presented data to determine the test experiment performance and significance.

Setting Up a Double Blind Test

Ask your data engineers to install the Google Analytics add-on for Google Sheets. Share the audience definition piece of the hypothesis with the data engineer, who will set up a query as shown below to produce two reports accurately representing the audience defined in the hypothesis. One report is the daily sessions, conversion volume, and conversion rate for the test experiment; the other is for the control. Name the reports Population A and Population B; something anonymous will work.

[Image: Google Analytics add-on report configuration]

Now schedule the report to refresh daily. Having done this, hide the config sheet and make sure the test analyst is not given editor rights on the data so they cannot see the produced report config:

[Image: hidden Google Sheet message]

Now the analyst has blind access to the data with no awareness of what the experiments are for, what the hypothesised effect is, who the audiences are or which population corresponds with what experiment. Due to the blind data access, there is no cognitive bias in the analysis.

[Image: Google Sheets report data]

The analyst reports on the performance of each population as they see it in the numbers. Each member of the team is sufficiently removed from the hypothesis and test detail so as to achieve a double blind test production and analysis. Insert happy-dance here!
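
That calculation can be as simple as a two-proportion z-test on the anonymised populations; here’s a sketch in Python with made-up numbers:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up totals for the two anonymised populations.
sessions = [10433, 10562]      # Population A, Population B
conversions = [312, 367]

z_stat, p_value = proportions_ztest(conversions, sessions)
rate_a, rate_b = (c / s for c, s in zip(conversions, sessions))
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p-value: {p_value:.4f}")
# The analyst can call performance and significance without ever knowing
# which population is the control.
```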

Integrate into Your Data Team’s Workflow

Google Optimize just leveled up by going one step further with Double Blind Testing. The double blind testing strategy has the power to improve your data analysis. It can open opportunities for you and your team by decreasing unconscious bias and increasing the effectiveness of your media spend. Choose to take more control of your marketing dollars by integrating this testing process into your daily workflow.

To learn more about Google Optimize and Double Blind Testing, reach out to us at questions@mightyhive.com.