Tracking Performance with Dimensions

In this article, we'll cover strategies for using dimensions to measure the impact of changes to your account and to the campaigns each dimension contains.

If you're new to the world of Dimensions, be sure to read our comprehensive help center article introduction before you read on. Please also note that this feature is designed for clients using Marin Bidding.

What Can Dimensions Track? 

Dimensions can be created to help you track the performance impact of various changes, including:

  • Bid changes
  • Keyword expansion
  • Negative keyword expansion
  • Creative tests
  • Budget changes

Dimensions can also be used to track the performance of keyword sets that do not have any changes (to be used as a 'control' set).

Before measuring lift, determine which metrics to evaluate based on your specific goals. It's common to want more revenue with less ad spend, but when measuring lift you should decide which single metric matters most.

Examples of specific goals (and the metrics to look at):

  • Maximize conversions while maintaining constant spend
  • Keep revenue constant while decreasing spend
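To make these goals concrete, it helps to express each one as an efficiency metric. The sketch below is illustrative only; the field names and numbers are assumptions, not Marin exports.

```python
# Illustrative helpers for choosing a primary metric; all numbers are hypothetical.

def cpa(spend: float, conversions: float) -> float:
    """Cost per acquisition: spend divided by conversions."""
    return spend / conversions if conversions else float("inf")

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue divided by spend."""
    return revenue / spend if spend else float("inf")

# Goal: maximize conversions while maintaining roughly constant spend.
baseline = {"spend": 10_000.0, "conversions": 250, "revenue": 40_000.0}
test     = {"spend": 10_050.0, "conversions": 280, "revenue": 44_500.0}

print(cpa(baseline["spend"], baseline["conversions"]))  # 40.0
print(roas(test["revenue"], test["spend"]))
```

If your goal is "maximize conversions while maintaining constant spend", CPA is the natural primary metric; for "keep revenue constant while decreasing spend", ROAS is the better fit.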

How To Create Dimensions And Tag Campaigns 

To measure lift from automated bidding, you'll need to create a new dimension and tag your campaigns:

  1. Using the instructions in our Creating & Tagging Dimensions article, create a new dimension called: 'AutoBid_Results'.
  2. On the first day bids are trafficked, create a dimension tag called 'Bidding_Phase1_mmdd_yyyy' and apply it to all campaigns contained within the trafficked Strategy.

You can repeat the process above for any Strategy that is actively being trafficked.
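If you manage tags in bulk sheets, a small helper can keep the date-stamped naming convention consistent across phases. This is a sketch of the article's 'Bidding_Phase1_mmdd_yyyy' convention, not a Marin feature:

```python
from datetime import date

def phase_tag(phase: int, traffic_date: date) -> str:
    """Build a 'Bidding_Phase<N>_mmdd_yyyy' dimension tag value
    following the naming convention used in this article."""
    return f"Bidding_Phase{phase}_{traffic_date.strftime('%m%d_%Y')}"

# Example: the tag for a Strategy first trafficked on January 18, 2024.
print(phase_tag(1, date(2024, 1, 18)))  # Bidding_Phase1_0118_2024
```

Using the actual trafficking date in the tag makes it easy to line up the Test period start date later, even after campaigns are remapped.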

Note: Trafficked Strategies operate in 'phases' so that performance can be compared against sets of campaigns that are not using automated bidding (referred to as a 'control'), and so that the spend opted into a new feature (which may require settings adjustments) is kept to a minimum.

Setting Up Your Control Strategy 

For lift to be measured correctly, you'll need to set up your automated bidding with a control Strategy. To do that, follow these steps:

  1. Add a dimension tag called 'Control' to use for campaigns that are not opted into automated bidding. 
  2. Create a Strategy and add campaigns to it that are similar to the trafficked campaigns in your test group.
  3. Then, tag all campaigns in your non-trafficked Strategy with your 'Control' dimension tag. 

Note: Strategies not trafficked in the initial phase can serve as a valid 'Control' for comparison if the performance from keywords in those Strategies is expected to have the same seasonal changes as the keywords in the trafficked Strategy.

It is recommended to compare only non-trademark search vs. other non-trademark search, content vs. content, etc., to ensure a more 'apples to apples' type of comparison.

Going Forward 

When you add any new trafficked Strategies, follow these steps to continue using dimensions to track lift:

  1. On any day when additional Strategies are trafficked, tag all the campaigns in each newly trafficked Strategy with the appropriate dimension tag (e.g., 'Bidding_Phase2_mmdd_yyyy').
  2. Repeat the step above for every trafficked Strategy.

Note: Applying the steps above is important for being able to measure lift, especially if users remap campaigns to Strategies at later times (making the Strategy view inappropriate for measuring lift). The dimension tags will remain consistent, regardless of where the campaigns are mapped.

Monitoring Your Performance 

Once you've set up the above, you'll want to monitor the performance as you go forward. Here's how you do it:

  1. Start looking at metrics from the first day after trafficking a Strategy (the first full day using calculated bids). Measure roughly 1-2 weeks after trafficking (most changes in performance occur during this initial period), unless there is significant latency (the number of days between the paid click and the conversion).

    This time span is referred to as the Test period. If there is significant latency (e.g. an average of 10 days between a click and its conversion), adjust the Test period accordingly.
     
  2. It is recommended to compare the Test period against the prior week (or the prior month if there is monthly seasonality). This time span is referred to as the Baseline period and is typically the same number of days as the Test period.
  • If there were unusual events during the Baseline or Test periods (e.g. data issues, or promotions that affected only the test or control campaigns), remove those days from the comparison (along with the corresponding day of the week from the other period).
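Putting the Test and Baseline periods together, lift can be estimated by netting the control group's change out of the test group's change, so that shared seasonality is removed. A minimal sketch, with entirely hypothetical numbers:

```python
def pct_change(baseline: float, test: float) -> float:
    """Percentage change from the Baseline period to the Test period."""
    return (test - baseline) / baseline * 100.0

def lift(test_group_change: float, control_group_change: float) -> float:
    """Lift = change in the test group minus change in the control group,
    so seasonality shared with the 'Control' campaigns is netted out."""
    return test_group_change - control_group_change

# Conversions per period (hypothetical numbers):
test_baseline, test_period = 250, 280   # campaigns on automated bidding
ctrl_baseline, ctrl_period = 300, 306   # campaigns tagged 'Control'

test_change = pct_change(test_baseline, test_period)  # +12.0%
ctrl_change = pct_change(ctrl_baseline, ctrl_period)  # +2.0%
print(lift(test_change, ctrl_change))                 # 10.0 (percentage points)
```

Here the test group's 12% gain is reduced by the control group's 2% gain, suggesting roughly 10 percentage points of lift attributable to automated bidding rather than seasonality.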

Additional Notes 

The process outlined above can be very tricky, so please speak with your Marin account representative if you're at all apprehensive about setting up your test or measuring performance.

Written by Marketing @ Marin Software

Last published at: January 18th, 2024