AI is here, and we’re told it will make our lives easier in many ways – such as with automated bidding tools. And it can. But it isn’t as simple as flipping a switch. Which automated bidding strategy should be used for which type of account? What kind of monitoring and testing should we do? What can we do if we aren’t getting the results we expected? Based on our extensive experience with automated bidding tools, here are a couple of initial questions to consider:

  • What is the goal for the account? Different goals lead to different strategies.
  • What is the historical performance of the account? The algorithm needs data to perform well.

Want some broader context? Check out more about our (un)Common Logic approach to paid media services.

Ready for more details about (un)Common Logic’s experience and best practices regarding automated bidding strategies? Read on!

Automated Bidding Strategies: The What & Why

Automated bidding strategies have become less of a possibility and more of an inevitability. The balance of control between advertisers and platforms has gone back and forth over the years as Google continues to move away from manual bidding. In their eyes, optimizing bids at nuanced levels like keyword, device, or location has become an inefficient use of time that could be better spent developing higher-level strategies. Google will set the bids for you, using machine learning and algorithms to reach a desired goal. While this seems tempting at first, there are several factors that need to be thought through before reducing or eliminating manual bidding and rolling out automated bidding strategies. What should you consider? Let’s start with the basics and then discuss strategies and tactics around automated bidding.

To begin, you can set automated bidding at two levels: for a single campaign or for an entire portfolio. A single campaign is exactly what it sounds like: you select the bid strategy for one campaign, and the system optimizes bids based on data from that campaign only. For portfolio-level bidding, you can group campaigns together and select a bid strategy that will be applied to all campaigns in that group. Google will work to optimize all campaigns in the portfolio to achieve the goal; this does not mean each campaign will individually reach the goal, but rather that collective performance should meet it. For portfolio-level bidding, it is important to use shared budgets and regularly evaluate groups to ensure optimal performance.

There are eight different automated bidding strategies to choose from depending on the desired outcome. Each strategy will automatically set your bids:

  • Target CPA – The user sets a target Cost-Per-Acquisition (CPA), and the algorithm works to capture as many conversions as possible while keeping the average cost per conversion around that target. If you are using tCPA at the portfolio level, you have the option of setting minimum and maximum CPC bids.
  • Target ROAS – Similar to target CPA, but instead, the user sets a target Return on Ad Spend (ROAS) that the algorithm will optimize towards
  • Maximize Clicks – Within your set budget, the system will work to capture as many clicks as possible, whether or not they could result in a conversion. You are able to set a maximum CPC bid limit to keep CPCs low, as Google will spend the full daily budget with this strategy.
  • Maximize Conversions – Similar to Maximize Clicks, except the system will work to capture as many conversions as possible, which can come at the cost of drastically increased CPAs. You can set a target CPA to limit the negative impact to CPA. It’s important that these campaigns have their own budgets rather than a shared budget.
  • Maximize Conversion Value – Also similar to Maximize Clicks and Maximize Conversions, except it tries to capture as much revenue as possible, regardless of efficiency, as long as it remains within the set budget. You can choose to set a target ROAS to help inform bidding.
  • Target Impression Share – The system optimizes to show your ad at the absolute top of the page, at the top of the page, or anywhere within a Google search result. (This bid strategy is only available on search networks.)
  • Viewable CPM – This strategy is only available for Display and is the cost per thousand viewable impressions. This strategy allows you to bid for impressions and can be useful for branding-focused campaigns.
  • Cost Per View – CPV is only available on video ads. This strategy allows you to set a campaign-level max bid you are willing to pay for the view. Google defines a view as having watched 30 seconds of your video (or the duration if it is shorter) or an interaction with your ad.

Enhanced Cost-Per-Click (eCPC), while considered by many to be a variation of automated bidding, is not included in Google’s definition since the user can still implement several manual adjustments.

Each automated strategy uses the historical performance of the account or campaign, typically ranging from 14 to 30 days, to adjust bids and define a user profile that is more likely to complete the desired outcome.
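To summarize the taxonomy above, here is a minimal sketch in Python. The goal and channel labels are our own shorthand for this illustration, not values from any Google Ads API; the mapping simply restates the list of strategies.

```python
def suggest_bid_strategy(goal: str, channel: str = "search") -> str:
    """Map a desired outcome to one of the eight automated strategies above.

    Illustrative heuristic only: the goal/channel labels are invented
    shorthand for this article, not Google Ads API values.
    """
    if channel == "video":
        return "Cost Per View"            # video-only strategy
    if channel == "display" and goal == "branding":
        return "Viewable CPM"             # display-only, branding-focused
    return {
        "efficient_leads": "Target CPA",
        "efficient_revenue": "Target ROAS",
        "traffic": "Maximize Clicks",
        "lead_volume": "Maximize Conversions",
        "revenue_volume": "Maximize Conversion Value",
        "visibility": "Target Impression Share",  # search networks only
    }.get(goal, "review goals before selecting a strategy")
```

For example, an e-commerce account focused on efficient revenue growth would land on Target ROAS, while a branding-focused display campaign would land on Viewable CPM.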

With that covered, let’s talk about best practices for running automated bidding strategies.

How to Choose the Right Automated Bidding Strategy

I attended a workshop a few years ago when automated bidding strategies were really starting to gain ground, and I remember the Google representative saying that the ideal budget for running an automated strategy was an unlimited one. Now, in the real world, this is entirely unrealistic, but it speaks to the designated outcome Google wants and what we need to consider before implementing an automated bid system.

Like a fingerprint, each account is unique in its performance and structure. What works for one account doesn’t necessarily translate to others, but in my experience, most accounts are working towards a common goal: efficiently growing lead or revenue volume. Target CPA (tCPA) and Target ROAS (tROAS) are the most popular forms of automated bidding because they’re geared toward that goal.

When an account manager sets a target for tCPA or tROAS, the algorithm automatically sets bids and is primarily concerned with reaching that designated efficiency goal. If a campaign’s daily allotted budget is too small to serve ads throughout the day evenly, it hinders the tCPA and tROAS algorithm’s ability to test various auctions and adjust bids accordingly.

When the tCPA or tROAS algorithm is evaluating if it should bid on a user, it is not concerned about the cost of that bid; it’s concerned about the target CPA or ROAS. The result is that, based on how likely it thinks a user will convert, the algorithm will adjust the aggressiveness of the bid to ensure that it captures the user and optimizes towards the CPA or ROAS goal.

Meanwhile, the Maximize variations, Target Impression Share, vCPM, and CPV strategies are designed to work within the constraints of any budget. That means if a campaign is limited by budget, these strategies will still work to maximize volume. The major drawback is that, often, this comes at the expense of efficiency. The strategy will bid as aggressively as possible to capture volume, which can lead to higher CPCs and CPAs and lower ROAS.

Another big consideration is historical performance. Google no longer requires a minimum conversion threshold to use the tCPA or tROAS strategies, but best practice is to have at least 15 conversions within the past 30 days. The more data the better, since these algorithms use past performance to dictate bids. If your campaigns have low volume, using a Maximize Conversions or Maximize Conversion Value strategy could help bolster performance and allow you to switch to an efficiency-focused automated bidding strategy later.
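The volume check above can be expressed as a small helper. This is a hedged sketch of the best-practice heuristic described in this section, not an official Google threshold check:

```python
def pick_starting_strategy(conversions_last_30_days: int,
                           min_conversions: int = 15) -> str:
    """Heuristic from the text: tCPA/tROAS want roughly 15+ conversions
    in the trailing 30 days; below that, build volume with a Maximize
    strategy first, then switch."""
    if conversions_last_30_days >= min_conversions:
        return "Target CPA / Target ROAS"
    return "Maximize Conversions (build volume first)"
```

An account with 20 conversions last month is a reasonable tCPA/tROAS candidate; one with 5 should build volume before handing the algorithm an efficiency target.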

The next question is which user action the automated bidding algorithms should optimize towards. For e-commerce, it’s typically a purchase; for lead generation, it’s filling out a form. We only want to count people who have completed that action; we usually don’t want to count people who added something to their cart but didn’t complete the purchase, or who only partially filled out a lead form. The algorithm builds a user profile around customers who completed the desired action, and when it evaluates bidding in another auction, it bids more aggressively on users who fit that profile. If you have secondary conversion actions outside of the primary completion objective, you must specify that those actions should be counted for the system to optimize towards them.

While each strategy has pros and cons, it doesn’t mean a strategy should be discounted because of a preconceived notion. I recommend that you test a strategy on a small number of campaigns and review performance before deciding if it should be rolled out across other account areas.

When & How to Optimize Your Automated Bidding Strategy

With automated bidding strategies, it is all too easy to set it and forget it, but that should never be the case. These algorithms fluctuate, and since you can’t adjust your keyword, device, or location bids anymore, it raises the question of how we should optimize.

In testing tCPA and tROAS, we need to start with a realistic goal that the algorithm can work towards. For example, if your desired long-term goal is a 200% return and the campaign’s current ROAS is 80%, you’ll run into large fluctuations in performance if you try to achieve that large an increase all at once. Large cost spikes, low or no spending, and tanking volume are all possibilities if you set unrealistic targets. Typically, I look at the last 14 – 30 days of performance and use that as a baseline. Remember, since these strategies use that time range to set bids, it is the most accurate representation of what the algorithm will think is possible. If I switch from a manual strategy to tCPA or tROAS, I will usually set the target 5 – 10% below the baseline, since the campaign will initially enter the learning phase, the next consideration for testing.
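The baseline-and-easing step above can be sketched as simple arithmetic. This is our own illustrative helper; the 5 – 10% range comes straight from the practice described in this section:

```python
def initial_target(baseline: float, easing_pct: float = 0.10) -> float:
    """Ease the first automated-bidding target 5-10% below the
    14-30 day baseline so the learning phase has room to explore."""
    if not 0.05 <= easing_pct <= 0.10:
        raise ValueError("easing should stay within the 5-10% range")
    return baseline * (1 - easing_pct)

# Example: a campaign averaging a 150% ROAS over the last 30 days
# would launch tROAS with a target in roughly the 135-142.5% range.
launch_target = initial_target(150.0, easing_pct=0.10)
```

Note that "below the baseline" is the forgiving direction for a ROAS target; the point is to give the algorithm an achievable starting goal rather than your long-term one.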

For the Maximize variations, the levers we can pull are limited. You can freely adjust campaign budgets, since the algorithm uses the average daily budget to determine spend; you can set maximum bid limits to ensure the system doesn’t bid higher than your specified cap; and you can set ad scheduling to show ads during certain times of the week. But that’s it. The lack of levers makes controlling spend much more difficult, especially since these strategies can quickly increase daily cost, jeopardizing accurate spend pacing.

Target Impression Share works similarly but allows you to choose between aiming for the absolute top of the page, the top of the page, or anywhere on the page of a Google Search result. You can then set an impression share target, and the system will automatically set your bids to reach that target as often as possible. You also have the ability to set bid limits. Like the Maximize variations, Target Impression Share can quickly increase the daily cost of campaigns, making spend pacing much more difficult.

If efficiency metrics are a primary concern, I would not recommend starting with these bidding strategies; test tCPA or tROAS instead.

When launched, the automated bidding strategy will enter a learning period of 1-2 weeks in which it tests the boundaries of bidding. Who is likely to convert or what types of auctions it should bid in are questions the algorithm seeks to answer during that time. While in the learning phase, you should not adjust the goal or make any major changes to the campaign, or you risk extending the learning period further. It can be tempting to make a change since performance will be scattered in this phase, but once it is out of the initial learning stage, you should see more stability. In a recent test we conducted for tCPA, we set the target at $600. During the learning phase, the CPA was around $1100, but after that initial learning period, we saw the CPA drop into the low $600 range.

Once the campaign is out of learning, changes need to be made sparingly, because you risk the campaign reverting to a learning period or undergoing large swings in performance. The maximum adjustment for tCPA and tROAS targets should not exceed 20% at a time, and goal adjustments should happen no more than about once a week, ideally once every two weeks. Despite the buzz around machine learning, these algorithms are sensitive. It’s best to make a change, wait a few days for stability, and then make an additional change if needed.
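The 20% cap above can be turned into a simple planning sketch. This is our own helper, not a Google Ads API call, written for targets you ratchet upward (such as a tROAS target); the mirror logic applies when lowering a tCPA target:

```python
def target_schedule(current: float, goal: float, max_step: float = 0.20) -> list:
    """Plan target adjustments capped at max_step (20%) per change,
    following the guidance above, stopping exactly at the long-term goal."""
    schedule = [current]
    while schedule[-1] < goal:
        # Never move the target more than 20% at a time.
        schedule.append(min(schedule[-1] * (1 + max_step), goal))
    return schedule

# Moving a tROAS target from 80% to 200% takes six adjustments.
steps = target_schedule(80.0, 200.0)
```

At one adjustment every one to two weeks, reaching a 200% goal from an 80% baseline takes roughly six to twelve weeks, which is a useful way to set expectations before the test begins.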

If you’re running any promotions, there are additional considerations to remember. Last spring, we switched to tROAS for most Non-Brand campaigns for one of our e-commerce clients. We typically run multiple offers throughout the month, and as we entered the busy summer season, we noticed fluctuations in performance that didn’t correspond to the strength of the offer. The following chart shows the types of promotions we ran during June:

[Graph: spend and revenue results of different offers during June]

The strongest offers came at the beginning of the month with a Buy One Get One, but on June 17th, after switching to a 30% Off deal, we saw a sudden spike in spend while revenue trended down. Even when we tightened targets to lower the aggressiveness of bids for the weaker offer, spend still increased. The reason for the decline was the historical data the automated bidding strategy was using: as more of the 14-day lookback period was dominated by the stronger performance of the BOGO, the algorithm believed users would still convert at a much higher rate when we switched to the less enticing 30% Off, and therefore bid much more aggressively.

After this realization, we worked with the client to restructure the promotional offers to avoid these large shifts in offer strength. Steadily ramping the promotional strength up and down allowed the algorithm to normalize performance and reduced the fluctuations in spend that we had experienced.

If you rarely run promotions, or have specific promotions where you see a jump in performance, you can use seasonality adjusters to push the algorithms to bid more aggressively. These adjusters are based on the CVR increase you would expect from running that type of offer. For example, if you only run a Black Friday promotion each year and saw that your CVR increased by 20%, you can tell the system to spend more during that time by setting a 20% seasonality adjuster. The time period you specify with the adjuster won’t be considered in the lookback period for automated strategies, which reduces the risk of performance fluctuation following the offer. It’s recommended, though, that you use a seasonality adjuster for a maximum of four days. Typically, we will use one during the first day or two of a strong promotion to help prime the algorithms.
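The Black Friday example above reduces to simple arithmetic. Here is a hedged sketch (our own helper for deriving the number, not the seasonality-adjustment setting itself) based on historical conversion rates:

```python
def seasonality_adjustment(baseline_cvr: float, promo_cvr: float) -> float:
    """Derive the conversion-rate adjuster from the CVR lift observed
    during past runs of the promotion (e.g. last year's Black Friday)."""
    return promo_cvr / baseline_cvr - 1.0

# If CVR historically rises from 5% to 6% during the promo,
# the adjuster is +20%, matching the example in the text.
adjuster = seasonality_adjustment(0.05, 0.06)
```

You would then enter that derived percentage as the seasonality adjustment for the short promo window in the platform.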

Going Forward

Automated bidding is the future, for better or worse. While it removes a level of control that advertisers have become accustomed to, it can also help improve account performance by taking into consideration several factors that humans aren’t privy to. In the coming years, as automated bidding becomes the new normal, advertisers will still need to be wary of the oasis Google promises because, while machines can help, they should not dictate our actions. Constant monitoring and testing are needed to ensure these strategies help grow accounts sustainably and fit your desired direction, but automated bidding can become a strong asset for accounts when executed with that understanding in mind.

Want to know more? Read about why analytics is so important, how to conduct incrementality testing, and remarketing strategies to improve your digital marketing results.

Contact us to talk about your digital marketing challenges!