The debate in mobile game monetization has long centered on Bidding-Only versus Hybrid Waterfall models. Bruno Balistrieri's A/B testing, however, reveals a more nuanced picture. In this article, we'll dig into his findings and uncover what actually works in today's complex ad monetization landscape.
Bruno's research set out to assess whether a fully Bidding-Only model could maximize fill rate and improve auction efficiency by removing manual prioritization from the waterfall. The initial promise of higher eCPMs, simplified workflows, and reduced latency was enticing, but did it hold up in practice?
The results showed that while Bidding-Only models initially deliver promising results, they often decline over time as networks adjust their bidding strategies. This volatility can lead to revenue fluctuations, making long-term stability a significant concern.
So, what are the biggest limitations of Bidding-Only models? One major issue is the potential for eCPM fluctuations due to market demand changes. Unlike hybrid models, which rely on guaranteed deals or historical floor prices, Bidding-Only models are fully dependent on real-time market dynamics. This can result in revenue instability and reduced performance over time.
To mitigate this decline, publishers can regularly refresh demand sources, test different auction models, and implement strategic floor prices, an approach exemplified by Bruno's "Canary Test".
But how does a Hybrid Waterfall model compare to Bidding-Only? The answer lies in its ability to combine the efficiency of bidding with the stability of traditional waterfalls. While Bidding-Only can be efficient, some networks still perform better with manual prioritization. Hybrid setups allow publishers to retain high-value placements while benefiting from bidding competition. For most publishers, a hybrid model is the better approach, offering greater control and more stable revenue.
Automated waterfalls also play a crucial role in optimizing monetization. By using algorithms to dynamically adjust network placements and pricing thresholds, these setups minimize manual work and optimize revenue without constant intervention. To implement automated waterfalls effectively, developers can:
- Use mediation platforms that support automated price adjustments
- Develop an internal tool that understands waterfall rules and call-to-action approaches
- Regularly analyze historical data to set dynamic floor prices
- Run continuous A/B tests to fine-tune configurations
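As an illustration of the third step, setting dynamic floor prices from historical data, here is a minimal sketch. The function name, the percentile choice, and the sample eCPM history are all assumptions for demonstration, not part of Bruno's setup:

```python
from statistics import quantiles

def dynamic_floor(recent_ecpms, percentile=25, min_floor=0.10):
    """Derive a floor price from recent eCPM history.

    Uses a lower percentile of observed eCPMs so the floor filters
    out low bids without choking fill rate.
    """
    if len(recent_ecpms) < 4:
        return min_floor  # too little data; fall back to a safe minimum
    # quantiles(n=100) returns the 1st..99th percentile cut points
    cut = quantiles(recent_ecpms, n=100)[percentile - 1]
    return max(round(cut, 2), min_floor)

# Hypothetical week of observed eCPMs (USD) for one placement
history = [1.20, 0.95, 1.40, 1.10, 0.80, 1.35, 1.25]
print(dynamic_floor(history))  # floor sits near the 25th percentile
```

A low percentile is a deliberately conservative choice: a floor near the median would lift eCPMs further but risks the fill-rate fluctuations discussed below.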
Mediation platforms often push publishers toward Bidding-Only models, promising simplified integration and reduced operational work. However, publishers should be cautious about:
- Revenue stability: Some networks still perform better in a waterfall setup
- Loss of control: Without manual prioritization, revenue optimization becomes fully dependent on auction dynamics
- Fill rate fluctuations: Ensuring a diverse demand mix is key to maintaining fill rates
To maintain control, publishers can keep a mix of bidding and traditional demand sources, continuously monitor performance, and set floor prices strategically.
Bruno's "Canary Test" revealed valuable insights about network behaviors. By running controlled tests with new networks before fully integrating them, tracking bid trends over time, and regularly iterating on monetization setups, publishers can refine their approach and prevent unwanted surprises.
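The exact mechanics of Bruno's Canary Test aren't spelled out here, but the core idea of tracking bid trends before full integration can be sketched. Everything below, the function, the threshold, and the sample numbers, is an illustrative assumption:

```python
from statistics import mean

def ecpm_trend(weekly_ecpms):
    """Compare the most recent week's eCPM to the average of prior weeks.

    Returns the relative change; a negative value signals decline.
    """
    baseline = mean(weekly_ecpms[:-1])
    return (weekly_ecpms[-1] - baseline) / baseline

# Hypothetical eCPMs from a new network running on a small traffic slice
canary = [1.40, 1.35, 1.20, 1.05]
change = ecpm_trend(canary)
if change < -0.10:  # assumed tolerance before flagging the network
    print(f"Canary declining ({change:.0%}); hold off on full integration")
```

The pattern mirrors the observation from the Bidding-Only tests above: networks that bid aggressively at first may drift downward, and a controlled slice of traffic exposes that drift before it affects the whole setup.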
In conclusion, while Bidding-Only models may have initial appeal, the data-driven approach of Bruno Balistrieri's A/B testing shows that Hybrid Waterfall models consistently outperform pure bidding approaches. By combining auction efficiency with stability, hybrid setups give publishers greater control and steadier revenue, making them the more effective choice for most teams.