Abstract
Marketing research has increasingly relied on online platform studies — studies conducted in a naturalistic online environment that leverage the A/B testing tools provided by platforms such as Facebook or Google Ads. These studies allow researchers to compare the effectiveness of different ads and of the ways they are delivered, and to study “real” consumer behavior, such as clicking on ads. However, they lack true random assignment of ads to consumers, preventing causal inference. In this manuscript, we present a comprehensive review of 133 published online platform studies revealing how researchers have, so far, utilized and characterized these studies; we find that most of these studies are mistakenly presented as (randomized) experiments and most of their findings are erroneously described as causal. Our review suggests limited awareness of the inherent confoundedness of online platform studies (i.e., the inability to attribute user responses to ad creatives versus the platform’s targeting algorithms). Importantly, the prevalence of these undesirable practices has remained relatively constant over time. Against this backdrop, we offer clear guidance on how to position, conduct, and report online platform studies, both for researchers interested in this method and for reviewers invited to evaluate it.
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 886-903 |
| Number of pages | 18 |
| Journal | International Journal of Research in Marketing |
| Volume | 42 |
| Issue number | 3 |
| Early online date | Jan 2025 |
| DOIs | |
| Publication status | Published - Sept 2025 |
Keywords
- Advertising
- Social media
- Digital marketing
- Research methodology
- Design
- Choice
- Consumer strategy
- Online platform studies
- A/B test
- Meta
- Search engine advertising
- Validity
- Research ethics
ASJC Scopus subject areas
- Marketing