Virtual Influencers Hit $6 Billion: The Academic Questions Behind the Synthetic Creator Boom

The $6 billion figure is everywhere. The virtual influencer market, valued at roughly $6 billion in 2024, is projected to reach nearly $46 billion by 2030 at a 40% annual growth rate. Brands are moving fast: Guess placed a fully AI-generated model in Vogue's print edition. Zalando used AI for 70% of its editorial images in Q4 2024. Lil Miquela — a synthetic Instagram persona with 2.4 million followers — generates over $1 million annually in brand partnerships.
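The headline numbers are at least internally consistent. A quick compound-growth check, using only the figures reported above (a $6 billion base in 2024 and a 40% annual growth rate), lands close to the cited 2030 projection:

```python
# Sanity-check the headline projection: a $6B market compounding at
# 40% per year over six years (2024 -> 2030). Both inputs are the
# article's reported estimates, not independent data.
base_2024 = 6.0   # market size in billions of USD (reported)
cagr = 0.40       # compound annual growth rate (reported)
years = 6         # 2024 through 2030

projected_2030 = base_2024 * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030:.1f}B")  # ≈ $45.2B
```

So "nearly $46 billion" is simply $6B compounded at 40% for six years, rounded up; the projection stands or falls entirely on whether that growth rate holds.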

The money is there. The question I keep asking is: what happens to consumer behavior theory when the influencer isn't human?

Why this matters now

For decades, the mechanics of influencer persuasion have rested on two pillars: source credibility (does the consumer trust and believe the endorser?) and parasocial interaction (does the consumer feel a personal relationship with them?). Both constructs were developed with humans in mind. Both assume some version of authentic selfhood behind the influence.

Virtual influencers break that assumption. They have no lived experience to draw from, no genuine vulnerability to disclose, no off-script moments. They are, by design, perfectly on-message. So why do they work?

The uncomfortable answer

Recent empirical work suggests they work better than we would expect, and in ways that expose conceptual gaps in our frameworks.

Stein, Breves, and Anders (2024), in a preregistered experiment with 179 participants, found that parasocial responses to virtual influencers were not significantly different from responses to human influencers. The mechanism is interesting: virtual influencers generated stronger direct parasocial effects, but that advantage was offset by lower perceived human-likeness and similarity to the self. Two opposing forces, approximate equilibrium.

A 2025 study with TikTok users in Egypt and Jordan (Springer Nature) found that AI influencers can sometimes outperform human influencers in generating community cohesion and social capital, particularly in collectivist cultural contexts.

The assumed hierarchy — human = more credible, authentic = more persuasive — doesn't survive empirical scrutiny.

What the market data reveals (and doesn't)

The Influencer Marketing Factory's 2026 Creator Economy Report, which surveyed 1,000 U.S.-based creators, found that 56.1% believe AI will significantly change how creators work. That's the producer side. The consumer side is far less documented.

Academic studies on virtual influencers have grown 300% since 2020 (YouScan, 2026). What we still don't know is whether the parasocial bonds consumers form with virtual influencers are structurally different from those formed with humans — or simply weaker versions of the same construct.

A 2025 experience sampling study published in Media Psychology suggests they may be qualitatively different. Bonds did form, but they were often driven by novelty rather than the intimacy and similarity that anchor relationships with human influencers. What happens when the novelty fades? That's still an open question.

The regulatory layer adds complexity

New York's Authorized Digital Actors Act now requires consent from human performers before AI replicates their likeness. The FTC has tightened disclosure requirements for AI-generated promotional content. The Creators Coalition on AI, launched in late 2025, counts over 500 signatories demanding compensation and guardrails.

Meanwhile, some brands are turning anti-AI authenticity into a marketing signal. Dove's pledge to exclude AI-generated models won the Media Grand Prix at Cannes Lions 2025. iHeart's "Guaranteed Human" label reportedly appealed to 96% of consumers surveyed. Brands are simultaneously using authenticity as a selling point and investing in synthetic personas.

This is not a brand safety issue. It's a consumer behavior problem. If authenticity is simultaneously a marketing signal for "no AI" brands and irrelevant to the persuasive effectiveness of virtual influencers, we have a theoretical inconsistency that deserves serious academic attention.

Understanding virtual influencers’ mechanisms

The $6 billion market is interesting. The theoretical rupture is more interesting.

We need longitudinal studies that track how parasocial bonds with virtual influencers evolve, not just in controlled experiments, but in naturalistic settings. We need cross-cultural work that moves beyond Western consumers as the default sample. And we need more precise constructs: a CGI fashion avatar, an AI chatbot-persona, and an autonomously generating content agent are very different objects of study.

The question is not whether virtual influencers work. The market has answered that. The question is why they work, for whom, under what conditions, and with what long-term implications for consumer trust in digital environments.

I don't have complete answers. But I think they're the right questions.

If you work on influencer marketing, consumer psychology, or platform design and are thinking through any of this, I'd like to talk :-)
