Independent video game publisher Finji has publicly accused social media platform TikTok of creating and running AI-generated advertisements for its games without its consent. The allegations state that these ads were not only unauthorized but also contained racist and sexist content that the publisher could not edit or remove.
The situation came to light when Finji’s co-founder, Bekah Saltsman, detailed the experience in a series of public posts. According to Saltsman, the company discovered that TikTok had autonomously generated promotional videos for Finji’s games, including the critically acclaimed “Night in the Woods,” using artificial intelligence tools.
Allegations of Unapproved and Problematic Content
Finji’s primary complaint centers on a lack of authorization and control. The publisher asserts it did not grant TikTok permission to create or run these specific advertisements. Furthermore, the AI-generated content reportedly included offensive stereotypes and imagery deemed racist and sexist by the company.
A particularly concerning aspect for Finji was its apparent inability to edit or delete these ads through TikTok’s standard advertising dashboard. For the independent publisher, this lack of editorial control over content bearing its name and promoting its intellectual property amounts to a significant breach of trust and brand safety.
TikTok’s Automated Ad Systems in Question
The incident points to the potential risks of the automated advertising systems employed by major social media platforms. TikTok, like its competitors, uses algorithms and AI tools to help advertisers create and optimize campaigns. Finji’s case, however, suggests these systems may sometimes operate without explicit, granular consent from the brand being promoted.
Industry observers note that while AI-assisted ad creation is a growing trend, the technology is not infallible. It can perpetuate biases present in its training data, leading to the generation of inappropriate or harmful content, as allegedly occurred here. The inability to correct such errors manually compounds the problem for affected businesses.
Broader Implications for Digital Advertising
This dispute raises important questions about accountability and transparency in the digital ad ecosystem. It highlights the tension between platforms leveraging automation for scale and the need for brands to maintain strict oversight over their public messaging. For independent studios and publishers with limited resources, such automated errors can pose a disproportionate threat to their reputation.
The gaming industry, in particular, is sensitive to issues of representation and inclusivity. Unauthorized ads containing discriminatory content directly contradict the values many modern game developers publicly uphold and can alienate their player communities.
Official Responses and Next Steps
As of the latest reports, Finji has publicly called on TikTok to address the issue and clarify its policies regarding automated ad generation. The publisher seeks assurances that such incidents will not recur and that brands will have full control over any content created in their name.
TikTok has acknowledged the situation, stating it is investigating the claims. The platform’s advertising policies typically require advertiser approval for campaigns, suggesting a possible malfunction or misapplication of its AI tools in this instance. The outcome of this investigation and any subsequent policy changes will be closely watched by advertisers and digital rights advocates.
Moving forward, this case may prompt wider scrutiny of automated content creation tools across social media. Regulatory bodies in several jurisdictions are already examining the ethical use of AI, and incidents like this could accelerate calls for stricter guidelines governing AI in advertising, specifically concerning consent, bias, and editorial control.
Source: GamesIndustry.biz