TikTok is one of the world’s leading social media platforms, with over 1 billion monthly active users. Its algorithm-driven recommendations and viral culture make it one of the fastest and easiest platforms for the spread of information disorders. Sadly, the rise of artificially generated content appears to be worsening the situation.
Generative artificial intelligence (AI), also known as gen AI, uses machine learning to create new content such as text, images, videos, audio or software code. Unlike other forms of AI, gen AI focuses on producing new material, allowing anyone to quickly and easily create a massive amount of altered content.
On TikTok, this can be used to produce viral videos, persuasive text or deepfake content that closely mimics reality. This technology is constantly being leveraged to craft content that is misleading, manipulative and false. One of the most concerning forms of AI-generated disinformation is the deepfake – a realistic video or audio clip in which someone’s face or voice is altered to make them appear to say or do something they never did.
TikTok’s unique selling point is its short, catchy videos, which dominate the algorithm and make it incredibly easy for deepfake or AI-generated content to go viral with alarming speed. Videos of political figures or celebrities making controversial statements spread quickly, causing public confusion and sometimes outrage, even when the video has been completely fabricated or altered to push a certain narrative.
For instance, a 2023 deepfake video showed Singapore’s Lawrence Wong, then Deputy Prime Minister, appearing to promote investment products. The comments under The Straits Times’ post suggest this is not the first time such a deepfake of a public official has gone viral.
Some users also believed what was said in the video, showing how easily people accept whatever they see on the internet. Deepfake videos of U.S. President-elect Donald Trump have been making the rounds on TikTok as well, with one of them gathering over 405,000 impressions and about 12,000 likes. Sometimes gen AI content is so convincing that the average viewer finds it hard to distinguish reality from fiction, though some of it is easily recognisable as fake.
The spread of disinformation through TikTok’s algorithm
TikTok’s algorithm, which is designed to promote engaging content, plays a significant role in the rapid spread of disinformation. TikTok prioritises videos based on user engagement – the likes, comments and shares a video attracts – regardless of whether the content is true or false.
Without proper vetting, TikTok pushes false videos with high engagement to the feeds of ever more users, creating an unending cycle of misinformation and disinformation on the platform. Content that is controversial or elicits a strong emotional response from viewers is often the content with the highest engagement. An example is an AI-generated video of Trump in an orange prison jumpsuit. This type of content gets people so excited that they never bother to check the facts before believing it and spreading it to others.
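To make the mechanism concrete, here is a minimal, hypothetical Python sketch of engagement-based ranking. It is not TikTok’s actual algorithm; the fields, weights and example videos are assumptions, chosen only to show that a feed ranked purely on likes, comments and shares never consults whether a video is accurate.

```python
# Hypothetical sketch of engagement-based ranking, NOT TikTok's real system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    likes: int
    comments: int
    shares: int
    is_accurate: bool  # known to fact-checkers, ignored by the ranker

def engagement_score(v: Video) -> float:
    # Assumed weights: comments and shares signal stronger engagement
    # than likes, so they count for more.
    return v.likes + 3 * v.comments + 5 * v.shares

def rank_feed(videos: list[Video]) -> list[Video]:
    # is_accurate never enters the score, so a fabricated video with
    # high engagement outranks an accurate one.
    return sorted(videos, key=engagement_score, reverse=True)

feed = rank_feed([
    Video("Fabricated deepfake of a politician", 9_000, 2_500, 4_000, False),
    Video("Verified news clip", 6_000, 800, 900, True),
])
for v in feed:
    print(f"{engagement_score(v):>8}  accurate={v.is_accurate}  {v.title}")
```

In this toy example, the fabricated clip outranks the verified one simply because it generates more comments and shares.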
A common example is the popular African AI content made into stories and even short films that people eagerly anticipate. Because of its popularity, such content regularly features on the ‘Explore’ page, which is controlled by TikTok’s algorithm. This form of AI-generated content also enables the spread of disinformation and misinformation.
A TikTok page of a fictitious African royal family, The Zidogian Family, has ten thousand followers and over 746,000 posts, with one of its AI-generated videos drawing over 410,000 impressions. Although this page has an AI disclaimer in its bio, the same cannot be said for other gen AI pages such as House of Zelashire, Royalfan0, House of Ineke, and House of Waldland. The For You Page (FYP) analyses thousands of signals before recommending content to you. Occasionally, it pushes random content to your FYP, and some of this AI content could end up on your page.
Personalised disinformation through an individual’s algorithm
Like other social media platforms, TikTok tailors each user’s feed to their preferences. To achieve this, the algorithm analyses the type of content a user often interacts with, predicts the type of content they may want to see and surfaces it on their feed. While this makes it easy for users to reach their favourite kinds of content without constantly searching, it also opens them up to targeted disinformation. The personalised algorithm constructs an echo chamber, increasing the likelihood that you will engage with content that aligns with your preexisting beliefs, regardless of its accuracy.
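As a rough illustration of how such personalisation can harden into an echo chamber, consider the hypothetical sketch below. It is not TikTok’s recommender; the topics, weights and feedback loop are assumptions meant only to show how a feed that learns from past interactions keeps serving more of the same.

```python
# Hypothetical sketch of a preference-reinforcing feed, not TikTok's recommender.
from collections import Counter
import random

def update_interests(interests: Counter, watched_topic: str) -> None:
    # Each interaction strengthens the weight of the topic just watched.
    interests[watched_topic] += 1

def recommend(interests: Counter, catalogue: dict[str, list[str]]) -> str:
    # Topics the user already engages with become proportionally more
    # likely to be recommended again: the echo-chamber feedback loop.
    topics = list(catalogue)
    weights = [1 + interests[t] for t in topics]
    topic = random.choices(topics, weights=weights, k=1)[0]
    return random.choice(catalogue[topic])

catalogue = {
    "sensational claims": ["clip A", "clip B"],
    "fact-checked news": ["clip C", "clip D"],
}
interests: Counter = Counter()
for _ in range(20):
    clip = recommend(interests, catalogue)
    topic = next(t for t, clips in catalogue.items() if clip in clips)
    update_interests(interests, topic)  # simulate the user watching the clip
print(interests)  # whichever topic attracts early views comes to dominate
```

After a handful of early views of one topic, the simulated feed becomes dominated by that topic, mirroring the echo-chamber effect described above.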
Mimicking trusted sources
Generative AI is now sophisticated enough that AI content creators can easily produce videos showing public figures, news anchors and even politicians endorsing or spreading various kinds of information. This can mislead undiscerning viewers and further entangles the web of falsehoods, making it harder to achieve a fact-based information space.
The influence of generative AI on how information is created and consumed in the digital space is undeniable. By fostering critical thinking, individuals can actively question and analyse online content, discerning fact from fiction and reducing susceptibility to misleading or poorly generated information. While generative AI can be a valuable resource, blind trust in its outputs risks spreading misinformation, making it essential to remain sceptical and aware of its limitations and biases. Reporting inappropriate or false content on digital platforms further combats misinformation, as these tools enable users to flag violations of community guidelines, reducing the visibility and impact of harmful posts. Collectively, these actions empower users to foster a healthier information ecosystem and reinforce accountability in digital spaces.