(01-21-2025, 08:06 AM)TomK Wrote: There's a rather obvious flaw in that claim..
I wonder, did you come to your conclusions via the scientific method? You know, gather and analyse the data and let the facts take you where they will? Or, perchance, is that just you shootin' from the hip?
Maybe you need better data?
Try this..
From.. https://www.rand.org/pubs/commentary/202...t-for.html
TikTok Is a Threat to National Security, but Not for the Reason You Think
And here's a snippet; follow the link above for the whole enchilada.
Some 170 million Americans use TikTok, and many of them will be incredibly upset if they lose access to their favorite social media app. Earlier this year, the government enacted a law forcing the company to divest its U.S. operations or face a ban, and congressional phone lines were overwhelmed due to the sheer volume of calls in protest of the legislation. The discourse surrounding the national security implications of the app overlooks a critical threat—an unprecedented corpus of videos ideal for training advanced deepfake-generating AI systems. In the face of lukewarm public opinion about the ban and the inevitable legal action, it is essential that U.S. lawmakers understand and emphasize this risk.
In early 2024, a Hong Kong-based employee at a British engineering company transferred over $25 million to foreign accounts after receiving oral authorization from his CFO in a few routine Zoom meetings. This would have been standard procedure, except that the CFO, and all the other employees on the calls, were impersonations created by scammers using artificial intelligence (AI).
“Deepfake” scams of this magnitude are still rare, and a keen eye can usually differentiate AI-generated videos from reality. News outlets were quick to expose a deepfake of President Biden used to discourage voting in New Hampshire and a deepfake of State Department spokesman Matthew Miller stating that a Russian city was a legitimate target for Ukraine's use of U.S. weapons.
As AI systems rapidly scale, however, these fabrications are increasingly hard to distinguish. In the not-too-distant future, eyes and ears may no longer be reliable sensors of truth. It is easy to imagine the national security implications of an adversary artificially generating videos of Americans doing and saying anything they please.
To curb foreign development of these and other dangerous AI capabilities, the government controls exports of semiconductors, the physical underpinning of AI. Equal emphasis, however, should be placed on cross-border transfers of large datasets, or bulk data, the fuel for generative AI systems. This means reexamining foreign controlled data-aggregation platforms, especially TikTok.
The U.S. government has raised several objections to TikTok's data collection practices, mainly focused on American users' sensitive personal information. TikTok has responded to these concerns by creating Project Texas, an initiative to store “protected U.S. user data” such as emails, birthdays, and behavioral data on U.S.-based Oracle servers.
Safeguarding this kind of data is important, but equally significant national security risks emerge from the flood of publicly posted videos. TikTok assures that “U.S. users of the TikTok platform can still communicate and interact with global users for a cohesive global experience.” This is the problem. The videos that users post publicly are not subject to Project Texas restrictions and can still end up on foreign servers.
Most of the individual videos that Americans post on social media platforms are harmless at face value, but the 34 million videos posted daily on TikTok become ideal training material for massive generative AI models. These models will be able to create astonishingly convincing deepfakes and could be used to launch discreet, large-scale, and highly targeted influence operations. This is not an abstract future threat. Policymakers need to understand that in the age of generative AI, bulk audiovisual data can be more valuable than the birthdays and email addresses users use to sign up for apps like TikTok.
Chinese actors have already used generative AI to spread disinformation. They have also used TikTok to spread anti-American propaganda within the United States. TikTok itself releases monthly lists of uncovered covert influence operations on the app, and they showed that in May 2024 a network with hundreds of thousands of followers “operated from China and targeted a US audience. The individuals behind this network created inauthentic accounts in order to artificially amplify narratives that the US is corrupt and unsafe.”
TikTok encourages users to post vertical videos with a 9:16 aspect ratio. This uniformity in structure, along with the diverse content of the posts, makes them perfect for training deep learning models. In addition, the dynamic watermarking that TikTok adds to all uploaded videos makes it difficult for other actors to scrape these videos for AI training purposes, meaning TikTok, and by extension its parent company ByteDance, essentially have sole access to the training material.
ByteDance is no stranger to creating large-scale generative AI systems. They are responsible for one of China's most advanced large language models, MegaScale. This infrastructure, combined with exclusive access to an increasingly massive body of audiovisual information of Americans engaging in a vast array of activities, provides the resources to make some of the most advanced deepfakes in the world. This poses a grave threat to U.S. national security.
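To make the "uniform structure" point above concrete, here's a rough sketch (mine, not RAND's or ByteDance's) of how a video training pipeline might turn a pile of 9:16 clips into fixed-size frame tensors. The directory, target resolution, and frame count are placeholder assumptions for illustration; the point is just that when every upload already shares the same aspect ratio, the messy preprocessing work (cropping, letterboxing, handling odd shapes) mostly disappears.

[python]
# Sketch only: batch 9:16 clips into fixed-size frame tensors for training.
# Paths, resolution, and frame count are assumptions, not TikTok's pipeline.
import glob

import cv2          # pip install opencv-python
import numpy as np

TARGET_W, TARGET_H = 288, 512   # assumed 9:16 training resolution
FRAMES_PER_CLIP = 16            # assumed fixed temporal length

def load_clip(path):
    """Read a video, sample FRAMES_PER_CLIP frames evenly, resize to 9:16."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    if total < FRAMES_PER_CLIP:
        cap.release()
        return None                      # clip too short; skip it
    # Evenly spaced frame indices across the whole clip.
    indices = set(np.linspace(0, total - 1, FRAMES_PER_CLIP, dtype=int))
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i in indices:
            # Because the uploads share one aspect ratio, a plain resize
            # preserves geometry -- no per-clip cropping logic needed.
            frame = cv2.resize(frame, (TARGET_W, TARGET_H))
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        i += 1
    cap.release()
    if len(frames) < FRAMES_PER_CLIP:
        return None
    # Shape: (frames, height, width, channels), scaled to [0, 1].
    return np.stack(frames[:FRAMES_PER_CLIP]).astype(np.float32) / 255.0

if __name__ == "__main__":
    clips = [c for c in (load_clip(p) for p in glob.glob("clips/*.mp4")) if c is not None]
    if clips:
        batch = np.stack(clips)          # (batch, frames, H, W, 3)
        print("training batch shape:", batch.shape)
[/python]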
And BTW, TomK, saying only stupid Americans use the site, as a way to excuse the stupidity it promotes, is a poor argument. Children use it. It literally is designed to put thoughts in their heads.. it makes people stupid.