How Worried Should We Be? TikTok & AI Image Generators Pose Privacy Issues for Users


The news has been abuzz with headlines about the latest privacy failures in popular apps and computer programs. Most notably, TikTok has come under fire from the American and Canadian governments, as well as the EU’s executive arm and the European Parliament, all of which have recently banned the video-sharing app on government-issued mobile devices as privacy and cybersecurity concerns have grown. In addition, concerns are mounting about popular AI image generators such as Lensa and Starryai.

TikTok

TikTok has surpassed 3.5 billion downloads and has fast become one of the most popular mobile applications in the world. ByteDance, the Chinese owner of TikTok, has long maintained that it does not share data with the Chinese government and that its data is not held in China. It also disputes accusations that it collects more user data than other social media companies, and insists that it is run independently by its own management. China hawks in the US Congress are looking to expand the government-device ban further, even as lawmakers have done little to protect the privacy of Americans: US companies remain free to collect their users’ data and sell it to third parties, potentially including China’s government.

Lawmakers are renewing their calls for a nationwide TikTok ban and pushing the Biden administration to force a breakup of ByteDance. Meanwhile, efforts to pass a national privacy law, which failed last year, have largely evaporated. But do the cybersecurity allegations against TikTok hold up?

Studies of TikTok’s data collection are not conclusive. An analysis of TikTok by Internet 2.0, an Australian-American cybersecurity organization, concluded that the app’s “permissions and device information collection are overly intrusive” and that TikTok was harvesting data excessively.

A 2021 study by Canada-based Citizen Lab reached a different conclusion. The researchers noted that “TikTok collects similar types of data to track user behavior and serve targeted ads,” and they found no evidence of “overtly malicious behavior.”

Another study of TikTok, by the Internet Governance Project, concluded that “the data collection by TikTok can only be of espionage value if it comes from users who are intimately connected to national security functions and use the app in ways that expose sensitive information.” In the study’s view, TikTok’s data collection is not that different from that of most “other social media and mobile apps.”

TikTok claims that its Chinese employees cannot access the data of non-Chinese users. The company had to admit, however, that several employees in mainland China did access the data of at least two journalists from the United States. ByteDance dismissed these employees and stated that it stores user data in the US and Singapore. The company also plans to build data centers in Ireland to store EU users’ data there.

AI Image Generators

From the viral Lensa app to the image generator Starryai, AI art has been in the news quite a lot lately, and for good reason. Images that once took human artists considerable time can now be produced by an AI art generator in seconds.

However, while artists are protesting these AI-generated images, security experts are more concerned about the implications AI will have for cybersecurity. As reported by Cybernews, researchers have found that AI is rarely used in today’s cyberattacks, and when it is, its use is largely limited to social engineering as a means of gathering more information on a target.

One obvious way threat actors could use AI image generators for social engineering is to create fake social media profiles. Some of these programs can produce incredibly realistic images that look just like genuine photographs of real people, and a scammer could use such fake profiles for catfishing.

When devastating earthquakes hit Turkey and Syria in February 2023, millions of people around the world expressed their solidarity with the victims by donating clothes, food, and money.

According to a BBC report, scammers took advantage of this, using AI to create realistic images and solicit donations. One scammer showed AI-generated images of ruins on TikTok Live while asking viewers for donations. Another posted an AI-generated image of a Greek firefighter rescuing an injured child from the rubble and asked his followers for donations in Bitcoin.

Governments, activist groups, and think tanks have long warned about the dangers of deepfakes. AI image generators add another dimension to this problem, given how realistic their creations are. In the UK, there is even a comedy show, Deep Fake Neighbour Wars, that finds humor in unlikely celebrity pairings. What would stop a disinformation agent from creating a fake image and promoting it on social media with the help of bots?

Mixed Bag

The security consequences of these new technologies are unclear; however, it is important for users to remain vigilant. Learn some cybersecurity tips from us by reading 6 Ways to Avoid Phishing and 7 Best Cybersecurity Practices.

Monitoring Remote Sessions

With more employees working from home, companies are seeking ways of monitoring remote sessions. One compelling case can be made for recording remote sessions for later playback and review: employers worry that in the event of a security breach, they won’t be able to see what was happening on users’ desktops when the breach occurred. Another reason for recording remote sessions is to maintain regulatory compliance, as required of medical and financial institutions, or to audit adherence to business protocols.

TSFactory’s RecordTS v6 records Windows remote sessions reliably and securely for RDS, Citrix, and VMware systems. Scalable from small offices with one server to enterprise networks with tens of thousands of desktops and servers, RecordTS integrates seamlessly with the native environment.

Click here to learn more about secure remote session recording.