Corporate strategy will need to take these potential issues into account, both by safeguarding ownership of company data and by preventing AI from becoming a security liability.
It was one of the viral tech news stories of early July: WeTransfer, the popular file-sharing service used heavily by companies and end users alike, changed its terms of use.
Terms of use are the kind of thing users typically accept without reading closely, but on this occasion the company had added a clause concerning artificial intelligence. As of early August, WeTransfer reserved the right to use the files it handled to “operate, develop, market and improve the service or new technologies or services, including improving the performance of machine learning models.” The implication was that user data, whatever it contained, could be used to train AI.
The backlash was enormous, and WeTransfer ended up backtracking, explaining to the media that the clause was actually intended to cover the use of AI for content moderation, not what its users had understood.
Even so, the WeTransfer episode became a highly visible sign of an emerging risk to cybersecurity, privacy, and even the protection of sensitive information. AI requires vast amounts of data to function, and vast amounts of data are being fed into it, prompting very popular online services to rewrite their privacy policies to adapt to this new environment.
Source: CSO Online