LinkedIn Introduces Option to Opt-Out of AI Training

By Editor

LinkedIn has recently updated its User Agreement and Privacy Policy to reflect how it uses member data to power its generative AI models. The platform now feeds content that users post publicly into its AI tools, and the updated terms contain no specific exclusion for Direct Messages, which may concern some users. While Meta has assured users that it does not train its AI models on private messages, LinkedIn has offered no similar guarantee in its legal documentation.

In its updated policy, LinkedIn states that it may use personal data to improve products and services, develop AI models, and provide personalized services. Users can choose to opt out of AI training if they do not want LinkedIn to use their information for this purpose. However, most users are unlikely to switch this setting off, meaning they will be included in the new arrangement by default. European users’ data is currently excluded from LinkedIn’s AI training while regulators in the region continue to debate AI training permissions.

Other social platforms are also clarifying their regional requirements around AI training permissions. Meta recently obtained approval to use UK user data for AI training, and X has added its own AI training opt-out to comply with local rules. It is likely that personal information shared on social media platforms is already being used for AI training, raising concerns that such models could generate problematic content derived from it. Users should have the choice to opt out of data sharing, and while more platforms are adding this option, it may come too late for historical data that has already been ingested into AI models.

Whether to opt out of AI training depends on individual views of the process and of the nature of the information being shared online. While aggregated and filtered data is likely unrecognizable in a model's output, the generation of problematic content remains possible. As more apps add options to switch off data sharing, users can at least make that choice. However, given that historical user data has likely already been used to train AI models, opting out now may have limited impact in the broader context.

Overall, the updates to LinkedIn’s User Agreement and Privacy Policy highlight the growing role of generative AI in social platforms. Users should be aware of how their data is being used and have the option to control whether it is used for AI training. While these changes are a step in the right direction for user privacy, they may come too late for data that AI models have already processed. As social platforms continue to evolve their AI capabilities, users should stay informed and make choices that align with their preferences and concerns.
