
Slack under fire for using user data to train AI models


WEB DESK: Slack, the widely used workplace communication platform, has come under scrutiny for using user messages, files, and other content to train its machine learning models without explicit user consent.

The practice is enabled by default, so users must actively opt out to keep their data out of training. The process is cumbersome, however, and cannot be initiated by individual users; it requires intervention from organisational administrators, such as HR or IT departments.

Corey Quinn, an executive at DuckBill Group, brought this issue to light after discovering the policy in Slack’s Privacy Principles. He highlighted the matter on social media platform X, referencing a section that states: “To develop AI/ML models, our systems analyze Customer Data (e.g., messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement.”

To opt out of this data usage, users must have their organisation’s Slack administrator contact Slack’s Customer Experience team via email, providing specific details such as the Workspace/Org URL and a subject line reading “Slack Global model opt-out request.”

The policy places the onus on users to safeguard their data, rather than offering a straightforward opt-out mechanism.

In response to Quinn’s post, Slack clarified on X: “To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models.”

It remains unclear when the Salesforce-owned company quietly introduced the policy.

Criticism has also arisen over Slack's use of the term "customers," which technically excludes individual employees, who must rely on their administrators to opt out. This raises concerns about transparency and user control over personal data.

Moreover, Slack’s privacy policies appear inconsistent. One part claims that during AI/ML model development or data analysis, Slack cannot access the underlying content due to technical safeguards. Yet, the policy on training machine learning models seems to contradict this, creating confusion about the extent of data access and use.

Additionally, Slack’s marketing for its premium generative AI tools asserts, “Work without worry. Your data is your data. We don’t use it to train Slack AI. Everything runs on Slack’s secure infrastructure, meeting the same compliance standards as Slack itself.”

This statement, however, pertains only to premium AI tools, not the broader machine learning models being trained without explicit permission. This selective application of data protection promises has been criticised as misleading.
