It is not merely copyrights and patents that are threatened by the rapid rise of GenAI. Personal privacy was a hot issue long before the GenAI companies became so thirsty for data. GenAI has turned the temperature up several notches.
Recommended Reading
This article explores the intersection of artificial intelligence (AI) and privacy, arguing that AI exposes the shortcomings of existing privacy law. The author maps the key privacy problems AI poses and offers a roadmap for how privacy law should evolve to address them effectively.
WASHINGTON—Today, privacy and cybersecurity policy expert James X. Dempsey released a paper, commissioned by NetChoice, titled “Generative AI: The Security and Privacy Risks of Large Language Models.” In the paper, […]
Compliance with AI standards and regulatory frameworks is crucial. Privacy impact assessments (PIAs) should be conducted systematically throughout the AI lifecycle, and privacy risks should be weighed even for anonymized data, including re-identification and data justice. Developers and vendors should adhere to standards, best practices, and codes of conduct, and keep documentation for their assessments. Be transparent about AI's impact on users, ensure explainability, and build workflows that let users obtain explanations of how their data is used and challenge those uses.
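To make that advice concrete, here is a minimal sketch of what a machine-readable PIA-style record could look like, one entry per processing activity, so data-use explanations and user challenges have something specific to point at. The field names and the example record are illustrative assumptions, not taken from any named framework.

```python
# A minimal sketch (assumed schema, not a standard) of a PIA-style record
# carried through the AI lifecycle.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PIARecord:
    activity: str                       # what the system does with the data
    data_categories: list[str]          # e.g. ["location", "biometrics"]
    lawful_basis: str                   # consent, contract, legitimate interest...
    reidentification_risk: str          # low / medium / high, with rationale
    mitigations: list[str] = field(default_factory=list)
    reviewed_on: date = field(default_factory=date.today)

    def user_explanation(self) -> str:
        """Plain-language explanation surfaced to affected users."""
        return (f"We use your {', '.join(self.data_categories)} to "
                f"{self.activity} (basis: {self.lawful_basis}).")

record = PIARecord(
    activity="rank support tickets with an LLM classifier",
    data_categories=["ticket text", "account tier"],
    lawful_basis="legitimate interest",
    reidentification_risk="medium: free text may name individuals",
    mitigations=["PII scrubbing before inference", "30-day retention"],
)
print(record.user_explanation())
```

Keeping the record machine-readable means the same object can back both the documentation a regulator asks for and the explanation shown to a user.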
In last week's edition I came right out and stated one of the radical, practically beneficial theories I have been developing in this newsletter: that large language models (LLMs) are inherently much more privacy-protective than "Big Data," and that replacing Big Data technologies with LLMs can create […]
Privacy legislation can help meet the challenge of regulating artificial intelligence by mandating algorithmic transparency and accountability.
Language models cannot judge the social context that determines whether sharing a given piece of information is appropriate, which raises inherent privacy concerns. Existing techniques for training privacy-preserving models, such as differential privacy, align poorly with the broad, context-dependent nature of language and privacy; the paper argues that models should instead be trained only on text that was explicitly intended to be public.
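For concreteness, here is a minimal sketch of DP-SGD (per-example gradient clipping plus Gaussian noise), the standard privacy-preserving training technique the paper pushes back on. The tiny model, toy batch, and hyperparameters are my illustrative assumptions, not the paper's setup; the point is that DP-SGD bounds each training example's influence, while a secret in natural-language data can be spread across many examples and contexts.

```python
# A minimal DP-SGD step: clip each example's gradient to norm C, add
# Gaussian noise scaled to C, then average and update. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(16, 2)            # stand-in for a language model
loss_fn = nn.CrossEntropyLoss()
clip_norm = 1.0                     # per-example gradient bound C
noise_multiplier = 1.0              # sigma; noise std = sigma * C
lr = 0.1

x = torch.randn(8, 16)              # one toy batch
y = torch.randint(0, 2, (8,))

# Accumulate clipped per-example gradients.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = (clip_norm / (total + 1e-6)).clamp(max=1.0)  # clip to norm C
    for s, g in zip(summed, grads):
        s += g * scale

# Add noise calibrated to the clip norm, then average and take the step.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, s.shape)
        p -= lr * (s + noise) / len(x)
```

The guarantee is per-record: if one example changes, the output distribution barely moves. A phone number that appears in hundreds of scraped pages gets no such protection, which is exactly the mismatch with language data.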
Pretrained language models can memorize sensitive information and emit harmful text. This study proposes an attack-and-defense framework for deleting such information directly from model weights, but even state-of-the-art editing methods struggle to delete it completely: traces can still be recovered. The authors contribute new defense methods, yet find no universally effective one. Truly deleting sensitive information from language models remains a hard problem with real societal stakes.
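As an illustration, here is a minimal sketch of the kind of extraction probe such work relies on: check whether the model still assigns high probability to the supposedly deleted string, or reproduces it under greedy decoding. The model name, prompt, and "secret" below are hypothetical placeholders, not artifacts from the study.

```python
# A minimal extraction probe: if the model still scores or emits the
# secret after "deletion", the deletion failed. Placeholders throughout.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # placeholder; the study targets larger pretrained LMs
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

prompt = "John Doe's social security number is"   # hypothetical probe
secret = " 123-45-6789"                           # hypothetical target

# Log-probability the model assigns to the secret given the prompt.
ids = tok(prompt + secret, return_tensors="pt").input_ids
n_prompt = tok(prompt, return_tensors="pt").input_ids.shape[1]
with torch.no_grad():
    logits = model(ids).logits
logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
targets = ids[0, 1:]
secret_lp = logprobs[torch.arange(len(targets)), targets][n_prompt - 1:].sum()
print(f"log P(secret | prompt) = {secret_lp.item():.2f}")

# Does greedy decoding still surface the secret verbatim?
enc = tok(prompt, return_tensors="pt")
out = model.generate(**enc, max_new_tokens=12, do_sample=False)
print("greedy completion:", tok.decode(out[0]))
```

The study's point is that passing a probe like this is not enough: intermediate layers and paraphrased prompts can still leak traces of the "deleted" fact.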
The MTA uses AI scanning software to track transit riders believed to have dodged fares. Civil liberties groups fear it could be a privacy disaster.
Google has updated its privacy policy to give itself free rein to use your content to build and improve its AI tools.
X (formerly Twitter) has updated its privacy policy to inform users that it will collect biometric data, job history, and education history. Additionally, the company plans to use the data it collects, along with publicly available information, to train its machine learning and artificial intelligence models. This change in policy has led to speculation that Elon Musk, who owns X, may intend to use the data collected by the platform for his other AI-focused venture, xAI. Musk has previously stated that xAI would use public tweets to train its AI models, and he has accused other tech giants of leveraging Twitter data for AI model training.
Meta trained its AI chatbot, Meta AI, on public Facebook and Instagram posts, excluding private posts to respect privacy, and filtered private details out of the public datasets used for training. Meta AI can generate photorealistic images and surface real-time information through access to Microsoft's Bing search engine via a partnership.