In context: Signal is a communication platform that offers end-to-end encryption for instant messaging, voice calls, and video calls. Widely regarded as a "gold standard" for user privacy, the open-source software is available on all major platforms and has reportedly been described as a "major threat" by the NSA.
Signal Foundation President Meredith Whittaker recently joined an onstage conversation at TechCrunch Disrupt to discuss AI, user data, and surveillance. Whittaker is a prominent critic of AI technology and facial recognition systems, and she was among the former Google employees who helped organize the 2018 Google walkouts.
According to Whittaker, AI technologies and services depend on mass data collection and the ad-targeting industry to function. She stated that AI "requires the surveillance business model" and essentially intensifies what the industry has been building since the late '90s with the rise of "surveillance advertising." Whittaker put it bluntly: "The Venn diagram is a circle."
Training successful LLMs and other AI models requires enormous volumes of human-generated content, typically gathered from both the public and less accessible parts of the internet. However, Whittaker emphasized that training is just one aspect of the issue. She argued that the use of AI technology, particularly in facial recognition systems, constitutes a form of surveillance.
The Signal president illustrated a surveillance-oriented use of AI by describing a facial recognition camera designed to generate "pseudo-scientific" readings of a passerby's emotional state. That data, accurate or not, is then used to judge whether a person is happy, sad, or even potentially dishonest.
Ultimately, Whittaker contended that these AI-driven surveillance systems are marketed to powerful entities such as employers, governments, and border agencies, allowing them to make decisions and predictions that can significantly affect individuals' access to resources and opportunities.
Furthermore, AI surveillance systems exploit a workforce of human laborers who could ultimately become the very targets of those systems. Whittaker explained that there is no way to build LLMs without reinforcement learning from human feedback (RLHF), which she described as "tech-washing precarious human labor." Thousands of workers are paid very little to verify the "ground truth of the data," and the so-called artificial "intelligence" could not exist without their contributions.
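To make the labor point concrete, here is a minimal, hypothetical sketch of the kind of preference record that RLHF pipelines are built on. The field names and sample data are invented for illustration and don't reflect any particular lab's tooling; the point is that every record represents a paid human judgment.

```python
# A minimal sketch of the kind of human-feedback record that underpins RLHF.
# Field names are hypothetical, not taken from any specific lab's pipeline.
from dataclasses import dataclass

@dataclass
class PreferenceRecord:
    prompt: str          # text the model was asked to respond to
    completion_a: str    # one model-generated answer
    completion_b: str    # an alternative model-generated answer
    rater_choice: str    # "a" or "b" -- a human worker's judgment

# Each record represents paid human work: a person read both outputs and
# decided which was better. Reward models are trained on huge volumes of
# such judgments before the reinforcement learning step can begin.
feedback = [
    PreferenceRecord(
        prompt="Summarize this article in one sentence.",
        completion_a="Whittaker argues AI depends on surveillance.",
        completion_b="The article is about technology.",
        rater_choice="a",
    ),
]

if __name__ == "__main__":
    print(f"{len(feedback)} human judgment(s) collected")
```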
Whittaker also acknowledged that not all AI and machine learning models are the same, and Signal itself employs some AI technology. The instant messaging platform relies on a "small on-device model," which the foundation didn't develop, for the face-blurring feature in its media editing toolkit. While it isn't perfect, Whittaker noted, the model is useful for detecting faces in crowd photos so that people's privacy is safeguarded when images are shared on social media.
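For readers curious about the mechanics of such a feature, the sketch below shows the general detect-then-blur pattern. It is a hypothetical illustration built on OpenCV's stock Haar-cascade face detector, not Signal's actual on-device model, and the file names are placeholders.

```python
# A minimal sketch of a face-blurring feature, NOT Signal's implementation:
# Signal uses its own small on-device model, while this example substitutes
# OpenCV's bundled Haar-cascade face detector.
import cv2

def blur_faces(input_path: str, output_path: str) -> int:
    """Detect faces in an image and blur each one; returns the face count."""
    image = cv2.imread(input_path)
    if image is None:
        raise FileNotFoundError(input_path)

    # Load OpenCV's pre-trained frontal-face cascade (ships with opencv-python).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Replace each detected face region with a heavily blurred version of itself.
    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)

    cv2.imwrite(output_path, image)
    return len(faces)

if __name__ == "__main__":
    print(f"Blurred {blur_faces('crowd.jpg', 'crowd_blurred.jpg')} face(s)")
```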