5 highlights
-
Musk proposes that Twitter’s algorithm should be open source and that “there should be no behind the scenes manipulation, either algorithmically or manually.”
-
This blueprint could help solve the most vexing problem confronting lawmakers, namely, how to regulate social media platforms in a way that strikes the right balance between free speech and societal harm.
-
Content moderation suffers from what psychologist and economist Daniel Kahneman calls “noise” in human decision-making. Even when the rules for decision-making are well specified, different judges reach opposite decisions on the same data, and individual judges are often inconsistent, swayed by irrelevant factors such as mood and weather.
-
But algorithms can cause unintended harm that may become evident only much later, if at all. My NYU Stern colleague Jonathan Haidt argues that there is sufficient evidence to link algorithms to increases in self-harm and suicide in teenage girls. This only became apparent over time, based on studies conducted in several countries. Without transparency, we are blind to such risks.
-
Letting private enterprise design and operate the public square invites scandals like Cambridge Analytica.