Four takeaways from the Facebook whistleblower's complaints
Facebook's week got off to a rocky start, to say the least. For roughly six hours on Monday, Facebook, WhatsApp, and Instagram were all unavailable. On Tuesday, Facebook whistleblower Frances Haugen testified before a Senate subcommittee, after thousands of pages of the social media giant's internal research and documents were made public.
Haugen, a 37-year-old former Facebook (FB) product manager who worked on the company's civic integrity efforts, revealed her identity during a "60 Minutes" segment on Sunday night. According to reports, she has filed at least eight whistleblower complaints with the Securities and Exchange Commission alleging, among other things, that the company conceals research about its shortcomings from investors and the general public. She also shared the documents with government regulators and the Wall Street Journal, which published a series of articles showing that Facebook knew about problems with its apps.
Eight of Haugen's complaints were made public by "60 Minutes" on Monday. Here are four takeaways from them:
Facebook's core mechanics may be helping misinformation spread
Internal documents cited in the complaints show that Facebook knows hate speech and misinformation on its platforms have a negative impact on society, and that its "core product mechanics, such as virality recommendations and engagement optimization, contribute significantly to the proliferation of this type of speech."
Within days of following official, verified pages for conservative figures such as Fox News and Donald Trump, one account was served recommendations for conspiracy pages by Facebook's algorithm. It took less than a week for the same account to receive a QAnon recommendation. In addition, according to internal documents cited in the complaints, titled "They used to post selfies, now they're attempting to reverse the election" and "Does Facebook reward outrage," Facebook's algorithms not only reward posts about election fraud conspiracies with likes and shares, but also ensure that "the more negative comments a piece of content generates, the greater the likelihood that the link will receive additional traffic."
"Can you explain what collateral damage is?" reads a single piece of writing even goes to the length of mentioning it "Facebook as a whole will, as a result, actively (though perhaps unconsciously) promote these types of activities. The mechanics of our platform do not operate in a neutral manner."
Facebook has taken only limited action against misinformation already on its platforms
An internal Facebook document on problematic nonviolent narratives, cited in at least two of the complaints, states that the company removes only 3 to 5 percent of hate speech and less than 1 percent of content deemed violent or inciting violence. That is because the volume overwhelms human reviewers, and the company's algorithms struggle to classify content accurately when context matters.
Internal Facebook documents on the company's role in the 2020 presidential election and the January 6 insurrection also suggest that those spreading misinformation are rarely thwarted by Facebook's intervention mechanisms. One document states: "Implementing on pages moderated by administrators who have posted two or more pieces of false information in the last 67 days would result in 277,000 pages being affected. Approximately 11,000 of these pages are dedicated to current repeat offenders."
Although Facebook claims it "remove[s] content from Facebook regardless of who posted it" when that content violates its standards, Haugen alleges that the "XCheck" or "Cross-check" system "effectively whitelists high-profile and/or privileged users." According to an internal document on error prevention cited in several of the complaints, "over the years, many XChecked pages, profiles, and entities have been exempt from enforcement."
Internal documents on "quantifying the concentration of reshares and their VPVs among users" and a "killswitch plan for all group recommendation surfaces" indicate that Facebook rolled back some changes that had been proven to reduce misinformation because those changes held back the platform's growth.
Haugen also alleges that the company misled advertisers by claiming it had done everything it could to prevent the insurrection. According to an internal document cited in the filing, titled "Capitol Riots Break the Glass," the safer parameters Facebook had implemented for the 2020 election, such as demoting hate speech and other content that violates its Community Standards, were rolled back afterward and reinstated "only after the insurrection flared up."
As one document puts it: "we were only willing to act *after* things had spiraled into a desperate state."
Facebook misled the public about its platforms' harmful effects on children and teenagers, particularly young girls
Asked at a congressional hearing in March whether Facebook's platforms "harm children," Facebook CEO Mark Zuckerberg said, "I do not believe so."
But internal Facebook research cited in one of Haugen's complaints found that "13.5 percent of teen girls on Instagram say the platform exacerbates thoughts of 'Suicide and Self Injury,'" while 17 percent say the Facebook-owned platform worsens the symptoms of "Eating Issues" such as anorexia. The same research found that Facebook's platforms "exacerbate body image issues for one in every three adolescent females."
Facebook knows its platforms are used to facilitate human exploitation
Although Facebook's community standards state that the company "removes content that facilitates or coordinates human exploitation," internal company documents cited in one of Haugen's complaints indicate the company knew "domestic servitude content remained on the platform" before a 2019 BBC News investigation revealed a black market for domestic workers on Instagram.
One document, titled "Domestic Servitude and Tracking in the Middle East," acknowledges that the company has been "lax in enforcing confirmed abusive behavior when there is a platform connection." It continues: "our platform facilitates all three stages of human exploitation (recruitment, facilitation and exploitation) through real-world networks.... The traffickers, recruiters and facilitators from these 'agencies' used Facebook profiles, Instagram profiles, Pages, Messenger and WhatsApp" to communicate with their victims.