Who is responsible when "algorithm-recommended content" kills? A British court rules on the death of 14-year-old Molly Russell


On September 30, 2022, the BBC reported news that shook the IT world: a London coroner's court ruled that content recommended by social media algorithms contributed to the death of 14-year-old Molly Russell in the UK.

Because Molly Russell is the first user a court has ever formally found to have died because of algorithm-recommended content, media around the world covered the case, and experts and scholars joined the discussion. Andrew Walker, the coroner who heard the case, declined to record Molly Russell's cause of death simply as suicide, and specifically cited the influence of social media:

Molly Rose Russell died from an act of self-harm while suffering from depression and the negative effects of online content.

Even before this case, people were more or less aware that social media algorithms sometimes push inappropriate content to viewers, and that such content can affect viewers' physical and mental health.

However, no one had ever presented evidence and definitively declared that "the content the algorithm recommended to me caused my anxiety disorder," and the technology giants have long insisted that "personalized content is recommended according to the user's own habits." Molly Russell's death puts two questions squarely to the information technology industry:

Under the algorithmic recommendation mechanism, do users still have the freedom to choose? Who is to blame if the content promoted by an algorithm harms people's physical and mental health?

A hearing in the United States may help answer these questions.

Technology giants’ deliberate manipulation of algorithms

On June 25, 2019, Tristan Harris, formerly a design ethicist at Google, testified at a U.S. congressional hearing, explaining that the algorithms built by technology giants are making society more and more extreme.

Tristan stressed repeatedly that the content an algorithm recommends is generated not by accident but by deliberate design ("It's not by accident but by design."). The moment a user opens the app, a personal model of that user, an "avatar," is activated, and every footprint the user leaves on the Internet is recorded by the model as raw material for analyzing behavior patterns.
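To make the idea concrete, here is a minimal sketch of such a behavioral "avatar": a per-user model that logs interaction events and scores candidate content by predicted engagement. The class name, the watch-time scoring rule, and all the data are illustrative assumptions, not any platform's actual system.

```python
# Minimal sketch of a behavioral "avatar" (illustrative only; the
# scoring rule is an assumption, not any platform's real system).
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Avatar:
    user_id: str
    # Accumulated watch time per topic -- the user's recorded "footprints".
    topic_seconds: Counter = field(default_factory=Counter)

    def record(self, topic: str, watch_seconds: float) -> None:
        """Log one footprint: the user watched `topic` content."""
        self.topic_seconds[topic] += watch_seconds

    def predicted_engagement(self, topic: str) -> float:
        """Score a candidate by the share of past watch time on its topic."""
        total = sum(self.topic_seconds.values()) or 1.0
        return self.topic_seconds[topic] / total

    def recommend(self, candidates: list[str]) -> str:
        """Push the candidate the model predicts will hold the user longest."""
        return max(candidates, key=self.predicted_engagement)


avatar = Avatar("user-42")
avatar.record("gaming", watch_seconds=300)  # past behavior...
avatar.record("music", watch_seconds=30)
print(avatar.recommend(["music", "gaming", "news"]))  # ...drives the next push
```

Even this toy version exhibits the feedback loop Tristan describes: whatever a user lingers on becomes exactly what the model pushes next.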

Today's technology giants compete over whose model can predict user behavior most accurately, and then sell those behavioral predictions to advertisers. Harvard professor Shoshana Zuboff calls this business model "surveillance capitalism."

Take YouTube, the video platform owned by Google, as an example. Whenever I watch a video, the model behind it is continuously collecting and analyzing my personal data, and the algorithm uses that data to recommend videos I am likely to enjoy.

Tristan noted that "70% of YouTube's video views are driven by the algorithm's recommendations." In other words, the recommendation mechanism creates an illusion of free choice: the videos I "choose" are in fact the output of a carefully engineered forecast, and every choice I make merely confirms the model's accuracy.

A similar effect can be glimpsed in a magician's tricks. Card magic includes a technique called "forcing": the magician shuffles the deck, perhaps even hands it to an audience member to shuffle, then invites them to draw a card, and that card turns out to be exactly the one the magician intended them to draw all along.

In other words, the magician's handling grants the audience an illusion of "free choice" while everything stays under the magician's control, much like the algorithm's recommendation mechanism. The difference is that a magician manipulates one person at a time, while YouTube's algorithm can manipulate two billion users at once, day and night, all year round.

Making society more extreme through algorithms

So, who is to blame for this? Tristan specifically mentioned during the hearing:

The polarization of our society is also part of the business model.

To keep as many users as possible on their platforms, tech giants constantly rework their designs to squeeze out maximum engagement wherever users go. In 2018, Sinan Aral, an information scientist at the Massachusetts Institute of Technology, and his colleagues published a study of how fake news spreads on Twitter, analyzing true and false stories circulated between 2006 and 2017.

Aral's team found that false news beats true news in all three dimensions of diffusion: speed, depth, and breadth, with false news spreading as much as six times faster than true news.

Breaking the replies down by emotion (trust, joy, anticipation, sadness, anger, fear, disgust, and surprise), the team found that replies to fake news expressed far more "disgust" and "surprise" than replies to real news, while for "anticipation," "anger," and "fear" the interaction with true and false news was roughly similar.

Aral's research shows that, compared with content that evokes positive emotions (trust, joy), content that evokes negative emotions (disgust, anger, fear) actually draws higher user interaction on Twitter.
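That asymmetry is all an engagement-maximizing ranker needs. The sketch below uses invented numbers that merely echo the direction of Aral's findings; it shows how a feed sorted purely by expected interactions surfaces high-arousal content without any explicit bias toward outrage.

```python
# Toy feed ranking: sort posts purely by expected interactions.
# The engagement weights per emotion are made-up numbers, loosely
# echoing the *direction* of Aral's findings -- they are not his data.

posts = [
    {"text": "Local shelter thanks its volunteers", "emotion": "joy"},
    {"text": "You won't BELIEVE what they hid", "emotion": "surprise"},
    {"text": "This policy will ruin everything", "emotion": "anger"},
    {"text": "Quarterly figures released on schedule", "emotion": "trust"},
]

# Hypothetical expected-interaction multipliers (illustrative only).
ENGAGEMENT = {"trust": 0.8, "joy": 1.0, "surprise": 2.5, "anger": 2.2}

# Rank the feed by predicted engagement, highest first.
feed = sorted(posts, key=lambda p: ENGAGEMENT[p["emotion"]], reverse=True)
for post in feed:
    print(f'{ENGAGEMENT[post["emotion"]]:.1f}  {post["text"]}')
```

The negative-emotion posts land at the top of the feed not because anyone chose them, but because the objective function rewards whatever provokes the strongest reaction.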

A report in The Verge likewise noted that the high-interaction content Facebook's algorithm serves up is laced with anger, hatred, and self-righteous moralizing, and that users immersed in this algorithmic black hole grow more and more extreme. The report also cited internal Facebook research from 2016, which found that about 64% of all joins to extremist groups were driven by Facebook's own recommendation tools.

In addition, Tristan mentioned at the hearing that the top 15 keywords in YouTube's recommended videos also carry traces of negative sentiment:

  • hate
  • debunk
  • destroy
  • obliterate

Tristan described this as a "race to the bottom of the brain stem": the algorithm's recommendation mechanism works to draw out humanity's irrational, instinctive impulses, while the technology giants behind it harvest the currency of user attention.

The business model of surveillance capitalism has damaged speech in democratic societies and, worse, has cost the lives of users who could not pull themselves away. Yet the ones who truly deserve the blame are not the CEOs, founders, or engineers; what should be prohibited is the business model itself, which profits by extracting data from Internet users through unscrupulous means.

Questioning the cost of our data

Some people may ask: "Companies like YouTube, Facebook, Google, and Twitter don't charge me money anyway; isn't it reasonable for me to contribute a little personal data in exchange?"

In fact, terms of use that sound reasonable on the surface are not an exchange of equal value. What lies behind this question is that users still do not understand how their digital personal data is used, and so hand it over far too easily.

Just imagine: doctors, lawyers, and insurance companies all hold your personal information, but would you accept them selling it while providing you their services? If any of those practitioners sold a client's personal data, they would face legal sanctions; by the same logic, technology giants should be punished for taking personal data, building behavioral prediction models from it, and selling those models to businesses.

After five years of legal proceedings (2017–2022), Molly's father finally heard a court affirm his demand that content on social media must be actively monitored. His efforts cannot bring back the bright smile in Molly's photos, but they push the judiciary and the public to use social media more consciously, and to scrutinize it closely when necessary. After all, the "freedom" these tech giants promise us comes at a steep price when it is handed over so readily.