By Brian Patrick Green
A social media company is having trouble with political actors manipulating the flow of
information on its service. Specifically, certain governments are producing tens or hundreds of
thousands of fake accounts to promote government propaganda, thereby attempting to swamp
any news which does not fit the government’s perspective.
The social media platform is attempting to respond by using machine learning to determine
which accounts are fake, based on their activity patterns, but the adversary governments are
responding with their own machine learning to better hide those patterns and impersonate real
users. For every batch of fake accounts deactivated, just as many seem to pop up again.
Furthermore, the machine learning algorithms are imperfect, and balancing false positives with
false negatives can lead to deactivating real people’s accounts, leading to anger, frustration, and
bad publicity for the company. On the other hand, scaling back to avoid false positives leads to
more fake accounts slipping through.
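The trade-off described above can be made concrete with a toy simulation. Everything here is invented for illustration: the score distributions, the threshold values, and the assumption that the classifier outputs a single "fakeness" score per account.

```python
import random

random.seed(0)

# Hypothetical classifier scores: higher means "more likely fake".
# Real accounts cluster low, fake accounts cluster high, with overlap --
# that overlap is exactly where the ethical trade-off lives.
real_scores = [random.gauss(0.3, 0.15) for _ in range(1000)]
fake_scores = [random.gauss(0.7, 0.15) for _ in range(1000)]

def confusion(threshold):
    """Count both kinds of error at a given deactivation threshold."""
    false_positives = sum(s >= threshold for s in real_scores)  # real users wrongly deactivated
    false_negatives = sum(s < threshold for s in fake_scores)   # fake accounts that slip through
    return false_positives, false_negatives

for t in (0.4, 0.5, 0.6):
    fp, fn = confusion(t)
    print(f"threshold={t}: {fp} real accounts deactivated, {fn} fake accounts missed")
```

Raising the threshold protects real users but lets more propaganda accounts through; lowering it does the reverse. No threshold eliminates both errors at once, which is why the choice is an ethical decision, not merely a tuning problem.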
What should the company do? What ethical questions should it consider? How might the
questions below inspire perspectives on this problem?
5. How might this effort be evaluated through the various ethical “lenses” described in the
“Conceptual Frameworks” document?