AP: Cyborgs, Trolls and bots: A guide to online misinformation


hanimmal

Well-Known Member
https://www.nbcnews.com/tech/tech-news/facebook-knew-radicalized-users-rcna3581
In summer 2019, a new Facebook user named Carol Smith signed up for the platform, describing herself as a politically conservative mother from Wilmington, North Carolina. Smith’s account indicated an interest in politics, parenting and Christianity and followed a few of her favorite brands, including Fox News and then-President Donald Trump.

Though Smith had never expressed interest in conspiracy theories, in just two days Facebook was recommending she join groups dedicated to QAnon, a sprawling and baseless conspiracy theory and movement that claimed Trump was secretly saving the world from a cabal of pedophiles and Satanists.


‘A pattern at Facebook’
For years, company researchers had been running experiments like Carol Smith’s to gauge the platform’s hand in radicalizing users, according to the documents seen by NBC News.

This internal work repeatedly found that recommendation tools pushed users into extremist groups, findings that helped inform policy changes and tweaks to recommendations and news feed rankings. Those rankings are a tentacled, ever-evolving system widely known as “the algorithm” that pushes content to users. But the research at that time stopped well short of inspiring any movement to change the groups and pages themselves.

That reluctance was indicative of “a pattern at Facebook,” Haugen told reporters this month. “They want the shortest path between their current policies and any action.”

“There is great hesitancy to proactively solve problems,” Haugen added.

A Facebook spokesperson disputed that the research had not pushed the company to act and pointed to changes to groups announced in March.

As QAnon followers committed real-world violence in 2019 and 2020, groups and pages related to the conspiracy theory skyrocketed, according to internal documents. The documents also show how teams inside Facebook took concrete steps to understand and address those issues — some of which employees saw as too little, too late.

By summer 2020, Facebook was hosting thousands of private QAnon groups and pages, with millions of members and followers, according to an unreleased internal investigation.

A year after the FBI designated QAnon as a potential domestic terrorist threat in the wake of standoffs, planned kidnappings, harassment campaigns and shootings, Facebook labeled QAnon a “Violence Inciting Conspiracy Network” and banned it from the platform, along with militias and other violent social movements. A small team working across several of Facebook’s departments found its platforms had hosted hundreds of ads on Facebook and Instagram worth thousands of dollars and millions of views, “praising, supporting, or representing” the conspiracy theory.

The Facebook spokesperson said in an email that the company has “taken a more aggressive approach in how we reduce content that is likely to violate our policies, in addition to not recommending Groups, Pages or people that regularly post content that is likely to violate our policies.”

For many employees inside Facebook, the enforcement came too late, according to posts left on Workplace, the company’s internal message board.

“We’ve known for over a year now that our recommendation systems can very quickly lead users down the path to conspiracy theories and groups,” one integrity researcher, whose name had been redacted, wrote in a post announcing she was leaving the company. “This fringe group has grown to national prominence, with QAnon congressional candidates and QAnon hashtags and groups trending in the mainstream. We were willing to act only * after * things had spiraled into a dire state.”

‘We should be concerned’

QAnon believers also jumped to groups promoting former President Donald Trump’s false claim that the 2020 election was stolen, groups that trafficked in a hodgepodge of baseless conspiracy theories alleging voters, Democrats and election officials were somehow cheating Trump out of a second term. This new coalition, largely organized on Facebook, ultimately stormed the U.S. Capitol on Jan. 6, according to a report included in the document trove and first reported by BuzzFeed News in April.

These conspiracy groups had become the fastest-growing groups on Facebook, according to the report, but Facebook wasn’t able to control their “meteoric growth,” the researchers wrote, “because we were looking at each entity individually, rather than as a cohesive movement.” A Facebook spokesperson told BuzzFeed News it took many steps to limit election misinformation but that it was unable to catch everything.

Facebook’s enforcement was “piecemeal,” the team of researchers wrote, noting, “we’re building tools and protocols and having policy discussions to help us do this better next time.”

‘A head-heavy problem’
The attack on the Capitol invited harsh self-reflection from employees.


The Drebbel “Gateway Groups” study looked back at a collection of QAnon and anti-vaccine groups that had been removed for violating policies on misinformation, violence and incitement. It used the membership of these purged groups to study how users had been pulled in. Drebbel identified 5,931 QAnon groups with 2.2 million total members, half of whom had joined through so-called gateway groups. For 913 anti-vaccination groups with 1.7 million members, the study found that 1 million had joined via gateway groups. (Facebook has said it recognizes the need to do more.)
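(An aside on the mechanics: a study like this boils down to checking, for each member of a purged group, whether that member had joined a “gateway” group first. Below is a minimal Python sketch of that measurement; the data shape, names and toy numbers are my own assumptions, not Drebbel’s actual pipeline.)

```python
# Hypothetical reconstruction of a "gateway groups" measurement.
# Assumes a join log of (user_id, group_id, timestamp) events and
# known sets of purged and gateway group ids. Not Facebook's code.

from collections import defaultdict

def gateway_fraction(join_log, purged_groups, gateway_groups):
    """Fraction of purged-group members who joined a gateway group first."""
    joins_by_user = defaultdict(list)
    for user_id, group_id, ts in join_log:
        joins_by_user[user_id].append((ts, group_id))

    members, via_gateway = set(), set()
    for user_id, joins in joins_by_user.items():
        joins.sort()  # walk each user's joins chronologically
        seen_gateway = False
        for _, group_id in joins:
            if group_id in gateway_groups:
                seen_gateway = True
            if group_id in purged_groups:
                members.add(user_id)
                if seen_gateway:
                    via_gateway.add(user_id)
                break
    return len(via_gateway) / max(len(members), 1)

# Toy data: two of three users reached the purged group via a gateway.
log = [("u1", "g_gateway", 1), ("u1", "g_purged", 2),
       ("u2", "g_purged", 1),
       ("u3", "g_gateway", 5), ("u3", "g_purged", 9)]
print(gateway_fraction(log, {"g_purged"}, {"g_gateway"}))  # ~0.67
```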

hanimmal

Well-Known Member
https://www.washingtonpost.com/technology/2021/10/21/teens-instagram-feed-mental-health/
When Elyse Stieby opens her Instagram app, among the first things she sees are weight loss tips on the “explore” page: The number of calories in eggs, a medium coffee and a potato.

Stieby says she tries to just look at her friends’ posts, rather than the recommended content Instagram serves her in her feed and through the explore tab — the app’s version of a personalized landing page and search bar accessed through the magnifying glass icon at the bottom of the app. But she says she knows the app’s algorithm chooses what it shows her based on what it thinks she wants to see — so the makeup, hair and body tips are tough to avoid.

“I don’t need to lose weight. I’m 102 pounds,” said the 18-year-old materials science major at Ohio State University.

Experiences like Stieby’s are at the center of a storm of criticism surrounding Instagram owner Facebook. In September, Facebook paused plans for an Instagram app designed especially for children after lawmakers voiced concerns about the app’s effects on young people’s mental health. Instagram is supposed to be for users 13 and older, but younger kids have been able to get on the platform. Facebook whistleblower Frances Haugen leaked internal documents to the Wall Street Journal and the Securities and Exchange Commission suggesting the company knew that using Instagram may hurt the mental health of young women and girls. She testified before a Senate committee, saying Facebook put growth and profit above everything else. Facebook has fought back, denying the claims.

Instagram has been steadily increasing the amount of recommended content it shows people. In July, the app started putting videos from people you don’t know right alongside your friends’ posts in the main feed. And the explore tab — a curated collection of algorithmically recommended content — is a Wild West of images the app thinks you will like, based largely on other posts you’ve interacted with. Impressionable teens may ultimately pay the price as the explore tab spits out content including idealized images and dubious “self-help” recommendations.

Social media apps Snapchat and TikTok have also been criticized for promoting content that could warp self image or encourage harmful behaviors.

Still, experts say there are some steps teens, parents and schools can take to help teens handle the challenges that come with social media use.


While some experts caution that the impact of social media on mental health isn’t fully understood, others have found demonstrable effects.

“The idea that Facebook just learned about this, as a problem for kids’ mental health, is complete baloney,” Jim Steyer, founder and CEO of family advocacy organization Common Sense Media, said.

Danielle Wagstaff, a lecturer in psychology at Federation University in Australia, co-authored a 2019 paper linking Instagram use with adverse mental health symptoms in women. Potential evidence that Facebook knowingly continued harmful practices shifts the conversation, Wagstaff said, leaving some parents wondering whether Instagram is a safe place for teens to spend time.

But teens are savvy media consumers, and they’re coming to their own conclusions about the apps that expand their worlds and prick at their brains. Teens say they understand how the algorithm works, and they’re doing their best to blunt its effects.

How the recommended content works


Teens are savvy, but ‘the algorithm’ is a mental burden


Ways to mitigate impact

Some parents may feel the itch to snatch the phone and ban the app. But pause a moment before launching your teen’s smartphone into the nearest body of water.

Kids who get their phones taken away will likely get their hands on a new one, Wagstaff cautioned, and deleted apps can still be accessed from any Internet-connected device. Public relations blowups like Facebook’s require acknowledging a hard truth: if kids weren’t encountering harmful images on Instagram, they’d be seeing them somewhere else.

Instead, parents, schools and companies must work together to educate kids not only about the risks of social media, but also about the mindset it takes to move through a tough world with confidence and self-love, she said. Parents should connect teens with resources to practice mindfulness and self-compassion, both of which help build resilience in the face of constant comparison.

Talk (and listen) frequently with your children about Instagram, Common Sense Media’s Steyer said, teaching them to recognize the compulsion to compare themselves to others. Explain that Photoshop and other editing tools are responsible for the stylized images they see, and ask why people choose to change their faces and edit their surroundings.

Some meditation apps offer series tailored for teens. In one meditation on the Calm app, pop star Camila Cabello walks listeners through an exercise in which instead of grabbing their phones every time they want to scroll, they mindfully observe that impulse instead.

Schools play a role as well, experts say. Common Sense offers school programs in media literacy and digital citizenship, both of which help students evaluate the messages they see online and engage constructively with others, Steyer said. Australia is experimenting with health programs in primary school that encourage kids to notice the natural differences among bodies and to talk about bodies comfortably, Wagstaff said.

Finally, many think the apps themselves must change. Some legislators and advocates have pushed for new types of feeds on social media apps, rather than the kind that rank content with the goal of boosting engagement and time spent on the app. Others have supported bills expanding data privacy protections for children, which would make it harder for companies to track and target them.

Teens themselves are taking steps to manage their own social media use and put what they see into perspective.

Stieby, who also uses Snapchat and TikTok, said she uses her iPhone’s Screen Time feature to set a limit for social media apps: two hours a day, tops. She rarely hits it, she said, but when she does, she knows it’s time to log off.
 

CatHedral

Well-Known Member
It should come as no surprise that promoting our basest drives makes the most money the fastest.

Critical Corporate Theory states that the conflict of interest was built in from the start.

The users see a relentless onslaught of toxic content. The metrics folks at headquarters see many, many clicks, and that is how they like it.
 

hanimmal

Well-Known Member
https://apnews.com/article/the-facebook-papers-language-moderation-problems-392cb2d065f81980713f37384d07e61f
DUBAI, United Arab Emirates (AP) — As the Gaza war raged and tensions surged across the Middle East last May, Instagram briefly banned the hashtag #AlAqsa, a reference to the Al-Aqsa Mosque in Jerusalem’s Old City, a flash point in the conflict.

Facebook, which owns Instagram, later apologized, explaining its algorithms had mistaken the third-holiest site in Islam for the militant group Al-Aqsa Martyrs Brigade, an armed offshoot of the secular Fatah party.

For many Arabic-speaking users, it was just the latest potent example of how the social media giant muzzles political speech in the region. Arabic is among the most common languages on Facebook’s platforms, and the company issues frequent public apologies after similar botched content removals.

Now, internal company documents from the former Facebook product manager-turned-whistleblower Frances Haugen show the problems are far more systemic than just a few innocent mistakes, and that Facebook has understood the depth of these failings for years while doing little about it.

Such errors are not limited to Arabic. An examination of the files reveals that in some of the world’s most volatile regions, terrorist content and hate speech proliferate because the company remains short on moderators who speak local languages and understand cultural contexts. And its platforms have failed to develop artificial-intelligence solutions that can catch harmful content in different languages.


In countries like Afghanistan and Myanmar, these loopholes have allowed inflammatory language to flourish on the platform, while in Syria and the Palestinian territories, Facebook suppresses ordinary speech, imposing blanket bans on common words.

“The root problem is that the platform was never built with the intention it would one day mediate the political speech of everyone in the world,” said Eliza Campbell, director of the Middle East Institute’s Cyber Program. “But for the amount of political importance and resources that Facebook has, moderation is a bafflingly under-resourced project.”

This story, along with others published Monday, is based on Haugen’s disclosures to the Securities and Exchange Commission, which were also provided to Congress in redacted form by her legal team. The redacted versions were reviewed by a consortium of news organizations, including The Associated Press.

In a statement to the AP, a Facebook spokesperson said that over the last two years the company has invested in recruiting more staff with local dialect and topic expertise to bolster its review capacity around the world.

But when it comes to Arabic content moderation, the company said, “We still have more work to do. ... We conduct research to better understand this complexity and identify how we can improve.”

In Myanmar, where Facebook-based misinformation has been linked repeatedly to ethnic and religious violence, the company acknowledged in its internal reports that it had failed to stop the spread of hate speech targeting the minority Rohingya Muslim population.

The Rohingya’s persecution, which the U.S. has described as ethnic cleansing, led Facebook to publicly pledge in 2018 that it would recruit 100 native Myanmar language speakers to police its platforms. But the company never disclosed how many content moderators it ultimately hired or revealed which of the nation’s many dialects they covered.

Despite Facebook’s public promises and many internal reports on the problems, the rights group Global Witness said the company’s recommendation algorithm continued to amplify army propaganda and other content that breaches the company’s Myanmar policies following a military coup in February.

In India, the documents show Facebook employees debating last March whether it could clamp down on the “fear mongering, anti-Muslim narratives” that Prime Minister Narendra Modi’s far-right Hindu nationalist group, Rashtriya Swayamsevak Sangh, broadcasts on its platform.

Slaieh, a journalist in Gaza, had tried to be clever. Like many Palestinians, he’d learned to avoid the typical Arabic words for “martyr” and “prisoner,” along with references to Israel’s military occupation. If he mentioned militant groups, he’d add symbols or spaces between each letter.

Other users in the region have taken an increasingly savvy approach to tricking Facebook’s algorithms, employing a centuries-old Arabic script that lacks the dots and marks that help readers differentiate between otherwise identical letters. The writing style, common before Arabic learning exploded with the spread of Islam, has circumvented hate speech censors on Facebook’s Instagram app, according to the internal documents.

But Slaieh’s tactics didn’t make the cut. He believes Facebook banned him simply for doing his job. As a reporter in Gaza, he posts photos of Palestinian protesters wounded at the Israeli border, mothers weeping over their sons’ coffins, statements from the Gaza Strip’s militant Hamas rulers.

Criticism, satire and even simple mentions of groups on the company’s Dangerous Individuals and Organizations list — a docket modeled on the U.S. government equivalent — are grounds for a takedown.

“We were incorrectly enforcing counterterrorism content in Arabic,” one document reads, noting the current system “limits users from participating in political speech, impeding their right to freedom of expression.”

The Facebook blacklist includes Gaza’s ruling Hamas party, as well as Hezbollah, the militant group that holds seats in Lebanon’s Parliament, along with many other groups representing wide swaths of people and territory across the Middle East, the internal documents show, resulting in what Facebook employees describe in the documents as widespread perceptions of censorship.

“If you posted about militant activity without clearly condemning what’s happening, we treated you like you supported it,” said Mai el-Mahdy, a former Facebook employee who worked on Arabic content moderation until 2017.

In response to questions from the AP, Facebook said it consults independent experts to develop its moderation policies and goes “to great lengths to ensure they are agnostic to religion, region, political outlook or ideology.”

“We know our systems are not perfect,” it added.

The company’s language gaps and biases have led to the widespread perception that its reviewers skew in favor of governments and against minority groups.


When it came to looking into the abuse of domestic workers in the Middle East, internal Facebook documents acknowledged that engineers primarily focused on posts and messages written in English. The flagged-words list did not include Tagalog, the major language of the Philippines, where many of the region’s housemaids and other domestic workers come from.


“We told Facebook: Do you want people to convey their experiences on social platforms, or do you want to shut them down?” said Husam Zomlot, the Palestinian envoy to the United Kingdom, who recently discussed Arabic content suppression with Facebook officials in London. “If you take away people’s voices, the alternatives will be uglier.”
 

hanimmal

Well-Known Member
https://apnews.com/article/the-facebook-papers-covid-vaccine-misinformation-c8bbc569be7cc2ca583dadb4236a0613
WASHINGTON (AP) — In March, as claims about the dangers and ineffectiveness of coronavirus vaccines spun across social media and undermined attempts to stop the spread of the virus, some Facebook employees thought they had found a way to help.

By altering how posts about vaccines are ranked in people’s newsfeeds, researchers at the company realized they could curtail the misleading information individuals saw about COVID-19 vaccines and offer users posts from legitimate sources like the World Health Organization.

“Given these results, I’m assuming we’re hoping to launch ASAP,” one Facebook employee wrote, responding to the internal memo about the study.

Instead, Facebook shelved some suggestions from the study. Other changes weren’t made until April.

When another Facebook researcher suggested disabling some comments on vaccine posts in March until the platform could do a better job of tackling anti-vaccine messages lurking in them, that proposal was ignored at the time.

Critics say the reason Facebook was slow to take action on the ideas is simple: The tech giant worried it might impact the company’s profits.

“Why would you not remove comments? Because engagement is the only thing that matters,” said Imran Ahmed, the CEO of the Center for Countering Digital Hate, an internet watchdog group. “It drives attention and attention equals eyeballs and eyeballs equal ad revenue.”

In an emailed statement, Facebook said it has made “considerable progress” this year with downgrading vaccine misinformation in users’ feeds.

Facebook’s internal discussions were revealed in disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. The redacted versions received by Congress were obtained by a consortium of news organizations, including The Associated Press.

The trove of documents shows that in the midst of the COVID-19 pandemic, Facebook carefully investigated how its platforms spread misinformation about life-saving vaccines. They also reveal rank-and-file employees regularly suggested solutions for countering anti-vaccine content on the site, to no avail. The Wall Street Journal reported on some of Facebook’s efforts to deal with anti-vaccine comments last month.

Facebook’s response raises questions about whether the company prioritized controversy and division over the health of its users.


“These people are selling fear and outrage,” said Roger McNamee, a Silicon Valley venture capitalist and early investor in Facebook who is now a vocal critic. “It is not a fluke. It is a business model.”

Typically, Facebook ranks posts by engagement — the total number of likes, dislikes, comments, and reshares. That ranking scheme may work well for innocuous subjects like recipes, dog photos, or the latest viral singalong. But Facebook’s own documents show that when it comes to divisive public health issues like vaccines, engagement-based ranking only emphasizes polarization, disagreement, and doubt.
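(To make that distinction concrete, here is a minimal, purely illustrative sketch in Python; the fields, weights and trust scores are invented for the example and are not Facebook’s actual ranking code.)

```python
# Illustrative contrast between engagement-based and trust-based feed
# ranking. All weights and scores are invented; not Facebook's algorithm.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    reshares: int
    source_trust: float  # hypothetical 0-1 credibility score

def engagement_score(p: Post) -> float:
    # More reactions of any kind rank higher, regardless of accuracy
    return p.likes + 2 * p.comments + 3 * p.reshares

def trust_score(p: Post) -> float:
    # Source credibility dominates; engagement is only a tie-breaker
    return p.source_trust * 1000 + engagement_score(p) * 0.01

posts = [
    Post("vaccine scare claim", likes=900, comments=400, reshares=300,
         source_trust=0.1),
    Post("WHO vaccine guidance", likes=120, comments=30, reshares=40,
         source_trust=0.95),
]

# Engagement ranking surfaces the divisive post first...
print([p.text for p in sorted(posts, key=engagement_score, reverse=True)])
# ...while trust ranking surfaces the authoritative one first.
print([p.text for p in sorted(posts, key=trust_score, reverse=True)])
```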

To study ways to reduce vaccine misinformation, Facebook researchers changed how posts are ranked for more than 6,000 users in the U.S., Mexico, Brazil, and the Philippines. Instead of seeing posts about vaccines that were chosen based on their popularity, these users saw posts selected for their trustworthiness.

The results were striking: a nearly 12% decrease in content that made claims debunked by fact-checkers and an 8% increase in content from authoritative public health organizations such as the WHO or U.S. Centers for Disease Control. Those users also had a 7% decrease in negative interactions on the site.

Employees at the company reacted to the study with exuberance, according to internal exchanges included in the whistleblower’s documents.

“Is there any reason we wouldn’t do this?” one Facebook employee wrote in response to an internal memo outlining how the platform could rein in anti-vaccine content.

Facebook said it did implement many of the study’s findings — but not for another month, a delay that came at a pivotal stage of the global vaccine rollout.

In a statement, company spokeswoman Dani Lever said the internal documents “don’t represent the considerable progress we have made since that time in promoting reliable information about COVID-19 and expanding our policies to remove more harmful COVID and vaccine misinformation.”

The company also said it took time to consider and implement the changes.

Yet the need to act urgently couldn’t have been clearer: At that time, states across the U.S. were rolling out vaccines to their most vulnerable — the elderly and sick. And public health officials were worried. Only 10% of the population had received their first dose of a COVID-19 vaccine. And a third of Americans were thinking about skipping the shot entirely, according to a poll from The Associated Press-NORC Center for Public Affairs Research.

Despite this, Facebook employees acknowledged they had “no idea” just how bad anti-vaccine sentiment was in the comments sections on Facebook posts. Company research in February found that as much as 60% of the comments on vaccine posts were anti-vaccine or vaccine-reluctant.

“That’s a huge problem and we need to fix it,” the presentation on March 9 read.

Even worse, company employees admitted they didn’t have a handle on catching those comments. And if they did, Facebook didn’t have a policy in place to take the comments down. The free-for-all was allowing users to swarm vaccine posts from news outlets or humanitarian organizations with negative comments about vaccines.

“Our ability to detect (vaccine hesitancy) in comments is bad in English — and basically non-existent elsewhere,” another internal memo posted on March 2 said.

Los Angeles resident Derek Beres, an author and fitness instructor, sees anti-vaccine content thrive in the comments every time he promotes immunizations on his accounts on Instagram, which is owned by Facebook. Last year, Beres began hosting a podcast with friends after they noticed conspiracy theories about COVID-19 and vaccines were swirling on the social media feeds of popular health and wellness influencers.


Earlier this month, an AP-NORC poll found that most Americans blame social media companies, like Facebook, and their users for misinformation.

But Ahmed said assigning Facebook a share of the blame doesn’t go far enough.

“Facebook has taken decisions which have led to people receiving misinformation which caused them to die,” Ahmed said. “At this point, there should be a murder investigation.”
 

hanimmal

Well-Known Member
https://www.washingtonpost.com/outlook/2021/10/28/misinformation-spanish-facebook-social-media/
The release of internal Facebook documents showing that the platform isn’t doing enough to stop a flood of lies and misinformation has sparked outrage nationwide. As bad as these problems are in English, though, they are even worse in other languages: Facebook has admitted its platform was used to incite violence against the Rohingya in Myanmar, and in the Philippines, the site helped fuel a vicious drug war and attacks on dissident journalists. Social media platforms are allowing far more misinformation to spread in other languages than they are in English.

But some of the scariest misinformation online is spreading right here in the United States — in Spanish.

I worked at Google from 2015 to 2018, and I saw the power the Internet has to foster community, keep family connected and help small businesses. It can even fuel social movements — from Cuba’s most recent protests to police accountability. Yet I also saw firsthand how many of the platforms use the shiny possibility of the Internet as a shield to hide the depths of what happens on the dark side. We are living with the consequences of years of inaction, which have yielded a mass shooting of Latinos in El Paso, a literal insurrection and deaths from anti-vaccine misinformation.

Latino communities maintain strong connections across Latin America; the result is an entire continent of Spanish-language misinformation largely unchecked by the platforms. Latinos are more susceptible to misinformation simply because of how much time we are spending online — twice as much on YouTube as non-Latino adults, for example, according to the latest research by Equis, the organization where I work, which is dedicated to researching the Latino electorate. Two-thirds of Latinos treat YouTube as a primary source for their news and information about politics and elections. Half of Latinos in the United States use WhatsApp, the Facebook-owned messaging platform, more than any other ethnic or racial group in the country.

Spanish-language misinformation narratives often start on Facebook or YouTube, but then conversations or viral content move to closed WhatsApp groups where there’s less of a chance for fact-checkers to intervene. Even with all the misinformation spreading on WhatsApp, Facebook founder and chief executive Mark Zuckerberg still opposed having a “voting information center” for Spanish speakers on the platform ahead of the 2020 election because he thought it was not “politically neutral,” according to The Washington Post. (Facebook spokesman Andy Stone said this week that “this is false” and that WhatsApp had launched bilingual campaigns last year about voter registration information and fact-checking. The company did not answer a Washington Post editor’s request for comment for this story.)

Many Spanish-language social media pages and groups are cesspools, enabling smugglers to target desperate migrants and refugees and spreading harmful covid-19 and vaccine misinformation as fast as your tia’s “Dios Te Bendiga” meme. These tech platforms don’t just spread racist hate speech targeting Latinos; they’re also frequently spreading racial tropes that perpetuate colorism and anti-blackness, which help drive a wedge between Latino and Black communities.

Our research shows that social media networks are doing a poor job of addressing Spanish misinformation, with less moderation and harmful posts left up longer than in English. Facebook still has Spanish-language posts active today from November 2020 that promote election lies with no warning labels. Facebook and YouTube both announced policies to remove or restrict QAnon content, but it continued to spread in Spanish. The platforms allowed content to stay up for weeks until we flagged it for them — and they still refused to take some down.

Facebook pages we were tracking last year spread the lie that dead people voted in Nevada in the 2020 election, a claim Facebook’s fact-checking partners rated false multiple times. The pages posting the claim in English had the posts labeled as “false information.” One Spanish post with hundreds of shares still has no label.

More recently, we’ve seen that Facebook will flag vaccine misinformation content in English, but the same content in Spanish takes days to get flagged, if it ever does. The online activist group Avaaz found Facebook failed to issue warning labels on 70 percent of misinformation in Spanish, compared to only 29 percent in English. It isn’t just Spanish, either: In whistleblower Frances Haugen’s testimony before Congress, she revealed that 87 percent of misinformation spending is on English, but only about 9 percent of the users are English speakers. Haugen exposed how the company’s profit incentives prompt it not to offer the same safety systems for every language on the platform or every country where Facebook is used: “Every time Facebook expands to a new one of these linguistic areas, it costs just as much, if not more, to make the safety systems for that language as it did to make English or French,” she told “60 Minutes.” “Because each new language costs more money but there’s fewer and fewer customers. And so, the economics just doesn’t make sense for Facebook to be safe in a lot of these parts of the world."


When a soccer player collapsed in cardiac arrest during an international match this year, anti-vaccine social media accounts jumped on it, falsely claiming the incident was related to the coronavirus vaccine (the player hadn’t even been vaccinated, his coach said). Several Facebook pages we found sharing the lie in English were almost immediately labeled as false. But the exact same lie posted on a prominent disinformation account in Spanish was left up for days — an endless amount of time for disinformation — receiving hundreds of shares before being labeled.


Advocates have been pushing for a set of solutions the platforms could adopt — including creating a C-suite position to oversee Spanish-language content moderation, expanding Spanish-language moderation capacity and being more transparent about their moderation systems and processes — but with little to no success. Facebook and the other platforms have repeatedly shown that they won’t solve this problem on their own. That lack of self-correction has many advocates now calling on Congress and the Biden administration not only to demand answers but to regulate the platforms or, in the case of Facebook and WhatsApp, break them up. A Senate hearing specifically on Spanish-language disinformation is likely.

 

hanimmal

Well-Known Member
https://www.rawstory.com/facebook-fake-news-2655472261/
In a column for the conservative Bulwark, senior editor Jim Swift claimed that Facebook is still allowing fake news about an election to be disseminated to targeted users despite executives insisting that they have cleaned up their act after the 2020 presidential election.

Case in point: Swift began by explaining that he fits the profile of a preferred target for conservative-leaning news producers -- real or not -- and that Tuesday's election in Virginia is the current hot topic.

Explaining, "I'm what you would think would be a prototypical Glenn Youngkin voter: a white male around 40 with a near-perfect voting frequency and obvious signs that throughout my voting life, I've been a Republican. That is the kind of publicly accessible information that can be hoovered up by anyone building databases for advertising campaigns," Swift claimed he was on the receiving end of an advertisement on Facebook from "Old Dominion News."

Following the link, he claims it took him down a rabbit hole that led him to a supporter of Donald Trump who has been pushing the "Big Lie."

"It's a page that had fewer than 20 likes as of Sunday night—which is to say, it was little-read and obscure. It is clearly a fake publication trying to push people like me to turn out to vote," he wrote before adding, "Despite the 'publication' having such little influence, the ad has 840 comments and 209 shares."

"First, the website that the 'news' article points to, VoteRef.com, resembles the mailers that political parties and organizations have used in previous election years and again this year. The idea of these mailers—targeting algorithmically selected voters—is to try to shame the recipients into voting," he explained. "Maybe the guilt over seeing your own past voting record would get to you; maybe the fear that your neighbors can see that you haven't voted will get to you; either way, the idea is for you to tie yourself to the voting booth."

Digging deeper, Swift discovered that the VoteRef website belongs to the "Voter Reference Foundation," whose executive director, Gina Swoboda, worked for Donald Trump as a director of elections in Arizona.

As Adam Klasfeld of Law & Crime has reported, Swoboda is an "apparent Sharpiegater, attesting to a conspiracy theory that has been debunked by officials' testimony and abandoned by the Trump campaign's own lawyer who now claims it's not not (sic) to their case."

As for Swift, he claimed it is likely that supporters of Democratic nominee Terry McAuliffe are also using Facebook in sketchy ways to sway voters, but that misses the big picture.

"One can't read too much into either the Youngkin or McAuliffe campaigns (or their allies) in Virginia based on the use of this sort of eleventh-hour Facebook advertising and astroturfed news outlet. This fake publication—and the hundreds of other such fakes out there, and the ads promoting them—are reminders of just how much money is mysteriously sloshing around on the hidden, fake-news, data-driven side of our politics," he wrote before concluding, "Cambridge Analyticamay be dead and gone, but its successors are still clunking along."

You can read his whole column here.
 

hanimmal

Well-Known Member
Came across this one today and thought you would like it...

Really good watch. Especially about how Trevor Noah tried to quit and kept getting sucked back in with notifications that he never had before.

I do wish they would really put the time in to understand the power of trolls though.

They keep talking about 'the algorithm' but keep missing how trolls can manipulate that real-time analysis, pushing up the narratives that then get amplified by it.
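Here's a toy sketch of what I mean, with every number invented: when a feed ranks posts on early engagement velocity, a few dozen coordinated accounts piling on in the first hour can outrank organic posts with far bigger real audiences.

```python
# Toy model (all numbers invented) of trolls gaming an engagement-ranked
# feed: ranking by first-hour engagement lets a small coordinated group
# outrank organic content with a much larger real audience.

def velocity_rank(posts):
    # Rank by interactions accumulated in the first hour after posting
    return sorted(posts, key=lambda p: p["first_hour_engagement"],
                  reverse=True)

organic = {"name": "organic post", "first_hour_engagement": 40,
           "real_audience": 100_000}
brigaded = {"name": "troll-boosted post", "first_hour_engagement": 0,
            "real_audience": 500}

# 50 coordinated accounts each leave a like, a comment and a reshare
troll_accounts = 50
brigaded["first_hour_engagement"] += troll_accounts * 3  # 150 fake signals

feed = velocity_rank([organic, brigaded])
print([p["name"] for p in feed])
# ['troll-boosted post', 'organic post']: 50 accounts beat 100k real reach
```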
 

hanimmal

Well-Known Member
Really fucked up ad popped up on my YouTube today pushing propaganda to kids while scaremongering 'freedom-loving parents'.

Step 1. Piss and moan about 'CRT' that is not being taught to kids.

Step 2. Slide in and sell these idiots their next con.

 

hanimmal

Well-Known Member

https://apnews.com/article/joe-biden-new-york-manhattan-subpoenas-8a9e60d13bbb3d19a6adc7922809cd1f
NEW YORK (AP) — Federal agents searched the New York homes of people tied to the conservative group Project Veritas months after the group received a diary that a tipster claimed belonged to President Joe Biden’s youngest daughter, its leader said Friday.

In a video posted on YouTube, James O’Keefe said his organization had received a grand jury subpoena and said current and former Project Veritas employees had their homes searched by federal agents.

The conservative group is known for using hidden cameras and hiding identities to try to ensnare journalists in embarrassing conversations and to reveal supposed liberal bias.

An FBI spokesman confirmed that agents had conducted “court authorized law enforcement activity” at an apartment in Manhattan and an address in Mamaroneck in Westchester County. The U.S. attorney’s office in Manhattan declined to comment.

O’Keefe said his group never threatened anyone or “engaged in any illegal conduct.” He said there was “no doubt Project Veritas acted appropriately at each and every step.”

In the video, O’Keefe says that his group was contacted late last year by “tipsters” who had claimed to have a copy of Ashley Biden’s diary. The tipsters said the diary had been “abandoned in a room” after she left the room, O’Keefe said, adding that the tipsters said the diary had “explosive allegations against then-candidate Joe Biden.”

He said the group’s lawyers had been in contact with the Justice Department before the searches, which were first reported by The New York Times, and had “conveyed unassailable facts that demonstrate Project Veritas’ lack of involvement in criminal activity and/or criminal intent.”

O’Keefe said the tipsters who had provided the diary had contacted the group and at the time said they were also negotiating with another organization to sell the information. Ultimately, Project Veritas did not publish information from the diary, in part because the group could not determine if it belonged to Ashley Biden, or if the information was authentic, he said.

O’Keefe said his group tried to return the diary to a lawyer for Ashley Biden and later provided it to law enforcement, though he did not specify which agency the group contacted.
 

hanimmal

Well-Known Member
https://apnews.com/article/path-to-radicalization-us-pakistan-ee6744ea3fa9c7adc5cf15ed058a75f5
In the months before he was charged with storming the Capitol, Doug Jensen was sharing conspiracy theories he’d consumed online. But it hadn’t always been that way, says his brother, who recalls how he once posted the sort of family and vacation photos familiar to nearly all social media users.

A world away, Wahab hadn’t always spent his days immersed in jihadist teaching. The product of a wealthy Pakistani family and the youngest son of four, he was into cars and video games, had his own motorcycle, even studied in Japan.

No two ideologues are identical. No two groups are made up of monolithic clones. No single light switch marks the shift to radicalism. The gulf between different kinds of extremists — in religious and political convictions, in desired world orders, in how deeply they embrace violence in the name of their cause — is as wide as it is obvious.

But to dwell only on the differences obscures the similarities, not only in how people absorb extremist ideology but also in how they feed off grievances and mobilize to action.

For any American who casts violent extremism as a foreign problem, the Jan. 6 Capitol siege held up an uncomfortable mirror, showing that the United States harbors the same conditions for fantastical thinking and politically motivated violence as any other society.

The Associated Press set out to examine the paths and mechanics of radicalization through case studies on two continents: a 20-year-old man rescued from a Taliban training camp on Afghanistan’s border, and an Iowa man whose brother watched him fall sway to nonsensical conspiracy theories and ultimately play a visible role in the mob of Donald Trump loyalists that stormed the Capitol.

Two places, two men, two very different stories as seen by two close relatives. But strip away the ideologies for a moment, says John Horgan, a researcher of violent extremism. Instead, look at the psychological processes, the pathways, the roots, the experiences.

“All of those things,” Horgan says, “tend to look far more similar than they are different.”

THE AMERICAN

America met Doug Jensen via a video that ricocheted across the Internet, turning an officer into a hero and laying bare the mob mentality inside the Capitol that day.

Jensen is the man in a dark stocking cap, a black “Trust the Plan” shirt over a hooded sweatshirt, front and center in a crowd of rioters chasing Eugene Goodman, a Capitol Police officer, up two flights of stairs. One prominent picture shows him standing feet from an officer, arms spread wide, mouth agape.

When it was all over, he’d tell the FBI that he was a “true believer” in QAnon, that he’d gone to Washington because Q and Trump had summoned “all patriots” and that he’d expected to see Vice President Mike Pence arrested. He’d say he pushed his way to the front of the crowd because he wanted “Q” to get the credit for what was about to happen.

He’d tell his brother the photos were staged, that the police had practically let him in through the front door (prosecutors say he climbed a wall and entered through a broken window) and that some officers even took selfies with the crowd.

William Routh of Clarksville, Arkansas, had an unsettled feeling about that day even before the riot and says he cautioned his younger brother. “I said, if you go down there and you’re going to do a peaceful thing, then that’s fine. But I said keep your head down and don’t be doing something stupid.”


In interviews with the AP days and months after his younger brother’s arrest, Routh painted Jensen — a 42-year-old Des Moines father of three who’d worked as a union mason laborer — as a man who enjoyed a pleasant if unextraordinary American existence. He says he took his family to places like the Grand Canyon and Yellowstone National Park, attended his children’s sporting events, worked to pay for a son’s college education, made anodyne Facebook posts.

“I have friends that I speak to constantly that have conspiracy theories,” Routh said, “but this was a shock to me more than anything, because I would not have thought this from my brother Doug, because he’s a very good, hardworking family man and he has good values.”

Exactly who Jensen is, and how much knowledge he had of the world around him, depends on who’s talking.

A Justice Department memo that argued for Jensen’s detention cites a criminal history and his eagerness to drive more than 1,000 miles to “hear President Trump declare martial law,” then to take it into his own hands when no proclamation happened. It notes that when the FBI questioned him, he said he’d gone to Washington because “Q,” the movement’s amorphous voice, had forecast that the “storm” had arrived.

His lawyer, Christopher Davis, countered in his own filing by essentially offering Jensen up as a dupe, a “victim of numerous conspiracy theories” and a committed family man whose initial devotion to QAnon “was its stated mission to eliminate pedophiles from society.”

Six months after the insurrection, the argument resonated with a judge who agreed to release Jensen on house arrest as his case moved forward. The judge, Timothy Kelly, cited a video in which Jensen referred to the Capitol building as the White House and said he didn’t believe Jensen could have planned an attack in advance “when he had no basic understanding of where he even was that day.”

Yet less than two months after he was released, Jensen was ordered back to jail for violating the conditions of his freedom. Though barred from accessing a cellphone, he watched a symposium sponsored by MyPillow CEO Mike Lindell that offered up false theories that the presidential election’s outcome was changed by Chinese hackers. A federal officer making the first unannounced visit to Jensen found him in his garage using an iPhone to watch news from Rumble, a streaming platform popular with conservatives.

Davis, who weeks earlier had asserted that his client “feels deceived, recognizing that he bought into a pack of lies,” likened his client’s behavior this time to an addiction. The judge was unmoved.

“It’s now clear that he has not experienced a transformation and that he continues to seek out those conspiracy theories that led to his dangerous conduct on Jan. 6,” Kelly said. “I don’t see any reason to believe that he has had the wake-up call that he needs.”

Precisely when and how Jensen came to absorb the conspiracies that led him to the Capitol is bewildering to Routh, who says he took Jensen under his wing during a challenging childhood that included stays in foster care and now feels compelled, as his oldest living relative, to speak on his behalf.


THE PAKISTANI

Wahab had it all. The youngest son of four from a wealthy Pakistani family, he spent his early years in the United Arab Emirates and for a time in Japan, studying. Wahab liked cars, had his own motorcycle and was crazy about video games.

His uncle, who rescued the 20-year-old from a Taliban training camp on Pakistan’s border with Afghanistan earlier this year, asked that his full name not be used because in the northwest where the family lives, militants have deep-reaching tentacles. But more than that, he worries about his family’s reputation because of its prominence. He agreed to be quoted using his middle name, Kamal.

The family has business interests scattered across the globe. Kamal is one of five brothers who run the family-owned import/export conglomerate. Each brother in turn has groomed and primed his sons for the business. Wahab’s older brothers are already running overseas branches of the family business.


Says Cohen: “People seem to not be able to get enough of a conspiracy theory, but they’re never quite satisfied or really reassured.”
 

CatHedral

Well-Known Member
Really liked that sentence towards the bottom, "It gives us answers that are much more appealing, emotionally, than the real answer". Anything to keep from looking into the mirror.
That, in a nutshell, is the sorry state of our society and discourse. We have years of hard work ahead of us if we are to unite again.
 