AP: Cyborgs, Trolls and bots: A guide to online misinformation

Status
Not open for further replies.

Fogdog

Well-Known Member
I have, kind of.....of course it's not ALL about myself. That would be extremely challenging 'cuz of that whole no stoner is an island thing.

My not-quite-all-about-myself masterpiece is a truly hilarious, engaging, uplifting, heartbreaking, witty, exquisitely-written monument to throbbing, rampaging badassness.

I'm quite interesting and I've put 7 digits in the bank writing in my spare time.

And, here we are.
No, you should write a book that is all about yourself.
 

Yowza McChonger

Active Member
No, you should write a book that is all about yourself.
Consider it done. Or, more precisely, consider the consideration to consider actually doing it done. Open up WIDE, world, for:

McChonger: The Fogdog Chronicles

Aw, fuck. It ain't all about me no more now that I done mentioned you, is it? Oh, cruel fate........
 

hanimmal

Well-Known Member
https://apnews.com/article/joe-biden-donald-trump-race-and-ethnicity-media-misinformation-9696392bf389ba8ca3441d2314abcefa
WASHINGTON (AP) — Tom Perez was a guest on a Spanish-language talk radio show in Las Vegas last year when a caller launched into baseless complaints about both parties, urging Latino listeners to not cast votes at all.

Perez, then chairman of the Democratic Party, recognized many of the claims as talking points for #WalkAway, a group promoted by a conservative activist, Brandon Straka, who was later arrested for participating in the deadly Jan. 6 insurrection at the U.S. Capitol.

In the run-up to the November election, that call was part of a broader, largely undetected movement to depress turnout and spread disinformation about Democrat Joe Biden among Latinos, promoted on social media and often fueled by automated accounts.

The effort showed how social media and other technology can be leveraged to spread misinformation so quickly that those trying to stop it cannot keep up. There were signs that it worked as Donald Trump swung large numbers of Latino votes in the 2020 presidential race in some areas that had been Democratic strongholds.

Videos and pictures were doctored. Quotes were taken out of context. Conspiracy theories were fanned, including that voting by mail was rigged, that the Black Lives Matter movement had ties to witchcraft and that Biden was beholden to a cabal of socialists.

That flow of misinformation has only intensified since Election Day, researchers and political analysts say, stoking Trump’s baseless claims that the election was stolen and false narratives around the mob that overran the Capitol.

More recently, it has morphed into efforts to undermine vaccination efforts against the coronavirus.

“The volume and sources of Spanish language information are exceedingly wide-ranging and that should scare everyone,” Perez said.

The funding and the organizational structure of this effort are not clear, although the messages show a fealty to Trump and opposition to Democrats.

A nonpartisan academic report released this past week said most false narratives in the Spanish-language community “were translated from English and circulated via prominent platforms like Facebook, Twitter and YouTube, as well as in closed group chat platforms like WhatsApp, and efforts often appeared coordinated across platforms.”

“The most prominent narratives and those shared were either closely aligned with or completely repurposed from right-wing media outlets,” said the report by researchers from Stanford University, the University of Washington, the social network analysis firm Graphika and Atlantic Council’s DFRLab, which studies disinformation online around the world.

Straka said via email that nothing from the #WalkAway Campaign ”encourages people not to vote.” He declined further comment.

While much of the material is coming from domestic sources such as Spanish-speaking social media “influencers,” it is increasingly originating on online sites in Latin America, those studying it closely say.

Misinformation originally promoted in English is translated in Colombia, Brazil, Mexico, Nicaragua and elsewhere, then reaches Hispanic voters in the U.S. via communications from their relatives in those countries. That is often shared via private WhatsApp and Facebook chats and text chains.

“There’s this growing concern that this is very much part of the immigrant and first-generation information environment for a lot of Latinos in the United States,” said Dan Restrepo, former senior director for Western Hemisphere affairs at the National Security Council. “A lot of it is seemingly coming through family and other group chats, whose origins are in-region rather than the United States.”

WhatsApp and similar services are popular among Hispanics in the U.S. because the services allow for communicating with family and friends in Latin America free over the internet, avoiding costly long-distance charges. While those originating such campaigns in Latin America often cannot vote in the U.S., they can influence family in this country who do.

YouTube, Facebook and other social media companies have cracked down on false claims since before the election and intensified such efforts after online conspiracy theories helped incite the Trump loyalists who attacked the Capitol.

“We are running the largest online vaccine information campaign in history on our apps in dozens of languages, including Spanish,” said Kevin McAlister, a spokesman for Facebook which owns WhatsApp and Instagram. “We’ve removed millions of pieces of content on Facebook and Instagram that violate our COVID-19 and vaccine misinformation policies, and labeled more than 167 million pieces of COVID-19 content, including Spanish-language content.”

WhatsApp now limits users’ ability to send highly forwarded messages to more than one chat at a time; that led to a 70% reduction in the number of such messages. The company also partnered with Google to provide a feature allowing users to search the internet for the contents of forwarded messages to better check the veracity.
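The forwarding cap described above can be modeled as a simple rule: once a message has been forwarded many times, it may only be sent on to one chat per action. The sketch below is a hypothetical illustration under assumed names and an assumed threshold of five, not WhatsApp's actual implementation.

```python
# Hypothetical sketch of a forward-limit rule: once a message has been
# forwarded enough times to count as "highly forwarded," it can only be
# sent on to a single chat at a time. The threshold is an assumption.

HIGHLY_FORWARDED_THRESHOLD = 5  # assumed cutoff for "highly forwarded"

def allowed_recipient_chats(forward_count: int, requested_chats: int) -> int:
    """Return how many chats a forward may be sent to in one action."""
    if forward_count >= HIGHLY_FORWARDED_THRESHOLD:
        return min(requested_chats, 1)  # capped to a single chat
    return requested_chats

print(allowed_recipient_chats(2, 10))  # fresh message: all 10 chats -> 10
print(allowed_recipient_chats(7, 10))  # highly forwarded: only 1 chat -> 1
```

The point of such a cap is friction, not prevention: a viral message can still spread, but each hop now costs a separate manual action, which is what slowed highly forwarded messages by the reported 70%.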

Still, those who monitor Spanish-language content online describe an information void, or dearth of reliable sources with large enough followings to consistently debunk falsehoods.

“The Spanish-language space has been a bit of a blind spot for researchers for awhile now,” said Bret Schafer, a disinformation expert at the Alliance for Securing Democracy, which works to combat online efforts to weaken democratic institutions. “This field exploded after 2016 and, the vast majority of us who are in it, more of us speak Russian than Spanish.”

With the election behind them, the proponents of these campaigns are now trying to spread chaos more broadly, notably by trying to create doubt about vaccines. That push is especially dangerous because Latinos have higher chances of being infected by, hospitalized from and dying of COVID-19 than do whites, African Americans or Asian Americans.

Maria Teresa Kumar, president and CEO of Voto Latino, which works to promote Hispanic voting and political engagement nationwide, has personal experience.

Her mother, Mercedes Vegvary, runs an elderly care facility in Northern California and spent weeks planning to forgo getting vaccinated against COVID-19 because a friend at a gym had showed her a video circulating on social media. In it, a woman wearing a lab coat and claiming to be a pharmacist in El Salvador says in Spanish that such vaccines aren’t safe for use in humans.

A video with a similar message appears to have originated in Panama, and another came from the Middle East but had been translated into Spanish. All moved into the U.S. via text chains or internet messaging from people with family and friends in Latin America, Kumar said.

One chain features doctored video of the late, Nobel Prize-winning chemist Kary Mullis purportedly dismissing Dr. Anthony Fauci, the top U.S. infectious disease expert, as a “phony who knows nothing about virology.” Another shows a crowded street that it claims is full of Italians flouting social distancing and mask-wearing rules, over the Spanish hashtag “#yonomevacuno,” or “I won’t be vaccinated.”



Brazilian Americans, for instance, have received a manipulated video of a Democratic presidential primary debate, in which Biden suggested he would raise $20 billion to help Brazil battle Amazon deforestation, edited to make it sound like Biden is ready to send U.S. troops into that country.

Misinformation has continued at such a furious pace post-election that 20-plus Latino progressive groups drafted a January letter declaring “No Más Lies, Disinformation and White Supremacy” that urged Spanish-language radio stations and other outlets in Florida to crack down on spreading it. Pérez-Verdía, one of the signees, said afterward that “it hasn’t dropped off. I consider now that it’s actually doubled down.”

In response to Russian meddling in the 2016 election, Congress approved $160 million for the State Department to lead efforts across the federal government to identify and counter foreign propaganda. Still, a 2018 report by the Senate Intelligence Committee found that such efforts had only increased following Election Day 2016 — a postelection pattern that is consistent with the one experts have tracked in Spanish after 2020′s vote.

So far, Congress isn’t investigating Spanish-language misinformation to see if its origins spread beyond Latin America.

“Was this a deliberate effort to suppress the votes of specific demographic groups? Was this orchestrated and funded by dark money groups or other organized actors?” said Sen. Mark Warner, D-Va., chairman of the Senate Intelligence Committee. “These are all legitimate questions.”
 

hanimmal

Well-Known Member
https://apnews.com/article/aurora-obituaries-colorado-denver-c4b2454d5889193d8555c94477aea624
DENVER (AP) — A 12-year-old Colorado boy who was hospitalized after his family said he tried a TikTok challenge that dared people to choke themselves until they lose consciousness has died.

Joshua Haileyesus died last Saturday, according to an obituary published online by Olinger Hampden Mortuary & Cemetery in Denver.

He was admitted to Children’s Hospital Colorado on March 22 and was taken off life support after 19 days, according to a GoFundMe page dedicated to raising funds for Joshua’s medical, and now, funeral expenses.

Haileyesus’ twin brother found him passed out in the bathroom of the family’s home in the suburb of Aurora on March 22. KCNC-TV reported that doctors told relatives the boy was brain dead.

Joshua’s father, Haileyesus Zerihun, told the station that a few days before his son was found unconscious, he bragged to his brother about being able to hold his breath for a minute. The so-called “blackout challenge” on TikTok dared users to choke themselves until they pass out.

“Unbeknownst to his parents, Joshua had been playing this dangerous game completely unaware of the risks involved,” the GoFundMe page said.

Joshua’s family hopes their story will inspire others to talk about any games or challenges that could cause serious injury.

“I don’t know why people would do such things,” Zerihun told KCNC. “This is not a joke. This is not a thing to play with.”

TikTok has expressed “profound sympathies” for the boy and his family.

“At TikTok, we have no higher priority than protecting the safety of our community, and content that promotes or glorifies dangerous behavior is strictly prohibited and promptly removed to prevent it from becoming a trend on our platform,” the company said in a statement last month.

Searches on TikTok for #blackoutchallenge returned no results. A note said the phrase may be associated with behavior or content that violates the site’s guidelines.

A funeral service for Joshua is scheduled for Monday.
 

hanimmal

Well-Known Member

I don't normally listen to these people, so I don't know much about them. But I'm just happy to see this idiot banned. It's interesting that it sounded like it was about trolling with socks.
 

hanimmal

Well-Known Member
That is crazy about that troll (Mike Adams) that @HGCC mentioned.
https://www.rollitup.org/t/what-has-trump-done-to-this-country.1018837/post-16287227

Going down the rabbit hole, I found this troll was getting mentioned by TV propagandist Dr. Oz, who was pushing people to his website. Turns out they are also using troll farms to amplify their dangerous scams.

https://www.nbcnews.com/tech/tech-news/troll-farms-macedonia-philippines-pushed-coronavirus-disinformation-facebook-n1218376
One of the largest publishers of coronavirus disinformation on Facebook has been banned from the platform for using content farms from North Macedonia and the Philippines, Facebook said on Friday.

The publisher, Natural News, was one of the most prolific pushers of the viral “Plandemic” conspiracy video, which falsely claimed that the coronavirus is part of an elaborate government plot to control the populace through vaccines, and erroneously claimed that wearing a mask increases the risk of catching the coronavirus.


Facebook said that it had found foreign trolls repeatedly posted content from Natural News, an anti-vaccination news site that frequently posts false coronavirus conspiracy theories about 5G towers and Bill Gates. They also posted content from Natural News' sister websites, NewsTarget and Brighteon, in an effort to artificially inflate their reach.

“We removed these Pages for spammy and abusive behavior, not the content they posted. They misled people about the popularity of their posts and relied on content farms in North Macedonia and the Philippines,” Facebook said in a statement.


Facebook said the actions came as part of its routine enforcement against spam networks. Among other irregular behaviors, Natural News posted its content at an unusually high frequency, attempting to evade rate limits, which effectively tripped Facebook’s spam alarms, the company said.
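The spam alarm the article describes is essentially frequency-based detection: flag an account whose posting rate exceeds a limit within a recent time window. The sketch below is a minimal illustration of that idea; the window size, limit, and class names are assumptions, not Facebook's actual thresholds or code.

```python
# Minimal sketch of sliding-window posting-rate detection, the kind of
# check that an unusually high posting frequency would trip. The limit
# of 10 posts per hour is an illustrative assumption.
from collections import deque

class RateAlarm:
    def __init__(self, max_posts: int, window_seconds: float):
        self.max_posts = max_posts
        self.window = window_seconds
        self.timestamps = deque()  # post times within the current window

    def record_post(self, now: float) -> bool:
        """Record a post at time `now`; return True if it trips the alarm."""
        self.timestamps.append(now)
        # Drop posts that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) > self.max_posts

alarm = RateAlarm(max_posts=10, window_seconds=3600)
# 12 posts in 20 minutes: the 11th and 12th exceed the hourly limit.
flags = [alarm.record_post(t) for t in range(0, 1200, 100)]
print(flags[-1])  # prints True
```

A publisher trying to evade such a limit by spreading posts across many pages is exactly why the enforcement targeted the network of pages rather than any single account.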

Natural News’ official Facebook page was banned from Facebook last year, but the site evaded the ban by posting content on Natural News-branded disinformation groups titled “Amazing Cures” and “GMO Dangers,” which had hundreds of thousands of followers.

After Facebook’s discovery of foreign platform manipulation, the company banned all users from posting links to Natural News and its sister sites across the entirety of the site on May 19.

Natural News is a website owned and operated by Mike Adams, a dietary supplement purveyor who goes by the moniker “The Health Ranger.” Adams’ operation is by far the worst spreader of health misinformation online, according to an NBC News analysis.

Last year, the website hosted the most engaged-with article about cancer on the internet. The April article, “Cancer industry not looking for a cure; they’re too busy making money,” which promoted the baseless conspiracy theory that “Big Pharma” is hiding a known cure for cancer to keep people sick, garnered 5.4 million shares, comments and reactions, mostly on Facebook, according to data compiled through BuzzSumo, a social media analytics company.


Over its 25 years of operation, Natural News has hosted thousands of articles that reject scientific consensus, promote fake cures and spread conspiracy theories. Its articles have also targeted scientists and doctors for harassment and violence. In April, the site had 3.5 million unique viewers, according to the internet analytics company SimilarWeb.

Natural News was banned by Facebook in June 2019 for using “misleading or inaccurate information” to attract engagement, according to a statement from Facebook. Natural News had nearly 3 million followers at the time.

“It’s long overdue,” said David Gorski, an oncologist at Wayne State University who writes about health misinformation and pseudoscience at the website Science Based Medicine. “Natural News and Mike Adams have been as harmful to public discourse on science, medicine, and also politics as Alex Jones, whom he very much resembles.”

Adams did not respond to a request for comment, but addressed the ban on his website on Tuesday, calling it “digital book burning” and urging readers to complain to Facebook and file a civil rights complaint with the Department of Justice.

Avaaz, an activist organization that campaigns against disinformation online, released a study on Friday claiming that Natural News reached new heights after its initial ban last year by creating informal subgroups that spammed the social media network with health disinformation.

“Since the takedown has happened, Natural News has reached hundreds of millions of people by utilizing an entire universe of pages that are pushing this disinformation,” said Luca Nicotra, a researcher and senior campaigner at Avaaz.

Avaaz’s research found that Natural News’ content had more engagement on Facebook than links to the WHO and CDC combined over the last year.

Nicotra called Facebook’s move to ban links to Natural News “a very bold move, and something usually reserved for spammers.” He said he’s “happy” with Facebook’s major step to crack down on what he called a “serial misinformer,” while adding that more work needed to be done to keep users safe.

“What’s needed is a major detox, a systemic solution to quarantine serial health misinformers. If they don’t, their lies and conspiracies could contaminate millions more and threaten the global response to Covid-19,” Nicotra said.
 

hanimmal

Well-Known Member
I found this and it sounds about right to me. It says this kind of militarized trolling started back in 2002.
https://biznology.com/2017/12/teenage-russian-troll-og/
All anyone is talking about these days is how armies of Russian trolls got До́нальд Джон Трамп (Donald John Trump) elected President of the United States. They did this through a unique witchcraft and voodoo that normal mortals cannot resist. How did Влади́мир Влади́мирович Пу́тин (Vladimir Vladimirovich Putin) and his malintent Веб-бригады (web brigades) so easily, cheaply, effortlessly, quickly, and effectively puppet-master our innocent, vulnerable, and naïve online American yokel brains into becoming mindless hordes of racist, sexist, nationalist Nazi deplorables?

Russia isn’t the only country that’s leveraging highly-trained covert operatives with bomb-proof non-disclosure agreements to sneak around online in deep cover, pretending to be other people, genders, ages and emulating the interests, hungers, passions, fears, dreams, and goals of communities that could really benefit the agendas of their clients.

Okay, I wasn’t actually either a teenager or Russian when I was an OG Russian Troll and part of the Russian Troll Army.

In my early and mid-thirties I worked for New Media Strategies (NMS). I sat in an open-plan room occupying a former newsroom, spending all my days under cover, marketing on behalf of Sci-Fi Channel (now SyFy), Buena Vista (now Disney), TomTom, Paramount Pictures, Coca-Cola, McDonalds, Disney, Reebok, EA, RCA, and NBC. And that was just me, from 2002-2006. If you hired NMS you hired the best. Everyone was smart, capable, savvy, clever, discreet, and had world class situational awareness.

The Good Old Days of Paid Trolling

I was part of a super-discreet, amazingly-effective, super-covert, mercenary troll army that lived and worked out of two floors of Rosslyn Twin Tower Two in Arlington, Virginia, the United States called New Media Strategies. While I was employee 13, NMS topped out somewhere north of 120 employees, all of whom were exhaustively trained to promote and protect brands, companies, media networks, movies, shows, series, games, politicians, sports teams, political issues, and anything else that would plausibly be helped by a crack team of Millennials and Digital Natives who were trained with such rigor and held to such standards of discretion and secrecy that it actually felt like I was going through some sort of social media marketing bootcamp held by the FBI Training Academy in Quantico.

But no. The training was expensive, though, and lasted months. I am certain that the reason why NMS was born in Washington, DC, to Pete Snyder, was because DC is the only city where secret-keeping and to-the-deathbed discretion is in our DNA. Other folks from other cities are all about building their own personal brand on the back of their employer; DC’s not like that.

What we did was called word of mouth marketing. We fed exclusive info from our clients and back-channeled it into conversations that were already happening. We spent all day, every day, becoming essential parts of these online communities. I was routinely offered administrative and moderator rights. Rather, my noms de guerre were. Because message boards live and die by post count when it comes to prestige, it didn’t hurt that we were spending quality time (all day, every day, as long as our clients paid us and often between clients) on sometimes relatively small-but-influential homespun online forums.

There were times when we couldn’t get a conversation started, so we would all pile into the conversation we needed happening for an upcoming report and chat with each other.

Each time we posted anonymously on a message board it was called a cyberstrike. 80% of all cyberstrikes were done in order to build trust in the community. Sometimes, online identities would not start cyberstriking on behalf of our clients for months, each one just becoming part of his or her community, sometimes communities. We would do this across message boards, forums, groups, newsgroups, and even the comment sections of relevant blogs.

I left during the Summer of 2006 to join Edelman’s élite digital public affairs team here in Washington, DC, so I never got to see what happened in a Facebook and then Twitter world but I’m pretty sure we didn’t invest our time at all on Friendster or MySpace. While I was always a proponent of transparently pitching bloggers and message board and forum owners on behalf of our very cool clients, that was never the company core, though we did create many a blog and website posing as superfans.

Security Culture

At first, we were just super-careful. Getting burnt was not an option. Later, we implemented a hardware-based IP address anonymizing tool, an Anonymizer appliance that lived in our server room. I think we also tried out proxy servers to allow us to cyberstrike from all corners of the world. No matter how much we ended up leaning on spy-tech gear, it was more about faultlessly maintaining cover, because a mistake would not only burn you but burn 3-24 months of cultivation, and would then probably burn other false names, other resources, and eventually burn back to our clients and even ourselves.

If there was even a hint of “Witch!” or any remote suggestion of getting called out for astroturfing, for not clearly and transparently representing who you are, your true name, your true identity, and your paid-by connection to a brand, political campaign, upcoming movie, or whatever, our entire newsroom full of trolls would be commanded to stop by our Chief Operating Officer: “log out of everything you’re on right now and await further instructions!”

The entire room (actually both floors) would go silent. Brand Managers would meet with the C-Suite, and a very cautious step-by-step plan would be quickly but completely developed before anyone was allowed anywhere near the site of the possible crime, including all other false-name cyberstriking characters that had, at any point, come in contact. And since online communities are very incestuous, this toxicity would be assumed to have spilled over to other boards, groups, and forums with shared topics and interests.

I don’t remember us making up memes, but we did have several amazing graphic designers, so that’s plausible. I do know we would also plant counter-news and counter-ads, and stoke people up whenever their noisiness could either push our client’s agenda forward or hold our client’s adversary’s agenda back.

And none of what we did (I did) at NMS required any special equipment. Anyone with an internet connection can do it. And anyone who can not only be discreet but bring 5, 50, or 500 of their closest allies on board can do it. In fact, very few influential platforms demand true names. Most message boards, forums, and groups, especially reddit and Wikipedia, do not demand that you are who you say you are, and on those two platforms especially, communities are rightfully paranoid about it.

You Too, Could be a Russian Troll

If the only barrier to entry into the world of becoming a Russian Troll Army is a connection to the internet, an Internet-enabled device like a computer or smartphone, some smarts, some discretion, some fearlessness, possibly a shareable spreadsheet to track all your fake users and where they’ve been and what they’ve done, some sort of proxy tool such as Tor (optional and maybe more trouble than it’s worth), and balls, then why would Russia be especially good at this? Sure, they can speak some English, but even native English speakers catch crap for making grammar mistakes.

I am sure if Pete Snyder came up with this brilliant plan back in 1999 there must be an infinite number of little newsrooms around the USA, Canada, and the world. Some persistent and possibly part of the State Department, Homeland Security, the FBI, or the CIA, and some ad hoc based on a passing need or an upcoming election. Though I must hand it to Peter Snyder for coming up with marketing strategies and tactics that compelled me to ditch a decade of high-level technology for a junior gig at NMS, it’s not rocket science and I am sure versions of it are being played out discreetly, elegantly, and effectively, right now–no matter how publicly against it WOMMA professes to be.

I loved working for New Media Strategies (NMS) from 2002-2006, from 32-36. Compared to my colleagues who were 22-26, I was long in the tooth and not remotely a teenage troll. What I was was part of a very smart, elite, highly-trained, Internet-native team of online covert operatives who spent long hours of our micromanaged time anonymously sneaking around message boards and online forums under the protection of false names, noms de plume, noms de guerre. I freaking loved it!

But I don’t do it any more. I haven’t at all since I left NMS in 2006 and I don’t even know if NMS continued doing it themselves since there was a lot of heat from the FTC and organizations like WOMMA, PRSA, and the lot to not astroturf or misrepresent oneself. To me, things like that just mean agencies like NMS just go deeper, darker, and maybe offshore.
 

hanimmal

Well-Known Member
I am really enjoying this website; they have some great links that I am still reading through, and some really good information clearly laid out. I do think they drop the ball, though, when they explain how sock puppet accounts work by mentioning only 'major' websites that are being trolled.

In reality, every website with any comment section is a battleground for these militarized troll attacks on our societies.

https://www.cits.ucsb.edu/fake-news/spread
Fake news arises from a complex of commercial, political, psychological, social, and computer-scientific factors that make it hard to grasp in its totality. And the topic has been politicized: individuals use the term “fake news” to undermine reporting that’s damaging to their own version of events.

It’s also hard to talk about because people think it doesn’t affect them, that it only affects other people. And, like the perception of media bias that occurs everywhere [1], individuals think that people on the “other” side of the political spectrum make fake news, believe fake news, and pass along fake news, and that their side doesn’t. (Rigorous research has shown that political fake news in the 2016 Presidential campaign was largely pro-Trump. But many of the most recent Russian fake news efforts portray anti-Trump events and continue to fan the flames of ongoing “hot-button issues” in American society and culture.) But it’s not the Russians’ doing alone: American social media users click on it, consume it, like it, and share it. No side is innocent in this mess, no matter what political side you’re on, and it’s important to try to be objective about it.

It’s not just an American problem. Fake news has led to violence in India and Myanmar. It played a part in the UK’s Brexit vote, and is expected to interfere in upcoming elections in many countries. Indeed, on August 21, 2018,

"Facebook said that it had identified multiple new influence campaigns that were aimed at misleading people around the world…trying to sow misinformation. The activity originated in Iran and Russia, Facebook said. The latest campaigns appeared to be similar to those of past operations on the social network: to distribute false news that might cause confusion among people, and to alter people’s thinking," according to the New York Times.
That’s why it’s time for a clear and balanced presentation, based on the best academic research and most reliable reporting. What exactly is fake news? How did it develop? How does it operate? Can it really affect elections, and inflame (or suppress) social conflicts and violence? What’s been done to curb its influence, and do these measures work? What can each of us do right now to reduce the problem? Because fake news so often spreads through social media, it relies on friends posting, sharing, retweeting and otherwise passing it on to other friends and followers, everyone has a role to play in spreading or halting fake news, and we have some realistic suggestions for you that can help you to prevent its spread throughout your own digital networks and reduce its effects on people's thinking.



Our website has the following sections that you can read independently or in any order.
  1. What is Fake News
  2. A Brief History of Fake News
  3. Where Does Fake News Come From?
  4. The Danger of Fake News in Inflaming or Suppressing Social Conflict
  5. The Danger of Fake News to Our Election
  6. How is Fake News Spread? Bots, People like You, Trolls, and Microtargeting
  7. Why We Fall for Fake News
  8. Protecting Ourselves from Fake News: Fact-Checkers and their Limitations
  9. Protecting Ourselves from Fake News: Games that Teach about Fake News
  10. What Can I Do Today?
 

hanimmal

Well-Known Member
Here you go @CapnBligh, in this thread you get an idea of what a sock puppet is.

Nice to meet you if you are not another in the endless line of trolls attacking this and every site nonstop.


Oh, and basically a sock puppet is an account that a troll makes to spam nonsense, pushing narratives that they are being paid (or tricked) into posting about. After a bit, they quit, and in comes a new account that happens to find the political section here, pushing whatever the troll narratives of the day are.
https://www.rollitup.org/t/the-2021-state-of-the-union-go-joe.1052390/post-16300016

The trolling is endless and if you are a real person, remember you always have an ignore button.

 

hanimmal

Well-Known Member
https://www.rawstory.com/lumber-shortage-2021/
Amateur sleuths are racking up page views by staking out hardware stores and filming stacks of lumber in the latest online conspiracy theory.

Lumber prices have tripled as demand for housing has jumped and the pandemic throttled production, but lumber-shortage truthers are recording videos of wood just as coronavirus skeptics filmed half-full hospital parking lots last year in an effort to prove the pandemic was a hoax, reported The Daily Beast.

"I'm just astounded at how much lumber is here, and I'm wondering why there's such a problem at the lumber yard," says the owner of a YouTube account called "Ken's Karpentry" in a widely viewed video. "We're still seeing the prices increase at the lumber yards, so I'm not sure why."

That video, with the blaring title "TRAIN LOADS OF LUMBER JUST STACKED UP !!!! Why," has been viewed nearly 500,000 times and was cited by the pro-Donald Trump blog Zero Hedge as proof that lumber prices were being artificially inflated.

"Could the lumber industry, controlled just by a few players, be pulling the playbook straight out of the diamond industry to limit supply to drive up prices?" wrote pseudonymous ZeroHedge writer "Tyler Durden."

The spike in lumber prices, which has increased the average cost of new home construction by $36,000, has been driven by increased demand for new housing and home improvement projects, which are both related to the same pandemic that limited production last year in anticipation of an economic crash -- but conspiracy theorists instead see sinister actors at work.

"There are a LOT of these type vids showing the BS narrative of Lumber shortages," wrote one person on a popular QAnon forum. "Nothing short of market manipulation to drive up prices, most notably homes. Why Homes? Part of the American dream is to buy a house."

TikTok has been flooded with viral lumber truther content.

"TikTok, these folks are lying to us about this wood shortage," wrote a TikTok user with the screen name "Red the Trucker." "What I do for a living, I'm not going to tell you. But everywhere I go, it's just like this. It's stacked up everywhere I go."

Those videos are often folded into video compilations on Facebook, where the conspiracy theories can reach an often older audience than TikTok's typical user -- and blame President Joe Biden for the price increases.

"So Joe Biden says there's a shortage of lumber, and that's why the price at Home Depot is so high on 2x4's and such?" says the narrator in one such video on Facebook.

 

hanimmal

Well-Known Member
https://www.washingtonpost.com/outlook/qanon-game-plays-believers/2021/05/10/31d8ea46-928b-11eb-a74e-1f4cf89fd948_story.html
For people who believe in the sprawling set of false claims that make up the QAnon belief system, this is a time of confusion. “The Storm” never came, Joe Biden is the president, Donald Trump is out of office and off social media, and Q has not been heard from since last year. Everything followers had been told was a lie. Logic would say that this has to be the end. But logic never had much to do with QAnon, so the doubt remains, like the end of a horror movie: Is it really over?

I have a strong hunch, based on my experience as a game designer, that it is not.

I work in a very small niche: I create and research games intended to be played in reality — stories and games designed to come to life around the players, using the real world as the backdrop.

When I saw QAnon, I knew exactly what it was and what it was doing. It was the gamification of propaganda. QAnon was a game that played people.

Q has specifically followed the model of an alternate reality game (ARG) using many of the same techniques. The games I design entice players through clever rabbit holes found in the real world that start them searching for answers — maybe something written on a billboard, seen at a rally or printed on a flier.

Players are led through labyrinth-like stories full of puzzles, clues and group challenges. ARGs can have millions of people involved in them. (The 2007 game promoting Christopher Nolan’s “The Dark Knight” had 11 million participants in 75 countries.) The similarities are so striking that QAnon has sometimes been referred to as a live-action role playing (LARP) or an ARG. But QAnon is the reflection of a game in a mirror: It looks like one, but inverted.

In one of my earliest games, the plot led investigators into a creepy basement to look for a clue. It was obvious and easy to find, and I expected no trouble. But there was. Some scraps of wood were on the basement floor. Three of them somehow got shuffled into the shape of a perfect arrow pointing at a blank wall — which was not the clue. The clue hunters, sure this was the puzzle to solve, would go no further until they figured out what it meant. The random tools scattered around the basement seemed to mean that the clue was behind the wall. To get it, they needed to use the tools! Perhaps they could pry out a rock or two? Soon, players were picking up crowbars, ready to rip up the wall looking for clues that didn’t exist. (We intervened and redirected them before they did any damage.)

These were normal people, and their assumptions were logical. And completely wrong.

What the players in the basement had experienced was an apophany. They hadn’t seen a clue — they’d created one in their minds. They hadn’t followed the plot of the story or solved a puzzle; they’d created chaos. But it felt the same to them.

In many games, like the ones I work on, apophenia is a wild card that can lead participants away from the plot and force designers to scramble to get them back. Games can easily go off the rails — because there are rails. There are puzzles with real solutions and a real story to experience. In a well-designed game, players arrive at the intended epiphany, the puzzle is solved, new content is revealed, and the plot moves forward.

QAnon is a mirror reflection of this dynamic: Apophenia is the point.

QAnon revolves around a fantastical narrative that “Q,” allegedly a top-secret government operative, has been leaving clues on websites about a cabal of Satan-worshiping, child-abusing Democrats and “deep state” elitists who run the nation’s power centers, and that Trump and his allies were working clandestinely to fight back against them.

Believers pick up guidance from multiple sources, including rabbit-hole-like social media hashtags, TikTok influencers, popular YouTubers, even mainstream news articles. They click on links, search hashtags, “do their own research” and ultimately end up at various sources of Q’s material.

It spread from the 4chan message board to a wider circle of websites, Reddit forums, social media groups and YouTube channels, and it picked up additional convoluted — and false — details along the way. As prominent Q sites are created, social media and technology companies attempt to deplatform them, and the sites and groups reappear in different locations in a propaganda version of whack-a-mole.


The QAnon call to “do the research” (this is the notion that people shouldn’t trust “experts,” but should come to their own conclusions, instead) breaks down resistance to new ideas. Guiding people to arrive at conclusions themselves is a perfect way to get them to accept a new and conflicting ideology as their own.

It also instills a distrust for society and the competence of others — and confers an unearned sense of importance on the player. Only the believers can discover what’s really going on! Initiates are given the tools — ways to look for ostensibly hidden messages in videos and text, and online communities to share their results — to arrive at “their own conclusions,” which are in every way more compelling, interesting and clearer than real solutions.

For instance, learning to search for “code” in everyday correspondence led QAnon conspiracy theorists to find a “hidden message” in one of former FBI director James Comey’s tweets that, to them, indicated there would be a “false flag” attack at the Grass Valley Charter School’s Blue Marble Jubilee. Convinced that the children were in danger, people called law enforcement, the school and the FBI.

The event was canceled because the organizers were afraid QAnon believers would show up to “guard” it. The followers had created a danger out of thin air and “saved” the children from that imaginary danger. Never mind that the organization lost money and could not hold its school fund-raising event.

Working backwards from the outcome is another sure way to generate a satisfying story. The coronavirus hurt the United States, so the obvious and satisfying narrative is that it was created to hurt us on purpose. Because the pandemic led the government to restrict our personal freedoms, it’s a hoax created for that purpose. Both “work” as fictions that explain the complicated chaos of real life in terms of stories, with villains, victims and heroes instead of bats and complex environmental issues we don’t have answers for.
Continued in next post.
 

hanimmal

Well-Known Member
That’s because they are entirely fictional, and fiction is easier to write than reality.

In a real game — or real life — it’s hard to solve puzzles. First, there have to be actual puzzles or problems to solve. Then you need the skills to solve them, and your solution has to be right. Not so for the imaginary puzzles created by apophenia: There doesn’t need to be anything to solve. You just have to be creative and follow along, leaping from one conclusion to the next. As Valerie Gilbert, the QAnon “meme queen,” put it: “The world opened up in Technicolor for me. It was like the Matrix — everything just started to download.”

Several Q drops stated that members of the ruling class of occultists identify themselves through symbolism and that “their need for symbolism will be their downfall.” Conveniently, the symbols were signs that followers were sure to find as soon as they started looking. Q might drop a clue specifically calling out high-ranking Democrats, such as Hillary Clinton, or performers, such as Beyoncé. He suggests searching for the iconography of owls and skulls with horns. He asks followers to look for themselves.

So participants start poring over hundreds of images of the people they distrust. “Evidence” isn’t hard to find: Rock stars throw “hand horns” all the time. Their videos are riddled with the skulls of cattle and weird tattoos and conscious occult references. Suddenly, just like in actual ARGs, participants look at the world a little differently. They can go on the boards, watch videos, ask questions. They can submit their own “research” and get kudos and make friends. They don’t have to believe it all, but now, when they see a new music video or look closely at a corporate chain logo, maybe they begin to notice strange things. Could it be that these things aren’t just filler or coincidence, but have real meanings? A doubt begins to grow, and doubt is extremely difficult to get rid of once it starts. Maybe the elite really are in a powerful cult, after all.


There are no scripted plots for Q followers. There is no actual solution to arrive at. There’s only a breadcrumb trail away from reality. As game designers would expect, it works very well — because when you “figure it out yourself,” you own it. You experience the thrill of discovery, the excitement of finding the rabbit hole and tumbling down it. Because you were persuaded to “connect the dots yourself,” you can see the absolute logic of it, even if you made it up.

The most important difference between QAnon and real games is that Q claims it’s not a game at all.

In ARGs, people do a lot of the same things QAnon followers do, but they don’t call and report fictional crimes as if they are real. They don’t break laws. They may show up in mobs, but they understand it’s all pretend. Q is the opposite: People report crimes, or they swamp emergency phone lines with false reports about their enemies setting wildfires. They break the law. They show up in mobs for causes they think are very real — like the Jan. 6 insurrection at the Capitol, where QAnon figures were pivotal actors.

You can’t play a game if you don’t know you’re playing one. Play requires an agreement to play. Otherwise, it’s just manipulation — which describes Q perfectly. And it couldn’t have worked if it hadn’t been surrounded by a much larger right-wing media and social media disinformation campaign that calls into question the very nature of reality.

But now that Q has gone silent, what happens to all the believers? In a scripted game, the writers and producers produce a satisfying conclusion, often with a big reveal — for instance, a tour of Disneyland, a private concert or a special screening of a movie. The most die-hard players are rewarded extravagantly in real-life gatherings and often go home with amazing experiences and maybe a souvenir or two. The puppet masters get to come out from behind the curtain and take a bow, too: We meet our fans, answer questions, accept congratulations on a game well-run. And then the game ends, with a gratifying sense of closure and camaraderie.

In QAnon, as usual, there was a horrific mirror version of this kind of ending. Q’s posts dwindled to nothing; as Jan. 6 approached, the group was open to all kinds of outside influences. The huge “conclusion” occurred as people mobbed the Capitol waiting for “the Storm” to arrive and Trump to rise. Just like in a real game, the players came with their cellphones turned on, taking pictures and streaming video to record the big event. But nothing was planned for the benefit of those gathered. They hadn’t arrived at an event. They were the event.

And now, Q remains in hiding from his misled fan base and a furious nation. People are searching for the puppet masters, and the question of whether the game is truly over lingers over the nation. When you choose to play a game, you decide when you’re done playing. When a game plays you, though, it’s out of your control.

Twitter: @soi
 

hanimmal

Well-Known Member
https://www.washingtonpost.com/outlook/2021/05/20/ai-bots-grassroots-astroturf/
This month, the New York state attorney general issued a report on a scheme by “U.S. Companies and Partisans [to] Hack Democracy.” This wasn’t another attempt by Republicans to make it harder for Black people and urban residents to vote. It was a concerted attack on another core element of U.S. democracy — the ability of citizens to express their voice to their political representatives. And it was carried out by generating millions of fake comments and fake emails purporting to come from real citizens.

This attack was detected because it was relatively crude. But artificial intelligence technologies are making it possible to generate genuine-seeming comments at scale, drowning out the voices of real citizens in a tidal wave of fake ones.

As political scientists like Paul Pierson have pointed out, what happens between elections is important to democracy. Politicians shape policies and they make laws. And citizens can approve or condemn what politicians are doing, through contacting their representatives or commenting on proposed rules.


That’s what should happen. But as the New York report shows, it often doesn’t. The big telecommunications companies paid millions of dollars to specialist “AstroTurf” companies to generate public comments. These companies then stole people’s names and email addresses from old files and from hacked data dumps and attached them to 8.5 million public comments and half a million letters to members of Congress. All of them said that they supported the corporations’ position on something called “net neutrality,” the idea that telecommunications companies must treat all Internet content equally and not prioritize any company or service.
Three AstroTurf companies — Fluent, Opt-Intelligence and React2Media — agreed to pay nearly $4 million in fines.

The fakes were crude. Many of them were identical, while others were patchworks of simple textual variations: substituting “Federal Communications Commission” and “FCC” for each other, for example.
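Crude template variants like these are straightforward to catch programmatically. Below is a minimal, hypothetical sketch (the substitution pairs and sample comments are invented for illustration, not taken from the actual FCC docket) of how near-duplicate comments that merely swap phrases such as “FCC” and “Federal Communications Commission” could be grouped by normalizing each text and hashing the result:

```python
import hashlib
import re
from collections import defaultdict

# Hypothetical phrase pairs a comment template might swap in and out.
SUBSTITUTIONS = [
    (r"\bfederal communications commission\b", "fcc"),
    (r"\binternet\b", "web"),
]

def normalize(comment: str) -> str:
    """Lowercase, canonicalize swappable phrases, and collapse whitespace."""
    text = comment.lower()
    for pattern, replacement in SUBSTITUTIONS:
        text = re.sub(pattern, replacement, text)
    return re.sub(r"\s+", " ", text).strip()

def group_template_variants(comments):
    """Bucket comments whose normalized forms are identical."""
    groups = defaultdict(list)
    for comment in comments:
        key = hashlib.sha256(normalize(comment).encode()).hexdigest()
        groups[key].append(comment)
    # Buckets holding more than one comment suggest a shared template.
    return [bucket for bucket in groups.values() if len(bucket) > 1]

comments = [
    "I urge the FCC to repeal these rules.",
    "I urge the Federal Communications Commission to repeal these rules.",
    "Net neutrality protects consumers.",
]
suspicious = group_template_variants(comments)
# The two "urge" comments land in one suspicious bucket; the third stands alone.
```

Real campaigns are rarely this uniform, so in practice fuzzy matching (shingling, MinHash, or edit-distance clustering) would be layered on top, but the bucketing idea is the same.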

Next time, though, we won’t be so lucky. New technologies are about to make it far easier to generate enormous numbers of convincing personalized comments and letters, each with its own word choices, expressive style and pithy examples. The people who create fake grass-roots organizations have always been enthusiastic early adopters of technology, weaponizing letters, faxes, emails and Web comments to manufacture the appearance of public support or public outrage.

Take Generative Pre-trained Transformer 3, or GPT-3, an AI model created by OpenAI, a San Francisco-based start-up. With minimal prompting, GPT-3 can generate convincing-seeming newspaper articles, résumé cover letters, even Harry Potter fan fiction in the style of Ernest Hemingway. It is trivially easy to use these techniques to compose large numbers of public comments or letters to lawmakers.


OpenAI restricts access to GPT-3, but in a recent experiment, researchers used a different text-generation program to submit 1,000 comments in response to a government request for public input on a Medicaid issue. They all sounded unique, like real people advocating a specific policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings. The researchers subsequently identified the comments and asked for them to be removed, so that no actual policy debate would be unfairly biased. Others won’t be so ethical.

When the floodgates open, democratic speech is in danger of drowning beneath a tide of fake letters and comments, tweets and Facebook posts. The danger isn’t just that fake support can be generated for unpopular positions, as happened with net neutrality. It is that public commentary will be completely discredited. This would be bad news for specialist AstroTurf companies, which would have no business model if there isn’t a public that they can pretend to be representing. But it would empower still further other kinds of lobbyists, who at least can prove that they are who they say they are.

We may have a brief window to shore up the flood walls. The most effective response would be to regulate what UCLA sociologist Edward Walker has described as the “grassroots for hire” industry. Organizations that deliberately fabricate citizen voices shouldn’t just be subject to civil fines, but to criminal penalties. Businesses that hire these organizations should be held liable for failures of oversight. It’s impossible to prove or disprove whether telecommunications companies knew their subcontractors would create bogus citizen voices, but a liability standard would at least give such companies an incentive to find out. This is likely to be politically difficult to put in place, though, since so many powerful actors benefit from the status quo.


Other overhauls include requirements to link public comments more directly to the individuals who purportedly write them. This could enable random audits. These could be combined with machine learning techniques — which can be applied to catch fakes as well as generate them. Companies such as Google have developed extensive techniques to detect spam and bogus webpages that are intended to game search results, providing lessons for how the public sector, too, can detect or deter similar manipulation.

None of these techniques are perfect. They will inevitably be gamed by AstroTurf merchants, creating an Alice-in-Wonderland-style Red Queen’s race, where fakers try new techniques to outwit regulators, and regulators try to outwit the fakers. Inevitably, they will either let some fake comments through because they are too forgiving, or block some real comments because they are too rigid — or both, if they are badly designed. All that is hard to square with the democratic commitment to hear each voice equally. But we’re entering a world where such awkward trade-offs may be what we have to live with.
 
Status
Not open for further replies.
Top