Another quarter, another season, and countless new memes birthed into the internet ether. Hello and welcome back to hype_cast! If you’re catching this series for the first time, hype_cast is all about demystifying and trend-spotting what’s on the interweb.
In this volume, we’re tackling some heavy hitters: Misinformation, Social Activism, and Artificial Intelligence, featuring our completely biased and subjective Sadie Scale. So strap yourself in because we’re about to go full-speed ahead! Let’s go!
Misinformation on Social Media
Metrics show that global users spend an average of over 2 hours on social media each day. Countless images, factoids, and news snippets are shared and reposted. While opinions differ on what is and isn't "fake news," the spread of misinformation shows no signs of slowing down. Therein lies the crux of our first topic: what is misinformation, and what makes it so contagious?
First, let’s clarify the difference between misinformation and disinformation.
Misinformation is defined as incorrect or misleading information, often showing up on social media as inaccurate statistics or content presented out of its original context. Everyone’s got at least one relative who regularly shares unsourced articles with absurd clickbait-y titles. Aunt Sharon truly believes in the exaggerated health claims of her chia seed panacea! While well-meaning, misinformation is a type of false information that is typically shared without the knowledge of its inaccuracy.
On the other hand, disinformation is information that has been purposefully manipulated with the intent to deceive. For example, in recent history, a plethora of doctored images flooded the internet as part of disinformation campaigns at the start of Russia's invasion of Ukraine. While both misinformation and disinformation can be damaging and lead to incorrect conclusions, the difference is intent.
So why, when everyone is so wary of shared content, is misinformation so prolific and readily found? We've already touched on one reason: misinformation is usually packaged and optimized for virality. Catchy, dramatic titles capture the short attention span of a site visitor and direct them to content littered with misinformation. And if it's not the title that traps the reader, it might be an image that triggers an emotional reaction. Cute cats, a surefire tactic, will always get clicks. Clicks tend to translate to traffic, and some portion of that audience will probably reshare the content, so the consumption cycle continues to turn.
While platforms like Twitter are working towards implementing misinformation policies, and users may unfollow, thumbs-down, or report misinformation as spam, it continues to be difficult to eradicate. Platforms also tend to be careful about exercising moderation in these public forums because sharing information is often discussed in the same breath as "freedom of speech" rights. A notable example is the sharing of vaccine misinformation, a contentious topic for those analyzing the protections granted by the First Amendment.
So how does one build up immunity to false information? Google has released new features to combat the tide of fake news. And tips shared by Forbes recommend taking the time to analyze the intent behind a share, looking for credibility, and fact-checking the sources.
1/5 Sadies - Not all that glitters is gold (especially on the internet). Take a beat and check the citations before resharing Uncle Kevin’s latest find.
Artificial Intelligence

Watson, AlphaGo, and DALL-E: these are all artificial intelligence (AI) projects that have found their way into the mainstream. Many mark the beginning of AI with the work of Alan Turing, who sought to answer the question, "Can machines think?" From there, this field of study has only grown in popularity and complexity.
“At its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving. It also encompasses sub-fields of machine learning and deep learning, which are frequently mentioned in conjunction with artificial intelligence. These disciplines are comprised of AI algorithms which seek to create expert systems which make predictions or classifications based on input data.”
Modern AI encapsulates machine learning and deep learning. Each definition describes a more specific type of algorithmic “learning”.
While both types rely on the input of data sets, deep learning differs from machine learning in that it does not require human intervention in the processing of its data. Deep learning enables the use of larger data sets that are unstructured and "raw" (e.g., text and images). With classical or "non-deep" machine learning, human experts define and prioritize the features by which the data is organized.
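To make that distinction concrete, here's a minimal, purely illustrative sketch of the "human expert" step in classical machine learning: a hand-written function that turns raw text into a small set of features a model could then learn from. The feature names and the toy spam-detection framing are our own invention for illustration; a deep learning model would instead ingest the raw text directly and learn its own internal features.

```python
def extract_features(message: str) -> dict:
    """Hand-engineered features a human expert chose in advance
    (hypothetical spam-detector features, for illustration only)."""
    words = message.lower().split()
    return {
        "length": len(words),                # how long is the message?
        "exclamations": message.count("!"),  # shouty punctuation
        "has_free": int("free" in words),    # classic spam keyword
    }

# Classical ML pipeline: raw text -> hand-built features -> model.
features = extract_features("Claim your FREE prize now!!!")
print(features)  # {'length': 5, 'exclamations': 3, 'has_free': 1}
```

The key point isn't the code itself but who picks the columns: here a human decided that length, exclamation marks, and the word "free" matter. In deep learning, that selection happens inside the model.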
Commonplace examples of AI applications include speech recognition, recommendation engines, and computer vision, with retail chatbots, meme machines, and Siri among them. In these few short decades, human lives have become inundated with smart technology, raising the question: has the technology become too smart? Most recently, an engineer working at Google claimed that an internal AI tool had a soul. While exciting and impressive, AI models are far from true sentience. Even at places like Google, where language models have developed impressive linguistic features and capabilities, they are still flawed. Even the most sophisticated models are inconsistent in their generated outputs and are not yet able to pass the Turing Test.
At tech giants like IBM, Microsoft, and Google, the next step in creating more sophisticated AI models is the prioritization of ethical AI. AI researchers hope to better understand issues that have arisen when training their systems. Pertinent concerns include controlling for biases and ensuring fair decision-making. By outlining ethical principles, these corporations aim to create systems that sidestep bias and discrimination.
4/5 Sadies - Probably not in the near future but artificial intelligence may one day be better at decision-making than humans … “Siri, how do we keep history from repeating itself?”
Rainbow-washing and Corporate Activism
How does a brand portray itself in an authentic manner? Spoiler: most can’t—and don’t. For example, every Pride month, as the rainbow logos appear like spring’s first multicolored robins, we hear the same cynical (but hilarious, TBC) jokes and commentary.
The early days of social media activism seemed simpler and more effective. The Arab Spring was started and sustained by social media. Even the Ice Bucket Challenge, grandfather of all trends, was reported to greatly accelerate scientific work on ALS. But like so many things, this brand of activism was co-opted by corporations who saw an opportunity to improve their public image (and therefore hire more and make more $$$).
Corporate social media activism, especially around holidays like Pride or socio-political movements like BLM, has jumped the shark and is often indistinguishable from actual parody. But consumers are savvier and want brands to do better. Research shows they’re picking brands based on shared values and want to spend more with brands they think are fulfilling their promises about the environment and topical social issues.
If brands want to ensure that the work they do doesn't rub the community the wrong way, they need to learn how to use their platforms and resources for good without centering themselves (for an example of this, read about how celebrities used their social media accounts to amplify Black voices during the BLM protests of 2020).
1/5 Sadies - Put your activism where your mouth is and your money where it makes a difference: in programs that directly contribute to underrepresented groups.
In this volume of hype_cast, we dove into misinformation on social media, the world of AI, and the dark side of corporate activism. Join us as we continue learning and fact-checking all things internet and viral; hype_cast's next issue will be released before you know it!