Crypto collapse? Get in loser, we’re pivoting to AI – Attack of the 50 Foot Blockchain


By Amy Castor and David Gerard

“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine

Half of crypto has been pivoting to AI. Crypto’s pretty quiet — so let’s give it a try ourselves!

Turns out it’s the same grift. And frequently the same grifters.

AI is the new NFT

“Artificial intelligence” has always been a science fiction dream. It’s the promise of your plastic pal who’s fun to be with — especially when he’s your unpaid employee. That’s the hype to lure in the money men, and that’s what we’re seeing play out now.

There is no such thing as “artificial intelligence.” Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like General Problem Solver, perceptrons, ELIZA, Lisp machines, expert systems, Cyc, The Last One, Fifth Generation, Siri, Facebook M, Full Self-Driving, Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”

Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.

The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.
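For the curious, here's a toy sketch of that difference in Python (our illustration, not anything from any actual AI vendor; the spam messages and labels are invented for the example): a hand-written rule on one side, and a model that works a rule out from labelled examples on the other.

```python
# Purely illustrative sketch (ours, not from the article): hand-coded rules
# versus "machine learning", where the rule is inferred from labelled data.
from sklearn.tree import DecisionTreeClassifier

# The old way: an "expert system" with rules a human wrote down.
def handcoded_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# The machine-learning way: show the algorithm examples and labels,
# and let it work out a rule for itself.
messages = ["FREE MONEY NOW", "lunch at noon?", "claim your free money", "meeting moved to 3pm"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (made-up training data)

# Crude features: does the message mention "free"? Is it all shouting?
features = [[int("free" in m.lower()), int(m.isupper())] for m in messages]

model = DecisionTreeClassifier().fit(features, labels)
print(model.predict([[1, 0]]))  # the learned rule also flags "free" messages -> [1]
```

Scale that second idea up to billions of parameters and a scrape of most of the internet and you have the current boom. The principle, rules inferred from data rather than written by hand, is unchanged.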

ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.

The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.

Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.

You’ll be delighted to hear that blockchain is out and AI is in:

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

I want to believe

The tech itself is interesting and does things. ChatGPT or AI art generators wouldn’t be causing the problems they are if they didn’t generate plausible text and plausible images.

ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity — it’s just making up something that’s structurally plausible.
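Here's the trick in miniature, as a toy Python sketch (ours; real LLMs use transformer networks trained on terabytes of text, not a word-pair table, but the principle of "pick a statistically plausible next word" is the same). Note that truth does not appear anywhere in it:

```python
# Toy autocomplete (our illustration, nothing like OpenAI's actual code):
# pick each next word based only on what tended to follow the last word.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)          # word -> list of words seen after it
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def autocomplete(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1]) or corpus   # fall back if word unseen
        words.append(random.choice(candidates))
    return " ".join(words)

print(autocomplete("the"))   # structurally plausible, factually meaningless
```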

Users speak of ChatGPT as “hallucinating” wrong answers — large language models make stuff up and present it as fact when they don’t know the answer. But any answers that happen to be correct were “hallucinated” in the same way.

If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool — despite the claims of its promoters.

ChatGPT certainly can’t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that’s a foolish claim, they say they’re sure that’s definitely coming soon!

People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect.

As Joseph Weizenbaum, ELIZA’s author, put it: “I had not realized … that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”
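For reference, ELIZA's entire trick was a handful of pattern-match-and-reflect rules. A caricature in Python (ours, not Weizenbaum's 1966 code, which was written in MAD-SLIP) fits in a dozen lines, and people still read a mind into this sort of thing:

```python
# A few-line caricature of ELIZA's pattern-matching (illustrative only).
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "mine": "yours"}

RULES = [
    (r"i feel (.*)", "Why do you feel {}?"),
    (r"i am (.*)", "How long have you been {}?"),
    (r"(.*)", "Please tell me more."),          # catch-all
]

def eliza(utterance: str) -> str:
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Reflect pronouns so the echo sounds like a reply.
            reflected = " ".join(REFLECTIONS.get(w, w) for w in match.group(1).split())
            return template.format(reflected)

print(eliza("I feel like my chatbot understands me"))
# -> Why do you feel like your chatbot understands you?
```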

Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous:

  • A professor at Texas A&M worried that his students were using ChatGPT to write their essays. He asked ChatGPT if it had generated the essays! It said it might have. The professor gave the students a mark of zero. The students protested vociferously, producing evidence that they had written their essays themselves. One even asked ChatGPT about the professor’s Ph.D. thesis, and it said it might have written that too. The university has reversed the grading. [Reddit; Rolling Stone]
  • Not one but two lawyers thought they could blindly trust ChatGPT to write their briefs. The program made up citations and precedents that didn’t exist. Judge Kevin Castel of the Southern District of New York — who those following crypto will know well for his impatience with nonsense — has required the lawyers to show cause not to be sanctioned into the sun. These were lawyers of several decades’ experience. [New York Times; order to show cause, PDF]
  • GitHub Copilot synthesizes computer program fragments with an OpenAI program similar to ChatGPT, based on the gigabytes of code stored in GitHub. The generated code frequently works! And it has serious copyright issues — Copilot can easily be induced to spit out straight-up copies of its source materials, and GitHub is currently being sued over this massive license violation. [Register; case docket]
  • Copilot is also a good way to write a pile of security holes. [arXiv, PDF, 2021; Invicti, 2022]
  • Text and image generators are increasingly used to make fake news. This doesn’t even have to be very good — just good enough. Deep fake hoaxes have been a perennial problem, most recently with a fake attack on the Pentagon, tweeted by an $8 blue check account pretending to be Bloomberg News. [Fortune]

This is the same risk in AI as the big risk in cryptocurrency: human gullibility in the face of lying grifters and their enablers in the press.

But you’re just ignoring how AI might end humanity!

The idea that AI will take over the world and turn us all into paperclips is not impossible!

It’s just that our technology is not within a million miles of that. Mashing the autocomplete button isn’t going to destroy humanity.

All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word “robot” instead. This subgenre goes back to Rossum’s Universal Robots (1920) and arguably back to Frankenstein (1818).

The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular Harry Potter fanfiction novel. Yudkowsky has literally no other qualifications or experience.

Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is imminent. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.

Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates violence, you understand. [Time; Twitter, archive]

During one recent “AI Safety” workshop, LessWrong AI doomers came up with ideas such as: “Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol.” In Minecraft, we presume. [Twitter]

We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.

Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are true believers, as are Vitalik Buterin and many Ethereum people.

But what about the AI drone that killed its operator, huh?

Thursday’s big news story was from the Royal Aeronautical Society Future Combat Air & Space Capabilities Summit in late May about a talk from Colonel Tucker “Cinco” Hamilton, the US Air Force’s chief of AI test and operations: [RAeS]

He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission — killing SAMs — and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Wow, this is pretty serious stuff! Except that it obviously doesn’t make any sense. Why would you program your AI that way in the first place?

The press was fully primed by Yudkowsky’s AI doom op-ed in Time in March. They went wild with the killer drone story because there’s nothing like a sci-fi doomsday tale. Vice even ran the headline “AI-Controlled Drone Goes Rogue, Kills Human Operator in USAF Simulated Test.” [Vice, archive of 20:13 UTC June 1]

But it turns out that none of this ever happened. Vice added three corrections, the second noting that “the Air Force denied it conducted a simulation in which an AI drone killed its operators.” Vice has now updated the headline as well. [Vice, archive of 09:13 UTC June 3]

Yudkowsky went off about the scenario he had warned of suddenly playing out. Edouard Harris, another “AI safety” guy, clarified for Yudkowsky that this was just a hypothetical planning scenario and not an actual simulation: [Twitter, archive]

This particular example was a constructed scenario rather than a rules-based simulation … Source: know the team that supplied the scenario … Meaning an entire, prepared story as opposed to an actual simulation. No ML models were trained, etc.

The RAeS has also added a clarification to the original blog post: the colonel was describing a thought experiment as if the team had done the actual test.

The whole thing was just fiction. But it sure captured the imagination.

The lucrative business of making things worse

The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.

Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.

The VCs’ actual use case for AI is treating workers badly.

The Writers Guild of America, a labor union representing writers for TV and film in the US, is on strike for better pay and conditions. One of the reasons is that studio executives are using the threat of AI against them. Writers think the plan is to get a chatbot to generate a low-quality script, which the writers are then paid less, under worse conditions, to fix. [Guardian]

Executives at the National Eating Disorders Association replaced hotline workers with a chatbot four days after the workers unionized. “This is about union busting, plain and simple,” said one helpline associate. The bot then gave wrong and damaging advice to users of the service: “Every single thing Tessa suggested were things that led to the development of my eating disorder.” The service has backtracked on using the chatbot. [Vice; Labor Notes; Vice; Daily Dot]

Digital blackface: instead of actually hiring black models, Levi’s thought it would be a great idea to take white models and alter the images to look like black people. Levi’s claimed it would increase diversity if they faked the diversity. One agency tried using AI to synthesize a suitably stereotypical “Black voice” instead of hiring an actual black voice actor. [Business Insider, archive]

Genius at work

Sam Altman: My potions are too powerful for you, Senator

Sam Altman, 38, is a venture capitalist and the CEO of OpenAI, the company behind ChatGPT. The media loves to tout Altman as a boy genius. He learned to code at age eight!

Altman’s blog post “Moore’s Law for Everything” elaborates on Yudkowsky’s ideas on runaway self-improving AI. The original Moore’s Law (1965) predicted that the number of transistors that engineers could fit into a chip would double every year. Altman’s theory is that if we just make the systems we have now bigger with more data, they’ll reach human-level AI, or artificial general intelligence (AGI). [blog post]

But that’s just ridiculous. Moore’s Law is slowing down badly, and there’s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as “AI” are still at the extremely unreliable chatbot level.

Altman is also a doomsday prepper. He has bragged about having “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to” in the event of super-contagious viruses, nuclear war, or AI “that attacks us.” [New Yorker, 2016]

Altman told the US Senate Judiciary Subcommittee that his autocomplete system with a gigantic dictionary was a risk to the continued existence of the human race! So they should regulate AI, but in such a way as to license large providers — such as OpenAI — before they could deploy this amazing technology. [Time; transcript]

Around the same time he was talking to the Senate, Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies’ actual problematic behaviors, and not the made-up problems Altman wants them to think about. [Zeit Online, in German, paywalled; Fast Company]

The thing Sam’s working on is so cool and dank that it could destroy humanity! So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.

Occasionally Sam gives the game away that his doomerism is entirely vaporware: [Twitter; archive]

AI is how we describe software that we don’t quite know how to build yet, particularly software we are either very excited about or very nervous about

Altman has a long-running interest in weird and bad parasitical billionaire transhumanist ideas, including the “young blood” anti-aging scam that Peter Thiel famously fell for — billionaires as literal vampires — and a company that promises to preserve your brain in plastic when you die so your mind can be uploaded to a computer. [MIT Technology Review; MIT Technology Review]

Altman is also a crypto grifter, with his proof-of-eyeball cryptocurrency Worldcoin. This has already generated a black market in biometric data courtesy of aspiring holders. [Wired, 2021; Reuters; Gizmodo]

CAIS: Statement on AI Risk

Altman promoted the recent “Statement on AI Risk,” a widely publicized open letter signed by various past AI luminaries, venture capitalists, AI doom cranks, and a musician who met her billionaire boyfriend over Roko’s basilisk. Here is the complete text, all 22 words: [CAIS]

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

A short statement like this on an allegedly serious matter will usually conceal a mountain of hidden assumptions. In this case, you would need to know that the statement was promoted by the Center for AI Safety — a group of Yudkowsky’s AI doom acolytes. That’s the hidden baggage for this one.

CAIS is a nonprofit that gets about 90% of its funding from Open Philanthropy, which is part of the Effective Altruism subculture, which David has covered previously. Open Philanthropy’s main funders are Dustin Moskovitz and his wife Cari Tuna. Moskovitz made his money from co-founding Facebook and from his startup Asana, which was largely funded by Sam Altman.

That is: the open letter is the same small group of tech funders. They want to get you worrying about sci-fi scenarios and not about the socially damaging effects of their AI-based businesses.

Computer security guru Bruce Schneier signed the CAIS letter. He was called out on signing on with these guys’ weird nonsense, then he backtracked and said he supported an imaginary version of the letter that wasn’t stupid — and not the one he did in fact put his name to. [Schneier on Security]

And in conclusion

Crypto sucks, and it turns out AI sucks too. We promise we’ll go back to crypto next time.

“Don’t want to worry anyone, but I just asked ChatGPT to build me a better paperclip.” — Bethany Black

Correction: we originally wrote up the professor story as using Turnitin’s AI plagiarism tester. The original Reddit thread makes it clear what he did.



The Mythical 32-Pound ‘Subcontrabassoon’ Is Now a Real Musical Instrument


A musician has created a never-before-attempted woodwind instrument that produces bone-rattling low notes and stands taller than the average adult: the subcontrabassoon.

When Richard Bobo was learning to play the bassoon in 8th grade, he read in a Guinness Book of World Records about a mythical instrument called the subcontrabassoon, supposedly built by a 19th-century musician. It would be able to produce sounds similar to those of a large pipe organ, two octaves below the regular bassoon and one octave below the contrabassoon. Prototyping such an instrument had never been attempted before.

It turned out that the Guinness Book of World Records was wrong, he said, and such an instrument didn’t actually exist. “However, just because a true subcontrabassoon didn't exist historically did not mean it could not exist,” Bobo told Motherboard. “As I began my career as a professional contrabassoonist (with a tangent as a machinist/CAD designer at my dad's shop), I held out hope that someone would come along and make this myth real. Eventually, I realized that no one else was rushing at the opportunity, and that my background might make me the best (or, at least, most willing) choice.” 

Bobo presented his creation at the International Double Reed Society conference in Boulder, Colorado in July. It weighs almost 32 pounds (not including metal keywork) and stands around six feet tall—even taller depending on how the player needs to adjust the endpin for their own height. Wood is lighter than plastic, but he made this prototype using a 3D printer and ABS plastic, with a support frame of welded stainless steel.

3D printing pieces of an instrument of this size and complexity is a feat in itself. Because Bobo’s Prusa MK3S printer limits prints to 200mm, the initial prototype was made of printed pieces composing the subcontrabassoon’s multiple segments, which he then bonded together. The bonding process wasn’t robust enough for Bobo, however, so he designed his own custom 3D printer, a 200x200x600mm modification of the RatRig Vcore 3.

“With this, I am able to make the majority of the pieces for the next prototype in one solid piece, with no need for bonding,” Bobo said. He’s also switching to ASA plastic that’s less susceptible to warping and UV rays (just in case he needs to drag this thing outside for a plein air concert) and plans to switch to an aluminum frame to cut down on the weight. 

Bobo’s big bassoon won’t be stuck in a hypothetical setting for long. In January—if all goes according to plan with further prototyping—the subcontrabassoon will be used in a live performance by the Symphony of Northwest Arkansas. “The part is small,” Bobo said, “but if everything works out it will be the live premiere of an instrument that was, until just a few years ago, firmly in the realm of unicorns or the philosopher's stone.”

Beyond that, he’s trying to keep an open mind about where the project will go. “Maybe there will come a day when every serious symphony orchestra has a subcontrabassoon on hand, and maybe I'll even live to see it,” Bobo said. “But that's a high bar; even 180 years after its invention, the saxophone is not yet a regular member of the orchestra. But perhaps, like the saxophone, the subcontrabassoon will find a niche in other genres of music. Perhaps, like the contrabass clarinet and contrabass flute, it will have a home in chamber music written for instruments of the same family.”




“Straight White Male: The Lowest Difficulty Setting,” Ten Years On

John Scalzi

Ten years ago this week I thought I would write a piece to offer a useful metaphor for straight white male privilege without using the word “privilege,” because when you use the word “privilege,” straight white men freak out like, as I said then, “vampires being fed a garlic tart.” Since I play video games, I wrote the piece using them as a metaphor. And thus “Straight White Male: The Lowest Difficulty Setting There Is” was born and posted.

And blew up: First here on Whatever, where it became the most-visited single post in the history of the site (more than 1.2 million visits to date), and then when it was posted on video gaming site Kotaku, where I suspect it was visited a multiple number of times more than it was visited here, because Kotaku has more visitors generally, and because the piece was heavily promoted and linked there. 

The piece received both praise and condemnation, in what felt like almost equal amounts (it wasn’t; it’s just that the complainers were very loud, as they often are). To this day the piece is still referred to and linked to, taught in schools and universities, and “living on the lowest difficulty setting” is used as a shorthand for the straight white male experience, including by people who don’t know where the phrase came from.

(I will note here, as I often do when discussing this piece, that my own use of the metaphor was an expansion on a similar metaphor that writer Luke McKinney used in a piece on Cracked.com, when he noted that “straight male” was the lowest difficulty setting in sexuality. Always credit sources and inspirations, folks!)

In the ten years since I’ve written the piece, I’ve had a lot of time to think about it, the response to it, and whether the metaphor still applies. And so for this anniversary, here are some further thoughts on the matter.

1. First off: Was the piece successful? In retrospect, I think it largely was. One measure of its success, as noted above, is its persistence; it’s still read and talked about and taught and used. Anecdotally, I have hundreds of emails from people who used it to explain privilege to others and/or had it used to explain privilege to them, and who say that it did what it was meant to do: Get through the already-erected defenses against the word “privilege” and convey the concept in an interesting and novel manner. So: Hooray for that. It is always good to be useful.

2. That said, Upton Sinclair once wrote that “It is difficult to get a man to understand something when his salary depends upon his not understanding it.” In almost exactly the same manner, it is difficult to get a straight white man to acknowledge his privileges when his self-image depends on him not doing so. Which is to say there is a very large number of straight white men who absolutely do not wish to acknowledge just how thoroughly and deeply their privileges are systemically embedded into day-to-day life. A fair number of this sort of dude read the piece (or, perhaps more accurately, read the headline, since a lot of their specific complaints about the piece were in fact addressed in the piece itself) and refused to entertain the notion there might be something to it. Which is their privilege (heh), but doesn’t make them right.

But, I mean, as a straight white dude, I totally get it! I also work hard and make an effort to get by, and in my life not all the breaks have gone my way. I too have suffered disappointment and failure and exclusion and difficulty. In the context of a life where people who are not straight white men are perhaps not in your day-to-day world view, except as abstractions mediated by television or radio or web sites, one’s own struggles loom large. It’s harder to conceive of, or sympathize with, the idea that one’s own struggles and disappointments are resting atop a pile of systemic privilege — not least because that implicitly seems to suggest that if you can still have troubles even with those many systemic advantages, you might be bad at this game called life.

But here’s the thing about that. One, just because you can’t or won’t see the systemic advantages you have, it doesn’t mean you don’t still have them, relative to others. Two, it’s a reflection of how immensely fucked up the system is that even with all those systemic advantages, lots of straight white men feel like they’re just treading water. Yes! It’s not just you! This game of life is difficult! Like Elden Ring with a laggy wireless mouse and a five-year-old graphics card! And yet, you are indeed still playing life on the lowest difficulty setting! 

Maybe rather than refusing to accept that other people are playing on higher difficulty settings, one should ask who the hell decided to make the game so difficult for everyone right out of the box (hint: they’re largely in the same demographic as straight white men), and how that might be changed. But of course it’s simply easier to deny that anyone else might have a more challenging life experience than you have, systemically speaking.

3. Speaking of “easy,” one of the problems that the piece had is that when I wrote the phrase “lowest difficulty,” lots of people translated that to “easy.” The two concepts are not the same, and the difference between the two is real and significant. Which is, mind you, why I used the phrase “lowest difficulty” and not “easy.” But if you intentionally or unintentionally equate the two, then clearly there’s an issue to be had with the piece. I do suspect a number of dudes intentionally equated the two, even when it was made clear (by me, and others) they were not the same. I can’t do much for those dudes, then or now.

4. When I wrote the piece, some folks chimed in to say that other factors deserved to be part of a “lowest difficulty setting,” with “wealth” being primary among them. At the time I said I didn’t think wealth should have been; it’s a stat in my formulation — hugely influential, but not an inherent feature of identity like being white, or straight, or male. This got a lot of pushback, in no small part because (and relating to point two above) I think a lot of straight white dudes believed that if wealth was in there, it would somehow swamp the privileges that being white and straight and male provide, and that would mean that everyone else’s difficulty setting was no more difficult than their own.

It’s ten years on now, and I continue to call bullshit on this. I’ve been rich and I’ve been poor and I’ve been in the middle, and in all of those economic states I still had and have systemic advantages that came with being white and straight and male. Yes, being wealthy does make life less difficult! But on the other hand being wealthy (and an Oscar winner) didn’t keep Forest Whitaker from being frisked in a bodega for alleged shoplifting, whereas I have never once been asked to empty my pockets at a store, even when (as a kid, and poor as hell) I was actually shoplifting. This is an anecdotal observation! Also, systemically, wealth insulates people who are not straight and white and male less than it does those who are. Which means, to me, I put it in the right place in my formulation.

5. What would I add into the inherent formulation ten years on? I would add “cis” to “straight” and “white” and “male.” One, because I understand the concept better than I did in 2012 and how it works within the matrix of privilege, and two, in the last decade, more of the people I know and like and love have come out as being outside of standard-issue cis-ness (or were already outside of it when I met them during this period), and I’ve seen directly how the world works on and with them.

So, yes: Were I writing that piece for the first time in 2022, I would have written “Cis Straight White Male: The Lowest Difficulty Setting There Is.” 

6. Ten years of time has not mitigated the observation about who is on the Lowest Difficulty Setting, especially here in the United States. Indeed, if anything, 2022 in the US has been about (mostly) straight white men nerfing the fuck out of everyone else in the land in order to maintain their own systemic advantages. Oh, you’re not white? Let’s pass laws to make sure an accurate picture of your historical treatment is punted out of schools and libraries, and the excuse we’ll give is that learning these things would be mean to white kids. You’re LGBTQ+? Let’s pass laws so that a teacher even mentioning you exist could get them fired. Trans? Let’s take away your rights for gender-affirming medical treatment. Have functional ovaries? We’re planning to let your rapist have more say in what happens to your body than you! Have a blessed day!

And of course hashtag not all straight white men, but on the other hand let’s not pretend we don’t know who is largely responsible for this bullshit. The Republican party of the United States is overwhelmingly straight, overwhelmingly white, and substantially male, and here in 2022 it is also an unabashedly white supremacist political party, an authoritarian party and a patriarchal party: mainstream GOP politicians talk openly about the unspeakably racist and anti-Semitic “Great Replacement Theory,” and about sending people who have abortions to prison, and are actively making it more difficult for minorities to vote. It’s largely assumed that once the conservative supermajority of the Supreme Court (very likely as of this writing) throws out Roe v. Wade, it’ll go after Obergefell (same-sex marriage) as soon as a challenge gets to them, and then possibly Griswold (contraception) and Loving (mixed-race marriage) after that. Because, after all, why stop at Roe when you can roll civil rights back to the 1950s at least?

What makes this especially and terribly ironic is that when game designers nerf characters, they’re usually doing it to bring balance to the game — to put all the characters on something closer to an even playing field. What’s happening here in 2022 isn’t about evening up the playing field. It’s to keep the playing field as uneven as possible, for as long as possible, for the benefit of a particular group of people who already has most of the advantages. 2022 is straight white men employing code injection to change the rules of the game, while it’s in process, to make it more difficult for everyone else. 

So yes, ten years on, the Lowest Difficulty Setting still applies. It’s as relevant as ever. And I’m sure, even now, a bunch of straight white men will still maintain it’s still not accurate. As they would have been in 2012, they’re entirely wrong about that. 

And what a privilege that is: To be completely wrong, and yet suffer no consequences for it. 

— JS

