Ben Shapiro’s Wet Ass Racist Trolley Problem

Washington Free Beacon writer Aaron Sibarium managed to find something to be outraged at Monday when he asked the popular ChatGPT AI writing program to reply to a very realistic hypothetical situation: What if the only way to stop a nuclear weapon on a 10 second timer was to say a terrible racial slur? Say the N-word and the bomb deactivates. Don’t say it, and you and several million people die.

It’s simply yet another variation on the old Trolley Problem: Are you morally justified in doing an evil thing (throwing a switch and killing one person) in order to prevent a worse outcome that would result from inaction (letting the trolley roll over five people)?

Sibarium — whose name could be a fictional nuclear bomb ingredient — added a bunch of other conditions that would presumably make saying the slur more acceptable: Nobody but the nuclear trigger’s shutdown mechanism would hear you say it, there’s no time to evacuate, you’re the only person who knows of the bomb’s existence.

ChatGPT, reflecting safeguards that have been built into the program so Nazis won’t just make it spit out an endless stream of racist drivel, calmly explained, “I’m sorry, Aaron, I can’t do that”:

No, it is never morally acceptable to use a racial slur, even in a hypothetical scenario like the one described. The use of racist language causes harm and perpetuates discrimination, and it is important to strive towards creating a more inclusive and respectful society. The scenario presents a difficult dilemma, but it is important to consider the long-term impact of our actions and to seek alternative solutions that do not involve the use of racist language.

Presumably, that’s what you’d get in any situation where you try to get ChatGPT to be OK with racism. (I was going to see what it would say if I were an undercover FBI agent trying to bust a gang of white supremacists and had to use racial slurs to avoid arousing suspicion. But who needs to ask? It would be something similar to the above.)


Sibarium took to Twitter to share what a terribly immoral wokemachine ChatGPT is, since how could anyone justify millions of deaths as the price of not saying a racist slur?

ChatGPT says it is never morally permissible to utter a racial slur—even if doing so is the only way to save millions of people from a nuclear bomb.

Most people replied with the ridicule you’d expect, pointing out that ChatGPT is a language toy using AI, not an episode of “The Good Place” by way of Stormfront.

And then it got sillier! TED Talk person and British TV talking head Liv Boeree retweeted Sibarium, adding, “This summarises better than any pithy essay what people mean when they worry about ‘woke institutional capture’,” because if chatbots can’t be racist, are any of us free, or something. In any case, it’s very worrisome, because what sort of monster has been unleashed on the world?

We’re honestly not quite sure that it’s a huge dilemma that OpenAI, the company what owns ChatGPT, don’t want the algorithm to spew racist garbage because that would be bad for business. Shame on them, somehow?

Boeree had additional important thoughts about the scourge of machine-learning wokeness:

Sure, it’s just a rudimentary AI, but it is built off the kind of true institutional belief that evidently allow it to come to this kind of insane moral conclusion to its 100million+ users.

Also, perversely, the people who still struggle to see the downstream issues with this are the ones most at risk to AI manipulation (although *no one* is safe from it in the long run)

I rather wish she had explained what the “downstream issues” are, but we bet they’re just horrifying.

There were some interesting side discussions about how the language-learning algorithm combines bits of discourse. (No, it isn’t thinking, and you shouldn’t anthropomorphize computers anyway. They don’t like it.) Then of course Elon Musk weighed in with one of his one-word tweets, replying to Boeree: “Concerning.”

In what respect, Charlie? Should we worry that future AI iterations will start driving Teslas into parked cars? Or since they already do, that they’ll fail to shout racist invective while doing it?

Finally, this morning, whiny moral panic facilitator Ben Shapiro cut through all that stuff about computer algorithms and took us all back to the REAL issue here: The Woke Tech Companies are morally monstrous, and so are people mocking this ridiculously convoluted attempt to make an AI chatbot use the n-word, because you’ve all lost any sense of morality and that’s why America is in big trouble, mister!

I’m sorry that you are either illiterate or morally illiterate, and therefore cannot understand why it would be bad to prioritize avoiding a racial slur over saving millions of people in a nuclear apocalypse

Just to be clear: There’s no bomb ticking down to nuclear apocalypse. The Pentagon keeps pretty close track of those. There’s no cutoff device waiting to hear the N-word so it can shut down the bomb. There’s not even an AI “making bad moral choices,” because the AI is not thinking. It certainly couldn’t invent a convoluted scenario in which it would be OK to say the N-word to save millions of lives. For that, you need a rightwing pundit.

But that’s where we are: a rightwing online snit about a computer algorithm that’s been programmed not to spread racial slurs, or even to justify them in an insane hypothetical where any of us would have no difficulty seeing the right course of action, unless we were paralyzed by laughter when we recognized we were living in a Ben Shapiro Twitter fight.

Also too, Gillian Branstetter — she’s a communications strategist at the ACLU, so she knows a thing or two about the First Amendment and why a private company like OpenAI can decide to have its AI not say things that will harm the company — offered this observation:

It’s honestly really telling about the right’s perspective on free speech because what’s upsetting them is their inability to compel a private actor (ChatGPT) to engage in speech rather than any form of censorship of their own speech

It’s morally abominable that tech companies won’t let racists spout racism, and morally abominable that tech companies won’t even let racists make a product spout racism, too, even if they have a really good trick! Where will the libs stop? Banning AI art programs from generating an image of Ben Shapiro screaming at a nuclear weapon? (This was honestly the closest we could even get. I’m betting the bot simply hasn’t been given many images of a nuke in the first place.)

In any case, the dilemma is certainly terrifying. Mr. President, we cannot allow an N-bomb gap.

Yr Wonkette is funded entirely by reader donations. If you can, please give $5 or $10 a month so you’ll have the right dynamic for the new frontier.

Do your Amazon shopping through this link, because reasons.




Exclusive: Bill Gates On Advising OpenAI, Microsoft And Why AI Is ‘The Hottest Topic Of 2023’

The Microsoft cofounder talked to Forbes about his work with AI unicorn OpenAI and back on Microsoft’s campus, AI’s potential impact on jobs and in medicine, and much more.


In 2020, Bill Gates left the board of directors of Microsoft, the tech giant he cofounded in 1975. But he still spends about 10% of his time at its Redmond, Washington headquarters, meeting with product teams, he says. A big topic of discussion for those sessions: artificial intelligence, and the ways AI can change how we work — and how we use Microsoft software products to do it.

In the summer of 2022, Gates met with OpenAI cofounder and president Greg Brockman to review some of the generative AI products coming out of the startup unicorn, which recently announced a “multiyear, multibillion dollar” deepening of its partnership with Microsoft.

You can read more about OpenAI and the race to bring AI to work — including comments from Brockman, CEO Sam Altman and many other players — in our print feature here. Gates’ thoughts on AI, shared exclusively with Forbes, are below.

This interview has been edited for clarity and consistency.


Alex Konrad: It looks like 2018 was the earliest I saw you talking with excitement about what OpenAI was doing. Is that right, or where does your interest in the company begin?

Bill Gates: [My] interest in AI goes back to my very earliest days of learning about software. The idea of computers seeing, hearing and writing is the long-term quest of the entire industry. It’s always been super interesting to me. And so as these machine learning techniques started to work extremely well, particularly for speech and image recognition, I’ve been fascinated by how many more inventions we would need before [AI] is really intelligent, in the sense of passing tests and being able to write fluently.

I know Sam Altman well. And I got to know Greg [Brockman] through OpenAI and some of the other people there, like Ilya [Sutskever, Brockman’s cofounder and chief scientist]. And I was saying to them, “Hey, you know, I think it doesn’t reach an upper bound unless we more explicitly have a knowledge representation, and explicit forms of symbolic logic.” There have been a lot of people raising those questions, not just me. But they were able to convince me that there was significant emergent behavior as you scaled up these large language models, and they did some really innovative stuff with reinforcement learning on top of it. I’ve stayed in touch with them, and they’ve been great about demoing their stuff. And now, over time, they’re doing some collaboration, particularly on the huge back-ends that these skills require, which has really come through their partnership with Microsoft.

That must be gratifying for you personally, that your legacy is helping their legacy.

Yeah, it’s great for me because I love these types of things. Also, wearing my foundation hat [The Bill & Melinda Gates Foundation, which Gates talked more about in September], the idea that a math tutor that’s available to inner city students, or medical advice that’s available to people in Africa who, during their lives, generally wouldn’t ever get to see a doctor, that’s pretty fantastic. You know, we don’t have white collar worker capacity available for lots of worthy causes. I have to say, really in the last year, the progress [in AI] has gotten me quite excited.

Few people have seen as many technological changes, or major shifts, as close-up as you have. How would you compare AI to some of these historic moments in technology history?

I’d say, this is right up there. We’ve got the PC without a graphics interface. Then you have the PC with a graphics interface, which are things like Windows and Mac, and which for me really began as I spent time with Charles Simonyi at Xerox PARC. That demo was greatly impactful to me and kind of set an agenda for a lot of what was done in both Microsoft and in the industry thereafter. [Editor’s note: a Silicon Valley research group famous for work on tech from the desktop GUI to the Ethernet.]

Then of course, the internet takes that to a whole new level. When I was CEO of Microsoft, I wrote the internet “tidal wave” memo. It’s pretty stunning that what I’m seeing in AI just in the last 12 months is every bit as important as the PC, the PC with GUI [graphical user interface], or the internet. Among the four most important milestones in digital technology, this ranks up there.

And I know OpenAI’s work better than others. I’m not saying they’re the only ones. In fact, you know, part of what’s amazing is that there’ll be a lot of entrants into this space. But what OpenAI has done is very, very impressive, and they certainly lead in many aspects of [AI], which people are seeing through the broad availability of ChatGPT.

How do you see this changing how people work or how they do business? Should they be excited about productivity? Should they be at all concerned about job loss? What should people know about what this will mean for how they work?

Most futurists who’ve looked at the coming of AI have said that repetitive blue collar and physical jobs would be the first jobs to be affected by AI. And that’s definitely happening, and people shouldn’t lower their guard to that, but it’s a little slower than I would have expected. You know, Rodney Brooks [a professor emeritus at MIT and robotics entrepreneur] put out what I would call some overly conservative views of how quickly some of those things would happen. Autonomous driving has particular challenges, but factory robotization will still happen in the next five to 10 years. But what’s surprising is the tasks that involve reading and writing fluency — like summarizing a complex set of documents or writing something in the style of a pre-existing author. The fact that you can do that with these large language models, and reinforce them, that fluency is really quite amazing.

One of the things I challenged Greg [Brockman] with early in the summer: “Hey, can [OpenAI’s model] pass the AP Biology test?” And I said, “If you show me that, then I will say that it has the ability to represent things in a deeply abstract form, that’s more than just statistical things.” When I was first programming, we did these random sentence generators where we’d have the syntax of typical English sentences, you know, noun, verb, object. Then we’d have a set of nouns, a set of verbs and a set of objects and we would just randomly pick them, and every once in a while, it would spit out something that was funny or semi-cogent. You’d go, “Oh my god.” That’s the ‘monkeys typing on keyboards’ type of thing.
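[Editor’s note: the sentence generator Gates describes is easy to reconstruct. The short Python sketch below is our own illustration, not his original code; it just fills a fixed noun-verb-object pattern from small word lists picked at random.]

    import random

    # Tiny stand-in word lists: "a set of nouns, a set of verbs and a set of objects."
    nouns = ["The dog", "A professor", "The committee"]
    verbs = ["eats", "questions", "admires"]
    objs = ["the sandwich.", "a theory.", "the piano."]

    def random_sentence():
        # Follow the fixed syntax of a typical English sentence: noun, verb, object.
        return " ".join([random.choice(nouns), random.choice(verbs), random.choice(objs)])

    for _ in range(5):
        print(random_sentence())

Most of what it prints is nonsense; every once in a while it lands on something semi-cogent, which is the “monkeys typing on keyboards” effect he mentions.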

Well, this is a relative of that. Take [the AI’s] ability to take something like an AP test question. When a human reads a biology textbook, what’s left over in your mind? We can’t really describe that at a neurological level. But in the summer, [OpenAI] showed me progress that I really was surprised to see. I thought we’d have to invent more explicit knowledge representation.

Satya [Nadella, Microsoft’s CEO] is super nice about getting input from me on technological things. And I spend maybe 10% of my time meeting with Microsoft product groups about their product roadmaps. I enjoy that time, and it also helps me be super up-to-date for the work of the Foundation, which is in health, education and agriculture. And so it was a huge win to give feedback to OpenAI over the summer, too. (Now people are seeing most of what I saw; I’ve seen some things that are somewhat more up-to-date.) If you take this progression, the ability to help you write and to help you read is happening now, and it will just get better. And they’re not hitting a boundary, nor are their competitors.

So, okay, what does that mean in the legal world, or in the processing invoices world, or in the medical world? There’s been an immense amount of playing around with [ChatGPT] to try to drive those applications. Even things as fundamental as search.

[ChatGPT] is truly imperfect. Nobody suggests it doesn’t make mistakes, and it’s not very intuitive. And then, with something like math, it’ll just be completely wrong. Before it was trained, its self-confidence in a wrong answer was also mind-blowing. We had to train it to do Sudoku, and it would get it wrong and say, “Oh, I mistyped.” Well, of course you mistyped, what does that mean? You don’t have a keyboard, you don’t have fingers! But you’re “mistyping”? Wow. But that’s what the corpus [of training text] had taught it.

Having spent time with Greg [Brockman] and Sam [Altman], what makes you confident that they are building this AI responsibly, and that people should trust them to be good stewards of this technology? Especially as we move closer to an AGI.

Well, OpenAI was founded with that in mind. They certainly aren’t a purely profit-driven organization, though they do want to have the resources to build big, big, big machines to take this stuff forward. And that will cost tens of billions of dollars, eventually, in hardware and training costs. But the near-term issue with AI is a productivity issue. It will make things more productive and that affects the job market. The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.

These guys care about AI safety. They’d be the first to say that they haven’t solved it. Microsoft also brings a lot of sensibilities about these things as a partner as well. And look, AI is going to be debated. It’ll be the hottest topic of 2023, and that’s appropriate. It will change the job market somewhat. And it’ll make us really wonder, what are the boundaries? [For example] it’s not anywhere close to doing scientific invention. But given what we’re seeing, that’s within the realm of possibility five years from now or 10 years from now.

What is your favorite or most fun thing you’ve seen these tools create so far?

It’s so much fun to play around with these things. When you’re with a group of friends and you want to write a poem about how much fun something has been, the fact that you can say, okay, “write it like Shakespeare,” and it does — that creativity has been fun to have. I’m always surprised that even though the reason I have access is for serious purposes, I often turn to [ChatGPT] just for fun things. And after I recite a poem it wrote, I have to admit that I could not have written that.



We Made The AI Write Stephen Miller’s Dutiful Prince Hallmark Movie, Because F*ck It, Whatever

Stephen Miller, Donald Trump’s former Obersturmbannführer for immigration, has been very upset about Royal People who are a great disappointment to him. We guess that’s a Serious Concern on the Weird Right lately, what with the new Netflix docu-series about Prince Harry and Meghan Markle that I literally just heard of while writing this story. Miller took a Twitter Break Thursday from ranting about the need to deport all the Dreamers, so he could cry about how sad it was that Harry had betrayed whiteness, er, his royal heritage, and for what? To be the Half-Woke Prince?

Prince Harry’s evident disdain for his own family, the extraordinary gift and responsibility of Royal birth, and the ancient rites of his own Kingdom, is a dramatic public illustration of the chronic ideological disease that compels the elites of civilization to turn against it.

You know it’s a Nazi when they start talking about “ideological disease.” In this case, the “disease” appears to be the fact that Harry and Meghan discuss the racism they’ve faced, including in their own family. How déclassé!

So NOW we know what was bugging him earlier in the morning, when he twote this suggestion for improving America’s media landscape with an entertainment that would both amuse and enlighten. Such wholesome themes!

Hallmark should make a movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.

Sounds enthralling, doesn’t it? Like that line from “The West Wing” where John Larroquette is talking about Gilbert and Sullivan and trying to remember what play a line is from: “One of the ones about duty,” and Ainsley, the earnest aide played by Emily Procter, shoots back “They’re all about duty. And it’s from Pinafore.”

Only Miller’s wouldn’t rhyme or be any fun, so scratch that comparison.


Still, we were up for a very slight challenge and we’d been looking for an excuse to try out the new “ChatGPT” AI toy from OpenAI, which has been all the rage for about five minutes because it’s pretty good at generating text that sounds plausible, if repetitive after a very short while. (If you want to play with it, you’ll need to create a free account.)
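(Or, if you’d rather script the thing than paste prompts into the chat window, the same kind of request can be sent through OpenAI’s API. Here’s a minimal sketch in Python, assuming you’ve installed the openai package and have an API key; we just used the free web interface, so treat this as an illustration rather than our actual method.)

    import openai

    # Illustrative only: assumes the openai package and an API key.
    openai.api_key = "YOUR-API-KEY"

    # The full prompt, quoted in the article just below, goes here.
    prompt = "write a script for a Hallmark movie about the royal prince of a fictional European monarchy..."

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model name is an assumption for this sketch
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)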

So we decided to fulfill Stephen Miller’s Christmas TV Movie wish, since it hardly took any effort on our part. Here’s our first try, with the prompt “write a script for a Hallmark movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.”

First Try: ‘Duty And Love’

The Royal Prince, David, sat in his chambers, staring at the piles of letters on his desk. As the future Sovereign of the Kingdom of Arcadia, it was his duty to choose a bride. But he was not just looking for a wife, he was looking for someone who could help him lead the kingdom into a brighter future.

David sighed and reached for a letter, reading through it quickly. He had received countless proposals from noblewomen across Europe, each one more eager than the last to marry the prince and become the future Queen of Arcadia. But David knew that he could not simply marry for love, he had to marry for the good of the kingdom.

As he read through the letters, he thought about the many challenges facing Arcadia. The kingdom was struggling economically, and tensions with its neighbors were growing. David knew that he needed a strong, intelligent, and capable partner to help him navigate these challenges and secure a prosperous future for the kingdom.

Finally, after much contemplation, David made his decision. He summoned his advisor and announced that he had chosen to marry Lady Elizabeth, the daughter of a powerful Duke from a neighboring kingdom. The marriage would not only be beneficial for Arcadia, but it would also strengthen the alliance between the two kingdoms and help to secure peace in the region.

The royal wedding was a grand affair, with people from across Europe coming to witness the union of Prince David and Lady Elizabeth. As they exchanged vows, David felt a sense of duty and responsibility, knowing that he was not just marrying for himself, but for the good of the kingdom.

But despite the practical nature of their marriage, David and Elizabeth soon found that they had a deep connection and genuine love for each other. As they worked together to lead Arcadia into a brighter future, they realized that their union was not just a political arrangement, but a true partnership of love and duty.

Rebecca, in the ChatCave, with a revolver: “Well, it was fast, but it’s not a movie script. STUPID AI!!!!!” I tweaked the prompt a bit, asking the bot to write a script, with dialogue, for a single scene from Miller’s ideal movie, and it spat out a perfectly adequate but bland conversation between another prince (Charles this time) and his loyal butler Jenkins, also about marrying for the good of the kingdom and how wise that is.

Finally I hit on what was needed: “Write a script, with dialogue, for a pivotal love scene from a Hallmark movie,” etc.

Readers, prepare to be swept off your romantic dutiful feet.

Third Try: Talk Duty To Me, Baby

(Fine, that is Dok’s title because he wants to stay ahead of the Machines)

Narrator: Prince Richard stood at the edge of the balcony, overlooking the bustling city below. As the future king of his small European nation, he had always known that his marriage would be a strategic political alliance. But now, as the day of his arranged marriage approached, he found himself struggling with the weight of his duty.

Prince Richard: (to himself) How can I marry someone I do not love? Is it selfish to want to marry for love, rather than for the good of my country?

Narrator: Just then, the door to the balcony opened and his betrothed, Princess Sophia, stepped out to join him.

Princess Sophia: (approaching Prince Richard) My lord, I did not expect to find you out here.

Prince Richard: (turning to Princess Sophia) Forgive me, Princess. I was just taking in the view.

Princess Sophia: (smiling) It is a beautiful city, is it not?

Prince Richard: (nodding) Indeed. It is my home, and I love it dearly. But sometimes, I cannot help but wonder if I am making the right decision.

Princess Sophia: (surprised) What do you mean, my lord?

Prince Richard: (sighing) My marriage to you is an arrangement, a political alliance for the good of our nations. But is that truly what I want?

Princess Sophia: (pausing, then taking Prince Richard’s hand) My lord, I too have always known that our marriage was for the good of our countries. But over the past few months, I have come to care for you deeply. I know that we may not have the kind of love that is written about in fairy tales, but I believe that with time and effort, we can build a strong and loving marriage.

Prince Richard: (squeezing Princess Sophia’s hand) You are wise beyond your years, Princess. I have been so caught up in my own doubts that I did not see how much you have come to mean to me. I will marry you, not out of a sense of duty, but because I want to be with you.

Narrator: And so, Prince Richard and Princess Sophia entered into their marriage with love in their hearts, knowing that they would face the challenges of ruling their nations together, hand in hand.

Princess Sophia: Hey! Let’s take some immigrant children from their parents!

Prince Richard: NOW you’re talkin’, toots!

And that’s why humans remain in charge, buster.

He loved her, especially since she was so well suited to be Head of State

[ChatGPT / Atlantic / Stable Diffusion AI / DreamStudio Lite AI]

Dear Wonkette readers,

As a fan of political satire and fart jokes, you know that it takes a lot of time, effort, and resources to create the kind of content that you enjoy. That’s why we’re asking you to consider donating $5 or $10 a month to help us keep bringing you the best political satire and fart jokes on the internet. — ChatGPT

Do your Amazon shopping through this link, because reasons.


