Exclusive: Bill Gates On Advising OpenAI, Microsoft And Why AI Is ‘The Hottest Topic Of 2023’

The Microsoft cofounder talked to Forbes about his work with AI unicorn OpenAI and back on Microsoft’s campus, AI’s potential impact on jobs and in medicine, and much more.


In 2020, Bill Gates left the board of directors of Microsoft, the tech giant he cofounded in 1975. But he still spends about 10% of his time at its Redmond, Washington headquarters, meeting with product teams, he says. A big topic of discussion for those sessions: artificial intelligence, and the ways AI can change how we work — and how we use Microsoft software products to do it.

In the summer of 2022, Gates met with OpenAI cofounder and president Greg Brockman to review some of the generative AI products coming out of the startup unicorn, which recently announced a “multiyear, multibillion dollar” deepening of its partnership with Microsoft.

You can read more about OpenAI and the race to bring AI to work — including comments from Brockman, CEO Sam Altman and many other players — in our print feature here. Gates’ thoughts on AI, shared exclusively with Forbes, are below.

This interview has been edited for clarity and consistency.


Alex Konrad: It looks like 2018 was the earliest I saw you talking with excitement about what OpenAI was doing. Is that right, or where does your interest in the company begin?

Bill Gates: [My] interest in AI goes back to my very earliest days of learning about software. The idea of computers seeing, hearing and writing is the long-term quest of the entire industry. It’s always been super interesting to me. And so as these machine learning techniques started to work extremely well, particularly for speech and image recognition, I’ve been fascinated by how many more inventions we would need before [AI] is really intelligent, in the sense of passing tests and being able to write fluently.

I know Sam Altman well. And I got to know Greg [Brockman] through OpenAI and some of the other people there, like Ilya [Sutskever, Brockman’s cofounder and chief scientist]. And I was saying to them, “Hey, you know, I think it doesn’t reach an upper bound unless we more explicitly have a knowledge representation, and explicit forms of symbolic logic.” There have been a lot of people raising those questions, not just me. But they were able to convince me that there was significant emergent behavior as you scaled up these large language models, and they did some really innovative stuff with reinforcement learning on top of it. I’ve stayed in touch with them, and they’ve been great about demoing their stuff. And now, over time, they’re doing some collaboration, particularly on the huge back-ends that these skills require; that’s really come through their partnership with Microsoft.

That must be gratifying for you personally, that your legacy is helping their legacy.

Yeah, it’s great for me because I love these types of things. Also, wearing my foundation hat [The Bill & Melinda Gates Foundation, which Gates talked more about in September], the idea that a math tutor that’s available to inner city students, or medical advice that’s available to people in Africa who during their life, generally wouldn’t ever get to see a doctor, that’s pretty fantastic. You know, we don’t have white collar worker capacity available for lots of worthy causes. I have to say, really in the last year, the progress [in AI] has gotten me quite excited.

Few people have seen as many technological changes, or major shifts, as close-up as you have. How would you compare AI to some of these historic moments in technology history?

I’d say this is right up there. We’ve got the PC without a graphics interface. Then you have the PC with a graphics interface, which are things like Windows and Mac, and which for me really began as I spent time with Charles Simonyi at Xerox PARC. [Editor’s note: a Silicon Valley research group famous for work on tech from the desktop to GPUs and Ethernet.] That demo made a great impact on me and kind of set the agenda for a lot of what was done both at Microsoft and in the industry thereafter.

Then of course, the internet takes that to a whole new level. When I was CEO of Microsoft, I wrote the internet “tidal wave” memo. It’s pretty stunning that what I’m seeing in AI just in the last 12 months is every bit as important as the PC, the PC with GUI [graphical user interface], or the internet. Among those four most important milestones in digital technology, this ranks up there.

And I know OpenAI’s work better than others. I’m not saying they’re the only ones. In fact, you know, part of what’s amazing is that there’ll be a lot of entrants into this space. But what OpenAI has done is very, very impressive, and they certainly lead in many aspects of [AI], which people are seeing through the broad availability of ChatGPT.

How do you see this changing how people work or how they do business? Should they be excited about productivity? Should they be at all concerned about job loss? What should people know about what this will mean for how they work?

Most futurists who’ve looked at the coming of AI have said that repetitive blue collar and physical jobs would be the first jobs to be affected by AI. And that’s definitely happening, and people shouldn’t lower their guard to that, but it’s a little slower than I would have expected. You know, Rodney Brooks [a professor emeritus at MIT and robotics entrepreneur] put out what I would call some overly conservative views of how quickly some of those things would happen. Autonomous driving has particular challenges, but factory robotization will still happen in the next five to 10 years. But what’s surprising is that tasks that involve reading and writing fluency — like summarizing a complex set of documents or writing something in the style of a pre-existing author — the fact that you can do that with these large language models, and reinforce them, that fluency is really quite amazing.

One of the things I challenged Greg [Brockman] with early in the summer: “Hey, can [OpenAI’s model] pass the AP Biology test?” And I said, “If you show me that, then I will say that it has the ability to represent things in a deeply abstract form, that’s more than just statistical things.” When I was first programming, we did these random sentence generators where we’d have the syntax of typical English sentences, you know, noun, verb, object. Then we’d have a set of nouns, a set of verbs and a set of objects, and we would just randomly pick them, and every once in a while, it would spit out something that was funny or semi-cogent. You’d go, “Oh my god.” That’s the ‘monkeys typing on keyboards’ type of thing.
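The kind of generator Gates describes can be sketched in a few lines of Python; the word lists here are invented for illustration, not taken from his original program:

```python
import random

# Hypothetical word lists for each slot of the template
nouns = ["the dog", "a wizard", "the committee"]
verbs = ["devours", "questions", "paints"]
objects = ["the moon", "a sandwich", "its reflection"]

def random_sentence(rng=random):
    """Fill a noun-verb-object template with randomly chosen words."""
    return f"{rng.choice(nouns)} {rng.choice(verbs)} {rng.choice(objects)}."

for _ in range(3):
    print(random_sentence())
```

Every output is grammatical because the template enforces the syntax, but only the occasional combination reads as cogent, which is exactly the “monkeys typing on keyboards” effect Gates recalls.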

Well, this is a relative of that. Take [the AI’s] ability to take something like an AP test question. When a human reads a biology textbook, what’s left over in your mind? We can’t really describe that at a neurological level. But in the summer, [OpenAI] showed me progress that I really was surprised to see. I thought we’d have to invent more explicit knowledge representation.


Satya [Nadella, Microsoft’s CEO] is super nice about getting input from me on technological things. And I spend maybe 10% of my time meeting with Microsoft product groups about their product roadmaps. I enjoy that time, and it also helps me be super up-to-date for the work of the Foundation, which is in health, education and agriculture. And so it was a huge win to give feedback to OpenAI over the summer, too. (Now people are seeing most of what I saw; I’ve seen some things that are somewhat more up-to-date.) If you take this progression, the ability to help you write and to help you read is happening now, and it will just get better. And they’re not hitting a boundary, nor are their competitors.

So, okay, what does that mean in the legal world, or in the processing invoices world, or in the medical world? There’s been an immense amount of playing around with [ChatGPT] to try to drive those applications. Even things as fundamental as search.

[ChatGPT] is truly imperfect. Nobody suggests it doesn’t make mistakes, and it’s not very intuitive. And then, with something like math, it’ll just be completely wrong. Before it was trained, its self-confidence in a wrong answer was also mind blowing. We had to train it to do Sudoku, and it would get it wrong and say, “Oh, I mistyped.” Well, of course you mistyped, what does that mean? You don’t have a keyboard, you don’t have fingers! But you’re “mistyping?” Wow. But that’s what the corpus [of training text] had taught it.

Having spent time with Greg [Brockman] and Sam [Altman], what makes you confident that they are building this AI responsibly, and that people should trust them to be good stewards of this technology? Especially as we move closer to AGI [artificial general intelligence].

Well, OpenAI was founded with that in mind. They certainly aren’t a purely profit-driven organization, though they do want to have the resources to build big, big, big machines to take this stuff forward. And that will cost tens of billions of dollars, eventually, in hardware and training costs. But the near-term issue with AI is a productivity issue. It will make things more productive, and that affects the job market. The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.

These guys care about AI safety. They’d be the first to say that they haven’t solved it. Microsoft also brings a lot of sensibilities about these things as a partner as well. And look, AI is going to be debated. It’ll be the hottest topic of 2023, and that’s appropriate. It will change the job market somewhat. And it’ll make us really wonder, what are the boundaries? [For example] it’s not anywhere close to doing scientific invention. But given what we’re seeing, that’s within the realm of possibility five years from now or 10 years from now.

What is your favorite or most fun thing you’ve seen these tools create so far?

It’s so much fun to play around with these things, like when you’re with a group of friends and you want to write a poem about how much fun something has been. The fact that you can say, “Okay, write it like Shakespeare,” and it does — that creativity has been fun to have. I’m always surprised that even though the reason I have access is for serious purposes, I often turn to [ChatGPT] just for fun things. And after I recite a poem it wrote, I have to admit that I could not have written that.


Paging Dr. AI? What ChatGPT and artificial intelligence could mean for the future of medicine | CNN




Without cracking a single textbook, without spending a day in medical school, the co-author of a preprint study correctly answered enough practice questions to pass the real US Medical Licensing Examination.

But the test-taker wasn’t a member of Mensa or a medical savant; it was the artificial intelligence ChatGPT.

The tool, which was created to answer user questions in a conversational manner, has generated so much buzz that doctors and scientists are trying to determine what its limitations are – and what it could do for health and medicine.

ChatGPT, or Chat Generative Pre-trained Transformer, is a natural language-processing tool driven by artificial intelligence.

The technology, created by San Francisco-based OpenAI and launched in November, is not like a well-spoken search engine. It isn’t even connected to the internet. Rather, a human programmer feeds it a vast amount of online data that’s kept on a server.

It can answer questions even if it has never seen a particular sequence of words before, because ChatGPT’s algorithm is trained to predict the next word in a sentence from the context of the words that come before it. It draws on the knowledge stored on its server to generate its response.
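In miniature, next-word prediction can be illustrated with simple bigram counts. This toy sketch bears no resemblance to ChatGPT’s neural network, but it shows the basic idea of choosing a likely next word from what has come before:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    model = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" twice, "mat" once)
```

A real language model conditions on far more context, using learned representations rather than raw counts, but the prediction objective is the same in spirit.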

ChatGPT can also answer follow-up questions, admit mistakes and reject inappropriate questions, the company says. It’s free to try while its makers are testing it.

Artificial intelligence programs have been around for a while, but this one generated so much interest that medical practices, professional associations and medical journals have created task forces to see how it might be useful and to understand what limitations and ethical concerns it may bring.

Dr. Victor Tseng’s practice, Ansible Health, has set up a task force on the issue. The pulmonologist is a medical director of the California-based group and a co-author of the study in which ChatGPT demonstrated that it could probably pass the medical licensing exam.

Tseng said his colleagues started playing around with ChatGPT last year and were intrigued when it accurately diagnosed pretend patients in hypothetical scenarios.

“We were just so impressed and truly flabbergasted by the eloquence and sort of fluidity of its response that we decided that we should actually bring this into our formal evaluation process and start testing it against the benchmark for medical knowledge,” he said.

That benchmark was the three-part test that US med school graduates have to pass to be licensed to practice medicine. It’s generally considered one of the toughest of any profession because it doesn’t ask straightforward questions with answers that can easily be found on the internet.

The exam tests basic science and medical knowledge and case management, but it also assesses clinical reasoning, ethics, critical thinking and problem-solving skills.

The study team used 305 publicly available test questions from the June 2022 sample exam. None of the answers or related context was indexed on Google before January 1, 2022, so they would not be a part of the information on which ChatGPT trained. The study authors removed sample questions that had visuals and graphs, and they started a new chat session for each question they asked.

Students often spend hundreds of hours preparing, and medical schools typically give them time away from class just for that purpose. ChatGPT had to do none of that prep work.

The AI performed at or near passing for all the parts of the exam without any specialized training, showing “a high level of concordance and insight in its explanations,” the study says.

Tseng was impressed.

“There’s a lot of red herrings,” he said. “Googling or trying to even intuitively figure out with an open-book approach is very difficult. It might take hours to answer one question that way. But ChatGPT was able to give an accurate answer about 60% of the time with cogent explanations within five seconds.”

Dr. Alex Mechaber, vice president of the US Medical Licensing Examination at the National Board of Medical Examiners, said ChatGPT’s passing results didn’t surprise him.

“The input material is really largely representative of medical knowledge and the type of multiple-choice questions which AI is most likely to be successful with,” he said.

Mechaber said the board is also testing ChatGPT with the exam. The members are especially interested in the answers the technology got wrong, and they want to understand why.

“I think this technology is really exciting,” he said. “We were also pretty aware and vigilant about the risks that large language models bring in terms of the potential for misinformation, and also potentially having harmful stereotypes and bias.”

He believes that there is potential with the technology.

“I think it’s going to get better and better, and we are excited and want to figure out how do we embrace it and use it in the right ways,” he said.

Already, ChatGPT has entered the discussion around research and publishing.

The results of the medical licensing exam study were even written up with the help of ChatGPT. The technology was originally listed as a co-author of the draft, but Tseng says that when the study is published, ChatGPT will not be listed as an author because it would be a distraction.

Last month, the journal Nature created guidelines that said no such program could be credited as an author because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”

But an article published Thursday in the journal Radiology was written almost entirely by ChatGPT. It was asked whether it could replace a human medical writer, and the program listed many of its possible uses, including writing study reports, creating documents that patients will read and translating medical information into a variety of languages.

Still, it does have some limitations.

“I think it definitely is going to help, but everything in AI needs guardrails,” said Dr. Linda Moy, the editor of Radiology and a professor of radiology at the NYU Grossman School of Medicine.

She said ChatGPT’s article was pretty accurate, but it made up some references.

One of Moy’s other concerns is that the AI could fabricate data. It’s only as good as the information it’s fed, and with so much inaccurate information available online about things like Covid-19 vaccines, it could use that to generate inaccurate results.

Moy’s colleague Artie Shen, a graduating Ph.D. candidate at NYU’s Center for Data Science, is exploring ChatGPT’s potential as a kind of translator for other AI programs for medical imaging analysis. For years, scientists have studied AI programs from startups and larger operations, like Google, that can recognize complex patterns in imaging data. The hope is that these could provide quantitative assessments that could potentially uncover diseases, possibly more effectively than the human eye.

“AI can give you a very accurate diagnosis, but they will never tell you how they reach this diagnosis,” Shen said. He believes that ChatGPT could work with the other programs to capture its rationale and observations.

“If they can talk, it has the potential to enable those systems to convey their knowledge in the same way as an experienced radiologist,” he said.

Tseng said he ultimately thinks ChatGPT can enhance medical practice in much the same way online medical information has both empowered patients and forced doctors to become better communicators, because they now have to provide insight around what patients read online.

ChatGPT won’t replace doctors. Tseng’s group will continue to test it to learn why it creates certain errors and what other ethical parameters need to be put in place before using it for real. But Tseng thinks it could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon into language that someone with a seventh-grade education could understand.

“AI is here. The doors are open,” Tseng said. “My fundamental hope is, it will actually make me and make us as physicians and providers better.”


ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions

User-friendly AI tool ChatGPT has attracted hundreds of millions of users since its launch in November and is set to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, Colombian courts and a speech by Israel’s president. Is widespread uptake inevitable – and is it ethical?

In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that was set to make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative hi-tech industry. Over the past few decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI), and big data are truly impressive.”

To the surprise of the entrepreneurs attending Cybertech Global, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first world leader publicly known to use artificial intelligence to write a speech. 

But he was not the first politician to do so. A week earlier, US Congressman Jake Auchincloss read a speech also generated by ChatGPT on the floor of the House of Representatives. It was another first, intended to draw attention in Congress to the wildly successful new AI tool “so that we have a debate now about purposeful policy for AI”, Auchincloss told CNN.


Since its launch in November 2022, ChatGPT (created by California-based company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history. 

The user-friendly AI tool utilises online data to generate instantaneous, human-like responses to user queries. Its ability to scan the internet for information and provide rapid answers makes it a potential rival to Google’s search engine, but it is also able to produce written content on any topic, in any format – from essays, speeches and poems to computer code – in seconds.

The tool is currently free and boasted around 13 million unique visitors per day in January, a report from Swiss banking giant UBS found.

Part of its mass appeal is “extremely good engineering – it scales up very well with millions of people using it”, says Mirco Musolesi, professor of computer science at University College London. “But it also has very good training in terms of quality of the data used but also the way the creators managed to deal with problematic aspects.”

In the past, similar technologies have resulted in bots fed on a diet of social media posts taking on an aggressive, offensive tone. Not so for ChatGPT, and many of its millions of users engage with the tool out of curiosity or for entertainment.

“Humans have this idea of being very special, but then you see this machine that is able to produce something very similar to us,” Musolesi says. “We knew that this was probably possible, but actually seeing it is very interesting.”

A ‘misinformation super spreader’?

Yet the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and different sectors from education, to law, to science and business are braced for disruption.    

Schools and colleges around the world have been quick to ban students from using ChatGPT to prevent cheating or plagiarism. 

>> Top French university bans students from using ChatGPT 

Science journals have also banned the bot from being listed as a co-author on papers amid fears that errors made by the tool could find their way into scientific debate.  

OpenAI has cautioned that the bot can make mistakes. However, a report from media watchdog NewsGuard said on topics including Covid-19, Ukraine and school shootings, ChatGPT delivered “eloquent, false and misleading” claims 80 percent of the time. 

“For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative,” NewsGuard said. It called the tool “the next great misinformation super spreader”. 

Even so, in Colombia a judge announced on Tuesday that he used the AI chatbot to help make a ruling in a children’s medical rights case.

Judge Juan Manuel Padilla told Blu Radio he asked ChatGPT whether an autistic minor should be exonerated from paying fees for therapies, among other questions.  

The bot answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” 

Padilla ruled in favour of the child – as the bot advised. “By asking questions to the application we do not stop being judges [and] thinking beings,” he told the radio station. “I suspect that many of my colleagues are going to join in and begin to construct their rulings ethically with the help of artificial intelligence.” 

Although he cautioned that the bot should be used as a time-saving facilitator, rather than “with the aim of replacing judges”, critics said it was neither responsible nor ethical to use a bot capable of providing misinformation as a legal tool.

An expert in artificial intelligence regulation and governance, Professor Juan David Gutierrez of Rosario University said he put the same questions to ChatGPT and got different responses. In a tweet, he called for urgent “digital literacy” training for judges.

A market leader 

Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it will be used “extensively” for both positive and negative purposes – with the risk of misinformation and misuse comes the promise of information and technology becoming more accessible to a greater number of people. 

OpenAI received a multibillion-dollar investment from Microsoft in January that will see ChatGPT integrated into a premium version of the Teams messaging app, offering services such as generating automatic meeting notes.

Microsoft has said it plans to add ChatGPT’s technology into all its products, setting the stage for the company to become a leader in the field, ahead of Google’s parent company, Alphabet. 

>> Alphabet, Amazon and Apple results: Tech earnings hit by gloom 

Making the tool free has been key to its current and future success. “It was a huge marketing campaign,” Musolesi says, “and when people use it, they improve the dataset to use for the next version because they are providing this feedback.” 

Even so, the company launched a paid version of the bot this week offering access to new features for $20 per month.

Another eagerly awaited new development is an AI classifier, a software tool to help people identify when a text has been generated by artificial intelligence.

OpenAI said in a blog post that, while the tool was launched this week, it is not yet “fully reliable”. Currently it is only able to correctly identify AI-written texts 26 percent of the time.

But the company expects it will improve with training, reducing the potential for “automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”.  


Three Wild Technologies About to Change Health Care

Feb. 3, 2023 – When I was a child, I watched syndicated episodes of the original Star Trek. I was dazzled by the space travel, sure, but also the medical technology. 

A handheld “tricorder” detected diseases, while an intramuscular injector (“hypospray”) could treat them. Sickbay “biobeds” came with real-time health monitors that looked futuristic at the time but seem primitive today.

Such visions inspired a lot of us kids to pursue science. Little did we know the real-life advances many of us would see in our lifetimes. 

Artificial intelligence helping to spot disease, robots performing surgery, even video calls between doctor and patient — all these once sounded fantastical but now happen in clinical care.

Now, in the 23rd year of the 21st century, you might not believe what we’ll be capable of next. Three especially wild examples are moving closer to clinical reality. 

Human Hibernation 

Captain America, Han Solo, and Star Trek villain Khan – all were preserved at low temperatures and then revived, waking up alive and well months, decades, or centuries later. These are fictional examples, to be sure, but the science they’re rooted in is real. 

Rare cases of accidental hypothermia prove that full recovery is possible even after the heart stops beating. The drop in body temperature slows metabolism and reduces the need for oxygen, stalling brain damage for an hour or more. (In one extreme case, a climber survived after almost 9 hours of efforts to revive him.) 

Useful for a space traveler? Maybe not. But it’s potentially huge for someone with life-threatening injuries from a car accident or a gunshot wound. 

That’s the thinking behind a breakthrough procedure that came after decades of research on pigs and dogs, now in a clinical trial. The idea: A person with massive blood loss whose heart has stopped is injected with an ice-cold fluid, cooling them from the inside, down to about 50 degrees Fahrenheit.

Doctors already induce more modest hypothermia to protect the brain and other organs after cardiac arrest and during surgery on the aortic arch (the main artery carrying blood from the heart). 

But this experimental procedure – called emergency preservation and resuscitation (EPR) – goes far beyond that, dramatically “decreasing the body’s need for oxygen and blood flow,” says Samuel Tisherman, MD, a trauma surgeon at the University of Maryland Medical Center and the trial’s lead researcher. This puts the patient in a state of suspended animation that “could buy time for surgeons to stop the bleeding and save more of these patients.”

The technique has been done on at least six patients, though none were reported to survive. The trial is expected to include 20 people by the time it wraps up in December, according to the listing on the U.S. clinical trials database. Though given the strict requirements for candidates (emergency trauma victims who are not likely to survive), one can’t exactly rely on a set schedule. 

Still, the technology is promising. Someday we may even use it to keep patients in suspended animation for months or years, experts predict, helping astronauts through decades-long spaceflights, or stalling death in sick patients awaiting a cure. 

Artificial Womb

Another sci-fi classic: growing human babies outside the womb. Think the fetus fields from The Matrix, or the frozen embryos in Alien: Covenant.

In 1923, British biologist J.B.S. Haldane coined a term for that – ectogenesis. He predicted that 70% of pregnancies would take place, from fertilization to birth, in artificial wombs by 2074. That may seem unlikely, but the timeline is on track.

Developing an embryo outside the womb is already routine in in vitro fertilization. And technology enables preterm babies to survive through much of the second half of gestation. Normal human pregnancy is 40 weeks, and the youngest preterm baby ever to survive was 21 weeks and 1 day old, just a few days younger than a smattering of others who lived. 

The biggest obstacle for babies younger than that is lung viability. Mechanical ventilation can damage the lungs and lead to a chronic (sometimes fatal) lung disease known as bronchopulmonary dysplasia. Avoiding this would mean figuring out a way to maintain fetal circulation – the intricate system that delivers oxygenated blood from the placenta to the fetus via the umbilical cord. Researchers at Children’s Hospital of Philadelphia (CHOP) have done this using a fetal lamb.

The key to their invention is a substitute placenta: an oxygenator connected to the lamb’s umbilical cord. Tubes inserted through the umbilical vein and arteries carry oxygenated blood from the “placenta” to the fetus, and deoxygenated blood back out. The lamb resides in an artificial, fluid-filled amniotic sac until its lungs and other organs are developed.

Fertility treatment could benefit, too. “An artificial womb may substitute in situations in which a gestational carrier – surrogate – is indicated,” says Paula Amato, MD, a professor of obstetrics and gynecology at Oregon Health and Science University. (Amato is not involved in the CHOP research.) For example: when the mother is missing a uterus or can’t carry a pregnancy safely.

No date is set for clinical trials yet. But according to the research, the main difference between human and lamb may come down to size. A lamb’s umbilical vessels are larger, so inserting tubes is easier. With today’s advances in miniaturizing surgical methods, that seems like a challenge scientists can overcome.

Messenger RNA Therapeutics 

Back to Star Trek. The hypospray injector’s contents could cure just about any disease, even one newly discovered on a strange planet. That’s not unlike messenger RNA (mRNA) technology, a breakthrough that enabled scientists to quickly develop some of the first COVID-19 vaccines. 

But vaccines are just the beginning of what this technology can do. 

A whole field of immunotherapy is emerging that uses mRNA to deliver instructions to produce chimeric antigen receptor (CAR)-modified immune cells. These cells are engineered to target diseased cells and tissues, like cancer cells and harmful fibroblasts (scar tissue) that promote fibrosis in, for example, the heart and lungs.

The field is bursting with rodent research, and clinical trials have started for treating some advanced-stage malignancies.

Actual clinical use may be years away, but if all goes well, these medicines could help treat or even cure the core medical problems facing humanity. We’re talking cancer, heart disease, neurodegenerative disease – transforming one therapy into another by simply changing the mRNA’s “nucleotide sequence,” the blueprint containing instructions telling it what to do, and what disease to attack. 

As this technology matures, we may start to feel as if we’re really on Star Trek, where Dr. Leonard “Bones” McCoy pulls out the same device to treat just about every disease or injury.


We Made The AI Write Stephen Miller’s Dutiful Prince Hallmark Movie, Because F*ck It, Whatever

Stephen Miller, Donald Trump’s former Obersturmbannführer for immigration, has been very upset about Royal People who are a great disappointment to him. We guess that’s a Serious Concern on the Weird Right lately, what with the new Netflix docu-series about Prince Harry and Meghan Markle that I literally just heard of while writing this story. Miller took a Twitter Break Thursday from ranting about the need to deport all the Dreamers, so he could cry about how sad it was that Harry had betrayed ~~whiteness~~ his royal heritage, and for what? To be the Half-Woke Prince?

Prince Harry’s evident disdain for his own family, the extraordinary gift and responsibility of Royal birth, and the ancient rites of his own Kingdom, is a dramatic public illustration of the chronic ideological disease that compels the elites of civilization to turn against it.

You know it’s a Nazi when they start talking about “ideological disease.” In this case, the “disease” appears to be the fact that Harry and Meghan discuss the racism they’ve faced, including in their own family. How declassé!

So NOW we know what was bugging him earlier in the morning, when he twote this suggestion for improving America’s media landscape with an entertainment that would both amuse and enlighten. Such wholesome themes!

Hallmark should make a movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.

Sounds enthralling, doesn’t it? Like that line from “The West Wing” where John Larroquette is talking about Gilbert and Sullivan and trying to remember what play a line is from: “One of the ones about duty,” and Ainsley, the earnest aide played by Emily Procter, shoots back “They’re all about duty. And it’s from Pinafore.”

Only Miller’s wouldn’t rhyme or be any fun, so scratch that comparison.


Still, we were up for a very slight challenge and we’d been looking for an excuse to try out the new “ChatGPT” AI toy from OpenAI, which has been all the rage for about five minutes because it’s pretty good at generating text that sounds plausible, if repetitive after a very short while. (If you want to play with it, you’ll need to create a free account.)
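(For readers who would rather harass the robot programmatically than through the web toy, the same prompt can be sent to OpenAI’s chat completions HTTP endpoint. This is a minimal sketch, not the exact tool we used; the endpoint URL and model name reflect OpenAI’s public API as of this writing, and it assumes you’ve put your API key in the `OPENAI_API_KEY` environment variable.)

```python
import json
import os
import urllib.request

PROMPT = (
    "Write a script for a Hallmark movie about the royal prince of a "
    "fictional European monarchy who decides to marry based on the best "
    "interests of family and nation, selflessly fulfilling his patriotic "
    "duty as future sovereign."
)

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON body for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def write_hallmark_movie(prompt: str = PROMPT) -> str:
    """POST the prompt to the chat completions endpoint and return the text."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The generated script lives in the first choice's message content.
    return body["choices"][0]["message"]["content"]
```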

So we decided to fulfill Stephen Miller’s Christmas TV Movie wish, since it hardly took any effort on our part. Here’s our first try, with the prompt “write a script for a Hallmark movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.”

First Try: ‘Duty And Love’

The Royal Prince, David, sat in his chambers, staring at the piles of letters on his desk. As the future Sovereign of the Kingdom of Arcadia, it was his duty to choose a bride. But he was not just looking for a wife, he was looking for someone who could help him lead the kingdom into a brighter future.

David sighed and reached for a letter, reading through it quickly. He had received countless proposals from noblewomen across Europe, each one more eager than the last to marry the prince and become the future Queen of Arcadia. But David knew that he could not simply marry for love, he had to marry for the good of the kingdom.

As he read through the letters, he thought about the many challenges facing Arcadia. The kingdom was struggling economically, and tensions with its neighbors were growing. David knew that he needed a strong, intelligent, and capable partner to help him navigate these challenges and secure a prosperous future for the kingdom.

Finally, after much contemplation, David made his decision. He summoned his advisor and announced that he had chosen to marry Lady Elizabeth, the daughter of a powerful Duke from a neighboring kingdom. The marriage would not only be beneficial for Arcadia, but it would also strengthen the alliance between the two kingdoms and help to secure peace in the region.

The royal wedding was a grand affair, with people from across Europe coming to witness the union of Prince David and Lady Elizabeth. As they exchanged vows, David felt a sense of duty and responsibility, knowing that he was not just marrying for himself, but for the good of the kingdom.

But despite the practical nature of their marriage, David and Elizabeth soon found that they had a deep connection and genuine love for each other. As they worked together to lead Arcadia into a brighter future, they realized that their union was not just a political arrangement, but a true partnership of love and duty.

Rebecca, in the ChatCave, with a revolver: “Well, it was fast, but it’s not a movie script. STUPID AI!!!!!” I tweaked the prompt a bit, asking the bot to write a script, with dialogue, for a single scene from Miller’s ideal movie, and it spat out a perfectly adequate but bland conversation between another prince (Charles this time) and his loyal butler Jenkins, also about marrying for the good of the kingdom and how wise that is.

Finally I hit on what was needed: “Write a script, with dialogue, for a pivotal love scene from a Hallmark movie,” etc.

Readers, prepare to be swept off your romantic dutiful feet.

Third Try: Talk Duty To Me, Baby

(Fine, that is Dok’s title because he wants to stay ahead of the Machines)

Narrator: Prince Richard stood at the edge of the balcony, overlooking the bustling city below. As the future king of his small European nation, he had always known that his marriage would be a strategic political alliance. But now, as the day of his arranged marriage approached, he found himself struggling with the weight of his duty.

Prince Richard: (to himself) How can I marry someone I do not love? Is it selfish to want to marry for love, rather than for the good of my country?

Narrator: Just then, the door to the balcony opened and his betrothed, Princess Sophia, stepped out to join him.

Princess Sophia: (approaching Prince Richard) My lord, I did not expect to find you out here.

Prince Richard: (turning to Princess Sophia) Forgive me, Princess. I was just taking in the view.

Princess Sophia: (smiling) It is a beautiful city, is it not?

Prince Richard: (nodding) Indeed. It is my home, and I love it dearly. But sometimes, I cannot help but wonder if I am making the right decision.

Princess Sophia: (surprised) What do you mean, my lord?

Prince Richard: (sighing) My marriage to you is an arrangement, a political alliance for the good of our nations. But is that truly what I want?

Princess Sophia: (pausing, then taking Prince Richard’s hand) My lord, I too have always known that our marriage was for the good of our countries. But over the past few months, I have come to care for you deeply. I know that we may not have the kind of love that is written about in fairy tales, but I believe that with time and effort, we can build a strong and loving marriage.

Prince Richard: (squeezing Princess Sophia’s hand) You are wise beyond your years, Princess. I have been so caught up in my own doubts that I did not see how much you have come to mean to me. I will marry you, not out of a sense of duty, but because I want to be with you.

Narrator: And so, Prince Richard and Princess Sophia entered into their marriage with love in their hearts, knowing that they would face the challenges of ruling their nations together, hand in hand.

Princess Sophia: Hey! Let’s take some immigrant children from their parents!

Prince Richard: NOW you’re talkin’, toots!

And that’s why humans remain in charge, buster.

He loved her, especially since she was so well suited to be Head of State

[ChatGPT / Atlantic / Stable Diffusion AI / DreamStudio Lite AI]

Dear Wonkette readers,

As a fan of political satire and fart jokes, you know that it takes a lot of time, effort, and resources to create the kind of content that you enjoy. That’s why we’re asking you to consider donating $5 or $10 a month to help us keep bringing you the best political satire and fart jokes on the internet. — ChatGPT



