Celebrities may have helped shape anti-vaccine opinions during Covid-19 pandemic, study finds | CNN



CNN
 — 

Covid-19 vaccines are known to be safe and effective, and they’re available for free, but many Americans refuse to get them – and a recent study suggests that celebrities may share some of the blame for people’s mistrust.

Celebrities have long tried to positively influence public health, studies show, but during the Covid-19 pandemic, they also seemed to play a large role in spreading misinformation.

Decades ago, in the 1950s, people could see stars like Elvis Presley, Dick Van Dyke and Ella Fitzgerald in TV ads that encouraged polio vaccination. This celebrity influence boosted the country’s general vaccination efforts, and vaccination nearly eliminated the deadly disease.

In 2021, US officials used celebrities in TV ads to encourage more people to get vaccinated against Covid-19. Big names like lifestyle guru Martha Stewart, singer Charlie Puth and even Senate Minority Leader Mitch McConnell showed up in spots that had billions of ad impressions.

The world isn’t restricted to only three TV networks anymore, so celebrities like actress Hilary Duff, actor Dwayne “The Rock” Johnson, singer Dolly Parton and even Big Bird also used their enormous presence on Instagram and Twitter to promote a pro-vaccine message during the Covid-19 pandemic.

But social media also became a vehicle for celebrities to cast doubt about the safety and effectiveness of the vaccine and even to spread disinformation about Covid.

Their negative messages seemed to find an audience.

For their study, published in the journal BMJ Health & Care Informatics, researchers examined nearly 13 million tweets between January 2020 and March 2022 about Covid-19 and vaccines. They designed a natural language model to determine the sentiment of each tweet and compared them with tweets that also mentioned people in the public eye.

The stars they picked to analyze included people who had shared skepticism about the vaccines, who had Covid-related tweets that were identified as misinformation or who retweeted misinformation about Covid.

They included rapper Nicki Minaj, football player Aaron Rodgers, tennis player Novak Djokovic, singer Eric Clapton, Sen. Rand Paul, former President Donald Trump, Sen. Ted Cruz, Florida Gov. Ron DeSantis, TV host Tucker Carlson and commentator Joe Rogan.

The researchers found 45,255 tweets from 34,407 unique authors talking about Covid-19 vaccine-related issues. Those tweets generated a total of 16.32 million likes. The tweets from these influencers, overall, were more negative about the vaccine than positive, the study found. These tweets were more often related to antivaccine controversy than to news about vaccine development, the study said.

The highest number of negative comments was associated with Rodgers and Minaj. Clapton had “very few” positive tweets, the study said, and that may have had an influence, but he also caught flak for it from the public.

The most-liked tweet that mentioned Clapton and the vaccine said, “Strongly disagree with [EC] … take on Covid and the vaccine and disgusted by his previous white supremacist comments. But if you reference the death of his son to criticize him, you are an ignorant scumbag.”

Trump and Cruz were found to have the most substantial impact within this group, with combined likes totaling more than 122,000.

They too came in for criticism on the topic, with many users wondering whether these politicians were qualified to have opinions about the vaccines. The study said the most-liked tweet mentioning Cruz was, “I called Ted Cruz’s office asking to make an appointment to talk with the Senator about my blood pressure. They told me that the Senator was not qualified to give medical advice and that I should call my doctor. So I asked them to stop advising about vaccines.”

The most-liked tweet associated with Rogan was an antivaccine statement: “I love how the same people who don’t want us to listen to Joe Rogan, Aaron Rodgers about the covid vaccine, want us to listen to Big Bird & Elmo.”

Posts shared by news anchors and politicians seemed to have the most influence in terms of the most tweets and retweets, the study found.

“Our findings suggest that the presence of consistent patterns of emotional content co-occurring with messaging shared by those persons in the public eye that we’ve mentioned, influenced public opinion and largely stimulated online public discourse for at least the first two years of the Covid pandemic,” said study co-author Brianna White, a research coordinator in the Population Health Intelligence lab at the University of Tennessee Health Science Center – Oak Ridge National Laboratory Center for Biomedical Informatics.

“We also argue that obviously as the risk of severe negative health outcomes increase with the failure to comply with health protective behavior recommendations, that our findings suggest that polarized messages from societal elite may downplay those severe negative health outcome risks.”

The study doesn’t get into exactly why celebrity tweets would have such an impact on people’s attitudes about the vaccine. Dr. Ellen Selkie, who has conducted research on influence at the intersection of social media, celebrity and public health outcomes, said celebrities are influential because they attract a lot of attention.

“I think part of the influence that media have on behavior has to do with the amount of exposure. Just in general, the volume of content that is focused on a specific topic or on a specific sort of interpretation of that topic – in this case misinformation – the repeated exposure to any given thing is going to increase the likelihood that it’s going to have an effect,” said Selkie, who was not involved in the new research. She is an adolescent health pediatrician and researcher with UW Health Kids and an assistant professor of pediatrics at the University of Wisconsin School of Medicine and Public Health.

Just as people listen to a friend’s thoughts, they’ll listen to a celebrity whom they tend to like or identify with because they trust their opinion.

“With fandoms, in terms of the relationship between musical artists and actors and their fans, there is this sort of mutual love that fans and artists have for each other, which sort of can approximate that sense that they’re looking out for each other,” Selkie said.

She said she would be interested to see research on the influence of celebrities who tweeted positive messages about the Covid-19 vaccine.

The authors of the study hope public health leaders will use the findings right away.

“We argue this threat to population health should create a sense of urgency and warrants public health response to identify, develop and implement innovative mitigation strategies,” the study says.

Exposure to large amounts of this misinformation can have a lasting impact and work against the public’s best interest when it comes to their health.

“As populations grow to trust the influential nature of celebrity activity on social platforms, followers are disarmed and open to persuasion when faced with false information, creating opportunities for dissemination and rapid spread of misinformation and disinformation,” the study says.




Most Americans are uncomfortable with artificial intelligence in health care, survey finds | CNN



CNN
 — 

Most Americans feel “significant discomfort” about the idea of their doctors using artificial intelligence to help manage their health, a new survey finds, but they generally acknowledge AI’s potential to reduce medical mistakes and to eliminate some of the problems doctors may have with racial bias.

Artificial intelligence is the theory and development of computer programs that can solve problems and perform tasks that typically would require human intelligence – machines that can essentially learn like humans can, based on the input they have been given.

You probably already use technology that relies on artificial intelligence every day without even thinking about it.

When you shop on Amazon, for example, it’s artificial intelligence that guides the site to recommend cat toys if you’ve previously shopped for cat food. AI can also help unlock your iPhone, drive your Tesla, answer customer service questions at your bank and recommend the next show to binge on Netflix.

Americans may like these individualized services, but when it comes to AI and their health care, it may be a digital step too far for many.

Sixty percent of Americans who took part in a new survey by the Pew Research Center said that they would be uncomfortable with a health care provider who relied on artificial intelligence to do something like diagnose their disease or recommend a treatment. About 57% said that the use of artificial intelligence would make their relationship with their provider worse.

Only 38% felt that using AI to diagnose disease or recommend treatment would lead to better health outcomes; 33% said it would lead to worse outcomes; and 27% said it wouldn’t make much of a difference.

About 6 in 10 Americans said they would not want AI-driven robots to perform parts of their surgery. Nor do they like the idea of a chatbot working with them on their mental health; 79% said they wouldn’t want AI involved in their mental health care. There’s also concern about security when it comes to AI and health care records.

“Awareness of AI is still developing. So one dynamic here is, the public isn’t deeply familiar with all of these technologies. And so when you consider their use in a context that’s very personal, something that’s kind of high-stakes as your own health, I think that the notion that folks are still getting to know this technology is certainly one dynamic at play,” said Alec Tyson, Pew’s associate director of research.

The findings, released Wednesday, are based on a survey of 11,004 US adults conducted from December 12-18 using the center’s American Trends Panel, an online survey group recruited through random sampling of residential addresses across the country. Pew weights the survey to reflect US demographics including race, gender, ethnicity, education and political party affiliation.

The respondents expressed concern over the speed of the adoption of AI in health and medicine. Americans generally would prefer that health care providers move with caution and carefully consider the consequences of AI adoption, Tyson said.

But they’re not totally anti-AI when it comes to health care. They’re comfortable with using it to detect skin cancer, for instance; 65% thought it could improve the accuracy of a diagnosis. Some dermatologists are already exploring the use of AI technology in skin cancer diagnosis, with some limited success.

Four in 10 Americans think AI could also help providers make fewer mistakes, which are a serious problem in health care. A 2022 study found that medical errors cost about $20 billion and result in about 100,000 deaths each year.

Some Americans also think AI may be able to build more equity into the health care system.

Studies have shown that most providers have some form of implicit bias, with more positive attitudes toward White patients and negative attitudes toward people of color, and that could affect their decision-making.

Among the survey participants who understand that this kind of bias exists, the predominant view was that AI could help when it came to diagnosing a disease or recommending treatments, making those decisions more data-driven.

Tyson said that when people were asked to describe in their own words how they thought AI would help fight bias, one participant cited class bias: They believed that, unlike a human provider, an AI program wouldn’t make assumptions about a person’s health based on the way they dressed for the appointment.

“So this is a sense that AI is more neutral or at least less biased than humans,” Tyson said. However, AI is developed with human input, so experts caution that it may not always be entirely without bias.

Pew’s earlier surveys about artificial intelligence have found a general openness to AI, he said, particularly when it’s used to augment, rather than replace, human decision-making.

“AI as just a piece of the process in helping a human make a judgment, there is a good amount of support for that,” Tyson said. “Less so for AI to be the final decision-maker.”

For years, radiologists have used AI to analyze X-rays and CT scans to look for cancer and improve diagnostic capacity. About 30% of radiologists use AI as a part of their practice, and that number is growing, a survey found – but more than 90% in that survey said they wouldn’t trust these tools for autonomous use.

Dr. Victor Tseng, a pulmonologist and medical director of California-based Ansible Health, said that his practice is one of many that have been exploring the AI program ChatGPT. His group has set up a committee to look into its uses and to discuss the ethics around using it so the practice could set up guardrails before putting it into clinical practice.

Tseng’s group published a study this month that showed that ChatGPT could correctly answer enough practice questions that it would have passed the US Medical Licensing Examination.

Tseng said he doesn’t believe that AI will ever replace doctors, but he thinks technology like ChatGPT could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon so that someone with a seventh-grade education could understand it.

“AI is here. The doors are open,” Tseng said.

The Pew survey findings suggest that attitudes could shift as more Americans become more familiar with artificial intelligence. Survey respondents who were more familiar with a technology were more supportive of it, but they still shared caution that doctors could move too quickly in adopting it.

“Whether you’ve heard a lot about AI, just a little or maybe even nothing at all, all of those segments of the public are really in the same space,” Tyson said. “They echo this sentiment of caution of wanting to move carefully in AI adoption in health care.”


Ongoing Sector Rotation Out Of Defense Into Technology

The Relative Rotation Graph for US sectors continues to show a shift out of defensive sectors into more offensive and economically sensitive ones.

The improvement for XLC (communication services), XLY (consumer discretionary), and XLK (technology) continues and is visible inside the improving quadrant. All three tails are travelling at a positive RRG-Heading. XLC and XLK are coming very close to crossing over into the leading quadrant, while XLY is still the sector with the lowest RS-Ratio reading but rapidly picking up now.

Communication Services

XLC managed to break away from its falling trend channel at the end of last year. Since then, a double bottom formation was completed, out of which a rally followed that brought the sector back to resistance near 60. The decline that followed after setting a peak against that resistance level is the first serious pull-back after breaking away from the bottoming formation.

On the back of that improvement in price, the relative strength for XLC against SPY has rapidly improved, and the tail on the RRG is now close to crossing over into the leading quadrant. Overall, the current setback seems to offer a good new entry point, especially once the tail on the daily RRG rotates back into a positive RRG-Heading. Confirmation will come when XLC can take out resistance at 60.

Technology

After breaking above its falling resistance and out of the declining channel, XLK is managing to hold up well above its previous high, now acting as support. This confirms that a new series of higher highs and higher lows is now in place.

Relative strength against SPY has just broken above its previous high, signalling an end to the relative downtrend as well.

On the RRG, the tail for XLK is inside improving, travelling at a strong RRG-Heading and ready to cross over into the leading quadrant.

Even if XLK dropped back below support between roughly 135-137, it would not immediately harm the new trend. There is still a bit of room to maneuver.

Here also, a rotation back to a positive RRG-Heading on the daily RRG tail will be the confirmation for further relative improvement over SPY.

Consumer Discretionary

The break above the falling resistance line marked the end of the downtrend that started at the end of 2021. For the last three weeks, XLY remained above its breakout level around 147, where falling trendline resistance coincided with the horizontal resistance offered by the most recent peaks in H2-2022. This in itself is a sign of strength.

Combine this with a further improvement in relative strength and the weekly tail moving further into the improving quadrant, and things are looking good for XLY. The only thing that makes XLY a bit more risky than XLK and XLC is the fact that it has the lowest JdK RS-Ratio reading on the weekly RRG. This means there is still some risk for this tail to roll over while inside improving and not making it all the way to leading.

Just like for XLC and XLK, here also a rotation back up on the daily RRG will provide support for a further improvement in coming weeks.


Rotation out of Defense

On the opposite side of these rotations, we are still seeing money flowing out of the defensive sectors. Their tails continue to travel at a negative RRG-Heading. XLU has already crossed into the lagging quadrant. XLV and XLP are still inside weakening but rapidly moving towards lagging.

Utilities

This sector has been showing a very choppy chart since it came down off its high near 78. In that move, trendline support was broken, as well as support coming from two previous lows. The rally then tried to break back above resistance, sending some confusing messages in the process. But finally that attempt failed, and a small double top formation was completed in that resistance zone, and the market is now working its way lower from that high.

Relative strength has started to move in line and recently broke below its former low, signalling that a downtrend is now in place. This puts the tail on the weekly RRG back into the lagging quadrant at a negative RRG-Heading, suggesting that there is more relative weakness ahead in coming weeks.

Consumer Staples

XLP dropped out of its rising channel in the first half of 2022. Since then, a trading range has developed between 66 and 77. The last rally to this upper boundary ended in another test of resistance and a failure to break. Out of this recent high a new series of lower highs and lower lows is developing, and XLP seems to be underway to the lower end of the range again.

This sideways price performance has also caused relative weakness for this sector, resulting in the tail on the weekly RRG to move rapidly towards the lagging quadrant, currently inside weakening, at a negative RRG-Heading.

Health Care

The third and final defensive sector is health care. This sector started trading in a range in late 2021 and early 2022. The upper boundary is marked around 140, while the lower boundary comes in around 122.50, with two to three dips towards 117.50.

This sideways movement resulted in really strong relative strength during 2022, when the S&P 500 moved significantly lower. However, XLV has not been able to keep up with the recent strength in the S&P, and relative strength is now rolling over. On the weekly RRG the XLV tail is following XLP towards the lagging quadrant.

All in all, rotation out of defensive sectors continues, and a more pronounced move into more offensive and sensitive sectors is starting to shape up. This suggests underlying strength for the broader market.

#StayAlert, –Julius


Julius de Kempenaer
Senior Technical Analyst, StockCharts.com
Creator, Relative Rotation Graphs
Founder, RRG Research
Host of Sector Spotlight

Please find my handles for social media channels under the Bio below.

Feedback, comments or questions are welcome at [email protected]. I cannot promise to respond to each and every message, but I will certainly read them and, where reasonably possible, use the feedback and comments or answer questions.

To discuss RRG with me on S.C.A.N., tag me using the handle Julius_RRG.

RRG, Relative Rotation Graphs, JdK RS-Ratio, and JdK RS-Momentum are registered trademarks of RRG Research.

Julius de Kempenaer

About the author:
Julius de Kempenaer is the creator of Relative Rotation Graphs™. This unique method to visualize relative strength within a universe of securities was first launched on Bloomberg professional services terminals in January of 2011 and was released on StockCharts.com in July of 2014.

After graduating from the Dutch Royal Military Academy, Julius served in the Dutch Air Force in multiple officer ranks. He retired from the military as a captain in 1990 to enter the financial industry as a portfolio manager for Equity & Law (now part of AXA Investment Managers).



SEC proposes rules that would change which crypto firms can custody customer assets

The Securities and Exchange Commission voted 4-1 on Wednesday to propose sweeping changes to federal regulations that would expand custody rules to include assets like crypto and require companies to gain or maintain registration in order to hold those customer assets.

The proposed amendments to federal custody rules would “expand the scope” to include any client assets under the custody of an investment advisor. Current federal regulations only include assets like funds or securities, and require investment advisors, like Fidelity or Merrill Lynch, to hold those assets with a federal- or state-chartered bank, with a few highly specific exceptions.

It would be the SEC’s most overt effort to rein in even regulated crypto exchanges that run substantial institutional custody programs serving high-net-worth individuals and entities that custody investor assets, like hedge funds or retirement investment managers.

The move poses a fresh threat to crypto exchange custody programs, as other federal regulators actively discourage custodians like banks from holding customer crypto assets. The amendments also come as the SEC aggressively accelerates enforcement attempts.

While the amendment doesn’t specify crypto companies, SEC Chair Gary Gensler said in a separate statement that “though some crypto trading and lending platforms may claim to custody investors’ crypto, that does not mean they are qualified custodians.”

Under the new rules, in order to custody any client asset — including and specifically crypto — an institution would have to hold the relevant charters, or qualify as a registered broker-dealer, futures commission merchant, or a certain kind of trust or foreign financial institution.

SEC officials said that the proposal would not alter the requirements to be a qualified custodian and that there was nothing precluding state-chartered trust companies, including Coinbase or Gemini, from serving as qualified custodians.

The officials emphasized that the proposed amendments did not make a decision on which cryptocurrencies the SEC considered securities.

The amended regulation would also require a written agreement between custodians and advisors, expand the “surprise examination” requirements, and enhance recordkeeping rules.

The SEC had previously sought public feedback on whether crypto-friendly state-chartered trusts, like those in Wyoming, were “qualified custodians.”

“Make no mistake: Today’s rule, the 2009 rule, covers a significant amount of crypto assets,” Gensler said in a statement. “As the release states, ‘most crypto assets are likely to be funds or crypto asset securities covered by the current rule.’ Further, though some crypto trading and lending platforms may claim to custody investors’ crypto, that does not mean they are qualified custodians.”

But Gensler’s proposal seemed to undercut comments from SEC officials, who insisted the moves were designed with “all assets” in mind. The SEC chair alluded to several high-profile crypto bankruptcies in recent months, including those of Celsius, Voyager, and FTX.

“When these platforms go bankrupt—something we’ve seen time and again recently—investors’ assets often have become property of the failed company, leaving investors in line at the bankruptcy court,” Gensler said.

The proposed changes by the SEC are also intended to “ensure client assets are properly segregated and held in accounts designed to protect the assets in the event of a qualified custodian bankruptcy or other insolvency,” according to material released by the agency on Wednesday.

Coinbase already has a similar arrangement in place. In its most recent earnings report, the exchange specified that it keeps customer crypto assets “bankruptcy remote” from hypothetical general creditors, but noted that the “novelty” of crypto assets meant it was uncertain how courts would treat them.

The SEC has already begun to target other lucrative revenue streams for crypto institutions like Coinbase, which is the only publicly traded pure crypto exchange in the U.S. Last week, the SEC announced a settlement with crypto exchange Kraken over its staking program, alleging it constituted an unregistered offering and sale of securities.

At the time, Coinbase CEO Brian Armstrong said a potential move against staking would be a “terrible path” for consumers.

Coinbase reported $19.8 million in institutional transaction revenue and $14.5 million in custodial fee revenue for the three months ending Sept. 30, 2022. Together, that institutional revenue represented about 5.8% of Coinbase’s $590.3 million in revenue for that same time period. But that percentage does not include any revenue from blockchain rewards or interest income from institutional custody clients.

“Coinbase Custody Trust Co. is already a qualified custodian, and after listening to today’s SEC meeting, we are confident that we will remain a qualified custodian even if this proposed rule is enacted as proposed,” Coinbase chief legal officer Paul Grewal said. “We agree with the need for consumer protections — as a reminder, our client assets are segregated and protected in any eventuality.”

Grayscale Bitcoin Trust (GBTC), for example, custodies billions of dollars worth of bitcoin using Coinbase Custody, holding roughly 3.4% of the world’s bitcoin in May 2022.

In the aftermath of the SEC’s approval vote, comments from commissioners made it unclear what the full extent of the SEC’s proposed rulemaking would be, and how it could impact existing partnerships. Grayscale is not a registered investment advisor, and so under the proposed amendments would apparently not face any material impact to its custody arrangement.

A person familiar with the matter did not expect the relationship would be adversely affected, noting Coinbase Custody’s qualified custodian status as a New York state-chartered trust, and observing that investment advisors might even transition from directly holding bitcoin to owning GBTC shares as a result of the proposed amendments.

Within the commissioners’ ranks, there was dissent and questions over the nature of the proposed rules. “The proposing release takes great pains to paint a ‘no-win’ scenario for crypto assets,” SEC Commissioner Mark Uyeda said. “In other words, an adviser may custody crypto assets at a bank, but banks are cautioned by their regulators not to custody crypto assets.”

But Uyeda also noted that the proposal was a move towards rulemaking, rather than what he called a historic use of “enforcement actions to introduce novel legal and regulatory theories.”

It was a sentiment echoed by Coinbase’s chief legal officer, who emphasized a need for clarity, a clarion call that has been echoed throughout the industry. “We encourage the SEC to begin the rulemaking process on what should or should not be considered a crypto security, especially given that today’s proposal acknowledges that not all crypto assets are securities. Rulemaking on that topic could offer needed clarity to consumers, investors, and the industry,” Grewal said.

— CNBC’s Kate Rooney contributed to this report.


Tindered out? How to avoid creeps, time wasters and liars this Valentine’s Day

Michelle has had her fair share of bad dates.

A divorced mother of four children, Michelle, 52, resolved to maintain her sense of humor when she returned to the dating market, and signed up for Hinge, an online dating service that includes voice memos, in addition to audio and video functions that enable two interested parties to talk to each other without sharing their phone numbers. 

Given that she had not dated since she was in her 20s, Michelle, who asked for her surname to be withheld, was thrown into the world of online dating, right swipes, ghosting, men who were actually living overseas, married men, men who lied about their age and men who posted photos that were 10 years old. She split from her husband of nearly two decades in 2014. 

Hinge is part of Match.com’s MTCH group of apps along with OKCupid, Tinder, and Christian Mingle, among others. The company promotes itself as the app that is designed to be deleted by its users. It’s a bold statement in the era of online dating, when people scroll through profiles — swiping right for yes and left for no — in search of their perfect mate.

But Hinge, like many other dating apps, introduced a video function in 2020 to help push people to “meet” during the worst days of the coronavirus pandemic. Dating experts advise applying the same rules you would to a Zoom ZM call: dress smartly, use an overhead light rather than a backlight that casts you in shadow, and don’t sit in front of yesterday’s pile of dirty laundry.

‘It’s amazing how many guys use a picture from 10 years ago. You can barely recognize them when you meet them.’


— Michelle, 52, a divorced mother of four who searched for love online

A video date will reveal a lot more than a profile picture. “It’s amazing how many guys use a picture from 10 years ago,” Michelle said. “You can barely recognize them when you meet them. I discovered that someone who is very quick to ask for your email address or your number is more likely to be a scammer. Unfortunately, there’s a lot of scamming on dating apps.”

She’s not wrong. Nearly 70,000 Americans lost $1.3 billion to romance scams through social media and dating apps last year, up from 56,000 the year before, according to the Federal Trade Commission. That’s broadly in line with the amount of money lost the previous year, but up significantly from the $730 million lost in 2020. 

Through her work as a social worker, Michelle has learned to evaluate people and look for red flags. She has used those skills when online dating. She watches out for “goofy stuff” like a man who is writing like a character from a romance novel. “The Lifetime Channel Christmas Love Story is not happening on Hinge,” she said. “Those are the things that I kind of find funny.” 

Other red flags: Someone who lies about their age, is unwilling to meet, won’t turn on the video chat function — what have they got to hide? — and a man who is cheap. “Why did I drive 45 minutes to meet you and you can’t even buy me a cup of coffee? I don’t want someone who is stingy. Either they’re really miserly, have poor judgment, or poor people skills.”

The perilous side of handheld love machines

Dating apps are the ultimate love machine, churning out potential partners every two seconds, someone who is taller, younger, hotter, richer, broader, slimmer, sexier, kookier, weirder — and the list goes on. All of life’s parade is a swipe away. Millions of people use dating apps — from Grindr for gay men to Facebook Dating for pretty much everyone.

There is a balance between keeping people swiping and helping them find love. It’s a numbers game, and can be as addictive as playing the slots. eHarmony promotes its Compatibility Score, while OKCupid asks users to answer an almost limitless number of questions in order to match with more appropriate people. But critics say this leads to the gamification of people’s love lives.

Jenny Taitz, author of “How to Be Single and Happy: Science-Based Strategies for Keeping Your Sanity While Looking for a Soul Mate,” said one of the most common complaints about dating apps is the constant game of cat and mouse. Each user is probably talking to several people at the same time, and it’s tough to get people off the apps and into the real world.

If you like someone, she says, move to a video chat to test the chemistry. “It’s time-consuming, but you need to move from a pen pal to an in-person meetup,” she said. “It could be something that you do all the time, so you really have to have limits. If you’re having four dates a week, does that mean you’re not making time for friendships where you have an investment?”

‘The same person who volunteers at a soup kitchen might easily ghost someone. There is so much detachment.’


— Jenny Taitz, author of ‘How to Be Single and Happy’

Anonymity can often lead to ghosting, when people just disappear or stop answering messages. “We need to treat people like they would treat their future child or best friend,” Taitz said. “Bad behavior is so pervasive, and people are not held accountable for their actions. The same person who volunteers at a soup kitchen might easily ghost someone. There is so much detachment.”

Some studies have linked dating apps with depression, while other studies have found that online dating has led to a string of robberies through hook-ups on Grindr, and can also make it easier for sexual predators to find victims. These problems obviously exist in the real world, but social media and dating apps can provide an easier path for bad actors. 

Julie Valentine, a researcher, sexual-assault nurse examiner, and associate dean of Brigham Young University’s College of Nursing, analyzed 1,968 “acquaintance” sexual assaults that occurred between 2017 and 2020. She and her fellow researchers concluded that 14% of these sexual assaults resulted from a dating app’s first in-person meeting.

“One-third of the victims were strangled and had more injuries than other sexual-assault victims,” the study found. “Through dating apps, personas are created without being subjected to any criminal background checks or security screening. This means that potential victims have the burden of self-protection.” 

All those coffees take time and money

A spokeswoman for Match.com said it does not release data on how many people have actually used the video chat function. If people did use the function more often without sharing their phone number, it would in theory provide a layer of protection, help weed out bad actors, and help people decide whether a prospective date is compatible early in the process.

Cherlyn Chong, the Las Vegas-based founder of Get Over Him, a program to help women get over toxic relationships, does not believe the video chat function is as widely used as it should be. Chong, who describes herself as a dating coach and a trauma specialist, encourages her clients to use every method available to screen dates, in addition to meeting in a public place.

So what if a man did not want to video chat? “If they didn’t want to video, that’s fine,” Chong said. “But their reaction to the request would be a litmus test. We would know he is probably not someone to date, as he is not flexible. It’s also very telling if a woman explains that it’s a safety issue. The response of the guy in that situation would also be another litmus test.”

“Once you give someone their phone number, you don’t know what they are going to do with it,” Chong said. She said one of her clients encountered a man who shared her phone number with others, and sent it to a spam site on the internet. “You want to believe in the best of people,” she said, “but there are people who misuse your number because they can’t handle rejection.”

‘A couple of cocktails in New York City? You’re looking at $60 to $100, or a few hundred dollars for a pricier meal.’


— Connell Barrett, author of ‘Dating Sucks, But You Don’t’

Connell Barrett, author of “Dating Sucks, But You Don’t,” said video dates are a good first step. “You can see your date, and read their body language,” he said. “Because physical contact is off the table for a video date, it can free both singles to let go and not worry about the pressure about moving in for the first kiss. Good chemistry happens when there’s less pressure.”

Video dating also saves you time and money, especially if you’re the one who picks up the tab. “A couple of cocktails in New York City? You’re looking at $60 to $100, or a few hundred dollars for a pricier meal,” he said. Regular daters could end up spending up to $1,500 a month in bigger cities, if they’re dating a lot and eating out, Barrett added.

How much you spend will clearly depend on your lifestyle. Members of The League, a dating app that’s geared towards professionals, spend up to $260 a month on dates, followed by $215 a month for singletons using Christian Mingle, $198 for people signed up to Match.com, and $174 for Meta’s Facebook Dating subscribers, according to a recent survey.

A video call allows people to get a sense of the person’s circumstances and personality, and can avoid wasting an hour having coffee with someone you will never see again. Be fun, be playful, don’t ask about exes or grill the other person “60 Minutes”-style, Barrett said. “A big mistake people make in dating is trying to impress the other person,” he said.

Video dating goes back to the 1970s

Jeff Ullman created Great Expectations, the first successful video-dating service, in Los Angeles in 1975. People recorded messages direct-to-camera. “We started with Betamax, moved to VHS, and upgraded to CD-ROMs,” he said. “As long as there are adults, there will be the hunt for love, and there will be the longing for ‘I’m missing someone, I’m missing something,’” he told MarketWatch.

“The best and the brightest did not go into dating services in the 1970s and 1980s,” he said. “I only went into it because I wanted to change the world. What I wanted to do was turn pity to envy. Our videos were 5 or 6 minutes long. There were no stock questions. They had to be ad-libbed. The only similar question was the last one: ‘What are the qualities that are most important in a relationship?’” 

He turned Great Expectations into a national franchise where customers paid $595 to $1,995 a year for membership ($1 in 1975 is around $5 today). “We did not hard sell you. We did a ‘heart sell.’ We had all kinds of Type As — doctors, lawyers, studio production chiefs, who all thought they were God’s gift, or God’s gift to womankind, but when they talked about their loneliness, they cried.”

People will always be searching for that perfect mate, Ullman said, whether it’s through videos, words, photos, psychological compatibility, A.I., or through arranged marriages or matchmakers. “But there is no perfect match. My wife Cindy and I are well matched. She’s not perfect. I’m not perfect. The moment either one of us begins to think we’re perfect is the moment we introduce negative forces.”

‘What I wanted to do was turn pity to envy. Our videos were 5 or 6 minutes. There were no stock questions.’


— Jeff Ullman, created Great Expectations, a video-dating service in Los Angeles in 1975

Before TikTok and Skype, people were not as comfortable in front of the camera, particularly if they had to talk about themselves. “We always hid the camera,” Ullman said. The 1970s decor of dark wood and indoor plants made that easier. “When we were finished, they’d say, ‘When are you going to start?’” But they were already on tape. They were, he said, happy with the first take 95% of the time.

Ullman required his franchisees to give members a three-day right to cancel for any reason — including “I’m not going to tell you” — if they changed their terms of service. “They just had to mail us or fax us their notice. Half of my franchisees were about to revolt.” Until, he said, they realized they could not afford to have a bad reputation in an industry where people were putting their hearts on the line.

It all started with a Sony-Matic Portable Videocorder gifted to him by his parents when he graduated from UC Berkeley in 1972. “They were very expensive, but they were portable. Whenever I went anywhere, whether it was a parade or a demonstration, which were common back then, they always let me in because they thought I was from ‘60 Minutes.’ It gave us a sense of power.”

Fast forward to 2023: That power is in the hands of the $3 billion online dating industry and, perhaps to a lesser extent, in the hands of the singletons who are putting their own messages out into the world through words and pictures. In the 1970s, most people were still meeting in person. These days, your online competition is, well, almost every single person within a 50-mile radius.

Watching out for those ‘green flags’

Video dating has come in handy for singletons like Andrew Kneeshaw, a photographer and publican in Streete, County Westmeath, a small town in the Irish midlands. He’s currently active on three dating sites: Plenty of Fish, Bumble and Facebook Dating. In-app video calls have saved him — and his potential dates — time, gasoline and money spent on coffee and lunch. 

“Even someone local could be 15 or 20 miles away,” he said. He’s currently talking to a woman in Dublin, which is more than an hour away. “Hearing someone’s voice is one thing, but seeing that they are the genuine person they are supposed to be on the dating site definitely does help.” He could spend upwards of 20 euros ($21.45) on coffee/lunch, excluding gasoline.

He did go on a dinner date recently without having a video call, and he regretted it. “Neither of us felt there was a spark,” Kneeshaw said. So they split the check as they would likely never see each other again? “That sounds terrible, but yes,” he said. “I go on a date at best once a week. If you’re doing it a few times a week, it does add up very quickly.”

Ken Page, a Long Beach, N.Y.-based psychotherapist and host of the Deeper Dating podcast, is married with three children, and has compassion for people like Kneeshaw who live in more remote areas. In New York, he said, some people won’t travel uptown if they live downtown, and many more people won’t even cross the river to New Jersey. 

‘If it’s a video chat, you have the opportunity to get to know them more, and have that old-fashioned courtship experience.’


— Ken Page, a psychotherapist and host of the Deeper Dating podcast

He said green flags are just as important as red flags when deciding to move from a video date to an in-person date. “Is their smile warm and engaging? Are you attracted to the animation they have in their face? You just get tons more data when you see the person. You save money, and you save time before you get to the next step.”

In-person first dates can be brutal. “Your first reaction is, ‘they’re not attractive enough, I’ve got to get out of here,’” Page said. “If it’s a video chat, you have the opportunity to get to know them more, and have that old-fashioned courtship experience where attraction starts to grow. The ‘light attractions’ have more opportunity to grow without the pressure of meeting in person.”

Dating apps are a carousel of romantic dreams. The focus is on looks rather than personality or character. “There are so many people waiting online,” Page said. “That does not serve us. Unless the person really wows us, we swipe left. If you do a video chat, you will be more likely to get to know that person — instead of only getting to know the ‘9s’ and ‘10s.’”

And Michelle? The divorced Californian mother of four said she finally met a guy on Hinge last October, and they’ve been dating since then. “He’s just a fabulous guy. He actually moved slower than what I had experienced with other guys I had dated.” She kept her sense of humor and perspective, which helped. “He said, ‘You’re so funny.’ I didn’t have anything to lose.”

“It’s almost like going to Zara,” she said. “Nine times out of 10 you may not find something you like, but one time out of 10 you do.”


For the first time, US task force proposes expanding high blood pressure screening recommendations during pregnancy | CNN



CNN
 — 

The US Preventive Services Task Force has released a draft recommendation to screen everyone who is pregnant for hypertensive disorders of pregnancy, by monitoring their blood pressure throughout the pregnancy, and the group is calling attention to racial inequities.

This is the first time the task force has proposed expanding these screening recommendations to include all hypertensive disorders of pregnancy, which are on the rise in the United States.

It means the average person might notice their doctor paying closer attention to their blood pressure measurements during pregnancy, as well as doctors screening not just for preeclampsia but for all disorders related to high blood pressure.

The draft recommendation statement and evidence review were posted online Tuesday for public comment. The statement is consistent with a 2017 statement that recommends screening with blood pressure measurements throughout pregnancy.

It was already recommended for blood pressure measurements to be taken during every prenatal visit, but “the difference is now really highlighting the importance of that – that this is a single approach that is very effective,” said Dr. Esa Davis, a member of the task force and associate professor of medicine at the University of Pittsburgh.

The draft recommendation urges doctors to monitor blood pressure during pregnancy as a “screening tool” for hypertensive disorders, she said, and this may reduce the risk of some hypertensive disorders among moms-to-be going undiagnosed or untreated.

“Since the process of screening and the clinical management is similar for all the hypertensive disorders of pregnancy, we’re broadening looking at screening for all of the hypertensive disorders, so gestational hypertension, preeclampsia, eclampsia,” Davis said.

The US Preventive Services Task Force, created in 1984, is a group of independent volunteer medical experts whose recommendations help guide doctors’ decisions. All recommendations are published on the task force’s website or in a peer-reviewed journal.

To make this most recent draft recommendation, the task force reviewed data on different approaches to screening for hypertensive disorders during pregnancy from studies published between January 2014 and January 2022, and it re-examined earlier research that had been reviewed for former recommendations.

“Screening using blood pressure during pregnancy at every prenatal encounter is a long-standing standard clinical practice that identifies hypertensive disorders of pregnancy; however, morbidity and mortality related to these conditions persists,” the separate Evidence-Based Practice Center, which informed the task force’s draft recommendation, wrote in the evidence review.

“Most pregnant people have their blood pressure taken at some point during pregnancy, and for many, a hypertensive disorder of pregnancy is first diagnosed at the time of delivery,” it wrote. “Diagnoses made late offer less time for evaluation and stabilization and may limit intervention options. Future implementation research is needed to improve access to regular blood pressure measurement earlier in pregnancy and possibly continuing in the weeks following delivery.”

The draft recommendation is a “B recommendation,” meaning the task force recommends that clinicians offer or provide the service, as there is either a high certainty that it’s moderately beneficial or moderate certainty that it’s highly beneficial.

For this particular recommendation, the task force concluded with moderate certainty that screening for hypertensive disorders in pregnancy, with blood pressure measurements, has a substantial net benefit.

Hypertensive disorders in pregnancy appear to be on the rise in the United States.

Data published last year by the US Centers for Disease Control and Prevention shows that, between 2017 and 2019, the prevalence of hypertensive disorders among hospital deliveries increased from 13.3% to 15.9%, affecting at least 1 in 7 deliveries in the hospital during that time period.

Among deaths during delivery in the hospital, 31.6% – about 1 in 3 – had a documented diagnosis code for hypertensive disorder during pregnancy.

Older women, Black women and American Indian and Alaska Native women were at higher risk of hypertensive disorders, according to the data. The disorders were documented in approximately 1 in 3 delivery hospitalizations among women ages 45 to 55.

The prevalence of hypertensive disorders in pregnancy was 20.9% among Black women, 16.4% among American Indian and Alaska Native women, 14.7% among White women, 12.5% among Hispanic women and 9.3% among Asian or Pacific Islander women.

The task force’s new draft recommendation could help raise awareness around those racial disparities and how Black and Native American women are at higher risk, Davis said.

“If this helps to increase awareness to make sure these high-risk groups are screened, that is something that is very, very important about this new recommendation,” she said. “It helps to get more women screened. It puts it more on the radar that they will then not just be screened but have the surveillance and the treatment that is offered based off of that screening.”

Communities of color are at the highest risk for hypertensive disorders during pregnancy, and “it’s very related to social determinants of health and access to care,” said Dr. Ilan Shapiro, chief health correspondent and medical affairs officer for the federally qualified community health center AltaMed Health Services in California. He was not involved with the task force or its draft recommendation.

Social determinants of health refer to the conditions and environments in which people live that can have a significant effect on their access to care, such as their income, housing, safety, and not living near sources for healthy food or easy transportation.

These social determinants of health, Shapiro said, “make a huge difference for the mother and baby.”

Hypertensive disorders during pregnancy can be controlled with regular monitoring during prenatal visits, he said, and the expectant mother would need access to care.

Eating healthy foods and getting regular exercise also can help get high blood pressure under control, and some blood pressure medications are considered safe to use during pregnancy, but patients should consult with their doctor.


‘Phishing-as-a-service’ kits are driving an uptick in theft: What you can learn from one business owner’s story

Cody Mullenaux and his family. Mullenaux was the victim of a sophisticated wire fraud scheme that resulted in more than $120,000 being stolen.

Courtesy: Cody Mullenaux

Banks have spent enormous amounts on cybersecurity and fraud detection, but what happens when criminal tactics are sophisticated enough to fool even bank employees?

For Cody Mullenaux, it meant having more than $120,000 wired from his Chase checking account with little hope of ever recouping his stolen funds.

The saga for Mullenaux, a 40-year-old small business owner from California, began on Dec. 19. While Christmas shopping for his young daughter, he received a call from a person claiming to be from the Chase fraud department and asking to verify a suspicious transaction.

The 800 number matched Chase customer service, so Mullenaux didn’t think it was suspicious when the person asked him to log into his account via a secured link sent by text message for identification purposes. The link looked legitimate, and the website that opened appeared identical to his Chase banking app, so he logged in.

“It never even crossed my mind that I was not speaking with a legitimate Chase representative,” Mullenaux told CNBC.

Gone are the days when the only thing a consumer had to be wary of was a suspicious email or link. Cybercriminals’ tactics have morphed into multipronged schemes, with teams of criminals deploying readymade software, sold in kits, that masks phone numbers and mimics the login pages of a victim’s bank. It’s a pervasive threat that cybersecurity experts say is driving an uptick in activity, and they predict it will only get worse. Unfortunately for victims of these schemes, the bank isn’t always required to repay the stolen funds.

After he was logged in, Mullenaux said he saw large amounts of money moving between his accounts. The person on the phone told him someone was in his account actively trying to steal his money and that the only way to keep it safe was to wire money to the bank supervisor, where it would be temporarily held while they secured his account.

Terrified that his hard-earned savings was about to be stolen, Mullenaux said he stayed on the phone for nearly three hours, followed all the instructions he was given and answered additional security questions he was asked. 

CNBC has reviewed Mullenaux’s cellular records, bank account information, as well as images of the text message and link he was sent.

A team of scammers

Cody Mullenaux, the inventor and founder of Aquaphant, a technology company that converts moisture from the air into filtered water, with his team and family.

Courtesy: Cody Mullenaux

Little recourse for victims of wire scams

Mullenaux said he feels frustrated and defeated about his experience trying to recover his stolen funds.

“No matter what they do to try and safeguard customers, scammers are always one step ahead,” Mullenaux said, adding that his money would have been safer in a shoebox than in a big bank that cybercriminals are targeting.

The Federal Trade Commission advises that any customer who thinks they might have sent money to scammers via a wire transfer should immediately contact their bank, report the fraudulent transfer and ask for it to be reversed.

Time is critical when trying to recover funds sent via fraudulent wire transfer, the FTC told CNBC. The agency said victims should also report the crime to the agency as well as the FBI’s Internet Crime Complaint Center, the same day or next day, if possible. 

Mullenaux said he realized something was wrong the next morning when his funds had not been returned to his account.

He immediately drove to his local Chase bank branch where he was told he had likely been the victim of fraud. Mullenaux said the matter wasn’t handled with any sense of urgency, and a reverse wire transfer attempt, which the FTC suggests customers ask for, wasn’t offered as an option.

Instead, Mullenaux said the branch employee told him he would receive a packet in the mail within 10 days that he could fill out to file a claim. Mullenaux asked for the packet immediately. He filled it out and submitted it the same day.

That claim, along with a second one Mullenaux filed with the executive branch, was denied. The employees investigating the matter said Mullenaux had called to authorize the wire transfers.

Cody Mullenaux and his daughter. Mullenaux had been shopping for Christmas gifts for his daughter when he received a call from a man impersonating a Chase fraud department employee.

Courtesy: Cody Mullenaux

CNBC provided Chase with Mullenaux’s cellular phone records that showed he never made any outgoing phone calls to Chase on the day in question. The records also suggest, when compared with the wire transfer records, that it could not have been Mullenaux who called Chase to authorize the wire transfers because all three were authorized and went through while Mullenaux was still on the phone with the scammers.

However, that didn’t change the bank’s decision and, again, Mullenaux’s claim was denied since he had shared his private information with the criminals.

Scammers exploited regulatory loopholes

Whether the scammers realized they were doing it or not, they successfully exploited two loopholes in current consumer protection legislation that resulted in Chase not being required to replace Mullenaux’s stolen funds. Legally, banks do not have to reimburse stolen funds when a customer is tricked into sending money to a cybercriminal.

However, under the Electronic Fund Transfer Act, which covers most types of electronic transactions like peer-to-peer payments and online payments or transfers, banks are required to repay customers when funds are stolen without the customer authorizing it. Unfortunately, wire transfers, which involve transferring money from one bank to another, are not covered under the act, which also excludes fraud involving paper checks and prepaid cards.

The cybercriminals also transferred funds from Mullenaux’s personal checking and savings accounts to his business account before initiating the wire transfers. Regulation E, which is designed to help consumers get their money back from an unauthorized transaction, only protects individuals, not business accounts.

A representative for Chase said that the investigation is ongoing as the bank tries to recover the stolen funds.

That is something Mullenaux says he is praying for. “I pray that this tragedy is somehow reconciled, that [bank] management sees what happened to me and my money is returned.”

Mullenaux has also filed reports with the local police and the FBI’s Internet Crime Complaint Center, but neither have contacted him about his case.

Sophisticated scamming tactics on the rise

It’s not just Chase customers being targeted by cybercriminals with these sophisticated schemes. This past summer, IronNet uncovered a “phishing-as-a-service” platform that sells ready-made phishing kits to cybercriminals that target U.S.-based companies, including banks. The customizable kits can cost as little as $50 per month and include code, graphics and configuration files to resemble bank login pages.

Joey Fitzpatrick, a threat analysis manager at IronNet, said that while he can’t say for certain that this is how Mullenaux was defrauded, “the attack against him bears all the hallmarks of attackers leveraging the same sort of multimodal tools that phishing-as-a-service platforms provide.”

He expects “as-a-service” offerings will only continue to gain traction, as the kits not only lower the bar for low- to medium-tier cybercriminals to create phishing campaigns but also let higher-tier criminals focus on a single area and develop more sophisticated tactics and malware.

“We’ve seen a 10% increase in deployment of phishing kits in January 2023 alone,” Fitzpatrick said.

In 2022, the company saw a 45% increase in phishing alerts and detections.

But it’s not just phishing schemes on the rise, it’s all cyberattacks. Data from Check Point showed in 2022 there was a 52% increase in weekly cyberattacks on the finance/banking sector compared with attacks in 2021.

“The sophistication of cyberattacks and fraud schemes has significantly increased during the last year,” said Sergey Shykevich, the threat group manager at Check Point. “Now, in many cases cybercriminals don’t rely only on sending phishing/malicious emails and waiting for the people to click it, but combine it with phone calls, MFA [multifactor authentication] fatigue attacks and more.”

Both cybersecurity experts said banks can be doing more to educate customers. 

Shykevich said the banks should invest in better threat intelligence that can detect and block methods cybercriminals use. An example he gave is comparing a login to a person’s digital “fingerprint,” which is based on data such as the browser an account uses, screen resolution or keyboard language.
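The fingerprinting idea Shykevich describes can be sketched in a few lines. This is a hypothetical illustration, not any bank's actual system: the attribute names and the SHA-256 hashing step are assumptions chosen to show how a mismatch between a stored fingerprint and a new login could flag a session for extra verification.

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Hash a set of client attributes into a stable fingerprint string."""
    # Serialize deterministically so identical attributes always hash the same.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Fingerprint recorded from the account holder's usual device
# (illustrative attribute values).
known = device_fingerprint({
    "browser": "Firefox 109",
    "screen": "1920x1080",
    "keyboard_lang": "en-US",
})

# A login from a scammer's machine produces a different fingerprint,
# which could trigger extra checks before a wire transfer is approved.
suspect = device_fingerprint({
    "browser": "Chrome 108",
    "screen": "1366x768",
    "keyboard_lang": "ru-RU",
})

print(known != suspect)  # True: the mismatch flags the session for review
```

Real systems weigh many more signals and tolerate benign changes (a browser update, for instance), but the core comparison works along these lines.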

Best advice: Hang up the phone


Paging Dr. AI? What ChatGPT and artificial intelligence could mean for the future of medicine | CNN



CNN
 — 

Without cracking a single textbook and without spending a day in medical school, the co-author of a preprint study correctly answered enough practice questions to pass the real US Medical Licensing Examination.

But the test-taker wasn’t a member of Mensa or a medical savant; it was the artificial intelligence ChatGPT.

The tool, which was created to answer user questions in a conversational manner, has generated so much buzz that doctors and scientists are trying to determine what its limitations are – and what it could do for health and medicine.

ChatGPT, or Chat Generative Pre-trained Transformer, is a natural language-processing tool driven by artificial intelligence.

The technology, created by San Francisco-based OpenAI and launched in November, is not like a well-spoken search engine. It isn’t even connected to the internet. Rather, a human programmer feeds it a vast amount of online data that’s kept on a server.

It can answer questions even if it has never seen a particular sequence of words before, because ChatGPT’s algorithm is trained to predict what word will come up in a sentence based on the context of what comes before it. It draws on knowledge stored on its server to generate its response.
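The prediction idea can be illustrated with a deliberately tiny stand-in. This is not ChatGPT's actual architecture (which uses a large neural network trained on vast text); it is a toy bigram model that picks the next word from counts of word pairs seen in a training snippet, just to show what "predict the next word from context" means.

```python
from collections import Counter, defaultdict

# Toy training text (illustrative, not real training data).
training_text = (
    "the cat sat on the mat the dog sat on the rug the cat ate the fish"
).split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen after `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat" follows "the" most often in the snippet
print(predict_next("sat"))  # "on"
```

A large language model does the same job with probabilities over an enormous vocabulary and far longer contexts, which is what lets it continue sequences it has never seen verbatim.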

ChatGPT can also answer follow-up questions, admit mistakes and reject inappropriate questions, the company says. It’s free to try while its makers are testing it.

Artificial intelligence programs have been around for a while, but this one generated so much interest that medical practices, professional associations and medical journals have created task forces to see how it might be useful and to understand what limitations and ethical concerns it may bring.

Dr. Victor Tseng’s practice, Ansible Health, has set up a task force on the issue. The pulmonologist is a medical director of the California-based group and a co-author of the study in which ChatGPT demonstrated that it could probably pass the medical licensing exam.

Tseng said his colleagues started playing around with ChatGPT last year and were intrigued when it accurately diagnosed pretend patients in hypothetical scenarios.

“We were just so impressed and truly flabbergasted by the eloquence and sort of fluidity of its response that we decided that we should actually bring this into our formal evaluation process and start testing it against the benchmark for medical knowledge,” he said.

That benchmark was the three-part test that US med school graduates have to pass to be licensed to practice medicine. It’s generally considered one of the toughest of any profession because it doesn’t ask straightforward questions with answers that can easily be found on the internet.

The exam tests basic science and medical knowledge and case management, but it also assesses clinical reasoning, ethics, critical thinking and problem-solving skills.

The study team used 305 publicly available test questions from the June 2022 sample exam. None of the answers or related context was indexed on Google before January 1, 2022, so they would not be a part of the information on which ChatGPT trained. The study authors removed sample questions that had visuals and graphs, and they started a new chat session for each question they asked.
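The evaluation procedure described above can be sketched as a simple loop. This is a hedged reconstruction, not the study's published code: `ask_model` is a hypothetical stand-in for a real API call, and the question fields are illustrative.

```python
def evaluate(questions, ask_model):
    """Score a model on multiple-choice questions, mirroring the study's setup:
    drop questions with visuals, ask each in a fresh session, tally accuracy."""
    usable = [q for q in questions if not q.get("has_visual")]
    correct = 0
    for q in usable:
        # A new session per question, so earlier answers provide no context.
        answer = ask_model(q["prompt"], fresh_session=True)
        if answer == q["correct_answer"]:
            correct += 1
    return correct / len(usable)

# Tiny demonstration with canned answers instead of a live model.
sample = [
    {"prompt": "Q1", "correct_answer": "B"},
    {"prompt": "Q2", "correct_answer": "C", "has_visual": True},  # filtered out
    {"prompt": "Q3", "correct_answer": "A"},
]
canned = {"Q1": "B", "Q3": "D"}
accuracy = evaluate(sample, lambda p, fresh_session: canned[p])
print(accuracy)  # 0.5: one of the two usable questions answered correctly
```

With the study's 305 sample questions, the same loop would yield the roughly 60% accuracy Tseng describes below.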

Students often spend hundreds of hours preparing, and medical schools typically give them time away from class just for that purpose. ChatGPT had to do none of that prep work.

The AI performed at or near passing for all the parts of the exam without any specialized training, showing “a high level of concordance and insight in its explanations,” the study says.

Tseng was impressed.

“There’s a lot of red herrings,” he said. “Googling or trying to even intuitively figure out with an open-book approach is very difficult. It might take hours to answer one question that way. But ChatGPT was able to give an accurate answer about 60% of the time with cogent explanations within five seconds.”

Dr. Alex Mechaber, vice president of the US Medical Licensing Examination at the National Board of Medical Examiners, said ChatGPT’s passing results didn’t surprise him.

“The input material is really largely representative of medical knowledge and the type of multiple-choice questions which AI is most likely to be successful with,” he said.

Mechaber said the board is also testing ChatGPT with the exam. The members are especially interested in the answers the technology got wrong, and they want to understand why.

“I think this technology is really exciting,” he said. “We were also pretty aware and vigilant about the risks that large language models bring in terms of the potential for misinformation, and also potentially having harmful stereotypes and bias.”

He believes that there is potential with the technology.

“I think it’s going to get better and better, and we are excited and want to figure out how do we embrace it and use it in the right ways,” he said.

Already, ChatGPT has entered the discussion around research and publishing.

The results of the medical licensing exam study were even written up with the help of ChatGPT. The technology was originally listed as a co-author of the draft, but Tseng says that when the study is published, ChatGPT will not be listed as an author because it would be a distraction.

Last month, the journal Nature created guidelines that said no such program could be credited as an author because “any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”

But an article published Thursday in the journal Radiology was written almost entirely by ChatGPT. It was asked whether it could replace a human medical writer, and the program listed many of its possible uses, including writing study reports, creating documents that patients will read and translating medical information into a variety of languages.

Still, it does have some limitations.

“I think it definitely is going to help, but everything in AI needs guardrails,” said Dr. Linda Moy, the editor of Radiology and a professor of radiology at the NYU Grossman School of Medicine.

She said ChatGPT’s article was pretty accurate, but it made up some references.

One of Moy’s other concerns is that the AI could fabricate data. It’s only as good as the information it’s fed, and with so much inaccurate information available online about things like Covid-19 vaccines, it could use that to generate inaccurate results.

Moy’s colleague Artie Shen, a graduating Ph.D. candidate at NYU’s Center for Data Science, is exploring ChatGPT’s potential as a kind of translator for other AI programs for medical imaging analysis. For years, scientists have studied AI programs from startups and larger operations, like Google, that can recognize complex patterns in imaging data. The hope is that these could provide quantitative assessments that could potentially uncover diseases, possibly more effectively than the human eye.

“AI can give you a very accurate diagnosis, but they will never tell you how they reach this diagnosis,” Shen said. He believes that ChatGPT could work with the other programs to capture its rationale and observations.

“If they can talk, it has the potential to enable those systems to convey their knowledge in the same way as an experienced radiologist,” he said.

Tseng said he ultimately thinks ChatGPT can enhance medical practice in much the same way online medical information has: it both empowered patients and forced doctors to become better communicators, because they now have to provide context for what patients read online.

ChatGPT won’t replace doctors. Tseng’s group will continue to test it to learn why it creates certain errors and what other ethical parameters need to be put in place before using it for real. But Tseng thinks it could make the medical profession more accessible. For example, a doctor could ask ChatGPT to simplify complicated medical jargon into language that someone with a seventh-grade education could understand.
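To make the jargon-simplification idea Tseng describes concrete, here is a minimal sketch of how such a request could be phrased as a prompt for a chat model. The function name, wording and grade-level parameter are illustrative assumptions, not part of any published workflow; in practice the resulting text would be sent to a chat model such as ChatGPT.

```python
# Sketch: building a jargon-simplification prompt for a chat model.
# The function and its prompt wording are hypothetical examples.

def build_simplification_prompt(medical_text: str, grade_level: int = 7) -> str:
    """Return a prompt asking a chat model to rewrite medical jargon
    in plain language at roughly the given reading grade level."""
    return (
        f"Rewrite the following medical explanation so that a reader "
        f"with a grade-{grade_level} education can understand it. "
        f"Keep the medical meaning accurate and do not add new claims.\n\n"
        f"Text: {medical_text}"
    )

prompt = build_simplification_prompt(
    "The patient presented with acute myocardial infarction "
    "secondary to occlusive coronary thrombosis."
)
print(prompt.splitlines()[0])
```

The point of wrapping the prompt in a function is simply that the reading level and safety instruction ("do not add new claims") stay consistent across requests.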

“AI is here. The doors are open,” Tseng said. “My fundamental hope is, it will actually make me and make us as physicians and providers better.”


I’m a parent with an active social media brand: Here’s what you need to check on your child’s social media right now | CNN





If you follow me on Twitter or Instagram, you’ll know I wear a lot of hats: romance author, parent of funny tweenagers, part-time teacher, amateur homesteader, grumbling celiac and the wife of a seriously outdoorsy guy.

Because I’m an author with a major publisher in today’s competitive market, I’ve been tasked with stepping up my social media brand: participation, creation and all. The more transparent and likable I am online, the better my books sell. Therefore, to social media I go.

It’s rare to find someone with no social media presence these days, but there’s a marked difference between posting a few pictures for family and friends and actively creating social media content as part of your daily life.

With a whopping 95% of teens polled having access to smartphones (and 98% of teens over 15), according to an August Pew Research Center survey on teens, social media and technology, it doesn’t look like social media platforms are going away anytime soon.

Not only are they key social tools, but they also allow teens to feel more a part of things in their communities. Many teens like being online, according to a November Pew Research Center survey on teen life on social media. Eighty percent of the teens surveyed felt more connected to what is happening in their friends’ lives, while 71% felt social media allows them to showcase their creativity.

So, while posting online is work for me, it’s a way of life for the tweens and teens I see creating and publishing content online. As a parent of two middle schoolers, I know how important social media is to them, and I also know what’s out there. I see the good, the bad and the viral, and I’ve put together some guidelines, based on what I’ve seen, for my fellow parents to watch for.

Here are eight questions to ask yourself as you check out your children’s social media accounts.

If you don’t, it’s time to start. It’s like when I had to look up the term “situationship”: ignorance is not bliss in this case, or really in any case when it comes to your children. Both of my children have smartphones, but even if your children don’t have smartphones, if they have any sort of device — phone, tablet, school laptop — it’s likely they have some sort of social media account out there. Every app our children wish to add to their smart devices comes through my husband’s and my phone notifications for approval. Before I approve any apps, I’ll read the reviews, run an internet search and text my mom friends for their experience.

Most tweens and teens use social media for socializing with local friends.

If I’m still uncertain about an app, I’ll hold off on approving it until I can sit down with my children and ask them why they want it. Sometimes just waiting and forcing a short discussion is enough to convince them they no longer want it. In our household, I avoid any apps that run social surveys, allow anonymous feedback or require the individual to use location services.

If you don’t have your family phone plan all hooked together with parental controls, I’d advise setting that up ASAP. Because different devices and apps have different ways to monitor and set up parental controls, it’s impossible to link all the options here. However, a quick search will give you exactly the coverage you are comfortable with, including apps that track your child’s text messages and changing the settings on your child’s phone to lock down at a certain time every night.

The top social media platforms teens use today are YouTube (95% of teens polled), TikTok (67%), Instagram (62%) and Snapchat (59%), according to the Pew Research Center survey on teens and social media tech. Other social media platforms teens use less frequently are Twitter, Reddit, WhatsApp and Facebook. Most notably, Facebook is seeing a significant downturn in teen users. This list isn’t exhaustive, however. I would check out your children’s devices for group chat apps (such as Slack or Discord) and also scroll through their sport or activity apps where group chat capabilities exist.

I’ve seen preteens and teens using their real names, birthdate, home address, pets’ names, locker numbers or their school baseball team. Any of that information could be used to identify your child and their location in real life or through a quick Google search. All of that is an absolute “no” in our house.

I also tell my kids not to answer the fun surveys and quizzes that invite children to share their unique information and repost it for others to see. These can be useful tools for predators and people trying to steal your children’s identity.

What I do: I made the choice long ago to withhold the names of my children and partner. It’s not an exact science, and I know some clever digging could find them. For my husband, it’s for the sake of his privacy and also the protection of his professionalism. Just because he’s married to a romance author doesn’t mean he should have to answer for my online antics, whatever they may be. For my children, I want to avoid anything embarrassing that could be traced back to them during their college application season.

Even if your children keep their social media profiles private (more on that later), their biographical information, screen name and avatar or profile picture are public information.

Do an internet search of your child’s name to see what’s out there and scroll through images to make sure there isn’t anything you wouldn’t want to be made public. In our household, I’ve asked my children to use generic items or illustrated avatars in their social media bios.

What I do: Parents who do have active social media accounts may want to do a search of their own names. When my first book was published in 2019, I did a search of my name and images and found many photos of my children that came directly from my social media pages. I hadn’t posted pictures of them, but I did use a family photo as my profile photo and those are public record. Once I deleted them, the photos disappeared.

Another “no” in our household is posting videos or photos of our home or bedrooms. Something that feels innocent and innocuous to your middle schooler may not feel that way to an adult seeking out inappropriate content.

I learned this from one of my children’s Pinterest accounts. My kid loves to create themed videos using her own photos and stock pictures, and she’s gained over 500 followers in a short period of time. She has followed our rules completely, and I know because I check and follow her myself — but it hasn’t stopped the influx of adult men following her content.

What we do: Over the holidays, I sat with her and went through each follower one by one and blocked anyone we decided was there for the wrong reasons. In the end, we blocked close to 30 adult men on her account. (I also know that some predators cleverly disguise themselves as children or teens, and we may not catch them all, but this is still a worthy exercise.)

We also talk to our children about how to protect themselves. They wouldn’t want those strangers standing in their bedroom; therefore, they don’t want to post videos of their bedroom or bathroom or classroom for strangers to view.

This is a tricky one for lots of reasons. For content creators to build their following, they need to remain public on social media. If your child is an entrepreneur or artist hoping to grab attention, locking down their account will prevent that from happening.

That said, a way around this is to have two accounts. First, a private one, locked down and only used for family and close friends, and second, a public one that lacks identifiers but showcases whatever branding the child is hoping to grow. I’ve come across some well-managed public accounts for children who have giant followings and noticed they are usually run by parents, who state that right in the profile. I like this. If your children want public profiles because they are hoping to catch the attention of a talent scout, having the accounts monitored by a responsible adult who has their best interest in mind is a healthy compromise.

This is the exception, however. Most tweens and teens today use their social media for socializing with local friends. The benefit of keeping their account as private (or as private as can be) is threefold. It allows them to screen who follows their content, thus preventing our Pinterest fiasco. It prevents strangers from accessing their content and making it viral without their permission. And it protects them from unsolicited contact with strangers.

Not all social media platforms offer the same option to make an account “private.” YouTube, for example, instead has parental controls that can be adjusted at any time. TikTok and Instagram accounts can be made private (which means users must approve followers) in the account settings. Once the account is private, a little padlock will show next to the username.

Snapchat allows users to approve followers on a case-by-case basis as well as turn off features that disclose a user’s location. Notably, Snapchat also informs users when another user takes a screenshot of their story, which is a feature other social media platforms don’t have yet.

Most group chat apps don’t have the ability to go private so much as they ask users to approve of follower requests. Take time to discuss with your children who they allow to follow them and what personal information they allow those followers to know. It’s also a great time to teach them the art of “blocking” those individuals who are unsafe or unkind.

My suggestion is to log in, scroll around and even ask your children to teach you about the platforms they use. Then, when they roll their eyes at you, go ahead and tell them about your first Hotmail email address and the way you picked the perfect emo playlist on your Myspace page … and when they’re bent over laughing, sneak a peek at their follower list. Trust me, it’ll be worth it.


ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions

User-friendly AI tool ChatGPT has attracted hundreds of millions of users since its launch in November and is set to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, Colombian courts and a speech by Israel’s president. Is widespread uptake inevitable – and is it ethical?

In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that was set to make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative hi-tech industry. Over the past few decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI), and big data are truly impressive.”

To the surprise of the entrepreneurs attending Cybertech Global, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first world leader publicly known to use artificial intelligence to write a speech. 

But he was not the first politician to do so. A week earlier, US Congressman Jake Auchincloss read a speech also generated by ChatGPT on the floor of the House of Representatives. It was another first, intended to draw attention to the wildly successful new AI tool in Congress “so that we have a debate now about purposeful policy for AI”, Auchincloss told CNN.


Since its launch in November 2022, ChatGPT (created by California-based company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history. 

The user-friendly AI tool utilises online data to generate instantaneous, human-like responses to user queries. Its ability to scan the internet for information and provide rapid answers makes it a potential rival to Google’s search engine, but it is also able to produce written content on any topic, in any format – from essays, speeches and poems to computer code – in seconds.

The tool is currently free and boasted around 13 million unique visitors per day in January, a report from Swiss banking giant UBS found.

Part of its mass appeal is “extremely good engineering – it scales up very well with millions of people using it”, says Mirco Musolesi, professor of computer science at University College London. “But it also has very good training in terms of quality of the data used but also the way the creators managed to deal with problematic aspects.”

In the past, similar technologies have resulted in bots fed on a diet of social media posts taking on an aggressive, offensive tone. Not so for ChatGPT, and many of its millions of users engage with the tool out of curiosity or for entertainment.

“Humans have this idea of being very special, but then you see this machine that is able to produce something very similar to us,” Musolesi says. “We knew that this was probably possible but actually seeing it is very interesting.”

A ‘misinformation super spreader’?

Yet the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and sectors from education to law, science and business are braced for disruption.

Schools and colleges around the world have been quick to ban students from using ChatGPT to prevent cheating or plagiarism. 


Science journals have also banned the bot from being listed as a co-author on papers amid fears that errors made by the tool could find their way into scientific debate.  

OpenAI has cautioned that the bot can make mistakes. However, a report from media watchdog NewsGuard said on topics including Covid-19, Ukraine and school shootings, ChatGPT delivered “eloquent, false and misleading” claims 80 percent of the time. 

“For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative,” NewsGuard said. It called the tool “the next great misinformation super spreader”. 

Even so, in Colombia a judge announced on Tuesday that he used the AI chatbot to help make a ruling in a children’s medical rights case.

Judge Juan Manuel Padilla told Blu Radio he asked ChatGPT whether an autistic minor should be exonerated from paying fees for therapies, among other questions.  

The bot answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” 

Padilla ruled in favour of the child – as the bot advised. “By asking questions to the application we do not stop being judges [and] thinking beings,” he told the radio station. “I suspect that many of my colleagues are going to join in and begin to construct their rulings ethically with the help of artificial intelligence.” 

Although he cautioned that the bot should be used as a time-saving facilitator, rather than “with the aim of replacing judges”, critics said it was neither responsible nor ethical to use a bot capable of providing misinformation as a legal tool.

An expert in artificial intelligence regulation and governance, Professor Juan David Gutierrez of Rosario University said he put the same questions to ChatGPT and got different responses. In a tweet, he called for urgent “digital literacy” training for judges.

A market leader 

Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it will be used “extensively” for both positive and negative purposes – with the risk of misinformation and misuse comes the promise of information and technology becoming more accessible to a greater number of people. 

OpenAI received a multibillion-dollar investment from Microsoft in January that will see ChatGPT integrated into a premium version of the Teams messaging app, offering services such as generating automatic meeting notes.

Microsoft has said it plans to add ChatGPT’s technology into all its products, setting the stage for the company to become a leader in the field, ahead of Google’s parent company, Alphabet. 


Making the tool free has been key to its current and future success. “It was a huge marketing campaign,” Musolesi says, “and when people use it, they improve the dataset to use for the next version because they are providing this feedback.” 

Even so, the company launched a paid version of the bot this week offering access to new features for $20 per month.

Another eagerly awaited new development is an AI classifier, a software tool to help people identify when a text has been generated by artificial intelligence.

OpenAI said in a blog post that, while the tool was launched this week, it is not yet “fully reliable”. Currently it is only able to correctly identify AI-written texts 26 percent of the time.

But the company expects it will improve with training, reducing the potential for “automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”.  


