Has Meta’s record-breaking Threads opened us up to more cyberthreats?

By Dr Niklas Hellemann, Psychologist, CEO, SoSafe

Whether it’s the launch of Threads, the shift to remote work, or even the start of the war in Ukraine, hackers will manipulate our emotions against us, Dr Niklas Hellemann writes.

Threads, the new social media platform from Meta and would-be Twitter competitor, is officially the fastest-growing new app in history.

In just five days, it gained over 100 million users, a feat made all the more impressive by the fact that the app is not yet available in Europe.

However, in an already treacherous dark economy, where various channels are leveraged for cybercrime, Meta’s new social media superstar is yet another convenient avenue of attack for career cybercriminals and their social engineering toolkit. 

Consumers and employees – especially those who work with sensitive data – must be vigilant, as the rapidly expanding social media landscape represents a serious security risk.

A plethora of scams

In the short time since Threads' release, cybercriminals have already exploited its high-profile launch to scam and attack unsuspecting users.

For instance, criminals have set up phishing sites that imitate a web version of Threads (which does not yet exist), designed to trick users into entering their login details.

Because Threads is connected to other Meta services, cybercriminals could use these phishing sites to steal access to users’ other social media accounts, such as Instagram or Facebook. 

This is not only a privacy risk, opening the door to identity theft and doxing, but also a financial risk, as criminals may be able to steal personal banking information.
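
To make the pattern concrete, here is a minimal Python sketch, purely illustrative and not a tool referenced in this piece, of the kind of check that exposes lookalike login pages; the domain allow-list and function name are assumptions for the example:

```python
# Illustrative only: flag login links whose hostname is not an official Meta domain.
# The allow-list below is an assumption for the example, not an exhaustive list.
from urllib.parse import urlparse

OFFICIAL_DOMAINS = {"threads.net", "instagram.com", "facebook.com"}

def looks_like_phishing(url: str) -> bool:
    """Return True if the link's hostname is not an expected Meta domain."""
    host = (urlparse(url).hostname or "").lower()
    # Accept exact matches and genuine subdomains such as www.threads.net
    return not any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)

print(looks_like_phishing("https://threads-login.example.com/signin"))  # True: lookalike domain
print(looks_like_phishing("https://www.threads.net/"))                  # False: official domain
```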

Similarly, fake versions of the app have appeared in smartphone stores, either to trick users out of their money by requiring payment or to act as a channel for malware and phishing attacks. 

Earlier this month, Apple had to remove a counterfeit Threads app from its European App Store after it climbed to the number one spot.

Social media, the perfect hunting ground

One reason these fraudulent sites and apps have been so successful is that Threads is not yet available to European consumers. 

Its launch in the EU was delayed due to regulatory issues over the extensive amount of data Threads collects on its users, which should concern prospective users. 

Threads can collect personal information, including location, financial and even health and fitness data.

This treasure trove of data makes it an attractive target for hackers, representing a serious vulnerability if it is breached.

Those who can already use Threads must also be careful about whom they follow: Threads’ current verification system allows anyone to purchase a “tick”.

Without vetting, there is a risk of impersonators pretending to be well-known celebrities or organisations, whether to scam users out of their money or as part of a multi-channel phishing attack.

Social media is the perfect hunting ground for spear-phishing attacks: by harvesting personal details, cybercriminals can craft their attacks to target people with surgical precision, including by pretending to be an authority figure, such as the CEO of a business. 

This is made even easier because users may falsely believe that they are in a safe, private environment and feel encouraged to broadcast their personal information.

FOMO, a part of human nature

The security issues around Threads stem from a basic psychological phenomenon.

Namely, humans are fallible: our emotions drive our behaviour, and when faced with the novelty and excitement of getting to grips with a new technology, we often let our guard down.

In their haste to try out Threads, many users are exposing themselves to these scams. 

“FOMO” – the fear of missing out – is very real when it comes to jumping headfirst into exciting new platforms, but unfortunately, so are the potential risks.

However, there is a bigger issue at play. The rapid diversification of not just social media channels but also the communication tools and collaboration platforms we use in our everyday work and personal lives means that we are frequently getting to grips with unfamiliar technologies and environments.

Our increased dependency on this wider range of tools and platforms provides an advantage to cybercriminals, giving them more channels and vulnerabilities to attack and more ways to collect valuable data.

The security concerns around Threads also point to the simple fact that most people are unaware of the huge array of tactics and methods used by today’s highly professional hackers.

The cybercrime industry has never been more sophisticated or had more resources and opportunities, with the professionalisation of cybercrime leading to the creation of organised networks operating like slick criminal enterprises. 

Their main chance for success? Playing with our human psyche and emotions.

This is what you can do to protect yourself

So, how can everyday people stay safe in this ever-evolving cyberthreat jungle? 

First, we need to raise awareness of the threats that are out there so that people remember to protect themselves online.

By learning to spot threats or malicious messages, people are much better equipped to deal with them, rather than learning the hard way.

Second, we need to reinforce safe online behaviour. That means setting strong passwords and using multi-factor authentication to keep login details secure, but also being aware of what information we are sharing online – social media are public platforms where you cannot control the spread of information. 

Where possible, set your account to private.
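
For readers who want a practical starting point on the password front, here is a minimal sketch using only Python’s standard library to generate a strong, random password; the length and character set are illustrative choices, and multi-factor authentication and a password manager should still be used alongside it:

```python
# Illustrative only: generate a cryptographically random password.
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # prints a different 20-character password on every run
```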

Finally, be aware that cybercriminals will find ways to exploit current affairs as they are masters of social engineering.

Whether it’s the launch of Threads, the shift to remote work, or even the start of the war in Ukraine, hackers will manipulate our emotions against us.

Today’s cybercriminals are experts at exploiting the human psyche. 

Only if we are aware of the inventiveness and creativity of cybercriminals, and practise secure behaviour online, will we be able to keep spotting these risks and stay safe.

Dr Niklas Hellemann is a psychologist and the CEO of SoSafe, a security awareness scale-up.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions

User-friendly AI tool ChatGPT has attracted hundreds of millions of users since its launch in November and is set to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, Colombian courts and a speech by Israel’s president. Is widespread uptake inevitable – and is it ethical?

In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that was set to make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative hi-tech industry. Over the past few decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI), and big data are truly impressive.”

To the surprise of the entrepreneurs attending Cybertech Global, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first world leader publicly known to use artificial intelligence to write a speech. 

But he was not the first politician to do so. A week earlier, US Congressman Jake Auchincloss read a speech also generated by ChatGPT on the floor of the House of Representatives. It was another first, intended to draw Congress’s attention to the wildly successful new AI tool “so that we have a debate now about purposeful policy for AI”, Auchincloss told CNN.


Since its launch in November 2022, ChatGPT (created by California-based company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history. 

The user-friendly AI tool utilises online data to generate instantaneous, human-like responses to user queries. Its ability to scan the internet for information and provide rapid answers makes it a potential rival to Google’s search engine, but it is also able to produce written content on any topic, in any format – from essays, speeches and poems to computer code – in seconds.
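
For readers curious how applications plug into the same model programmatically, here is a minimal sketch against OpenAI’s public API using its Python SDK; the model name and prompt are illustrative choices, and it assumes an API key is available in the OPENAI_API_KEY environment variable:

```python
# Illustrative only: ask the model to draft a short speech opener via OpenAI's API.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[{"role": "user", "content": "Write a two-sentence opener for a speech on cybersecurity."}],
)
print(response.choices[0].message.content)
```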

The tool is currently free and boasted around 13 million unique visitors per day in January, a report from Swiss banking giant UBS found.

Part of its mass appeal is “extremely good engineering – it scales up very well with millions of people using it”, says Mirco Musolesi, professor of computer science at University College London. “But it also has very good training in terms of quality of the data used but also the way the creators managed to deal with problematic aspects.”

In the past, similar technologies have resulted in bots fed on a diet of social media posts taking on an aggressive, offensive tone. Not so for ChatGPT, and many of its millions of users engage with the tool out of curiosity or for entertainment.

“Humans have this idea of being very special, but then you see this machine that is able to produce something very similar to us,” Musolesi says. “We knew that this was probably possible but actually seeing it is very interesting.”

A ‘misinformation super spreader’?

Yet the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and sectors from education to law, science and business are braced for disruption.

Schools and colleges around the world have been quick to ban students from using ChatGPT to prevent cheating or plagiarism. 


Science journals have also banned the bot from being listed as a co-author on papers amid fears that errors made by the tool could find their way into scientific debate.  

OpenAI has cautioned that the bot can make mistakes. However, a report from media watchdog NewsGuard said that on topics including Covid-19, Ukraine and school shootings, ChatGPT delivered “eloquent, false and misleading” claims 80 percent of the time.

“For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative,” NewsGuard said. It called the tool “the next great misinformation super spreader”. 

Even so, in Colombia a judge announced on Tuesday that he had used the AI chatbot to help make a ruling in a children’s medical rights case.

Judge Juan Manuel Padilla told Blu Radio he asked ChatGPT whether an autistic minor should be exonerated from paying fees for therapies, among other questions.  

The bot answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” 

Padilla ruled in favour of the child – as the bot advised. “By asking questions to the application we do not stop being judges [and] thinking beings,” he told the radio station. “I suspect that many of my colleagues are going to join in and begin to construct their rulings ethically with the help of artificial intelligence.” 

Although he cautioned that the bot should be used as a time-saving facilitator rather than “with the aim of replacing judges”, critics said it was neither responsible nor ethical to use a bot capable of providing misinformation as a legal tool.

An expert in artificial intelligence regulation and governance, Professor Juan David Gutierrez of Rosario University said he put the same questions to ChatGPT and got different responses. In a tweet, he called for urgent “digital literacy” training for judges.

A market leader 

Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it will be used “extensively” for both positive and negative purposes – with the risk of misinformation and misuse comes the promise of information and technology becoming more accessible to a greater number of people. 

OpenAI received a multibillion-dollar investment from Microsoft in January that will see ChatGPT integrated into a premium version of the Teams messaging app, offering services such as generating automatic meeting notes.

Microsoft has said it plans to add ChatGPT’s technology into all its products, setting the stage for the company to become a leader in the field, ahead of Google’s parent company, Alphabet. 


Making the tool free has been key to its current and future success. “It was a huge marketing campaign,” Musolesi says, “and when people use it, they improve the dataset to use for the next version because they are providing this feedback.” 

Even so, the company launched a paid version of the bot this week offering access to new features for $20 per month.

Another eagerly awaited new development is an AI classifier, a software tool to help people identify when a text has been generated by artificial intelligence.

OpenAI said in a blog post that, while the tool was launched this week, it is not yet “fully reliable”. Currently it is only able to correctly identify AI-written texts 26 percent of the time.

But the company expects it will improve with training, reducing the potential for “automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”.  


