‘A fight for your way of life’: Lithuania’s culture minister on Ukraine and Russian disinformation

Lithuania’s Minister of Culture Simonas Kairys spoke to FRANCE 24 about Lithuania’s fight against Russian disinformation and why the Baltic nation feels so bound to Ukraine.


In March 1990, Lithuania became the first Soviet republic to declare its independence as the Soviet Union collapsed, setting an example for other states that had been under the Kremlin’s control for half a century. As a nascent democracy emerging from Soviet rule, Lithuania was free to rediscover its own history and culture.

But Vilnius has once again become a target for Moscow. Russian President Vladimir Putin has long considered the demise of the Soviet Union a historical tragedy in which Russians were innocent victims. As part of efforts to justify the February 2022 invasion of Ukraine, Russia has launched a disinformation campaign aimed at Kyiv’s allies in the West.

In addition to putting pressure on Ukraine’s supporters, the Kremlin has attempted to intimidate them. In February, Russian authorities placed Lithuanian Culture Minister Simonas Kairys, Estonian Prime Minister Kaja Kallas and other Baltic officials on a wanted list for allowing municipalities to dismantle WWII-era monuments to Soviet soldiers, moves seen by Moscow as “an insult to history”.

Upon being informed his name was listed, Culture Minister Kairys was insouciant. “I’m glad that my work in dismantling the ruins of Sovietisation has not gone unnoticed,” he said.


FRANCE 24 spoke to Kairys about why it is vital to fight Russian propaganda, and why the Baltic state feels so invested in what is happening in Ukraine.

This interview has been lightly edited for length and clarity. 

What historical narratives has Russia tried to distort when it comes to Lithuanian independence?

Simonas Kairys: Russia is still in “imperialism” mode. The way they inscribed me onto their wanted list shows that they think and act upon the belief that countries that were formerly part of the Soviet Union – sovereign and independent countries such as Lithuania – are still part of Russia.

Russia has its own legal system, which – from their point of view – is [the law even] in free countries (in the Russian criminal code, “destroying monuments to Soviet soldiers” is an act punishable by a five-year prison term). It’s absurd and unbelievable how they interpret the current situation in the world. When they say, for example, that they are “protecting” objects of Soviet heritage in a foreign country like Lithuania, they are spreading their belief that it is not a free country. But we are not slaves, and we are taking this opportunity to be outspoken and say Russia is promoting a fake version of history.

Why is combating Russian disinformation essential for Lithuanian national security?

It is not only important for Lithuania – it is important for the EU, for Europe and for the entire free world. The war in Ukraine is happening very near to the EU; it is happening only a few hours away from France. Culture, heritage [and] historical memory are also fields of combat. Adding me to their wanted list is just one example of this. When we see how Russia is falsifying not only history but all information, it’s important to speak about it very loudly. Lithuania has achieved a lot in this domain, along with Ukraine and France.

When France had the [rotating, six-month] presidency of the EU [in early 2022], we made several joint declarations. The result was that we adopted a sixth package of sanctions against Russia and designated six Russian television channels to be blocked in the EU – this was the first step towards treating information as a [weapon]. In other words, information is being used by Russia to convince their society and sway public opinion in other European countries. Now we have a situation in which we are blocking Russian television channels in EU territory.

Our foreign partners often ask us upon which criteria Russian information can be considered as disinformation. These days, it’s very important to stress that any information – from television shows to news to other television productions – coming from Russia is automatically disinformation, propaganda and fake news. We must understand that there is no truth in what Russia tries to say.

This fight against disinformation is crucial because we are in a phase of big developments in technology and artificial intelligence. We have to ensure that our societies will be prepared, be capable of critical thinking, and understand what is happening in the world right now.


Olympic and world champion Ruta Meilutyte swims across a pond colored red to signify blood, in front of the Russian embassy in Vilnius, Lithuania, Wednesday, April 6, 2022. © Andrius Repsys, AP

To borrow a term from Czech writer Milan Kundera, would you say that Lithuania was “kidnapped from the West” when it was annexed by the Soviet Union in 1940?

During the Middle Ages, the Grand Duchy of Lithuania spanned from the Baltic Sea to the Black Sea. We were the same country as Poland, Ukraine and Belarus. We were oriented to the West, not the East. In much older times, during the Kievan Rus period, Moscow didn’t even exist; there were just swamps and nothing more. But with [growing] imperialism from the Russian side, they began portraying history in a different way. Yet our memory is like our DNA: our freedom and orientation are ingrained. The eastern flank of the EU is currently talking about the values of Western civilisation much more emphatically than in the past.

[During the Cold War] not only was our freedom taken but [Russia] tried to delete history and paint a picture only of the time after this imperialism entered our territory. But we remembered what happened in the Middle Ages; we remember how modern Lithuanian statehood arose after World War I and how we regained our freedom in 1990. It’s impossible to delete this memory and call Lithuania a country that isn’t free. Once you take a breath of freedom, you never forget it. This is the reason why we understand Ukrainians and why we are so active in defending not only the territory of Ukraine but the values of Western civilisation as well.

How has the war in Ukraine influenced Lithuanian life and culture?

The main thing is to think about freedom; we have to do a lot because of that freedom, we have to fight for freedom … we understand more and more that culture plays a big role in this war, because the war itself is rooted in culture and history. You can see what Putin is declaring, and it is truly evident that culture, heritage and historical memory are used as the basis for an explanation of why Russia is waging war in Ukraine right now. (To justify the invasion of Ukraine, Putin has insisted that Russians and Ukrainians are one people and that uniting them is a historical inevitability.)

There are important collaborations taking place with Ukrainian culture and artists. It’s important to give them a platform – for everyone to see that Ukraine is not defeated, that Ukraine is still fighting, that Ukraine will win, that we will help them. 

The best response to an aggressor is to live your daily life, with all your traditions, habits and cultural legacy. This fight is also for your way of life. The situation is not one where you must stop and only think about guns and systems of defence – you have to live, work, create, and keep up your business and cultural life. 


Ilya Gambashidze: Simple soldier of disinformation or king of Russia’s trolls?

He may not be a household name, but Ilya Gambashidze appears to be involved in almost all of the latest Russian disinformation operations across the world. His disruptive cyber actions earned him a spot last year on the European sanctions list. But a FRANCE 24-RFI profile of Russia’s mystery man of manipulation reveals an operative of far smaller stature than the Kremlin’s previous troll czar, the late Wagner boss Yevgeny Prigozhin.

He emerged from anonymity in the West in the summer of 2023, when Ilya Gambashidze’s name first appeared on the Council of the European Union’s July 2023 list of Russian nationals subjected to sanctions.

The list – which transliterated his last name from the original Cyrillic text as “Gambachidze” – noted that he was the “founder of Structura National Technologies and Social Design Agency” and was a “key actor” in Russia’s disinformation campaign targeting Ukraine and a number of West European countries.

By November, the US State Department was citing Gambashidze in a media note on the Kremlin’s efforts to covertly spread disinformation in Latin America.

The tactics cited in the US and EU documents detail the disinformation strategies employed in a vast operation dubbed Doppelganger by EU officials, which creates fake websites cloning and impersonating government organisations and mainstream media.

The Social Design Agency (SDA) and Structura were described by the US State Department as “influence-for-hire firms” with “deep technical capability, experience in exploiting open information environments, and a history of proliferating disinformation and propaganda to further Russia’s foreign influence objectives”.  

The SDA fulfills a dual role, according to Coline Chavane, threat research analyst at Sekoia.io, a French cybersecurity company. “The SDA acted both as a coordinator of the various players involved in these disinformation campaigns, and as an operator, creating false content,” she explained.

Exploiting crises from Ukraine to the Gaza war

In addition to being a prolific disinformer, Gambashidze is also an opportunistic one. Months after his name appeared on the European sanctions list, Gambashidze was busy trying to fan tensions between France’s Muslim and Jewish communities following the Gaza war launched by Israel in response to the October 7 Hamas attack.

The French foreign ministry has linked an anti-Semitic Star of David graffiti campaign in the Paris region to Operation Doppelganger. Viginum, the French government agency for defence against foreign digital influence, has accused the SDA of seeking to amplify the surge in anti-Semitism in France by using bots to proliferate Star of David posts on social networks.

The Kremlin has even cited Gambashidze as the chief organiser of a new anti-Western propaganda campaign in Ukraine, according to documents detailing a disinformation plan signed by the SDA boss and leaked to Ukrainian media.

In the leaked documents, Gambashidze is presented as one of the main shadow advisors to “The Other Ukraine”, a massive Kremlin propaganda operation targeting Ukrainian President Volodymyr Zelensky.

“One reason for talking about Gambashidze is he looks very central. His name keeps cropping up, including with respect to Ukraine,” said Andrew Wilson, a professor of Ukrainian studies at University College London.

When contacted by FRANCE 24, the Council of the European Union declined to comment on the importance that Brussels attaches to this Russian propagandist, citing the “confidentiality of preparatory work” in deciding whether to sanction an individual or a company.

Gambashidze is not the only Russian involved in Operation Doppelganger cited by the EU. Individuals linked to the GRU, Russia’s military intelligence agency, have also been sanctioned.

Nor is he the sole orchestrator of the new disinformation campaign in Ukraine. He is also said to have worked with Sofiya Zakharova, an employee of the Russian Department of Communications and Information Technology, dubbed “the brain” of Operation Doppelganger.

In the footsteps of Yevgeny Prigozhin

With the SDA and Structura cropping up in multiple Western investigations and news reports on Russian disinformation, Anton Shekhovtsov, a Ukrainian political scientist and director of the Austria-based Centre for Democratic Integrity, notes that this omnipresence suggests that “Ilya Gambashidze and the SDA are gradually replacing Yevgeny Prigozhin and his troll factory”.

Before the Wagner militia chief’s death in August 2023, Prigozhin ran a network of “troll farms” that conducted disinformation operations covering vast ground, from the 2016 US presidential election and the Brexit vote to online anti-Western campaigns in Africa and Asia.

Prigozhin’s death in a plane crash just two months after he led a failed mutiny in Russia has left the disinformation throne vacant, according to Shekhovtsov. “There’s a place up for grabs and the competition is fierce. For now, Ilya Gambashidze appears to be well placed,” he noted.

But Gambashidze has not yet reached Prigozhin’s disinformation stature, and his vast domain could be divided between several inheritors. “We are currently witnessing a restructuring of the propaganda ecosystem in Russia. There isn’t necessarily one player at the heart of the system. It’s more like a network that’s being set up,” said Chavane.

In the past, when Prigozhin was the tutelary figure of the Kremlin’s cyber propaganda, “disinformation was organised in a pyramid structure, whereas we seem to be moving more towards a spider’s web structure with several players linked together in a network”, explained François Deruty, Sekoia’s chief operations officer.

A discreet Rasputin of disinformation

Gambashidze and Prigozhin have a difference in style as well as stature. The middle-aged Gambashidze, with his rather stern Russian technocrat demeanor, has none of the bluster and media showmanship of the late Wagner boss. While Prigozhin was known for his public boasts and rants, Gambashidze’s modus operandi appears to be discretion.

Very little is known about his private life, and the Internet provides little information about him – not even basic details such as his age. According to Russian investigative journalist Sergei Yezhov, Gambashidze is 46 years old.

There are no details about his birthplace, education and family life either. The only available piece of information is that he comes under the fiscal jurisdiction of a Moscow tax office. Photographs of Gambashidze are equally rare, and one of the most recent shows an austere-looking man with thinning hair and no other distinguishing features.

On the European list of sanctioned individuals, he is described as having “formerly worked as a counsellor … to Piotr Tolstoi”. It’s a noteworthy detail. Piotr Tolstoi, commonly spelt Pyotr Tolstoy, is none other than a great-great-grandson of Russian literary icon Leo Tolstoy. The younger Tolstoy is the deputy chairman of the Duma, Russia’s lower house of parliament. He was also deputy chairman of the Parliamentary Assembly of the Council of Europe before Russia was expelled from the organisation – which is distinct from the EU – following the 2022 Ukraine invasion.

The lack of information, discretion and cited ties to prominent Russian politicians paints a picture of a mysterious master of manipulation, a latter-day Rasputin of disinformation.

‘Third-rate political technologist’

But images can also be deceptive. “If there’s so little information about him, it may be simply because he’s not important enough in Russia,” noted Andrey Pertsev, a journalist with the Latvia-based independent Russian news outlet Meduza and an expert on Moscow’s corridors of power.

Gambashidze’s case illustrates how the same individual can be perceived by two very different worlds. In the West, he is considered a threat, with Europe going so far as to include him on its list of sanctioned individuals. In Russia, on the other hand, he is at best “a third-rate political technologist”, according to Pertsev, using a Russian term for the professional engineering of politics.

While the term “political technology” is largely unfamiliar in the West, it’s well known to Russian and Ukrainian audiences acquainted with the state’s use of manipulative techniques to hijack and weaponise the political process.

It’s also the subject of Wilson’s latest book, “Political Technology: The Globalisation of Political Manipulation”, and Gambashidze appears to neatly fit the definition of a political technologist. “His career looks super typical. A lot of these political technologists are entrepreneurial. They sell services, they come up with ideas,” explained Wilson.

Internationally, political technologists are most often associated with Prigozhin, who sent dozens of them to African countries to help Moscow’s protégés win elections. But most Russian political technologists are focused on domestic politics and local parties, according to experts. “We mustn’t forget that their main bread and butter consists of handling local elections, working for governors or parties,” noted Shekhovtsov.

“That’s where the money is,” explained Pertsev. A political technologist’s influence is therefore measured above all by the prestige of the election he or she is supposed to help win.

Gambashidze is no exception. He has handled elections in Kalmykia, one of Russia’s 21 republics, located in southern Russia, as well as in the Tambov Oblast, one of the least populated regions of central Russia.

“His [SDA] team often made mistakes and he was repeatedly called back to Moscow to avoid an electoral setback,” explained Pertsev, who says he cannot understand how such an individual ended up in Brussels’ crosshairs.

On the messaging service Telegram, anonymous accounts make fun of the questionable effects of Gambashidze’s advice to Batu Khassikov, governor of Kalmykia in 2019. Not only did Gambashidze fail to get Khassikov re-elected, but the incumbent’s popularity rating actually plummeted at the time.

A pig release backfires

In August 2023, a Gambashidze associate thought it wise to organise a release of pigs tattooed with the Communist Party emblem in Khakassia, a republic in southern Siberia.

The aim of the pig release was to discredit the republic’s Communist governor, Valentin Konovalov. But the plan backfired: Gambashidze’s associate was accused by a section of the local population of “ridiculing Russian history” and he was fined for violating campaign rules.

But the SDA’s most prestigious, if short-lived, client appears to have been Leonid Slutsky, who took over as head of the ultranationalist Liberal Democratic Party (LDPR) in 2022 after the death of Russia’s notorious, far-right populist, Vladimir Zhirinovsky.

The new LDPR boss, aware of his lack of charisma, needed a political technologist. He ended up with Gambashidze, who was quickly dismissed “without a moment’s hesitation, which means that Ilya Gambashidze is not considered very important in the Kremlin”, explained Pertsev.

How did such an individual come to be associated with large-scale disinformation operations on the international stage? “Sometimes it’s not competence that counts, but loyalty, and in Russia the quality of the network is central for a political technologist,” noted Wilson.

In Gambashidze’s case, the man who knows the man who knows President Vladimir Putin is Alexander Kharichev, a Kremlin adviser. But most important, according to Pertsev, is the fact that Gambashidze is “a fellow traveler” of Sergey Kiriyenko, a former Russian prime minister and currently the first deputy chief of staff in Putin’s administration.

In late December 2023, the Washington Post identified Kiriyenko as the top Russian official who tasked Kremlin political strategists with promoting political discord in France by amplifying messages to strengthen the French far right. These included talking points claiming that the Ukraine war was plunging France into its deepest-ever economic crisis, or that it was depleting the weapons France needs to defend itself.

“People come to Sergey Kiriyenko for electoral or other questions, and he delegates to Alexander Kharichev the task of finding the right political technologists,” explained Pertsev.

Cannon fodder in the information war

This is how Gambashidze came to be involved in international disinformation operations, explained Pertsev. “The main reason is that he’s cheap,” he explained, noting that in the Kremlin’s order of budgetary priorities, getting the right candidate to win local elections is more important than launching a disinformation campaign in Western Europe.

What’s more, “the best political technologists would probably not be interested”, added Pertsev. For the big fish, working on disinformation campaigns targeting the West is not worth it: the domestic political market is more lucrative and carries no risk of ending up on international sanctions lists. In a way, Gambashidze is simply informational cannon fodder.

Yet the Kremlin’s great ideological war against the West – in which disinformation operations play an important role – has always been presented as a priority for Putin. It may seem incongruous to make a relatively minor figure like Gambashidze a central part of the disinformation schemes targeting the West.

But Gambashidze is not the only master on board. “As the defence of Russian values has been elevated to a matter of national security, Russian spies are inevitably involved in this type of operation,” noted Yevgeniy Golovchenko, a specialist in Russian disinformation at the University of Copenhagen.

Nor does the Kremlin require elaborate cyber-propaganda campaigns. “The most sophisticated aspect is the diversity of media and means used. For Operation Doppelganger, the SDA called on local media, journalists and YouTubers to amplify their messages. They also set up a vast network of fake sites, some of which were only visible in a specific country,” explained Chavane.

The fake news sites set up were rather crude clones of major news outlets such as the French “20 Minutes”, Germany’s “Der Spiegel” or the British daily “The Guardian”.

“The important thing is that these operations are inexpensive. One costs less than a missile over Ukraine. So even if they’re not perfectly executed by Ilya Gambashidze, the bet is that by stringing them together over a long period, they’ll end up working,” explained Golovchenko.

In Moscow’s informational warfare set-up, Gambashidze is one cog in the wheel, part of an approach reminiscent of Russia’s military strategy in Ukraine: sending in wave after wave of troops in the hope that the enemy’s defences will collapse under the sheer numbers.

(This is a translation of the original in French.)




Facebook shuts thousands of fake Chinese accounts masquerading as Americans

Someone in China created thousands of fake social media accounts designed to appear to be from Americans and used them to spread polarizing political content in an apparent effort to divide the U.S. ahead of next year’s elections, Meta said Thursday. 

The network of nearly 4,800 fake accounts was attempting to build an audience when it was identified and eliminated by the tech company, which owns Facebook and Instagram. The accounts sported fake photos, names and locations as a way to appear like everyday American Facebook users weighing in on political issues.

Instead of spreading fake content as other networks have done, the accounts were used to reshare posts from X, the platform formerly known as Twitter, that were created by politicians, news outlets and others. The interconnected accounts pulled content from both liberal and conservative sources, an indication that its goal was not to support one side or the other but to exaggerate partisan divisions and further inflame polarization.

The newly identified network shows how America’s foreign adversaries exploit U.S.-based tech platforms to sow discord and distrust, and it hints at the serious threats posed by online disinformation next year, when national elections will occur in the U.S., India, Mexico, Ukraine, Pakistan, Taiwan and other nations.

“These networks still struggle to build audiences, but they’re a warning,” said Ben Nimmo, who leads investigations into inauthentic behavior on Meta’s platforms. “Foreign threat actors are attempting to reach people across the internet ahead of next year’s elections, and we need to remain alert.”

Meta Platforms Inc., based in Menlo Park, California, did not publicly link the Chinese network to the Chinese government, but it did determine the network originated in that country. The content spread by the accounts broadly complements other Chinese government propaganda and disinformation that has sought to inflate partisan and ideological divisions within the U.S.

To appear more like normal Facebook accounts, the network would sometimes post about fashion or pets. Earlier this year, some of the accounts abruptly replaced their American-sounding user names and profile pictures with new ones suggesting they lived in India. The accounts then began spreading pro-Chinese content about Tibet and India, reflecting how fake networks can be redirected to focus on new targets.

Meta often points to its efforts to shut down fake social media networks as evidence of its commitment to protecting election integrity and democracy. But critics say the platform’s focus on fake accounts distracts from its failure to address its responsibility for the misinformation already on its site that has contributed to polarization and distrust.

For instance, Meta will accept paid advertisements on its site to claim the U.S. election in 2020 was rigged or stolen, amplifying the lies of former President Donald Trump and other Republicans whose claims about election irregularities have been repeatedly debunked. Federal and state election officials and Trump’s own attorney general have said there is no credible evidence that the presidential election, which Trump lost to Democrat Joe Biden, was tainted.

When asked about its ad policy, the company said it is focusing on future elections, not ones from the past, and will reject ads that cast unfounded doubt on upcoming contests.

And while Meta has announced a new artificial intelligence policy that will require political ads to bear a disclaimer if they contain AI-generated content, the company has allowed other altered videos that were created using more conventional programs to remain on its platform, including a digitally edited video of Biden that claims he is a pedophile.

“This is a company that cannot be taken seriously and that cannot be trusted,” said Zamaan Qureshi, a policy adviser at the Real Facebook Oversight Board, an organization of civil rights leaders and tech experts who have been critical of Meta’s approach to disinformation and hate speech. “Watch what Meta does, not what they say.” 

Meta executives discussed the network’s activities during a conference call with reporters on Wednesday, the day after the tech giant announced its policies for the upcoming election year — most of which were put in place for prior elections. 

But 2024 poses new challenges, according to experts who study the link between social media and disinformation. Not only will many large countries hold national elections, but the emergence of sophisticated AI programs means it’s easier than ever to create lifelike audio and video that could mislead voters. 

“Platforms still are not taking their role in the public sphere seriously,” said Jennifer Stromer-Galley, a Syracuse University professor who studies digital media. 

Stromer-Galley called Meta’s election plans “modest” but noted it stands in stark contrast to the “Wild West” of X. Since buying the X platform, then called Twitter, Elon Musk has eliminated teams focused on content moderation, welcomed back many users previously banned for hate speech and used the site to spread conspiracy theories.

Democrats and Republicans have called for laws addressing algorithmic recommendations, misinformation, deepfakes and hate speech, but there’s little chance of any significant regulations passing ahead of the 2024 election. That means it will fall to the platforms to voluntarily police themselves.

Meta’s efforts to protect the election so far are “a horrible preview of what we can expect in 2024,” according to Kyle Morse, deputy executive director of the Tech Oversight Project, a nonprofit that supports new federal regulations for social media. “Congress and the administration need to act now to ensure that Meta, TikTok, Google, X, Rumble and other social media platforms are not actively aiding and abetting foreign and domestic actors who are openly undermining our democracy.”

Many of the fake accounts identified by Meta this week also had nearly identical accounts on X, where some of them regularly retweeted Musk’s posts.

Those accounts remain active on X. A message seeking comment from the platform was not returned.

Meta also released a report Wednesday evaluating the risk that foreign adversaries including Iran, China and Russia would use social media to interfere in elections. The report noted that Russia’s recent disinformation efforts have focused not on the U.S. but on its war against Ukraine, using state media propaganda and misinformation in an effort to undermine support for the invaded nation.

Nimmo, Meta’s chief investigator, said turning opinion against Ukraine will likely be the focus of any disinformation Russia seeks to inject into America’s political debate ahead of next year’s election.

“This is important ahead of 2024,” Nimmo said. “As the war continues, we should especially expect to see Russian attempts to target election-related debates and candidates that focus on support for Ukraine.”

(AP)


‘Pallywood propaganda’: Pro-Israeli accounts online accuse Palestinians of staging their suffering

Since Hamas carried out its deadly attack on October 7 and Israel began retaliatory military operations in Gaza, a parallel war has been fought online. A barrage of disinformation, fake news and misinformation has swamped social media feeds, and pro-Israeli accounts are using the term “Pallywood” to accuse Palestinians of faking their suffering.

Amid the thick fog of this information war, one word has consistently come out from behind the haze. Pro-Israeli accounts online have been deploying the word “Pallywood” as a means to undermine the plight of Gazans. 


A blend of the words “Palestine” and “Hollywood”, the term insinuates that stories of suffering coming from Gaza are contrived or embellished for propaganda purposes. The accusations range from hiring crisis actors to doctoring footage and editing it in dishonest ways that misrepresent reality.

Detractors argue the pejorative term is a deliberate attempt to delegitimise the very real hardships endured by Gazans, and to dehumanise Palestinian lives.  

A Gazan caught in the crosshairs 

At the heart of the Pallywood claims made by pro-Israeli accounts online is one young Gazan in particular, Saleh Al-Jafarawi. He has repeatedly been accused of being a “crisis actor” working for Hamas who allegedly stages scenes to make himself look like a victim.  

Al-Jafarawi has been actively posting videos on Instagram since the start of the war to document what is happening on the ground in Gaza. But he got caught in the crosshairs of disinformation when pro-Israeli accounts started sharing videos purporting to show Al-Jafarawi in a hospital bed one day and walking the streets of Gaza the next.

The claim that Al-Jafarawi had faked an injury spread like wildfire, with official government profiles taking part in its circulation. Israel’s official X account also shared the story in two separate tweets, which it then deleted some hours later.  

Hananya Naftali, who used to work under Prime Minister Benjamin Netanyahu as part of his digital communications team and is now a leading pro-Israeli influencer, also re-tweeted the viral video on October 26. 

In Naftali’s post, two videos have been edited side-by-side. The video on the left depicts a man walking through rubble and has a green banner above it that reads “today”. On the right, a man lies in a hospital bed with an amputated leg while a red banner on the top of the video reads “yesterday”. Naftali called the video “Pallywood propaganda”, claiming the Palestinian man was “miraculously healed in one day” from Israeli strikes. 

But the two videos are of two different men. The video on the left is of Al-Jafarawi, a Gazan YouTuber and singer. The video on the right is of Mohammed Zendiq, a young man who lost his leg after Israeli forces attacked the Nur Shams refugee camp in the West Bank on July 24.  

Though the claim has long been debunked by various news outlets, Naftali has not deleted his post. And claims about Al-Jafarawi have continued to spread.  

“[Pallywood] is certainly a form of disinformation,” says Dr. Robert Topinka, a senior lecturer at Birkbeck, University of London, who has carried out extensive research on disinformation. “It’s being deliberately spread to confuse… It’s purposeful. Why else would it continue to be spread after it’s been so clearly debunked?”

Al-Jafarawi can still be seen in a compilation of photos aimed at discrediting his coverage of the war in Gaza. A mosaic with nine different photos purports to show Al-Jafarawi taking on different “roles”, but they are images from different dates, taken in different settings, and are not proof he is an actor, something French daily Libération has thoroughly fact-checked. The state of Israel reposted the compilation on November 6 and has not deleted it from its X account so far.  

As for the misidentified Palestinian man who lost his leg, Zendiq, he has received an avalanche of online abuse. His family now fear for his life.  

‘Dilute’, ‘dehumanise’ and ‘undermine’ 

For Shakuntala Banaji, an expert on disinformation and media professor at the London School of Economics and Political Science who has been monitoring false claims online since the war broke out, Pallywood “is insult added to injury”.  

“We don’t really need those kinds of false reports, since the accurate reporting is there,” says Banaji, referring to the journalists on the ground in Gaza. Though no foreign reporters have been allowed into Gaza and at least 53 journalists have been killed in the enclave according to the Committee to Protect Journalists, many are still risking their lives to document what is happening.  

For Topinka, one of the reasons why disinformation like Pallywood is created is to dilute the inhumane aspects of conflicts or events. More than 14,000 people, mostly civilians, have been killed in Gaza since October 7, according to the Hamas-run health authority. “These events are so horrifying that people almost don’t want to believe them,” explains Topinka.  

But in the case of Israel and Palestine, there are also strong political motivations that drive the spread of disinformation. “Pallywood is propaganda. It’s overwhelmingly clear that Gazans are undergoing incredible suffering right now. There’s endless evidence for it,” says Topinka. “So to make it seem as if people are inflating the suffering helps to tell a different story about what’s actually happening. It makes it seem like less of a humanitarian disaster,” the researcher explains.  

Pallywood is being used in the context of real trauma, loss and grief. To reduce this suffering to fake theatrics, Banaji believes, “fits with the entire lexicon of the dehumanisation of Palestinians”. Even the use of the word itself is, Topinka believes, very intentional. Bollywood and Nollywood (terms that refer to the Indian and Nigerian film industries), he argues, “capture a kind of cultural dynamism, where communities and cultures have created their own film industry outside of Hollywood”. 

“But in Pallywood, it’s a reversal of positivity. The idea is that Palestinians are uniquely deceptive. It’s meant to capture a culture… but in this case, in a negative way,” he says.  

Aside from dehumanising and diluting Palestinian suffering, the spread of disinformation like Pallywood has tangible consequences, not only on the lives of those who fall victim to it, but also on larger efforts for peace. “It can end up undermining campaigns for a ceasefire or even undermine diplomatic efforts,” warns Topinka.  

Pallywood’s comeback and Indian influence 

It is not the first time Pallywood has been used to discredit Palestinian suffering. The term was first coined more than a decade ago by Richard Landes, a US historian based in Jerusalem.

In 2005, Landes produced an online documentary called “Pallywood: According to Palestinian Sources”, and he has since largely popularised the term, which has now been adopted even by Israeli authorities. Landes continues to use Pallywood in the context of the ongoing war, and recently spoke to the Australian Jewish Association about its invention.

“It is now being re-weaponised,” says Banaji.  

Logically Facts, a UK company specialised in combatting disinformation, analysed social media data across Facebook, YouTube, Twitter and Reddit from September 27 to October 26. It found that the volume of posts citing Pallywood “increased steadily in the days after October 7”, and that the term was mentioned over 146,000 times by more than 82,000 unique users between October 7 and October 27. The country with the most mentions was the US, followed by India and Israel.  

“I’ve been monitoring day and night,” Banaji concurs. “90% of the Pallywood content that is coming out … appears to be coming from pro-Zionist, pro-Israel accounts,” which, according to Logically Facts, are driven by users based outside of Israel and the Palestinian Territories.

Indian accounts online are a major driver. The country has seen a massive disinformation campaign targeting Palestinians since the start of the war.

“Many of these people are paid trolls, but many of them are unpaid anti-Muslims who have a stake in seeing Israel exonerated,” Banaji argues, referring to the spread of anti-Muslim sentiment by Prime Minister Narendra Modi and his BJP. “[In the UK], there are Indian accounts pretending to be either Muslim or Israeli, spreading disinformation on behalf of the Israeli state, the IDF or British Zionist organisations,” Banaji explains.

But despite official voices like the state of Israel or the Indian government amplifying disinformation like Pallywood, and the exhaustion that comes with monitoring the never-ending rush of her feed, Banaji believes there is a way to rebuild trust in institutions. “I wouldn’t be working on disinformation and teaching about media if I thought all was lost,” she says.  

Banaji often tells her students about her four-point plan to combat disinformation. Step one is “for people to learn how to do rigorous research for themselves”. Step two is finding “media organisations which maintain a presence on the ground and a balance in reporting”. Step three is reporting misinformation online “because it can get taken down but only if many people report it”. And step four is “trying to re-humanise groups of people”.  




Recent escalations remind us of the need to combat disinformation

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

There is no doubt that with disinformation becoming more widespread, we are in a war against weaponised information, Oliver Rolofs writes.


Today’s information society offers a lecture on the relativity of truth. 

Across the world, there are masters at work in the art of bending the facts. And where the truth no longer matters, it becomes easier to wage war.

But disinformation is not a new concern. As early as 1710, the Irish satirist Jonathan Swift wrote in The Examiner: “Falsehood flies, and the Truth comes limping after it.” 

Two centuries later came a quip often attributed to Britain’s legendary prime minister Winston Churchill: “A lie gets halfway around the world before the truth has a chance to get its pants on.”

The wisdom of those words was tragically demonstrated again just two weeks ago, when we witnessed Hamas and other malign actors apparently pursue this approach to manipulate their supporters.

Upon investigation, some of the widely shared images of the alleged Israeli rocket attack on a hospital in Gaza appear to be a tragic, yet effective example of disinformation. 

Yet for many observers, clear Open Source Intelligence (OSINT) analysis counted for little, as it did not fit into their preconceived narrative. 

This kind of disinformation — whether practised by extremists, state actors like China, Russia, Iran, or other fake news-producing powers, all further enabled by social media platforms, messenger services and AI-based solutions such as ChatGPT — is increasingly a threat to global peace and stability.

Truthfully portraying facts-based reality

For centuries, nation-states have promulgated laws addressing the propagation of falsehoods on matters such as defamation, fraud, false advertising and perjury.

However, current discussions on disinformation reflect a new and rapidly evolving communications landscape, in part due to innovative technologies that enable the dissemination of unparalleled volumes of content at unprecedented speeds.

In his 2022 report on countering disinformation, UN Secretary-General António Guterres explored the challenges of navigating this qualitatively different media landscape and ensuring it advances, rather than undermines, human rights and international peace and security. 

States, but especially tech companies, have a duty to take appropriate steps to address these harmful impacts. This is not an easy task, as they need to simultaneously limit any infringement on rights, including the right to freedom of opinion and expression.

The 1978 UNESCO Media Declaration could be a useful guiding light in this, however. Even in today’s technological age, it could serve as a moral compass for states, tech and media companies providing any sort of communication service. 

The tasks for the media formulated in the UNESCO declaration — to contribute to the strengthening of peace and international understanding, to promote human rights and to fight racism, antisemitism, apartheid and warmongering — are more relevant than ever.

The media and journalists, as well as social media platforms and messenger services, are challenged to truthfully portray reality based on facts, especially in this digital age where every person can be a publisher with unprecedented reach.

By taking this principle to heart, the antagonism of conflict and the polarisation of societies around the globe can be overcome.

The European approach, a solid blueprint

There is no doubt that we are facing greater challenges on this front than ever before. 

Platforms for dialogue and cooperation are crucial. International forums, especially those that bring together the Global North and Global South, such as the Global Media Congress in Abu Dhabi or the Deutsche Welle Global Media Forum in Bonn, in addition to UN formats, can give this issue the space it needs to drive a strong approach at the global level.


There is much to discuss. Combating disinformation is a complex challenge. While it is a global issue, the European approach can provide solid guidance for a multilayered approach. 

It includes, via the EU Digital Services Act (DSA), new rules for online platforms that improve EU citizens’ information environment by building in transparency and security safeguards and by holding tech companies accountable.

The DSA is strengthened through the EU Code of Practice on Disinformation, a strong albeit voluntary set of commitments from tech and media firms.

Further Europe-led approaches include the EU vs Disinfo website and database to highlight Russia’s influence campaigns against the EU, its member states, and allies. 

More projects that actively engage society are also needed, like those in Finland, which has strengthened citizens’ ability to separate fact from fiction through effective media literacy toolkits.


Across the Atlantic, new collaborative human-technological solutions could usefully further the fight against disinformation – such as the Public Editor project, a collective intelligence system that labels specific reasoning mistakes in the daily news so we can all learn to avoid biased thinking, and which is now also implemented in Europe.

War against weaponised information

There is no doubt that with disinformation becoming more widespread, we are in a war against weaponised information. 

Communicators, politicians, media and opinion leaders need to work together across borders, and they need a whole set of instruments to combat it effectively. 

Investing in quality journalism, fact-based education and regulation, and using technologies such as social listening tools are the arsenal we need to help identify and defuse emerging threats before the world is thrown into outright turmoil.

Oliver Rolofs is a strategic security and communication expert and Director of the Vienna-based Austrian Institute for Strategic Studies and International Cooperation (AISSIC). Previously the Head of Communications at the Munich Security Conference, he also runs the Munich-based strategy consultancy, CommVisory.




We need to learn that if it’s online, it doesn’t have to be true

By Yoan Blanc, Co-director, C’est vrai ça?

Investing in education and the development of critical thinking is more important than ever, as it will help to limit the risks of disinformation posed by the advent of technologies such as the internet and AI, Yoan Blanc writes.

In its new Global Education Monitoring Report on education and technology, published on 26 July 2023, UNESCO discusses the risks and opportunities that new technologies hold for the future of education.

The report points out that only around half of 15-year-olds in OECD countries are able to tell facts from opinions. 

With the advent of artificial intelligence (AI), more than ever, educators play a vital role in teaching the critical thinking and autonomy needed to navigate new technologies.

As we have observed in recent years, disinformation campaigns and conspiracy theories have gained ground. 

While it is worrying to see the often-morbid impact of fake news, we also need to consider how susceptible people are to these theories, and the lack of education about technology, which leads some people to think that “if it’s on the Internet, it must be true”.

Pre-existing issues, a smouldering fire that just needed oxygen

A pre-existing situation has made it easier for people to buy into such narratives.

Disinformation campaigns do not owe their success solely to a particular set of circumstances.

The COVID-19 crisis merely acted as a catalyst, stoking a fire that had been smouldering for quite some time due to a number of factors that slowly came to light.

As the latest report from Reporters Without Borders (RSF) shows, freedom of the press is under threat in many countries, including those that were once regarded as solidly democratic.

Public confidence in the press has rarely been so low, and the work of journalists is often denigrated, even though they are the foundation of a healthy democratic system. 

The media has become polarised in recent years, favouring ideology and punchlines over in-depth debates on social issues.

The lack of digital literacy and critical thinking

It’s hard to properly account for the surge in fake news. The general public has very little understanding of how social media works and sometimes takes falsely sourced or openly conspiracy-themed publications at face value.

Acquiring easily implemented methodological tools would undoubtedly help to avoid these pitfalls.

The main reason fake news goes viral is a lack of critical thinking. Part of the population doesn’t know how to think critically and doesn’t analyse the information they are given.

Social media is the favourite channel for spreading conspiracy theories and disinformation. 

It allows for swift and massive dissemination thanks to its sharing functions and algorithms that highlight the sort of “divisive” content that generates reactions (the more intense the emotion, the more viral the information). 

There is also the question of social media moderation, which is at best inadequate and at worst non-existent, and which rests essentially on user reports – with the attendant risk of a “militia” effect.

Those reports are processed by people who often don’t speak French, leading to bizarre situations where overtly racist content is allowed to remain online.

Worse still, as content deletion is sometimes based on the number of reports received, journalists’ or fact-checkers’ accounts or publications are regularly deleted or suspended as a result of massive reporting campaigns.

How can we fix this?

Citizens’ initiatives that complement the work of the press need to take shape to counter fake news on social media. 

Because while disinformation’s main weapon is virality, it is still possible to mitigate the viral impact of a publication by reacting quickly to provide simple, accessible, and well-sourced explanations.

C’est vrai ça? (“Is that true?”), for instance, has over twenty volunteers working on a citizens’ initiative to fact-check LinkedIn posts.

With an average of 10 fact-checking operations a day and 70,000 followers, they help prevent the spread of fake news or at least encourage critical thinking by providing sourced commentary.

This is where education comes in

To stem the tide of disinformation upstream, there is a solution that is both simple on paper and yet terribly difficult to implement in practice: education.

This means training teachers to work with and use new technologies so that they can pass on critical thinking methods to their pupils, whether the subject is social media or the content-generating artificial intelligence tools accessible to the general public (ChatGPT, Midjourney, etc.).

Here too, partnerships could be devised between schools and community associations to train students in the use of new technologies, to encourage them to question what they consult and to use the tools at their disposal to reflect rather than just consume. 

The Internet and artificial intelligence are formidable instruments, providing access to information that was previously unavailable. 

But like any other tool, they need to be properly mastered. Access to knowledge and education has always been a means of empowering people. 

Whereas in the past, the challenge for the public was to gain access to the knowledge needed to contradict dogma, today, the challenge is to sort through the mass of information. 

We have to invest in knowledge to solve this once and for all

The basic techniques of OSINT (open-source intelligence) – how to detect texts or images generated by artificial intelligence, or how to analyse a source of information – are all skills that are accessible from a very young age and that help protect the human mind against manipulation.

Investing in education and the development of critical thinking is more important than ever, as it will help to limit the risks of disinformation posed by the advent of technologies such as the internet and AI.

To achieve this, we need genuine political commitment and enlightened governance to analyse the risks and work together to develop effective standards that protect us all from fake news.

Yoan Blanc is the co-director of C’est vrai ça?, an independent civic initiative that brings together private citizens who want to fight back against fake news.



ChatGPT: Use of AI chatbot in Congress and court rooms raises ethical questions

User-friendly AI tool ChatGPT has attracted some 100 million users since its launch in November and is set to disrupt industries around the world. In recent days, AI content generated by the bot has been used in the US Congress, Colombian courts and a speech by Israel’s president. Is widespread uptake inevitable – and is it ethical?

In a recorded greeting for a cybersecurity convention in Tel Aviv on Wednesday, Israeli President Isaac Herzog began a speech that was set to make history: “I am truly proud to be the president of a country that is home to such a vibrant and innovative hi-tech industry. Over the past few decades, Israel has consistently been at the forefront of technological advancement, and our achievements in the fields of cybersecurity, artificial intelligence (AI), and big data are truly impressive.”

To the surprise of the entrepreneurs attending Cybertech Global, the president then revealed that his comments had been written by the AI bot ChatGPT, making him the first world leader publicly known to use artificial intelligence to write a speech. 

But he was not the first politician to do so. A week earlier, US Congressman Jake Auchincloss had read a speech also generated by ChatGPT on the floor of the House of Representatives – another first, intended to draw attention to the wildly successful new AI tool in Congress “so that we have a debate now about purposeful policy for AI”, Auchincloss told CNN.


Since its launch in November 2022, ChatGPT (created by California-based company OpenAI) is estimated to have reached 100 million monthly active users, making it the fastest-growing consumer application in history. 

The user-friendly AI tool utilises online data to generate instantaneous, human-like responses to user queries. Its ability to scan the internet for information and provide rapid answers makes it a potential rival to Google’s search engine, but it is also able to produce written content on any topic, in any format – from essays, speeches and poems to computer code – in seconds.

The tool is currently free and boasted around 13 million unique visitors per day in January, a report from Swiss banking giant UBS found.

Part of its mass appeal is “extremely good engineering – it scales up very well with millions of people using it”, says Mirco Musolesi, professor of computer science at University College London. “But it also has very good training, both in terms of the quality of the data used and in the way the creators managed to deal with problematic aspects.”

In the past, similar technologies have resulted in bots fed on a diet of social media posts taking on an aggressive, offensive tone. Not so for ChatGPT, and many of its millions of users engage with the tool out of curiosity or for entertainment.

“Humans have this idea of being very special, but then you see this machine that is able to produce something very similar to us,” Musolesi says. “We knew that this was probably possible, but actually seeing it is very interesting.”

A ‘misinformation super spreader’?

Yet the potential impact of making such sophisticated AI available to a mass audience for the first time is unclear, and different sectors from education, to law, to science and business are braced for disruption.    

Schools and colleges around the world have been quick to ban students from using ChatGPT to prevent cheating or plagiarism. 


Science journals have also banned the bot from being listed as a co-author on papers amid fears that errors made by the tool could find their way into scientific debate.  

OpenAI has cautioned that the bot can make mistakes. However, a report from media watchdog NewsGuard said on topics including Covid-19, Ukraine and school shootings, ChatGPT delivered “eloquent, false and misleading” claims 80 percent of the time. 

“For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative,” NewsGuard said. It called the tool “the next great misinformation super spreader”. 

Even so, in Colombia a judge announced on Tuesday that he had used the AI chatbot to help make a ruling in a children’s medical rights case.

Judge Juan Manuel Padilla told Blu Radio he asked ChatGPT whether an autistic minor should be exonerated from paying fees for therapies, among other questions.  

The bot answered: “Yes, this is correct. According to the regulations in Colombia, minors diagnosed with autism are exempt from paying fees for their therapies.” 

Padilla ruled in favour of the child – as the bot advised. “By asking questions to the application we do not stop being judges [and] thinking beings,” he told the radio station. “I suspect that many of my colleagues are going to join in and begin to construct their rulings ethically with the help of artificial intelligence.” 

Although he cautioned that the bot should be used as a time-saving facilitator rather than “with the aim of replacing judges”, critics said it was neither responsible nor ethical to use a bot capable of providing misinformation as a legal tool.

An expert in artificial intelligence regulation and governance, Professor Juan David Gutierrez of Rosario University said he put the same questions to ChatGPT and got different responses. In a tweet, he called for urgent “digital literacy” training for judges.

A market leader 

Despite the potential risks, the spread of ChatGPT seems inevitable. Musolesi expects it will be used “extensively” for both positive and negative purposes – with the risk of misinformation and misuse comes the promise of information and technology becoming more accessible to a greater number of people. 

OpenAI received a multibillion-dollar investment from Microsoft in January that will see ChatGPT integrated into a premium version of the Teams messaging app, offering services such as generating automatic meeting notes.

Microsoft has said it plans to add ChatGPT’s technology into all its products, setting the stage for the company to become a leader in the field, ahead of Google’s parent company, Alphabet. 


Making the tool free has been key to its current and future success. “It was a huge marketing campaign,” Musolesi says, “and when people use it, they improve the dataset to use for the next version because they are providing this feedback.” 

Even so, the company launched a paid version of the bot this week offering access to new features for $20 per month.

Another eagerly awaited new development is an AI classifier, a software tool to help people identify when a text has been generated by artificial intelligence.

OpenAI said in a blog post that, while the tool was launched this week, it is not yet “fully reliable”. Currently it is only able to correctly identify AI-written texts 26 percent of the time.

But the company expects it will improve with training, reducing the potential for “automated misinformation campaigns, using AI tools for academic dishonesty, and positioning an AI chatbot as a human”.  


