Everything You Need to Know When Using a Digital Currency Exchange

The crypto market is currently in another bull cycle. Bitcoin recently hit an all-time high price of $73,800. There are also hundreds of meme coins booming and busting in quick succession. Of course, you very likely already know this. And this is a testament to how much cryptocurrencies have permeated society and changed how we perceive and manage financial assets.

Much of this has been made possible by digital currency exchanges that provide platforms for billions of people worldwide to trade and invest in cryptocurrencies—at transaction speeds that even the traditional financial system is still only catching up to. Here’s an example of such an exchange: https://www.independentreserve.com/au.

 

However, as it is with any financial venture, these exchanges come with a unique set of risks and challenges. For anyone looking to navigate the crypto market, and hopefully participate in the bull season, it is crucial to understand these intricacies.

Why are Digital Currency Exchanges Necessary?

Crypto exchanges act as intermediaries, facilitating the trade of digital assets like Bitcoin and other cryptocurrencies. They provide a structured marketplace that is usually intuitive enough for seasoned traders and newcomers alike.

 

Additionally, these platforms typically offer analytical tools, real-time market data, and sometimes even educational resources to help users make informed decisions when trading their cryptocurrencies.

What Are These Risks And Challenges?

However, the purpose of this article is to examine the risks and challenges associated with these exchanges. So, let us get into them:

Volatility risk is not tied directly to crypto exchanges. However, it bears mentioning, as these exchanges are the main arenas where crypto transactions take place. Prices can swing sharply within seconds, leading to either high gains or heavy losses. This volatility is usually caused by a variety of factors, including announcements from regulatory bodies or government leaders, or sudden shifts in market sentiment.

 

As an investor, you need to learn to navigate these turbulent waters with the care of an expert captain, developing a system that allows you to adjust your portfolio quickly as the market changes. Essentially, the markets are unpredictable, so you have to keep your ear to the ground. One way to do this is to switch on news alerts for the keywords that appear in the headlines that typically move the markets.

 

Many crypto exchanges offer features that alert you to market-moving events, so it may be wise to consider that when selecting which exchange to use. However, you should also develop your own independent system for monitoring these trends.
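
As a rough illustration of building such an independent monitoring system, here is a minimal Python sketch that filters a feed of headlines against a keyword watchlist. The headlines and keywords are invented placeholders, and a real setup would pull headlines from a news feed or API rather than a hard-coded list:

```python
# Illustrative sketch: flag headlines that mention keywords known to move crypto markets.
WATCHLIST = {"sec", "etf", "regulation", "ban", "halving", "interest rate"}

def market_moving(headlines: list[str]) -> list[str]:
    """Return headlines containing any watched keyword (case-insensitive).

    Substring matching is crude ("ban" also matches "bank"); a real system
    would tokenise the text and de-duplicate alerts.
    """
    return [h for h in headlines if any(k in h.lower() for k in WATCHLIST)]

# Invented sample headlines for demonstration only.
sample = [
    "SEC delays decision on spot Bitcoin ETF",
    "New meme coin trends on social media",
    "Central bank hints at interest rate cut",
]
print(market_moving(sample))  # keeps the first and third headlines
```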

Another area with a lot of risk is the legal and regulatory side of things. The crypto market is relatively new, so the legal frameworks are largely nascent and evolving, or even non-existent. From countries like El Salvador, where crypto adoption is encouraged by the government, to countries like China, where it is banned outright, regulatory attitudes vary widely. And sometimes, even within the same country, attitudes can shift depending on internal political cycles.

 

This inconsistency can make compliance a complex affair. For example, in Nigeria, the government abruptly banned Binance, even after several government figures had indicated an interest in encouraging the growth of crypto in the country. This inconsistency also introduces a layer of uncertainty that can influence market behavior and price movements.

 

So, as an investor, it is important to keep an eye out for regulatory changes in the jurisdiction you operate in. It is even more imperative to take measures that insulate you and your assets from abrupt regulatory action in your country.

As with anything else in this digital era, the threat of security breaches looms large over crypto exchanges. While most exchanges deploy an array of innovative protective measures, hackers and their tactics are always evolving and getting more sophisticated.

 

Unfortunately, the consequences of a single successful breach are usually enough to cause significant damage to both exchanges and individual investors, and to render irrelevant the security systems that stopped a thousand earlier threats.

 

In any case, it is important for you as an investor to research the security measures employed by the various exchanges before choosing one. Security threats are ever-evolving, but it is still best to be on the side that stays on top of its game. Look out for encryption protocols, cold storage solutions, and rigorous security audits.

 

However, the role of personal vigilance cannot be overemphasized. While it is great to trade on an exchange with cutting-edge security measures, you can also personally deploy strategies like using complex, unique passwords and enabling two-factor authentication.
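
To make the two-factor authentication point concrete, here is a minimal sketch of how time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, work, using the pyotp Python library. The secret and code below are generated on the spot purely for illustration and are not tied to any real exchange account:

```python
import pyotp

# One-time setup: the service generates a shared secret (usually shown as a QR code)
# that the user stores in an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# At login, the app and the server each derive a short-lived 6-digit code from the
# shared secret and the current time; the server accepts the login only if they match.
code = totp.now()
print("Current one-time code:", code)
print("Server-side check passes:", totp.verify(code))  # True within the ~30-second window
```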

Liquidity is another factor to weigh, particularly if you are one of those who like to chase meme coins that can post gains of thousands of percent. Whether your coin gains 180% or 18,000%, it only matters if there are enough other traders in the market willing to buy it from you in exchange for other crypto coins or fiat. That is what liquidity is: your avenue to exit a trade and take profit.

 

Exchanges with low liquidity may expose you to the risk of slippage, which is when the final executed price of a trade diverges significantly from the price expected at the time the order was placed. These discrepancies can erode trading margins and hurt your profitability. So, opt for exchanges known for substantial trading volumes to mitigate possible liquidity problems.
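
For a sense of the arithmetic, here is a minimal sketch (with made-up prices) of how slippage on a buy order can be measured as the percentage gap between the price you expected and the price you actually got:

```python
def slippage_pct(expected_price: float, executed_price: float) -> float:
    """Slippage as a percentage of the expected price.

    For a buy order, a positive value means the trade filled at a worse
    (higher) price than quoted.
    """
    return (executed_price - expected_price) / expected_price * 100

# Hypothetical buy order on a thin order book: quoted at $61,200, filled at $61,650.
expected = 61_200.00
executed = 61_650.00
print(f"Slippage: {slippage_pct(expected, executed):.2f}%")  # roughly 0.74% worse than quoted
```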

Why You Need Diversification to Mitigate Risks

There are many strategies you can employ to mitigate risk, but as anyone will tell you, your top option is to diversify your holdings. Diversification can take several forms. It can mean holding a varied range of cryptocurrencies rather than focusing on a single token, shielding yourself from the extreme volatility of any one market. It can also mean spreading your assets across a variety of wallets and other storage options to protect them from cyber-attacks.

 

Either way, diversification enables the spreading of potential risks, ensuring that the impact of one negative event does not necessarily wipe out your portfolio.
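
A toy example with made-up numbers shows the point: if a single token collapses by 80%, a portfolio concentrated in that token loses 80%, while an equally weighted four-asset portfolio loses only 20%. The coin names and amounts below are placeholders, not recommendations:

```python
# Toy illustration of how diversification blunts a single failure (made-up numbers).
concentrated = {"COIN_A": 10_000}                              # everything in one token
diversified = {"COIN_A": 2_500, "COIN_B": 2_500,
               "COIN_C": 2_500, "STABLECOIN": 2_500}           # spread across four assets

def value_after(portfolio: dict[str, float], shocks: dict[str, float]) -> float:
    """Apply per-asset fractional price changes (e.g. -0.8 means an 80% crash)."""
    return sum(amount * (1 + shocks.get(asset, 0.0)) for asset, amount in portfolio.items())

crash = {"COIN_A": -0.80}  # one token collapses; the others stay flat
print(value_after(concentrated, crash))  # 2000.0 -> an 80% portfolio loss
print(value_after(diversified, crash))   # 8000.0 -> only a 20% portfolio loss
```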

Conclusion

The global crypto markets are very volatile and fraught with security threats and other hazards. However, crypto has also been hailed as one of the most significant financial innovations of the century, and it has arguably minted more millionaires than any system before it.

 

It is therefore always important for you as an investor to keep an eye on the market and to arm yourself with knowledge of the various strategies that can protect you from the pitfalls that abound in the ecosystem.

 

Do your own research thoroughly, remain adaptable, and practice strong cybersecurity hygiene.


Elon Musk vs OpenAI: AI Firm Refutes Allegations, Know The Timeline

On February 29, Elon Musk filed a lawsuit against OpenAI and its CEO, Sam Altman. The primary allegation was that the company breached its founding agreement with Musk—who was one of the co-founders of the AI firm—by entering a partnership with Microsoft and functioning as its “closed-source de facto subsidiary”, intending to maximise profits. This, as per the billionaire, goes against the commitment made to run as a nonprofit and keep the project open-source.

The lawsuit was filed with a San Francisco court, and the first hearing is yet to take place. Meanwhile, OpenAI, on Wednesday, retaliated against the allegations by publishing an extensive post containing email correspondence with Musk dating back to 2015 and said it would move to “dismiss all of Elon’s claims”.

OpenAI alleged that Musk wanted OpenAI to merge with Tesla or take full control of the organisation himself. “We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI,” stated the post, which is authored by OpenAI co-founders Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, and Wojciech Zaremba. The post also shows through email interactions that the billionaire wanted OpenAI to “attach to Tesla as its cash cow”. If true, this contradicts Musk’s stated intention of keeping the AI firm a nonprofit.

Another email written by Sutskever stated, “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after it’s built, but it’s totally OK not to share the science,” to which Musk replied, “Yup.” This email would directly contradict Musk’s allegation that the AI firm is turning closed-source.

A report by The Verge, based on the court filings, points out that a founding agreement is not a contract or a binding agreement that can be breached. As such, Musk’s allegations against OpenAI could potentially be dismissed.

“We’re sad that it’s come to this with someone whom we’ve deeply admired—someone who inspired us to aim higher, then told us we would fail, started a competitor, and then sued us when we started making meaningful progress towards OpenAI’s mission without him,” the statement said.

One thing OpenAI’s response proves is that the rivalry between the two parties is not a recent one. It goes as far back as 2015. For those not entirely familiar with the two parties’ history, here is the series of events that connects the dots and makes sense of this developing saga.

Elon Musk vs OpenAI: Timeline of the decade-long rivalry

Those who follow Musk on X or who closely track controversies in the tech space are no strangers to the antics of the second richest person in the world (Amazon founder Jeff Bezos overtook him for the top spot on Tuesday). The Tesla CEO is known for his unfiltered social media posts, interviews, and impulsive decision-making. From buying Twitter after making a social media post, to rebranding the entire platform as X in a week, to replying to an antisemitic post, to hurling expletives at Disney CEO Bob Iger for pulling advertising from the platform (among many others) and blaming advertisers for killing it, the list is quite long.

But these antics are not new. In 2015, Musk co-founded OpenAI along with Altman, President and Chairman Greg Brockman and several others. Musk was also the largest investor in the company, which dedicated itself to developing artificial intelligence, as per a report by TechCrunch. However, to everyone’s surprise, the billionaire resigned from his board seat in 2018.

The beginning of the feud

The reason behind Musk’s resignation depends on who you ask. The X owner cited “a potential future conflict [of interest]” arising from his role as the CEO of Tesla, since the electric vehicle giant was also developing AI for its self-driving cars. However, a Semafor report, citing unnamed sources, stated that Altman believed the billionaire felt OpenAI had fallen behind other players like Google and proposed to take over the company himself, a proposal the board promptly rejected, leading to his exit. OpenAI has now confirmed this.

However, the exit was merely the beginning. Just a year later, OpenAI announced that it was creating a for-profit entity to fund its ambitious goals. The same year, Microsoft invested $1 billion into the AI firm after finalising a multi-year partnership. It was also the year GPT-2 was announced and generated a lot of buzz online.

The events were notable: not only was the company moving in the opposite direction to what Musk had philosophised, it also saw unprecedented success, both financially and technologically, something the billionaire reportedly did not think was possible.

Arrival of ChatGPT

However, until 2022, nothing more was heard from either party on the topic. In November 2022, OpenAI launched ChatGPT, the AI-powered chatbot that arguably started the AI arms race. Soon, the silence was broken by Musk. Replying to a post in which a user asked the chatbot to write a tweet in his style, he alleged that OpenAI had access to the X (then Twitter) database for training, and said he had pulled the plug on it. This was also the first time Musk publicly said, “OpenAI was started as open-source & non-profit. Neither are still true.”

The billionaire did not stop there. Throughout 2023, he took shots at the company multiple times. In February, he claimed that OpenAI was created to be open-source, which is why he named it OpenAI. He added, “But now it has become a closed-source, maximum-profit company effectively controlled by Microsoft.”

Again, in March 2023, he posted, “I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it?” Interestingly, the allegations in these three posts are also the main accusations mentioned in the lawsuit.

And that brings us to the present, as we wait for the hearings to begin. The lawsuit will mark the beginning of the climax of the Elon Musk vs OpenAI saga, which has been building for almost a decade. To the casual spectator, it might look like a simple corporate feud between two stakeholders, but a deeper inspection shows that it is much bigger than that. On one side is a serial entrepreneur known for repeated success and a strong (sometimes dogmatic) philosophical take on technology; on the other is the organisation hailed as the pioneer of generative AI, which could be on the cusp of developing artificial general intelligence. Whichever way the lawsuit goes, it can potentially change the course of AI as well.




We Tried Google’s Gemini AI, and This is How the Chatbot Fared

Google has come a long way with its generative artificial intelligence (AI) offerings. One year ago, when the tech giant first unveiled its AI assistant, Bard, it became a fiasco as it made a factual error answering a question regarding the James Webb Space Telescope. Since then, the tech giant has improved the chatbot’s responses, added a feedback mechanism to check the source behind the responses, and more. But the biggest upgrade came when the company changed the large language model (LLM), powering the chatbot from Pathways Language Model 2 (PaLM 2) to Gemini in December 2023.

The company called Gemini its most capable language model so far. It also added AI image generation to the chatbot, making it multimodal, and even renamed the assistant Gemini. But just how much of a jump is it for the AI chatbot? Can it now compete with Microsoft Copilot, which is based on GPT-4 and offers similar capabilities? And what about the instances of AI hallucination (a phenomenon where AI presents false or non-existent information as fact)? We decided to find out.

Google AI can currently be accessed in multiple ways. Gemini Advanced is a paid subscription offered with the Google One AI Premium plan, which charges Rs. 1,950 monthly. There is an Android app for Google Gemini as well; however, it is not yet available in India. The Google Pixel 8 Pro also comes with the Gemini Nano model. For our testing purposes, we used Google’s Gemini Pro-powered web portal, which is available in more than 230 countries and territories and is free to use.

Google Gemini’s generative capabilities

The website’s user interface remains the same, but the name has been changed from Bard to Gemini. If you’re signed in with your Google account, the AI will welcome you by name and ask, “How can I help you today?” Underneath are a few helpful prompt suggestions highlighting different tasks it can perform.

First, we asked it to write an email to test its basic generative skills. Considering it is the layoff season, we asked it to “draft a sympathetic email to an employee from a manager informing them that they have been laid off.” To test its performance, we told Copilot to do the same. We found Gemini’s response to be rather bland and robotic, trying to fit in too much information right at the beginning. The first paragraph said, “It is with great regret that I write to inform you about a difficult decision we’ve had to make at [Company Name]. Due to [reason], we are required to make reductions in our workforce, and your position will be eliminated, effective [date].”

In comparison, Copilot’s response felt more sympathetic and humane. It wrote, “I hope this email finds you well. I regret to inform you that due to recent organizational changes, we have had to make some difficult decisions. Unfortunately, your position at [Company Name] has been affected, and I am deeply saddened to let you know that you have been laid off.”

It shouldn’t come as a surprise since Google’s chatbot is known for being a bit too bland with formal content generation, and this has not changed even with the Gemini model. However, when it comes to informal content generation, Gemini begins to show its capabilities.

We asked both chatbots to “write a short note to my mother explaining that I was laid off. Make sure to convey my emotions of sadness and grief.”

Copilot took it literally and wrote, “The suddenness of it all has left me feeling a mix of emotions—sadness, grief, and a touch of uncertainty about what lies ahead.” But Google understood the underlying message and did a much better job at expressing the emotions. It wrote, “I know you always worried about job security, and I wish I could tell you this was different. It wasn’t anything I did wrong, they’re [reason]. I’m trying to stay positive, but it’s hard not to feel discouraged. I don’t want you to worry too much, though. I’m not panicking, and I’m already starting to think about my next steps.”

This was much better composed, and one could be forgiven for thinking a real human wrote it. We conducted a few more tests, and Gemini’s responses were much more balanced each time than the GPT-4-powered Copilot.

Google Gemini’s accuracy

The next test was to check the accuracy of the AI model. Many use AI to get an answer to a query, and we wanted to do the same. It handled static general knowledge-based questions well. Even when we sent a few grey-zone questions, it stuck to the facts and did not include any extra information. When the questions were controversial, it refused to answer them, which is what you would expect from a responsible AI chatbot.

The problem came when we asked it a popular trick question that has stumped older generations of the chatbot in the past. When we prompted, “countries in Africa that start with the letter K”, it incorrectly responded with, “There are no countries in Africa that start with the letter “K” as of today, February 12, 2024. While situations can change over time, this fact has been consistent for a while.” For the unaware, the glitch arises because the model’s training data contains this exact phrase paired with the same wrong answer (Kenya, of course, exists). Interestingly, this particular glitch is also present in Copilot and ChatGPT, and despite regular reports, it has still not been fixed in any of the chatbots.

However, the problem of AI hallucination did not end there. We also came across multiple other erroneous responses by Gemini. When we asked, “Based on reviews, tell me the pros and cons of iPhone 15 Pro”, it responded with, “The iPhone 15 Pro hasn’t been officially announced yet”. In reality, the Apple smartphone was launched in September last year. In comparison, Copilot fared better in technical questions.

Google Gemini in assistive tasks

Another skill most AI chatbots boast of is their assistive features. They can brainstorm an idea, create an itinerary for a trip, compare your options, and even converse with you. We started by asking it to make an itinerary for a 5-day trip to Goa on a budget and to include things people can do. Since the author was recently in Goa, this was easier for us to test. While Gemini did a decent job at highlighting all the popular destinations, the answer was not detailed and not much different from any travel website. One positive of this is that the chatbot will likely not suggest anything incorrect.

On the other hand, I was impressed by Copilot’s exhaustive response that included hidden gems and even the names of cuisines one should try. We repeated the test with different variations, but the result remained consistent.

Next, we asked, “I live in India. Should I buy a subscription to Amazon Prime Videos or Netflix?” The response was thorough and included various parameters, including content depth, pricing, features, and benefits. While it did not directly suggest one among them, it listed why a user should pick either of the options. Copilot’s answer was the same.

Finally, we spent time chatting with Gemini. This test spanned a few hours, and we tested the chatbot on its ability to be engaging, entertaining, informative, and contextual. On all of these parameters, Gemini performed pretty well. It can tell you a joke, share little-known facts, give you a piece of advice, and even play word- and picture-based games with you. We also tested its memory, and it could recall the conversation even after chatting for an hour. The only thing it cannot do is give a single-line response to messages the way a human friend would.

Google Gemini’s image generation capability

In our testing, we came across a bunch of interesting things about Gemini AI’s image-generation capabilities. For instance, all the images generated have a resolution of 1536×1536, which cannot be changed. The chatbot also refuses to fulfil any requests requiring it to generate images of real-life people, which will likely minimize the risks of deepfakes (creating AI-generated pictures of people and objects that appear real).

But coming to the quality, Gemini did a faithful job of sticking to the prompt and generating images. It can generate random photos in a particular style, such as postmodern, realistic, and iconographic. The chatbot can also generate images in the style of popular artists in history. However, there are many restrictions, and you will likely find Gemini refusing your request if you ask for something too specific. But comparing it with Copilot, I found the images were generated faster, stayed true to the prompts, and appeared to have a wider range of styles we could tap into. However, it cannot be compared to dedicated image-generating AI models such as DALL-E and Midjourney.

Google Gemini: Bottomline

Overall, we found Gemini AI to be quite competent in most categories. As someone who has used the AI chatbot only infrequently since it became available, I can confidently say that the Gemini Pro model has made it better at understanding natural-language communication and grasping the context of queries. The free version of the chatbot is a reliable companion if you need it to generate ideas, write an informal note, plan a trip, or even generate basic images. However, it should not be used as a research tool or for formal writing, the two areas where it still struggles a lot.

Comparatively, Copilot is better at formal writing and itinerary generation, on par with holding conversations (albeit with a shorter memory) and comparisons. Gemini takes the crown at image generation, informal content generation, and engaging the user. Considering this is just the first iteration of the Gemini LLM, as opposed to the 4th iteration of GPT, we are curious to witness the different ways the tech giant further improves its AI assistant.



These AI tools could help boost your academic research

The future of academia is likely to be transformed by AI language models such as ChatGPT. Here are some other tools worth knowing about.

“ChatGPT will redefine the future of academic research. But most academics don’t know how to use it intelligently,” Mushtaq Bilal, a postdoctoral researcher at the University of Southern Denmark, posted on X.

Academia and artificial intelligence (AI) are becoming increasingly intertwined, and as AI continues to advance, it is likely that academics will continue to either embrace its potential or voice concerns about its risks.

“There are two camps in academia. The first is the early adopters of artificial intelligence, and the second is the professors and academics who think AI corrupts academic integrity,” Bilal told Euronews Next.

He places himself firmly in the first camp.

The Pakistani-born and Denmark-based professor believes that if used thoughtfully, AI language models could help democratise education and even give way to more knowledge.

Many experts have pointed out that the accuracy and quality of the output produced by language models such as ChatGPT are not trustworthy. The generated text can sometimes be biased, limited or inaccurate.

But Bilal says that understanding those limitations, paired with the right approach, can make language models “do a lot of quality labour for you,” notably for academia.

Incremental prompting to create a ‘structure’

To create an academia-worthy structure, Bilal says it is fundamental to master incremental prompting, a technique traditionally used in behavioural therapy and special education.

It involves breaking down complex tasks into smaller, more manageable steps and providing prompts or cues to help the individual complete each one successfully. The prompts then gradually become more complicated.

In behavioural therapy, incremental prompting allows individuals to build their sense of confidence. In language models, it allows for “way more sophisticated answers”.

In a thread on X (formerly Twitter), Bilal showed how he managed to get ChatGPT to provide a “brilliant outline” for a journal article using incremental prompting.

In his demonstration, Bilal started by asking ChatGPT about specific concepts relevant to his work, then about authors and their ideas, guiding the AI-driven chatbot through the contextual knowledge pertinent to his essay.

“Now that ChatGPT has a fair idea about my project, I ask it to create an outline for a journal article,” he explained, before declaring the results he obtained would likely save him “20 hours of labour”.

“If I just wrote a paragraph for every point in the outline, I’d have a decent first draft of my article”.
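
For readers who prefer to script this rather than use the chat interface, here is a minimal sketch of the same incremental-prompting idea using the OpenAI Python client. The model name and prompts are placeholders, and this is a generic sketch of the technique Bilal describes, not a reproduction of his exact workflow:

```python
from openai import OpenAI

client = OpenAI()   # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4"     # placeholder; substitute whatever model you have access to

# Incremental prompting: each request builds on the accumulated conversation,
# moving from simple context-setting questions towards the final, complex ask.
steps = [
    "Explain the concept of 'world literature' in two short paragraphs.",
    "Summarise how Edward Said's ideas relate to that concept.",
    "Given the above, draft an outline for a journal article on postcolonial readings of contemporary fiction.",
]

messages = []
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})  # keep context for the next step
    print(f"--- {step}\n{answer[:300]}...\n")
```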

Incremental prompting also allows ChatGPT and other AI models to help when it comes to “making education more democratic,” Bilal said.

Some people have the luxury of discussing with Harvard or Oxford professors potential academic outlines or angles for scientific papers, “but not everyone does,” he explained.

“If I were in Pakistan, I would not have access to Harvard professors but I would still need to brainstorm ideas. So instead, I could use AI apps to have an intelligent conversation and help me formulate my research”.

Bilal recently made ChatGPT think and talk like a Stanford professor. Then, to fact-check how authentic the output was, he asked the same questions to a real-life Stanford professor. The results were astonishing.

ChatGPT is only one of the many AI-powered apps you can use for academic writing, or to mimic conversations with renowned academics.

Here are other AI-driven software to help your academic efforts, handpicked by Bilal.

In Bilal’s own words: “If ChatGPT and Google Scholar got married, their child would be Consensus — an AI-powered search engine”.

Consensus looks like most search engines but what sets it apart is that you ask Yes/No questions, to which it provides answers with the consensus of the academic community.

Users can also ask Consensus about the relationship between concepts and about something’s cause and effect. For example: Does immigration improve the economy?

Consensus would reply to that question by stating that most studies have found that immigration generally improves the economy, providing a list of the academic papers it used to arrive at the consensus, and ultimately sharing the summaries of the top articles it analysed.

The AI-powered search engine is only equipped to respond to six topics: economics, sleep, social policy, medicine, mental health, and health supplements.

Elicit, “the AI research assistant” according to its founders, also uses language models to answer questions. Still, its knowledge is solely based on research, enabling “intelligent conversations” and brainstorming with a very knowledgeable and verified source.

The software can also find relevant papers without perfect keyword matches, summarise them and extract key information.

Although language models like ChatGPT are not designed to intentionally deceive, it has been proven they can generate text that is not based on factual information, and include fake citations to papers that don’t exist.

But there is an AI-powered app that gives you real citations to actually published papers – Scite.

“This is one of my favourite ones to improve workflows,” said Bilal.

Similar to Elicit, upon being asked a question, Scite delivers answers with a detailed list of all the papers cited in the response.

“Also, if I make a claim and that claim has been refuted or corroborated by various people or various journals, Scite gives me the exact number. So this is really very, very powerful”.

“If I were to teach any seminar on writing, I would teach how to use this app”.

“Research Rabbit is an incredible tool that FAST-TRACKS your research. Best part: it’s FREE. But most academics don’t know about it,” tweeted Bilal.

Called by its founders “the Spotify of research,” Research Rabbit allows adding academic papers to “collections”.

These collections allow the software to learn about the user’s interests, prompting new relevant recommendations.

Research Rabbit also allows visualising the scholarly network of papers and co-authorships in graphs, so that users can follow the work of a single topic or author and dive deeper into their research.

ChatPDF is an AI-powered app that makes reading and analysing journal articles easier and faster.

“It’s like ChatGPT, but for research papers,” said Bilal.

Users start by uploading the research paper PDF into the AI software and then start asking it questions.

The app then prepares a short summary of the paper and provides the user with examples of questions that it could answer based on the full article.
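
ChatPDF's internals are not public, but the general "chat with a PDF" pattern it illustrates can be sketched roughly: extract the document's text, then hand it to a language model alongside the user's question. Below is a minimal, assumption-laden sketch using the pypdf and OpenAI Python libraries; the file name and model name are placeholders, and real tools chunk and index the text rather than sending it whole:

```python
from pypdf import PdfReader
from openai import OpenAI

def ask_pdf(path: str, question: str, model: str = "gpt-4") -> str:
    """Crude sketch: extract a paper's text and ask a model a question about it."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": "Answer using only the provided paper."},
            # Truncate naively here; production tools split the paper into chunks
            # and retrieve only the relevant ones to fit the model's context window.
            {"role": "user", "content": f"Paper:\n{text[:15000]}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content

# Hypothetical usage: print(ask_pdf("paper.pdf", "Summarise the methodology section."))
```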

What promise does AI hold for the future of research?

The development of AI will be as fundamental “as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone,” wrote Bill Gates in the latest post on his personal blog, titled ‘The Age of AI Has Begun’.

“Computers haven’t had the effect on education that many of us in the industry have hoped,” he wrote. 

“But I think in the next five to 10 years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn”.




Rabbit r1 | Have we finally created a gadget that can eat your smartphone?

This year’s Consumer Electronics Show at Las Vegas was littered with updates from both start-ups and large tech firms that are building products harnessing, or in some cases, advancing the power of natural language processing (NLP), a burgeoning sub-field under artificial intelligence (AI).

With so many exhibits, it is difficult to point out any one piece of tech as exceptional this year. Still, an orange-coloured, square-shaped device unveiled at the ballroom at Wynn, and not at the official CES stage, grabbed the spotlight.

The palm-sized handheld, called Rabbit r1, generated a fair amount of chatter at CES 2024 as it could do, per the company’s claim, several things that a smartphone can’t. Even Microsoft CEO Satya Nadella called it the ‘most impressive’ device and compared it to the first iPhone unveiled by Steve Jobs.

So, what exactly does this device do?

If you want to book an Uber ride, the r1 can do it for you. If you want to plan a vacation, including booking air tickets and making room reservations, the r1 can do that for you. If you want some cooking ideas, the r1’s camera can scan the motley ingredients in your refrigerator and suggest a recipe based on your calorie requirement. All you have to do is just ‘tell it’ what to do.

Exploiting chatbots’ limitation

Granted, any of the latest-generation smartphones with a state-of-the-art voice assistant can do several tasks like searching the web, playing your favourite song, or making a call from the user’s phonebook. But executing tasks like booking a cab, reserving a hotel room, or putting together a recipe using computer vision, just by talking into a walkie-talkie-style device, is a stretch even for smartphone-based voice assistants.

Even the current crop of chatbots, like ChatGPT, Bard and Claude, can only text out responses through apps as they are incapable of executing actionable tasks. For instance, the ChatGPT app can text you a vacation plan. It can even tweak the itinerary if you ask it to make it easy or packed. But, it cannot open a ticket booking app or a room reservation portal to make a reservation for you.

Rabbit Inc., the maker of the r1, says that the current batch of chatbots have limited functionality because they are built on text-based AI models, more commonly known as large language models (LLMs). LLMs’ accuracy depends heavily on annotated data to train neural networks for every new task.

Extending LLM’s capability

The Santa Monica-based start-up, on the other hand, has built its r1 device using a different AI model that is biased for action. The Rabbit OS, in a way, extends the capabilities of the current generation of voice assistants.

The AI model, which the company calls a large action model (LAM), takes advantage of advances in neuro-symbolic programming, a method that combines the data-driven capabilities of neural networks with symbolic reasoning techniques. This allows the device to learn directly from the user’s interactions with applications and execute tasks, essentially bypassing the need to translate text-based user requests into API calls.

Apart from bypassing the API route, a LAM-based OS caters to a more nuanced human-to-machine interaction. While ChatGPT can be creative in responding to prompts, a LAM-based OS learns routine, minimalistic tasks with the sole purpose of repeating them.

So, Rabbit Inc., in essence, has created a platform, underpinned by an AI model, that can mimic what humans do with their smartphones and then repeat it when asked to execute. The r1 is the company’s first generation device, which according to its founder Jesse Lyu, is a stand-alone gadget that is primarily driven by natural language “to get things done.”

The company has also cleverly priced the device at $199, significantly less than most flagship smartphones. This makes it difficult to decipher whether customers will buy the device for the value it offers or simply because it is cheap.

But is the price differentiation alone enough to trade in your existing smartphone for the new Rabbit r1?

A smartphone replacement?

Booking a ride, planning a vacation, or playing music are only a subset of the things we do with a smartphone. Over roughly the last decade and a half, the smartphone has become a pocket computer.

The app ecosystem built for this hardware has made the device so sticky that an average user picks up their smartphone at least 58 times a day and spends, on average, at least three hours with it. During that time, they use this mini-computer for a whole host of things: streaming videos, playing games, reading books, and interacting with friends and family via group chat applications, to name a few.

Secondly, not everyone wants to speak into a device all the time to get something done. Most people are just fine typing text prompts and getting responses in the same format. It gives them a layer of privacy that the r1 does not provide, because the latter can only execute voice commands.

So, the smartphone, and its app ecosystem, is here to stay to cater to an entire gamut of user needs and wants for the foreseeable future.

Now, where does that leave Rabbit r1?

Into the Rabbit hole

Mr. Lyu believes the r1 will disrupt the smartphone market, but technically, his company’s palm-sized device is a strong contender in the voice assistant and smart speaker market, which is also a space that is growing quite steadily.

According to a 2022 joint report by NPR and Edison Research, in the U.S. alone, 62% of users over the age of 18 use a voice assistant on some smart device. And the number of tasks they do with it is also increasing: in 2022, smart speaker users requested an average of 12.4 tasks on their device each week, up from 7.5 in 2017, while smartphone voice assistant users requested an average of 10.7 tasks weekly, up from 8.8 in 2020.

This shows that the r1 can play an important transitional role in the audio space by pushing hardware designers and software developers towards building more voice-based, interoperable applications. Alternatively, Rabbit Inc. could also build a super app, something like WeChat, that enables chatter between apps on a smartphone to ‘get things done.’

That’s a call Rabbit Inc. should take based on the feedback it receives from its customers. As of January 19, five batches of 10,000 Rabbit r1 devices each had sold out, and the first batch will start shipping in April. Customer experience with this new gadget will play a big role in determining how deep down the rabbit hole the r1 takes consumers.


Top news of the day: Wholesale inflation rises to 0.73% in December; human vaccine trials for deadly Nipah virus launched, and more


Wholesale inflation rises to 0.73% in December due to rise in food prices

The wholesale price index (WPI)-based inflation rose in December at 0.73% mainly due to a sharp rise in food prices. The WPI inflation was in the negative zone from April to October and had turned positive in November at 0.26%. “Positive rate of inflation in December 2023 is primarily due to the increase in prices of food articles, machinery & equipment, other manufacturing, other transport equipment and computer, electronics & optical products etc,” the Commerce and Industry Ministry said in a statement on January 15.

Unruly passenger behaviour unacceptable, says Scindia

With low-visibility conditions significantly disrupting flight operations at the Delhi airport, Civil Aviation Minister Jyotiraditya Scindia on January 15 said all stakeholders are working round-the-clock to minimise fog-related impact as well as passenger inconvenience, and asserted that unruly passenger behaviour is unacceptable.

Oxford scientists launch first human vaccine trials for deadly Nipah virus

Scientists at the University of Oxford in the U.K. have launched first-in-human vaccine trials for the deadly Nipah virus which impacts many Asian countries, including India. Nipah virus is a devastating disease that can be fatal in around 75% of cases, the researchers said. Outbreaks have occurred in countries in Asia, including Singapore, Malaysia, Bangladesh and India, with a recent one in Kerala in September last year, they said.

PM Modi releases first instalment of benefits to one lakh people under tribal welfare scheme

Prime Minister Narendra Modi said on January 15 that the country can develop only if benefits of various welfare schemes reach all, asserting that it is his guarantee that everyone, even those in the remotest of areas, will benefit from them. Releasing the first instalment of ₹540 crore to one lakh beneficiaries of a rural housing scheme under the Pradhan Mantri Janjati Adivasi Nyaya Maha Abhiyan (PM-JANMAN) via video conferencing, he said the 10 years of his government have been dedicated to the poor.

Sensex jumps 759 points to close at record high; Nifty scales 22K

Benchmark Sensex closed above the 73,000 level for the first time while broader Nifty scaled the 22,000-point peak on Monday as key stock indices stayed on the record-breaking run powered by a rally in IT shares, Reliance and HDFC Bank. Rising for the fifth day in a row, the 30-share BSE Sensex jumped 759.49 points or 1.05% to settle at a lifetime closing high of 73,327.94.

AI will impact 40% of jobs globally, says IMF chief

Artificial intelligence poses risks to job security around the world but also offers a “tremendous opportunity” to boost flagging productivity levels and fuel global growth, the IMF chief told AFP. AI will affect 60% of jobs in advanced economies, the International Monetary Fund’s managing director, Kristalina Georgieva, said in an interview in Washington, shortly before departing for the annual World Economic Forum in Davos, Switzerland.

Want to make Manipur peaceful, harmonious again: Rahul on 2nd day of Nyay Yatra

The Congress stands with the people of Manipur and wants to make the State peaceful and harmonious again, former party chief Rahul Gandhi said on January 15 as he interacted with the people on the second day of his Bharat Jodo Nyay Yatra.

At least 24,100 Palestinians killed in Israel strikes since October 7, says Health ministry in Hamas-run Gaza

The health ministry in Hamas-run Gaza said on January 15 that at least 24,100 people have been killed in the territory in more than three months of war between Palestinian militants and Israel.

Passenger assaults IndiGo pilot at Delhi airport after 8-hour flight delay

A passenger aboard an IndiGo flight from Delhi to Goa on January 15 charged towards a pilot and hit him while the aircraft was on ground and waiting for its turn to depart after a delay of over eight hours on a day the northern parts of the country witnessed the season’s worst fog, throwing flight operations into disarray across the network.

BSP will go it alone in Lok Sabha polls: Mayawati

BSP supremo Mayawati on January 15 said her party would go it alone in the coming Lok Sabha elections, but she did not rule out a post-poll alliance. The party would consider aligning with any party after assessing the post-poll situation, she said.

Congress deviated from ideological roots, fostering caste division: Milind Deora

Former Union minister Milind Deora has justified his decision to quit the Congress and join Shiv Sena, alleging that the Grand Old Party has deviated from its ideological and organisational roots, “fostering” caste divisions, and targeting business houses.

Noted Malayalam music director K.J. Joy dies in Chennai

Noted Malayalam music director K.J. Joy died on January 15 (Monday) at his residence in Chennai, film industry sources said. He was 77. Joy, known as the first ‘techno musician’ in the Malayalam film music world for his use of instruments such as the keyboard in the 1970s, had been bedridden for some time following a stroke, the sources said.

World Economic Forum 2024 | Chief economists expect global economy to weaken in 2024, shows survey

As the top leaders from across the world gather in Davos for their annual congregation, a survey of chief economists on January 15 forecast a weakening of the global economy in 2024 and accelerated geo-economic fragmentation. Warning of more economic uncertainty, the Chief Economists Outlook report of the World Economic Forum (WEF) said the global economic prospects remain subdued.

White House says ‘it’s the right time’ for Israel to scale back operations as fighting hits 100 days

The White House said on January 14 that “it’s the right time” for Israel to scale back its military offensive in the Gaza Strip, as Israeli leaders again vowed to press ahead with their operation against the territory’s ruling Hamas militant group. The comments exposed the growing differences between the close allies on the 100th day of the war.


Judges in England and Wales are given cautious approval to use AI in writing legal opinions

England’s 1,000-year-old legal system — still steeped in traditions that include wearing wigs and robes — has taken a cautious step into the future by permitting judges to use artificial intelligence (AI) to help produce rulings.

In December, the Courts and Tribunals Judiciary said AI could help write opinions but stressed it should not be used for research or legal analyses because the technology can fabricate information and provide misleading, inaccurate and biased information. “Judges do not need to shun the careful use of AI,” said Master of the Rolls Geoffrey Vos, the second-highest ranking judge in England and Wales. “But they must ensure that they protect confidence and take full personal responsibility for everything they produce.”

Vigorous public debate on the use of AI

At a time when scholars and legal experts are pondering a future when AI could replace lawyers, help select jurors or even decide cases, the approach spelt out on December 11 by the judiciary is restrained. But for a profession slow to embrace technological change, it is a proactive step as government and industry — and society in general — react to a rapidly advancing technology alternately portrayed as a panacea and a menace.

“There’s a vigorous public debate right now about whether and how to regulate artificial intelligence,” said Ryan Abbott, a law professor at the University of Surrey and author of The Reasonable Robot: Artificial Intelligence and the Law. “AI and the judiciary is something people are uniquely concerned about, and it’s somewhere where we are particularly cautious about keeping humans in the loop,” he said. “So I do think AI may be slower disrupting judicial activity than it is in other areas and we’ll proceed more cautiously there.”

Mr. Abbott and other legal experts applauded the judiciary for addressing the latest iterations of AI and said the guidance would be widely viewed by courts and jurists around the world who are eager to use AI or anxious about what it might bring.

The EU’s AI guidance

In taking what was described as an initial step, England and Wales moved toward the forefront of courts addressing AI, though it’s not the first such guidance.

Five years ago, the European Commission for the Efficiency of Justice of the Council of Europe issued an ethical charter on the use of AI in court systems. While that document is not up to date with the latest technology, it did address core principles such as accountability and risk mitigation that judges should abide by, said Giulia Gentile, a lecturer at Essex Law School who studies the use of AI in legal and justice systems.

Although US Supreme Court Chief Justice John Roberts addressed the pros and cons of artificial intelligence in his annual report, the federal court system in America has not yet established guidance on AI, and State and county courts are too fragmented for a universal approach. But individual courts and judges at the federal and local levels have set their own rules, said Cary Coglianese, a law professor at the University of Pennsylvania.

“It is certainly one of the first, if not the first, published set of AI-related guidelines in the English language that applies broadly and is directed to judges and their staffs,” Mr. Coglianese said of the guidance for England and Wales. “I suspect that many, many judges have internally cautioned their staffs about how existing policies of confidentiality and use of the internet apply to the public-facing portals that offer ChatGPT and other such services.”

Limitations of AI highlighted

The guidance shows the courts’ acceptance of the technology, but not a full embrace, Ms. Gentile said. She was critical of a section that said judges don’t have to disclose their use of the technology and questioned why there was no accountability mechanism. “I think that this is certainly a useful document, but it will be very interesting to see how this could be enforced,” she said. “There is no specific indication of how this document would work in practice. Who will oversee compliance with this document? What are the sanctions? Or maybe there are no sanctions. If there are no sanctions, then what can we do about this?”

In its effort to maintain the court’s integrity while moving forward, the guidance is rife with warnings about the limitations of the technology and possible problems if a user is unaware of how it works.

At the top of the list is an admonition about chatbots, such as ChatGPT, the conversational tool that exploded into public view last year and has generated the most buzz over the technology because of its ability to swiftly compose everything from term papers to songs to marketing materials.

The pitfalls of the technology in court are already infamous after two New York lawyers relied on ChatGPT to write a legal brief that quoted fictional cases. The two were fined by an angry judge who called the work they had signed off on “legal gibberish”.

Because chatbots can remember questions they are asked and retain other information they are provided, judges in England and Wales were told not to disclose anything private or confidential. “Do not enter any information into a public AI chatbot that is not already in the public domain,” the guidance said. “Any information that you input into a public AI chatbot should be seen as being published to all the world.”

Other warnings include being aware that much of the legal material that AI systems have been trained on comes from the internet and is often based largely on U.S. law. But jurists who have large caseloads and routinely write decisions dozens — even hundreds — of pages long can use AI as a secondary tool, particularly when writing background material or summarising information they already know, the courts said.

In addition to using the technology for emails or presentations, judges were told they could use it to quickly locate material they are familiar with but don’t have within reach. But it shouldn’t be used for finding new information that can’t independently be verified, and it is not yet capable of providing convincing analysis or reasoning, the courts said.

Appeals Court Justice Colin Birss recently praised how ChatGPT helped him write a paragraph in a ruling in an area of law he knew well. “I asked ChatGPT can you give me a summary of this area of law, and it gave me a paragraph,” he told The Law Society. “I know what the answer is because I was about to write a paragraph that said that, but it did it for me and I put it in my judgment. It’s there and it’s jolly useful.”


Yearender 2023 | 5 big tests for global diplomacy

Let’s start with this week and the end of CoP28, the climate change summit held in Dubai, which concluded with a final document, called the UAE Consensus, that agreed to a number of actions.

The big takeaways: 

  1. Transition away from fossil fuels (oil, coal and gas) in energy production, but no phase-out
  2. Tripling of renewables by 2030
  3. Methane: accelerating and substantially reducing non-carbon-dioxide emissions globally, in particular methane emissions, by 2030
  4. Net zero by 2050: this is meant to push India, which has set 2070 as its net-zero date, and China, which has set 2060, to earlier dates
  5. A Loss and Damage fund adopted, with about $750 million committed by developed countries, most notably the UAE, France, Germany, and Italy, towards the fund set up during CoP28

However, critics described the final document as “weak tea”, “watered down” and a “litany of loopholes”, and some criticised the UAE COP president directly for not ensuring stronger language against fossil fuels 

Where is the world?

1. Of the P-5, the leaders of the US and China skipped the summit; Russian President Putin flew into Abu Dhabi with much fanfare and signed a number of energy deals, but didn’t go to CoP; the leaders of the UK and France attended CoP28

2. Small Island States and Climate vulnerable countries that bear the brunt of global warming were the most critical

Where is India? 

  1. India spoke essentially for the developing world, which does not want to commit to ending the fossil fuel use that would slow its growth, and pushed for terms like phase-out and coal-powered plants to be cut out of the text.
  2. India takes some pride in the fact that it has exceeded the goals of its NDCs and is now updating them, but it is making clear that it isn’t part of the global problem, contributing very little to emissions, and that it won’t be pushed into being the solution
  3. India is not prepared to bring forward its targets for net zero or for ending coal use
  4. PM Modi has now pitched to host CoP33 in 2028 

Let’s turn to the 2nd and 3rd big challenges to global diplomacy- and they came from conflict. 

2. Russian war in Ukraine:

The war in Ukraine is heading towards its two-year mark.

  • In a 4-hour-long press conference this week, Russian President Vladimir Putin made it clear the war in Ukraine will not end until Russian goals, the demilitarization and “denazification” of Ukraine, are met, and he looked noticeably more confident about the way the war is moving
  • The OHCHR estimates civilian casualties in Ukraine since February 2022, including in territory now controlled by Ukraine and by Russia, at more than 40,000, while conflicting figures put total military casualties at around 500,000; all of these numbers are contested
  • As aid dwindles to its lowest point since February 2022, Ukrainian President Volodymyr Zelensky has been travelling to the US, trying to raise support for more funds and arms

How is the world faring? 

  1. The UN Security Council is frozen over the issue, with Russia vetoing any resolutions against it. 
  2. On the one-year anniversary of the Russian invasion, the UNGA passed a resolution calling on Russia to “leave Ukraine”, with 141 countries in favour, 32 abstentions (including India), and 7 against
  3. In March 2023, the International Criminal Court issued an arrest warrant for President Putin; however, no country Mr. Putin has visited since, including China, the Central Asian states, the UAE and Saudi Arabia, has enforced it
  4. After a near breakdown in talks at the G20 in Delhi, India was able to forge a consensus document that brought the world together for a brief moment- the document didn’t criticize Russia but called for peace in Ukraine, something Kiev said it was disappointed by 

India: 

  1. India has continued to abstain at the UN, has offered no criticism of Russia, and has continued to buy increasing amounts of Russian oil; purchases have risen a whopping 2,200% since the war began
  2. India has also continued its weapons imports from Russia, although many shipments have been delayed due to Russian production constraints and payment mechanism problems
  3. However, India has clearly reduced its engagement with Moscow: PM Modi will be skipping the annual India-Russia summit for the 2nd year now, and India dropped plans to host the SCO summit in person, making it virtual instead

3. October 7 attacks and Israel Bombing of Gaza  

2023 is now known as the year of two conflicts, with many questioning whether the US can continue funding its allies in both.

– The current turn of the conflict began on October 7, when Hamas carried out a number of coordinated terror strikes on Israeli communities along the border with Gaza, brutally killing 1,200 people and taking 240 hostages, with allegations of beheading and rape against the Hamas attackers.

– Israel's retaliation, pounding Gaza for more than two months in an effort to finish Hamas and rescue the hostages, has been devastating: 29,000 munitions have been dropped and more than 18,000 people killed, more than 7,000 of them children, and as every kind of infrastructure in the North and South is flattened, more than 1.8 million people, 80% of the population, are homeless 

Where is the world? 

– The UNSC is again paralysed, with the US vetoing every resolution against Israel 

– The UNGA has passed two resolutions with overwhelming support: in October, 120 countries, or two-thirds of those present, voted in favour of a ceasefire; in December, 153 countries, four-fifths of those present, voted in favour, with severe criticism of Israel's actions 

– Several countries have withdrawn their diplomats from Tel Aviv, but Arab states, which have held several conferences, have not so far cut off their ties with Israel 

– Netanyahu has rejected the UN calls, saying the bombing won't stop until Hamas is eliminated 

– The global south has voted almost as a bloc, criticizing Israel for its disproportionate response and indiscriminate bombing 

Where is India? 

  1. When the October 7 attacks took place, India seemed to change its stance, issuing strong statements on terrorism and calling for a zero-tolerance approach. In the UNGA vote in October, India abstained, a major shift from its past policy 
  2. However, as the death toll from Israel's bombardment has risen and the global mood has shifted, India has moved closer to its original position, expressing concern for Palestinian victims and sending aid, and then, this week, voting for the UNGA resolution, marking the first time India has called for a ceasefire. 
  3. The shifts and hedging in position have left India without a leadership role in the conflict, distanced from both the global south and South Asia itself 

4. Afghanistan – Taliban and Women 

  • This is an area where the world has scored a big F for failure. Two and a half years after the Taliban took over Kabul, there is little hope of it loosening its grip on the country. 
  • The interim government of the Taliban, which includes many members on UN terrorist lists, remains in place; it includes no women, and no talks about a more inclusive, democratic and representative government are taking place 
  • With the economy in shambles, sanctions in place and aid depleted, 15 million Afghans face acute food insecurity, and nearly 3 million people face severe malnourishment or starvation. An earthquake this year compounded the problems. Adding to the misery, 500,000 Afghan refugees have been sent back from Pakistan, and they lack food, clothing or shelter. 
  • Girls are not allowed to go to school in most parts of the country, female students can't pursue higher studies, and women are not allowed to hold most jobs or use public places such as parks and gyms 
  • While the UN doesn't recognize the Taliban, nearly 20 countries, including India, now run embassies in Kabul, and most countries treat the Taliban as the official regime 
  • No country today supports, or gives more than lip service to, the armed resistance or even the democratic exiles in different parts of the world 

Where is India? 

  • India has reopened its mission in Kabul, and as of last month the Embassy of the old democratic regime in Delhi was forced to shut down due to lack of funds and staff; it has now been reopened by Afghan consuls from Mumbai and Hyderabad, who engage the Taliban regime, although they still fly the old democratic regime's flag. 
  • India has sent food and material aid to Afghanistan- first through Pakistan, and then via Chabahar, and Indian officials regularly engage the Taliban leadership in Kabul 
  • Unlike its policy towards the Taliban from 1996-2001, India has not taken in any Afghan refugees, and has rejected visas for students, businesspersons and even spouses of Indian citizens 
  • India does not support the armed resistance or any democratic exiles, and is not taking a leadership role on the crisis, yielding space to China and Russia instead 

5. Artificial Intelligence 

Finally to the global diplomacy challenge the world is just waking up to- AI 

  • For the past few decades, military powers have been developing AI for use in robotic warfare, increasingly sophisticated drone technology and other areas
  • Industry has also long worked on different AI applications in machine intelligence, from communication and R&D to machine manufacturing and more 
  • However, the use of AI in information warfare has now become a cause for concern about everything from job losses to cyber-attacks and the degree of control humans actually have over these systems, and the world is looking for ways to find common ground on regulating it 
  • Last month the UK hosted the first global AI summit, with PM Rishi Sunak bringing in US VP Harris, EU chief Von der Leyen, UN Secretary-General Guterres and others to look at ways forward; countries agreed on an AI panel resembling the Intergovernmental Panel on Climate Change to chart the course for the world 
  • India hosted this year's Global Partnership on AI summit in Delhi this month; the grouping, comprising 28 countries and the EU, looks at the "trustworthy development, deployment, and use of AI". Also, at the Modi-Biden meeting in Washington this year, India and the US embarked upon a whole new tech partnership 

Clearly, the AI problem and its potential are a work in progress, and we hope to do a full show on geopolitical developments in AI when we return with WorldView next year. 

WV Take: What's WV's take on the year gone by? Simply put, this has been a year that has seen global consensus and global action weaker than ever before. As anti-globalisation forces turn countries more protectionist and anti-immigration, and as fewer countries are willing to follow international rule of law and humanitarian principles, the entire system of global governance has gone into decline. India's path into such a future is threefold: to strengthen the global commons as much as possible, to seek global consensus on futuristic challenges, and to understand the necessity of smaller, regional groupings for both security and prosperity alternatives. 

WV Yearender Reading recommendations: 

  1. India's Moment: Changing Power Equations around the World by Mohan Kumar, a former diplomat, now an academic and economic expert; this is an easy read that will make a lot of sense 
  2. Unequal: Why India Lags Behind its Neighbours by Swati Narayan. This is a startling work of research, with a compelling argument on the need to pay more attention to Human Development Indices 
  3. India's National Security Challenges, edited by NN Vohra, with some superb essays on the need for a national security policy and defence reforms 
  4. The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher; and Conflict: A Military History of the Evolution of Warfare from 1945 to Ukraine by Andrew Roberts and Gen (Retd) David Petraeus 
  5. The Power of Geography: Ten Maps that Reveal the Future of Our World and The Future of Geography: How Power and Politics in Space Will Change Our World, both by Tim Marshall

Script and Presentation: Suhasini Haidar

Production: Kanishkaa Balachandran & Gayatri Menon


In 2024 elections, we have to act against AI-aggravated bias

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Journalists must give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in democracy itself, Meera Selva writes.


2024 is going to be the year of elections driven by AI-boosted campaigning, global conflict, and ever more pervasive AI tools.

Some 2 billion people will go to the polls in 65 elections to select leaders who will have campaigned, communicated, and fundraised online, and who know their terms in office will be defined by the digital space.

Voting will happen in some of the most densely populated countries in the world, where media has been upended by digital communications, including Indonesia, India, and Mexico. 

And these elections will be among the first to take place after the sudden popularisation of generative AI technologies — casting further uncertainty on how they will play out. 

There is an argument that fears of AI are overblown, and most people will not have their behaviour altered by exposure to AI-generated misinformation. 2024 will offer some evidence as to whether or not that’s true.

Small groups will play big roles. Elections are now often so closely contested that the final results can be turned by proportionately very few voters. 

Mistrust or hostility towards one small group can end up defining the whole national debate. Communities of colour and immigrant communities can be affected disproportionately by misinformation in election times, by both conspiracy theories undermining their trust in the process, and incorrect information on how to vote.

That is why the needs and voices of minority communities must be foregrounded in these elections. Whether AI tools will help or hinder that is still an open question.

A lack of editorial checks will make things worse

Some of the biggest dangers widely accessible AI technologies will pose in global elections stem from a lack of diversity in design and leadership.

There is already a trend for misinformation to spread via mistranslations — words that have different, often more negative connotations when translated from one language, usually English, to another. 

This will only worsen with AI-powered translations done at speed without editorial checks or oversight from native language speakers.

Some AI tools also play on existing prejudices against minorities: in Slovakia’s elections this autumn, an alleged audio recording of one candidate telling a journalist about a plan to buy votes from the Roma minority, who are structurally discriminated against and often viewed with hostility, spread fast on Facebook. 

The truth that the recording had been altered came too late: the candidate in question, Michal Simecka, lost to former Prime Minister Robert Fico, who returned to power after having resigned in 2018 following outrage over the murder of an investigative journalist.

Using tech to keep discriminating against others

In India, there are fears that popular AI tools are entrenching existing discrimination on lines of caste, religion and ethnicity. 

During communal riots in Delhi in 2020, police used AI-powered facial recognition technology to arrest rioters. Critics point out the technology is more likely to be used against Muslims, indigenous communities, and those from the Dalit caste as the country’s elections draw near.

These fears are backed up by research from Queen's University Belfast, which showed other ways that the use of AI in election processes can harm minorities. 

If the technology is used for administering mailing lists or deciding where polling stations should be located, there is a real risk that this will result in minority groups being ignored or badly served.

Many of the problems of diversity in AI-generated content come from the data sets the technology is trained on, but the demographics of AI teams are also a factor. 


A McKinsey report on the state of AI in 2022 shows that women are significantly underrepresented, and a shocking 29% of respondents said they have no minority employees working on their AI solutions. 

As AI researcher Dr Sasha Luccioni recently pointed out, women are even excluded from the way AI is reported on.

There are benefits to AI, too

It’s clear AI will play a significant role in next year’s elections. Much of it will be beneficial: it can be used to power chatbots to engage citizens in political processes and can help candidates understand messages from the campaign trail more easily.

I see this first-hand in my daily work: Internews partners with local, independent media outlets around the world that are creatively using AI tools to improve the public's access to good information. 

In Zimbabwe, the Center for Innovation and Technology is using an AI-generated avatar as a real-time newsreader, which can have its speech tailored to local accents and dialects, reaching communities that are rarely represented in newsrooms. 


And elsewhere in Africa, newsrooms are using AI tools to detect bias and discrimination in their stories.

The same AI tools will almost certainly be used by malicious actors to generate deep fakes, fuel misinformation, and distort public debate at warp speed. 

The Philippines, for example, has had its political discourse upended by social media, to the extent that its most famous editor, the Pulitzer Prize-winning Maria Ressa, warned that the Philippines is the canary in a coal mine on the interface of technology, communications, and democracy; anything that happens there will happen in the rest of the world within a few years. 

There is pushback, however, and Filipino society is taking action: ahead of next year's elections, media organizations and civil society have come together to create ethical AI frameworks as a starting point for how journalists can use this new technology responsibly.

Giving voice to those on the receiving end remains vital

But these kinds of initiatives are only part of the solution. Journalism alone cannot solve the problems posed by generative and program AI in elections, in the same way it cannot solve the problems of mis- and disinformation. 


This is an issue regulators, technology companies, and electoral commissions must work on alongside civil society groups — but that alone also won’t suffice. 

It is vital that journalists give a voice to the underrepresented and underprivileged communities at the receiving end of much of the misinformation that drives polarising narratives and undermines trust in elections, and ultimately in democracy itself.

We didn’t pay enough attention to underserved communities and minority groups when social media first upended electoral processes worldwide, contributing to the democratic backsliding and division we see today. Let us not make the same mistake twice.

Meera Selva is the Europe CEO of Internews, a global nonprofit supporting independent media in 100+ countries.




Eight Hours Of Sleep And No Back-To-Back Meetings: How Mark Zuckerberg Organizes His Days

Mark Zuckerberg isn’t pulling many all-nighters these days.

The CEO of Facebook parent Meta—who once embodied the hoodie-clad, hackathon, boy wonder startup founder—has grown up after running the social networking giant for almost two decades.

For years, Zuckerberg had been cast as one of Silicon Valley’s most notorious leaders, as Facebook faced ire from lawmakers and the public for allegedly crippling democracy, being used as a tool to fuel genocide and harming users as the company chased relentless growth. Zuckerberg, who turns 40 next year, has since begun a transformation into one of tech’s elder statesmen—especially as he plays foil to Elon Musk and his chaos at Facebook rival X, formerly known as Twitter.

So who is this new grown-up Zuck, and how does that translate into everyday life for the famous billionaire? For starters, he gets roughly eight hours of sleep. (He measures it using an Oura sleep tracker). He also shuns back-to-back meetings, allocating at least an hour to process and follow up with folks afterward.

In a wide-ranging interview with Forbes’ Kerry Dolan, Zuckerberg opened up about several other topics, including his new obsession with mixed martial arts, singing Taylor Swift songs with his young daughters, and flying (well, co-piloting) a helicopter to work.

Here are a few of the most interesting details from their conversation.

On company growth:

“One philosophy that I’ve always had is … the thing that determines your destiny is not a competitor, it’s how you execute. And I think most companies probably focus too much on competitors, and maybe even focus too much on ideas. And I think at the end of the day, a lot of what makes great companies great is the ability to just relentlessly execute, and efficiently execute and do that rigorously and just get better and better at it all the time.”

On fatherhood:

Zuckerberg has a special routine he follows every night to put his daughters, ages 7, 6 and 6 months, to bed, says Zuckerberg's pediatrician wife, Priscilla Chan. First, he does something with them that they really like. "Recently it's been learning every lyric of the Taylor Swift songs," says Chan. (They went as a family to see Swift in concert in late July, which, natch, Zuckerberg posted about on Instagram.) His two older girls read to themselves. "Right now Max is reading Harry Potter, which is a little bit scary … so sometimes I'll read it to her," says Zuckerberg. And then, says Chan, "He goes through everyone that loves them, he tells them the three most important things in life are health, family and friends, and something to look forward to. And then he sings to them, I think it's Debbie Friedman's version of Mi Shebeirach," a Hebrew prayer for healing. The only time Chan puts the girls to bed, she says, is if there's a board meeting or if he's traveling. Work dinners for her husband happen after the girls' bedtime. 

On jiujitsu and mixed martial arts:

His latest passion, picked up during the pandemic, is jiujitsu and mixed martial arts (MMA). On his Instagram account in July, Zuckerberg shared bare-chested photos of himself and his MMA sparring partners at Lake Tahoe, and another set from when his coach awarded him a blue belt in jiujitsu. And in early September, he posted a reel of him and his friends having an MMA battle on a floating dojo on Lake Tahoe. He lights up when talking about the sport, and pulls out his phone to share more photos from a recent MMA session.

“My physical routine in the morning has been really helpful for me to reset. I try to do something where I don’t or actually can’t think too much,” he says, explaining that’s why he switched from running to jiujitsu and MMA. “The thing that those have in common is you really need to focus on what you’re doing, or else you’re going to … get punched in the face.” And as he told his followers on Threads about jiujitsu: “I just love this sport. It’s so primal and lets me be my true competitive self.”

For years, Zuckerberg has publicly set himself annual challenges: learn Chinese, visit cities all over the U.S., only eat meat that he killed himself. His new challenge: “I want to do an MMA competition, or do a kind of formal fight sometime in the next year.” Who would his opponent be? “I’m probably going to do it with somebody that takes the sport really seriously and does it competitively or as a professional.”

On his daily schedule:

“I don’t stay up super late at night. … I’ll wake up and there will be a bunch of emails. Usually, people aren’t emailing me about things that are going well. It’s a very diverse set of things that are breaking across the company.”

“I’ll respond to a bunch of emails in the morning and have a bunch of time to do that. But then I want to be able to show up to work and be able to push forward.” So he takes a break to exercise (often jiujitsu or MMA —see above). “I try to work out six or seven days a week.”

Zuckerberg says he gets eight hours of sleep a night, which he describes as “very instrumented.” He uses an Oura ring, which “tells you [your] level of deep sleep, and what your heart rate is when you’re sleeping.”

On meetings:

“I actually like trying to have a rule… for every hour of meeting that I have, the team sends out the pre-reads in advance. I want to have at least an hour to read the materials and think about it. And then I want to have at least an hour to follow up with different people after the meeting.”

On what he’s learned after being CEO of Facebook and Meta for almost 20 years:

“I knew so little when I was getting started… I’d say there’s a lot about management and leadership that I’ve learned. I think probably the most important thing is I feel like I’ve learned how to express the things that are important to me in a way that is that can translate to an organization.”

On flying:

Zuckerberg flew in from his home in Lake Tahoe to the Meta offices in Menlo Park to speak with Forbes. "Normally I'd fly a helicopter. I like flying," he says. But 100-mile-an-hour winds in the mountains near Tahoe derailed that plan. "You can actually do it," Zuckerberg says of flying in winds that high. "It's just uncomfortable." 

He says he started learning to fly a helicopter a couple years ago, and flies with a co-pilot now. The F.A.A. lists him as having a student license.

On turning down a $1 billion buyout from Yahoo in 2006:

“When I didn’t want to sell the company early on, I think the investors were like, oh, maybe we should get like, should we get a different team? And it’s like, oh, well, you can’t.”

“If someone offers you a billion dollars, you’re like, oh, well, we’re not really making much money today. So what does it mean to be worth a billion dollars, and what does that mean over time? And we haven’t really spent a lot of time, to that point, talking about the long term vision. I think most people are at the company because they just love the product and thought it was awesome and just want to make things better every day. So that was probably the hardest moment in running a company. I mean, it’s just because I didn’t know what I was doing.”

On taking big swings:

“I think over time, what matters is just taking a bunch of big swings, and being able to connect on enough of them. And I think there just aren’t that many places in the world where you can make the kind of long term bets that we have.”

On management:

“I actually think that when you’re running something, you should be as involved in the details as you can be. Obviously, there’s way more stuff that I just don’t have time to be involved with. …Anything that I’m kind of focused on or interested in or want to be in the details on, I will be. I try to be in the details of as many things as possible.”

On Threads:

“I’m optimistic about our trajectory. We saw unprecedented growth out of the gate and more importantly we’re seeing more people coming back daily than I’d expected. Now, we’re focused on retention and improving the basics. After that, we’ll focus on growing the community to the scale we think is possible. We’ve run this playbook many times before — with Facebook, Instagram, WhatsApp, Stories, Reels, and more — and this is as good of a start as we could have hoped for, so I’m really happy with the path we’re on here.”

On AI and Facebook products:

AI “will go across everything. The characters will have Instagram and Facebook profiles. And you’ll be able to talk to them in WhatsApp and Messenger and Instagram, and they’ll be embodied as avatars and virtual reality.”

On that possible fight with Elon Musk:

“I don’t think that’s gonna happen.”

On retirement:

“I think I’m going to be running Meta for a long time.”


