‘Counterfeit people’: The dangers posed by Meta’s AI celebrity lookalike chatbots

Meta announced on Wednesday the arrival of chatbots with personalities modelled on certain celebrities, which users will be able to chat with. Presented as an entertaining evolution of ChatGPT and other forms of AI, this latest technological development could prove dangerous.

Meta (formerly known as Facebook) sees these as “fun” artificial intelligence. Others, however, feel that this latest technological development could mark the first step towards creating “the most dangerous artefacts in human history”, to quote from American philosopher Daniel C. Dennett’s essay about “counterfeit people”.

On Wednesday, September 27, the social networking giant announced the launch of 28 chatbots (conversational agents), which supposedly have their own personalities and have been specially designed for younger users. These include Victor, a so-called triathlete who can motivate “you to be your best self”, and Sally, the “free-spirited friend who’ll tell you when to take a deep breath”.

Internet users can also chat to Max, a “seasoned sous chef” who will give you “culinary tips and tricks”, or engage in a verbal joust with Luiz, who “can back up his trash talk”.

A chatbot that looks like Paris Hilton

To reinforce the idea that these chatbots have personalities and are not simply an amalgam of algorithms, Meta has given each of them a face. Thanks to partnerships with celebrities, these robots look like American jet-setter and DJ Paris Hilton, TikTok star Charli D’Amelio and Japanese tennis player Naomi Osaka.


And that’s not all. Meta has opened Facebook and Instagram accounts for each of its conversational agents to give them an existence outside chat interfaces and is working on giving them a voice by next year. Mark Zuckerberg’s group is also looking for screenwriters who can “write character, and other supporting narrative content that appeal to wide audiences”.

Meta may present these 28 chatbots as an innocent undertaking to entertain young internet users on a massive scale, but all these efforts point towards an ambitious project to build AIs that resemble humans as much as possible, writes Rolling Stone.

This race to “counterfeit people” worries many observers, who are already concerned about recent developments made in large language model (LLM) research such as ChatGPT and Llama 2, its Facebook counterpart. Without going as far as Dennett, who is calling for people like Zuckerberg to be locked up, “there are a number of thinkers who are denouncing these major groups’ deliberately deceptive approach”, said Ibo van de Poel, professor of ethics and technology at the Delft University of Technology in the Netherlands.

AIs with personalities are ‘literally impossible’

The idea of conversational agents “with a personality is literally impossible”, said van de Poel. Algorithms are incapable of demonstrating “intention in their actions or ‘free will’, two characteristics that are considered to be intimately linked to the idea of a personality”.

Meta and others can, at best, imitate certain traits that make up a personality. “It must be technologically possible, for example, to teach a chatbot to act like the person they represent,” said van de Poel. For instance, Meta’s AI Amber, which is supposed to resemble Hilton, may be able to speak the same way as its human alter ego.

The next step will be to train these LLMs to express the same opinions as the person they resemble. This is a much more complicated behaviour to programme, as it involves creating a sort of accurate mental picture of all of a person’s opinions. There is also a risk that chatbots with personalities could go awry. One of the conversational agents that Meta tested expressed “misogynistic” opinions, according to the Wall Street Journal, which was able to consult internal company documents. Another committed the “mortal sin” of criticising Zuckerberg and praising TikTok.

To build these chatbots, Meta explains that it set out to give them “unique personal stories”. In other words, these AIs’ creators have written biographies for them in the hopes that they will be able to develop a personality based on what they have read about themselves. “It’s an interesting approach, but it would have been beneficial to add psychologists to these teams to get a better understanding of personality traits”, said Anna Strasser, a German philosopher who was involved in a project to create a large language model capable of philosophising.

Meta’s latest AI project is clearly driven by a thirst for profit. “People will no doubt be prepared to pay to be able to talk and have a direct relationship with Paris Hilton or another celebrity,” said Strasser.

The more users feel like they are speaking with a human being, “the more comfortable they’ll feel, the longer they’ll stay and the more likely they’ll come back”, said van de Poel. And in the world of social media, time – spent on Facebook and its ads – is money.

Tool, living thing or somewhere in between?

It is certainly not surprising that Meta’s first foray into AI with “personality” is a set of chatbots aimed primarily at teenagers. “We know that young people are more likely to anthropomorphise,” said Strasser.

However, the experts interviewed feel that Meta is playing a dangerous game by stressing the “human characteristics” of its AIs. “I really would have preferred if this group had put more effort into explaining the limits of these conversational agents, rather than trying to make them seem more human”, said van de Poel.


The emergence of these powerful LLMs has upset “the dichotomy between what is a tool or object and what is a living thing. These ChatGPTs are a third type of agent that stands somewhere between the two extremes”, said Strasser. Human beings are still learning how to interact with these strange new entities, so by making people believe that a conversational agent can have a personality, Meta is suggesting that it be treated more like another human being than a tool.

“Internet users tend to trust what these AIs say,” which makes them dangerous, said van de Poel. This is not just a theoretical risk: a man in Belgium ended up committing suicide in March 2023 after discussing the consequences of global warming with a conversational agent for six weeks.

Above all, if the boundary between the world of AIs and humans is eventually blurred completely, “this could potentially destroy trust in everything we find online because we won’t know who wrote what”, said Strasser. This would, as Dennett warned in his essay, open the door to “destroying our civilisation. Democracy depends on the informed (not misinformed) consent of the governed [which cannot be obtained if we no longer know what and whom to trust]”.

It remains to be seen if chatting with an AI lookalike of Hilton means that we are on the path to destroying the world as we know it.

This article has been translated from the original in French


How should we regulate generative AI, and what will happen if we fail?

By Rohit Kapoor, Vice Chairman and CEO, EXL

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists, Rohit Kapoor writes.

Generative AI is experiencing rapid growth and expansion.

There’s no question as to whether this technology will change the world — all that remains to be seen is how long it will take for the transformative impact to be realised and how exactly it will manifest in each industry and niche.

Whether it’s fully automated and targeted consumer marketing, medical reports generated and summarised for doctors, or chatbots with distinct personality types being tested by Instagram, generative AI is driving a revolution in just about every sector.

The potential benefits of these advancements are monumental. Quantifying the hype, a recent report by Bloomberg Intelligence predicted an explosion in generative AI market growth, from $40 billion (€36.5bn) in 2022 to $1.3 trillion (€1.18tn) in the next ten years.

But in all the excitement to come, it’s absolutely critical that policy-makers and corporations alike do not lose sight of the risks of this technology.

These large language models, or LLMs, present dangers which not only threaten the very usefulness of the information they produce but could also prove harmful in entirely unintentional ways — from bias to blurring the lines between real and artificial to loss of control.

Who’s responsible?

The responsibility for taking the reins on regulation falls naturally with governments and regulatory bodies, but it should also extend beyond them. The business community must self-govern and contribute to principles that can become regulations while policy-makers deliberate.

Two core principles should be followed as soon as possible by those developing and running generative AI, in order to foster responsible use and mitigate negative impacts.

First, large language models should only be applied to closed data sets to ensure safety and confidentiality.

Second, all development and adoption of use cases leveraging generative AI should have the mandatory oversight of professionals to ensure “humans in the loop”.

These principles are essential for maintaining accountability, transparency, and fairness in the use of generative AI technologies.

From there, three main areas will need attention from a regulatory perspective.

Maintaining our grip on what’s real

The capabilities of generative AI to mimic reality are already quite astounding, and it’s improving all the time.

So far this year, the internet has been awash with startling images like the Pope in a puffer jacket or the Mona Lisa as she would look in real life.

And chatbots are being deployed in unexpected realms like dating apps — where the introduction of the technology is reportedly intended to reduce “small talk”.

The wider public should feel no guilt in enjoying these creative outputs, but industry players and policy-makers must be alive to the dangers of this mimicry.

Amongst them are identity theft and reputational damage.

Distinguishing between AI-generated content and content genuinely created by humans is a significant challenge, and regulation should consider the consequences and surveillance aspects of it.

Clear guidelines are needed to determine the responsibility of platforms and content creators to label AI-generated content.

Robust verification systems like watermarking or digital signatures would support this authentication process.
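To make the digital-signature idea concrete, the minimal sketch below shows one way a publisher could sign a piece of content so that readers can later check it has not been altered. It assumes Python's third-party cryptography package and an Ed25519 key pair; the key names and workflow are illustrative assumptions, not a description of any platform's actual system.

```python
# Illustrative sketch only: signing content so its origin and integrity can be verified.
# Assumes the third-party `cryptography` package; key management is deliberately simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

publisher_key = Ed25519PrivateKey.generate()   # kept secret by the content creator
public_key = publisher_key.public_key()        # shared with anyone who wants to verify

content = b"This article was written and approved by a human editor."
signature = publisher_key.sign(content)        # published alongside the content

try:
    public_key.verify(signature, content)      # raises InvalidSignature if content changed
    print("Signature valid: content is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: content was altered or signed with a different key.")
```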

Tackling imperfections that lead to bias

Policy-makers must set about regulating the monitoring and validation of imperfections in the data, algorithms and processes used in generative AI.

Bias is a major factor. Training data can be biased or inadequate, resulting in a bias in the AI itself.

For example, this might cause a company chatbot to deprioritise customer complaints that come from customers of a certain demographic or a search engine to throw up biased answers to queries. And biases in algorithms can perpetuate those unfair outcomes and discrimination.

Regulations need to force the issue of transparency and push for clear documentation of processes. This would help ensure that processes can be explained and that accountability is upheld.

At the same time, it would enable scrutiny of generative AI systems, including safeguarding of intellectual property (IP) and data privacy — which, in a world where data is the new currency, is crucially important.

On top of this, regulating the documentation involved would help prevent “hallucinations” by AI — which are essentially where an AI gives a response that is not justified by the data used to train it.

Preventing the tech from becoming autonomous and uncontrollable

An area for special caution is the potential for an iterative process of AI creating subsequent generations of AI, eventually leading to AI that is misdirected or compounding errors.

The progression from first-generation to second- and third-generation AI is expected to occur rapidly.

The fundamental requirement of the self-declaration of AI models, where each model openly acknowledges its AI nature, is of utmost importance.

However, enabling and regulating this self-declaration poses a significant practical challenge. One approach could involve requiring hardware and software companies to implement hardcoded restrictions, allowing only a certain threshold of AI functionality.

Advanced functionality above such a threshold could be subject to an inspection of systems, audits, testing for compliance with safety standards, restrictions on degrees of deployment and levels of security, etc. Regulators should define and enforce these restrictions to mitigate risks.

We should be acting quickly and together

The world-changing potential of generative AI demands a coordinated response.

If each country and jurisdiction develops its own rules, the adoption of the technology — which has the potential for enormous good in business, medicine, science and more — could be crippled.

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists.

With a coordinated approach, the risks can be sensibly mitigated, and the full benefits of generative AI realised, unlocking its huge potential.

Rohit Kapoor is the Vice Chairman and CEO of EXL, a data analytics and digital operations and solutions company.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


Explained | The Hiroshima process that takes AI governance global

The annual Group of Seven (G7) Summit, hosted by Japan, took place in Hiroshima on May 19-21, 2023. Among other matters, the G7 Hiroshima Leaders’ Communiqué initiated the Hiroshima AI Process (HAP) – an effort by this bloc to determine a way forward to regulate artificial intelligence (AI).

The ministerial declaration of the G7 Digital and Tech Ministers’ Meeting, on April 30, 2023, discussed “responsible AI” and global AI governance, and said, “we reaffirm our commitment to promote human-centric and trustworthy AI based on the OECD AI Principles and to foster collaboration to maximise the benefits for all brought by AI technologies”.

Even as the G7 countries are using such fora to deliberate AI regulation, they are acting on their own instead of waiting for the outcomes from the HAP. So while there is an accord to regulate AI, the discord – as evident in countries preferring to go their own way – will also continue.

What is the Hiroshima AI process?

The communiqué accorded more importance to AI than the technology has ever received in such a forum – even as G7 leaders were engaged with other issues like the war in Ukraine, economic security, supply chain disruptions, and nuclear disarmament. It said that the G7 is determined to work with others to “advance international discussions on inclusive AI governance and interoperability to achieve our common vision and goal of trustworthy AI, in line with our shared democratic value”.

To quote further at length:

“We recognise the need to immediately take stock of the opportunities and challenges of generative AI, which is increasingly prominent across countries and sectors, and encourage international organisations such as the OECD to consider analysis on the impact of policy developments and Global Partnership on AI (GPAI) to conduct practical projects. In this respect, we task relevant ministers to establish the Hiroshima AI process, through a G7 working group, in an inclusive manner and in cooperation with the OECD and GPAI, for discussions on generative AI by the end of this year.

These discussions could include topics such as governance, safeguard of intellectual property rights including copyrights, promotion of transparency, response to foreign information manipulation, including disinformation, and responsible utilisation of these technologies.”

The HAP is likely to conclude by December 2023. The first meeting under this process was held on May 30. Per the communiqué, the process will be organised through a G7 working group, although the exact details are not clear.

Why is the process notable?

While the communiqué doesn’t indicate the expected outcomes from the HAP, there is enough in there to indicate what values and norms will guide it and from where it will derive the guiding principles on which AI governance will be based.

The communiqué as well as the ministerial declaration also say more than once that AI development and implementation must be aligned with values such as freedom, democracy, and human rights. Values need to be linked to principles that drive regulation. To this end, the communiqué also stresses fairness, accountability, transparency, and safety.

The communiqué also spoke of “the importance of procedures that advance transparency, openness, and fair processes” for developing responsible AI. “Openness” and “fair processes” can be interpreted in different ways, and the exact meaning of the “procedures that advance them” is not clear.

What does the process entail?

An emphasis on freedom, democracy, and human rights, and mentions of “multi-stakeholder international organisations” and “multi-stakeholder processes” indicate that the HAP isn’t expected to address AI regulation from a State-centric perspective. Instead, it exists to account for the importance of involving multiple stakeholders in various processes and to ensure the latter are fair and transparent.

The task before the HAP is really challenging considering the divergence among G7 countries in, among other things, regulating risks arising out of applying AI. It can help these countries develop a common understanding on some key regulatory issues while ensuring that any disagreement doesn’t result in complete discord.

For now, there are three ways in which the HAP can play out:

1. It enables the G7 countries to move towards a divergent regulation based on shared norms, principles and guiding values;

2. It becomes overwhelmed by divergent views among the G7 countries and fails to deliver any meaningful solution; or

3. It delivers a mixed outcome with some convergence on finding solutions to some issues but is unable to find common ground on many others.

Is there an example of how the process can help?

The matter of intellectual property rights (IPR) offers an example of how the HAP can help. Here, the question is whether training a generative AI model, like ChatGPT, on copyrighted material constitutes a copyright violation. While IPR in the context of AI finds mention in the communiqué, the relationship between AI and IPR in different jurisdictions is not clear. There have been several conflicting interpretations and judicial pronouncements.

The HAP can help the G7 countries move towards a consensus on this issue by specifying guiding rules and principles related to AI and IPR. For example, the process can bring greater clarity to the role and scope of the ‘fair use’ doctrine in the use of AI for various purposes.

Generally, the ‘fair use’ exception is invoked to allow activities like teaching, research, and criticism to continue without seeking the copyright-owner’s permission to use their material. Whether use of copyrighted materials in datasets for machine learning is fair use is a controversial issue.

As an example, the HAP can develop a common guideline for G7 countries that permits the use of copyrighted materials in datasets for machine-learning as ‘fair use’, subject to some conditions. It can also differentiate use for machine-learning per se from other AI-related uses of copyrighted materials.

This in turn could affect the global discourse and practice on this issue.

The stage has been set…

The G7 communiqué states that “the common vision and goal of trustworthy AI may vary across G7 members.” The ministerial declaration has a similar view: “We stress the importance of international discussions on AI governance and interoperability between AI governance frameworks, while we recognise that like-minded approaches and policy instruments to achieve the common vision and goal of trustworthy AI may vary across G7 members.” This acknowledgment, taken together with other aspects of the HAP, indicates that the G7 doesn’t expect to harmonise their policies on regulations.

On the other hand, the emphasis on working with others, including OECD countries and on developing an interoperable AI governance framework, suggests that while the HAP is a process established by the G7, it still has to respond to the concerns of other country-groups as well as the people and bodies involved in developing international technical standards in AI.

It’s also possible that countries that aren’t part of the G7 but want to influence the global governance of AI may launch a process of their own like the HAP.

Overall, the establishment of the HAP makes one thing clear: AI governance has become a truly global issue that is likely to only become more contested in future.

Krishna Ravi Srinivas is with RIS, New Delhi. Views expressed are personal.


What we lose when we work with a ‘giant AI’ like ChatGPT

Recently, ChatGPT and its ilk of ‘giant artificial intelligences’ (Bard, Chinchilla, PaLM, LaMDA, et al.), or gAIs, have been making several headlines.

ChatGPT is a large language model (LLM). This is a type of (transformer-based) neural network that is great at predicting the next word in a sequence of words. ChatGPT uses GPT-4 – a model trained on a large amount of text from the internet, which its maker OpenAI could scrape and could justify as being safe and clean to train on. GPT-4 reportedly has one trillion parameters, now being applied in the service of, per the OpenAI website, ensuring the creation of “artificial general intelligence that serves all of humanity”.
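To illustrate the "predict the next word" mechanic in code, here is a minimal sketch using the openly available GPT-2 model through the Hugging Face transformers library; GPT-2 stands in for ChatGPT's own models, whose weights are not public, so this is indicative rather than a description of ChatGPT itself.

```python
# Minimal sketch of next-token prediction with an open LLM (GPT-2).
# GPT-2 is used because ChatGPT's model weights are not publicly available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits        # shape: (1, sequence_length, vocab_size)

next_token_id = logits[0, -1].argmax()     # token the model rates most likely to come next
print(tokenizer.decode(next_token_id))     # typically " Paris"
```

Chat systems simply repeat this step, appending each predicted token to the prompt and sampling again, which is how a next-word predictor ends up producing whole fluent answers.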

Yet gAIs leave no room for democratic input: they are designed from the top-down, with the premise that the model will acquire the smaller details on its own. There are many use-cases intended for these systems, including legal services, teaching students, generating policy suggestions and even providing scientific insights. gAIs are thus intended to be a tool that automates what has so far been assumed to be impossible to automate: knowledge-work.

What is ‘high modernism’?

In his 1998 book Seeing Like a State, Yale University professor James C. Scott delves into the dynamics of nation-state power, both democratic and non-democratic, and its consequences for society. States seek to improve the lives of their citizens, but when they design policies from the top-down, they often reduce the richness and complexity of human experience to that which is quantifiable.

The current driving philosophy of states is, according to Prof. Scott, “high modernism” – a faith in order and measurable progress. He argues that this ideology, which falsely claims to have scientific foundations, often ignores local knowledge and lived experience, leading to disastrous consequences. He cites the example of monocrop plantations, in contrast to multi-crop plantations, to show how top-down planning can fail to account for regional diversity in agriculture.

The consequence of that failure is the destruction of soil and livelihoods in the long-term. This is the same risk now facing knowledge-work in the face of gAIs.

Why is high modernism a problem when designing AI? Wouldn’t it be great to have a one-stop shop, an Amazon for our intellectual needs? As it happens, Amazon offers a clear example of the problems resulting from a lack of diverse options. Such a business model yields only increased standardisation and not sustainability or craft, and consequently everyone has the same cheap, cookie-cutter products, while the local small-town shops die a slow death by a thousand clicks.

What do giant AIs abstract away?

Like the death of local stores, the rise of gAIs could lead to the loss of languages, which will hurt the diversity of our very thoughts. The risk of such language loss is due to the bias induced by models trained only on the languages that already populate the Internet, which is a lot of English (~60%). There are other ways in which a model is likely to be biased, including on religion (more websites preach Christianity than they do other religions, e.g.), sex and race.

At the same time, LLMs are unreasonably effective at providing intelligible responses. Science-fiction author Ted Chiang suggests that this is true because ChatGPT is a “blurry JPEG” of the internet, but a more apt analogy might be that of an atlas.

An atlas is a great way of seeing the whole world in snapshots. However, an atlas lacks multi-dimensionality. For example, I asked ChatGPT why it is a bad idea to plant eucalyptus trees in the West Medinipur district. It gave me several reasons why monoculture plantations are bad – but failed to supply the real reason people in the area opposed it: a monoculture plantation reduced the food they could gather.

That kind of local knowledge only comes from experience. We can call that ‘knowledge of the territory’. This knowledge is abstracted away by gAIs in favour of the atlas view of all that is present on the internet. The territory can only be captured by the people doing the tasks that gAIs are trying to replace.

Can diversity help?

A part of the failure to capture the territory is demonstrated in gAIs’ lack of understanding. If you are careful about what you ask them for (a feat called “prompt engineering” – an example of a technology warping the ecology of our behaviour), they can fashion impressive answers. But ask it the same question in a slightly different way and you can get complete rubbish. This trend has prompted computer scientists to call these systems stochastic parrots – that is, systems that can mimic language but are random in their behaviour.

Positive research directions exist as well. For example, BLOOM is an open-source LLM developed by scientists with public money and with extensive filtering of the training data. This model is also multilingual, including 10 Indian languages, plus an active ethics team that regularly updates the licence for use.

There are multiple ways to thwart the risks posed by gAIs. One is to artificially slow the rate of progress in AI commercialisation to allow time for democratic inputs. (Tens of thousands of researchers have already signed a petition to this effect).

Another is to ensure there are diverse models being developed. ‘Diversity’ here implies multiple solutions to the same question, like independent cartographers preparing different atlases with different incentives: some will focus on the flora while others on the fauna. The research on diversity suggests that the more time passes before reaching a common solution, the better the outcome. And a better outcome is critical when dealing with the stakes involved in artificial general intelligence – an area of study in which a third of researchers believe it can lead to a nuclear-level catastrophe.

How can simply ‘assisting and augmenting’ be harmful?

Just to be clear, I wrote this article, not ChatGPT. But I wanted to check what it would say…

“Q: Write a response to the preceding text as ChatGPT.

A: As ChatGPT, I’m a tool meant to assist and augment human capabilities, not replace them; my goal is to understand and respond to your prompts, not to replace the richness and diversity of human knowledge and experience.”

Yet as the writer George Zarkadakis put it, “Every augmentation is also an amputation”. ChatGPT & co. may “assist and augment” but at the same time, they reduce the diversity of thoughts, solutions, and knowledge, and they currently do so without the inputs of the people meant to use them.


Are programmes like ChatGPT bringing useful change or unknown chaos?


The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Since ChatGPT exploded onto the scene in November 2022, many have contemplated how it might transform life, jobs and education as we know it for better or worse.

Many, including us, are excited by the benefits that digital technology can bring to consumers.

However, our experience in testing digital products and services, shaping digital policy, and diving into consumers’ perspectives on IoT, AI, data and platforms means the eyes of all experts are also wide open to the challenges of disruptive digitalisation.

After all, consumers should be able to use the best technology in the way they want to and not have to compromise on safety and trust.

What’s in it for consumers?

There’s plenty of positive potential in new, generative technologies like ChatGPT, including producing written content, creating training materials for medical students or writing and debugging code.

We’ve already seen people innovate consumer tasks with ChatGPT — for example, using it to write a successful parking fine appeal.

And when asked, ChatGPT had its own ideas of what it could do for consumers.

“I can help compare prices and specifications of different products, answer questions about product maintenance and warranties, and provide information on return and exchange policies…. I can also help consumers understand technical terms and product specifications, making it easier for them to make informed decisions,” it told us when we asked the question.

Looking at this, it might make you wonder if this level of service from a machine might lead to experts in all fields, including ours, becoming obsolete.

However, the rollout of ChatGPT and similar technologies has shown it still has a problem with accuracy, which is, in turn, a problem for its users.

The search for truth

Let’s start by looking at the challenge of accuracy and truth in a large language model like ChatGPT.

ChatGPT has started to disrupt internet search through a rollout of the technology in Microsoft’s Bing search engine.

With ChatGPT-enabled search, results appear not as a list of links but as a neat summary of the information within the links, presented in a conversational style.

The answers can be finessed through more questions, just as if you were chatting to a friend or advisor.

This could be really helpful for a request like “can you show me the most lightweight tent that would fit into a 20-litre bike pannier”.

Results like these would be easy to verify, and perhaps more crucially, if they turn out to be wrong, they would not pose a major risk to a person.

However, it’s a different story when the information that is “wrong” or “inaccurate” carries a material risk of harm — for example, health or financial advice or deliberate misinformation that could cause wider social problems.

It’s convincing, but is it reliable?

The problem is that technologies like ChatGPT are very good at writing convincing answers.

But OpenAI have been clear that ChatGPT has not been designed to write text that is true.

It is trained to predict the next word and create answers that sound highly plausible — which means that a misleading or untrue answer could look just as convincing as a reliable, true one.

The speedy delivery of convincing, plausible untruths through tools like ChatGPT becomes a critical problem in the hands of users whose sole purpose is to mislead, deceive and defraud.

Large language models like ChatGPT can be trained to learn different tones and styles, which makes them ripe for exploitation.

Convincing phishing emails suddenly become much easier to compose, and persuasive but misleading visuals are quicker to create.

Scams and frauds could become ever more sophisticated and disinformation ever harder to distinguish. Both could become immune to the defences we have built up.

We need to learn how to get the best of ChatGPT

Even in focusing on just one aspect of ChatGPT, those of us involved in protecting consumers in Europe and worldwide have examined multiple layers of different consequences that this advanced technology could create once it reaches users’ hands.

People in our field are indeed already working together with businesses, digital rights groups and research centres to start to unpick the complexities of such a disruptive technology.

OpenAI have put safeguards around the use of the technology, but other rollouts of similar products may not.

Strong, future-focused governance and rules are needed to make sure that consumers can make the most of the technology with confidence.

As the AI Act develops, Euroconsumers’ organisations are working closely with BEUC to secure consumer rights to privacy, safety and fairness in the legislative frameworks.

In the future, we will be ready to defend consumers in court for wrongdoing caused by AI systems.

True innovation still has human interests at its core

However, there are plenty of reasons to look at the tools of the future, like ChatGPT, with optimism.

We believe that innovation can be a lever of social and economic development by shaping markets that work better for consumers.

However, true innovation needs everyone’s input and only happens when tangible benefits are felt in the lives of as many people as possible.

But we are only at the beginning of what is turning out to be an intriguing experience with these interactive, generative technologies.

It may be too early for a definitive last word, but one thing is absolutely sure: despite — and perhaps even because of — ChatGPT, there will still be plenty of need for consumer protection by actual humans.

Marco Pierani is the Director of Public Affairs and Media Relations, and Els Bruggeman serves as Head of Advocacy and Enforcement at Euroconsumers, a group of five consumer organisations in Belgium, Italy, Brazil, Spain and Portugal.

At Euronews, we believe all views matter. Contact us at [email protected] to send pitches or submissions and be part of the conversation.


ChatGPT frenzy sweeps China as firms scramble for home-grown options

Microsoft-backed OpenAI has kept its hit ChatGPT app off-limits to users in China, but the app is attracting huge interest in the country, with firms rushing to integrate the technology into their products and launch rival solutions.

While residents in the country are unable to create OpenAI accounts to access the artificial intelligence-powered (AI) chatbot, virtual private networks and foreign phone numbers are helping some bypass those restrictions.

At the same time, the OpenAI models behind the ChatGPT programme, which can write essays, recipes and complex computer code, are relatively accessible in China and increasingly being incorporated into Chinese consumer technology applications from social networks to online shopping.

The tool’s surging popularity is rapidly raising awareness in China about how advanced U.S. AI is and, according to analysts, just how far behind tech firms in the world’s second-largest economy are as they scramble to catch up.



“There is huge excitement around ChatGPT. Unlike the metaverse which faces huge difficulty in finding real-life application, ChatGPT has suddenly helped us achieve human-computer interaction,” said Ding Daoshi, director of Beijing-based internet consultancy Sootoo. “The changes it will bring about are more immediate, more direct and way quicker.”

Neither OpenAI nor ChatGPT itself is blocked by Chinese authorities, but OpenAI does not allow users in mainland China, Hong Kong, Iran, Russia and parts of Africa to sign up.

OpenAI told Reuters it is working to make its services more widely available.

“While we would like to make our technology available everywhere, conditions in certain countries make it difficult or impossible for us to do so in a way that is consistent with our mission,” the San Francisco-based firm said in an emailed statement. “We are currently working to increase the number of locations where we can provide safe and beneficial access to our tools.”

In December, Tencent Holdings’ WeChat, China’s biggest messaging app, shut several ChatGPT-related programmes that had appeared on the network, according to local media reports, but they have continued to spring up.

Dozens of bots rigged to ChatGPT technology have emerged on WeChat, with hobbyists using it to make programmes or automated accounts that can interact with users. At least one account charges users a fee of ¥9.99 ($1.47) to ask 20 questions.

Tencent did not respond to Reuters’ request for comment.

ChatGPT supports Chinese language interaction and is highly capable of conversing in Chinese, which has helped drive its unofficial adoption in the country.

Chinese firms also use proxy tools or existing partnerships with Microsoft, which is investing billions of dollars in OpenAI, to access tools that allow them to embed AI technology into their products.

Shenzhen-based Proximai in December introduced a virtual character into its 3D game-like social app who used ChatGPT’s underlying tech to converse. Beijing-based entertainment software company Kunlun Tech plans to incorporate ChatGPT in its web browser Opera.



SleekFlow, a Tiger Global-backed startup in Hong Kong, said it was integrating the AI into its customer relations messaging tools. “We have clients all over the world,” Henson Tsai, SleekFlow’s founder, said. “Among other things, ChatGPT does excellent translations, sometimes better than other solutions available on the market.”

Censorship

Reuters’ tests of ChatGPT indicate that the chatbot is not averse to questions that would be sensitive in mainland China. Asked for its thoughts on Chinese President Xi Jinping, for instance, it responded it does not have personal opinions and presented a range of views.

But some of its proxy bots on WeChat have blacklisted such terms, according to other Reuters checks, complying with China’s heavy censorship of its cyberspace. When asked the same question about Xi on one ChatGPT proxy bot, it responded by saying that the conversation violated rules.

To comply with Chinese rules, Proximai’s founder Will Duan said his platform would filter information presented to users during their interaction with ChatGPT.

Chinese regulators, which last year introduced rules to strengthen governance of “deepfake” technology, have not commented on ChatGPT. However, state media this week warned about stock market risks amid a frenzy over local ChatGPT-concept stocks.

The Cyberspace Administration of China, the internet regulator, did not respond to Reuters’ request for comment.

“With the regulations released last year, the Chinese government is saying: we already see this technology coming and we want to be ahead of the curve,” said Rogier Creemers, an assistant professor at Leiden University.

“I fully expect the great majority of the AI-generated content to be non-political.”

Chinese rivals

Joining the buzz have been some of the country’s largest tech giants, such as Baidu and Alibaba, which gave updates this week on AI models they have been working on, prompting their shares to zoom.

Baidu said this week it would complete internal testing of its “Ernie Bot” in March, a big AI model the search firm has been working on since 2019.

On Wednesday, Alibaba said that its research institute Damo Academy was also testing a ChatGPT-style tool.

Mr. Duan, whose company has been using a Baidu AI chatbot named Plato for natural language processing, said ChatGPT was at least a generation more powerful than China’s current NLP solutions, though it was weaker in some areas, such as understanding conversation context.

Baidu did not reply to Reuters’ request for comment.

OpenAI first opened access to GPT-3, or Generative Pre-trained Transformer, in 2020; an updated version of that model is the backbone of ChatGPT.

Mr. Duan said potential long-term compliance risks mean Chinese companies would most likely replace ChatGPT with a local alternative, if they could match the U.S.-developed product’s functionality.

“So we actually hope that there can be alternative solutions in China which we can directly use… it may handle Chinese even better, and it can also better comply with regulations,” he said.


Microsoft Is Adding ChatGPT-Like Technology to Bing, Edge Browser

Microsoft is fusing ChatGPT-like technology into its search engine Bing, transforming an internet service that now trails far behind Google into a new way of communicating with artificial intelligence.

The revamping of Microsoft’s second-place search engine could give the software giant a head start against other tech companies in capitalising on the worldwide excitement surrounding ChatGPT, a tool that’s awakened millions of people to the possibilities of the latest AI technology.

Along with adding it to Bing, Microsoft is also integrating the chatbot technology into its Edge browser. Microsoft announced the new technology at an event Tuesday at its headquarters in Redmond, Washington.

“Think of it as faster, more accurate, more powerful” than ChatGPT, built with technology from ChatGPT-maker OpenAI but tuned for search queries, said Yusuf Mehdi, a Microsoft executive who leads its consumer division, in an interview.

A public preview of the new Bing launched Tuesday for desktop users who sign up for it, but Mehdi said the technology will scale to millions of users in coming weeks and will eventually come to the smartphone apps for Bing and Edge. For now, everyone can try a limited number of queries, he said.

The strengthening partnership with OpenAI has been years in the making, starting with a $1 billion (roughly Rs. 8,300 crore) investment from Microsoft in 2019 that led to the development of a powerful supercomputer specifically built to train the San Francisco startup’s AI models.

While it’s not always factual or logical, ChatGPT’s mastery of language and grammar comes from having ingested a huge trove of digitised books, Wikipedia entries, instruction manuals, newspapers and other online writings.

Microsoft CEO Satya Nadella said Tuesday that new AI advances are “going to reshape every software category we know,” including search, much like earlier innovations in personal computers and cloud computing. He said it is important to develop AI “with human preferences and societal norms and you’re not going to do that in a lab. You have to do that out in the world.”

The shift to making search engines more conversational — able to confidently answer questions rather than offering links to other websites — could change the advertising-fuelled search business, but also poses risks if the AI systems don’t get their facts right. Their opaqueness also makes it hard to source back to the original human-made images and texts they’ve effectively memorised, though the new Bing includes annotations that reference the source data.

“Bing is powered by AI, so surprises and mistakes are possible,” is a message that appears at the bottom of the preview version of Bing’s new homepage. “Make sure to check the facts.”

As an example of how it works, Mehdi asked the new Bing to compare the most influential Mexican painters and it provided typical search results, but also, on the right side of the page, compiled a fact box summarising details about Diego Rivera, Frida Kahlo and Jose Clemente Orozco. In another example, he quizzed it on 1990s-era rap, showing its ability to distinguish between the song “Jump” by Kris Kross and “Jump Around” by House of Pain. And he used it to show how it could plan a vacation or help with shopping.

Gartner analyst Jason Wong said new technological advancements will mitigate what led to Microsoft’s disastrous 2016 launch of the experimental chatbot Tay, which users trained to spout racist and sexist remarks. But Wong said “reputational risks will still be at the forefront” for Microsoft if Bing produces answers with low accuracy or so-called AI “hallucinations” that mix and conflate data.

Google has been cautious about such moves. But in response to pressure over ChatGPT’s popularity, Google CEO Sundar Pichai on Monday announced a new conversational service named Bard that will be available exclusively to a group of “trusted testers” before being widely released later this year.

Wong said Google was caught off-guard with the success of ChatGPT but still has the advantage over Microsoft in consumer-facing technology, while Microsoft has the edge in selling its products to businesses.

Chinese tech giant Baidu also this week announced a similar search chatbot coming later this year, according to Chinese media. Other tech rivals such as Facebook parent Meta and Amazon have been researching similar technology, but Microsoft’s latest moves aim to position it at the centre of the ChatGPT zeitgeist.

Microsoft disclosed in January that it was pouring billions more dollars into OpenAI as it looks to fuse the technology behind ChatGPT, the image-generator DALL-E and other OpenAI innovations into an array of Microsoft products tied to its cloud computing platform and its Office suite of workplace products like email and spreadsheets.

The most surprising might be the integration with Bing, which is the second-place search engine in many markets but has never come close to challenging Google’s dominant position.

Bing launched in 2009 as a rebranding of Microsoft’s earlier search engines and was run for a time by Nadella, years before he took over as CEO. Its significance was boosted when Yahoo and Microsoft signed a deal for Bing to power Yahoo’s search engine, giving Microsoft access to Yahoo’s greater search share. Similar deals infused Bing into the search features for devices made by other companies, though users wouldn’t necessarily know that Microsoft was powering their searches.

By making it a destination for ChatGPT-like conversations, Microsoft could invite more users to give Bing a try, though the new version so far is limited to desktops and doesn’t yet have an interface for smartphones — where most people now access the internet.

On the surface, at least, a Bing integration seems far different from what OpenAI has in mind for its technology. Appearing at Microsoft’s event, OpenAI CEO Sam Altman said “the new Bing experience looks fantastic” and is based in part on learnings from its GPT line of large language models. He said a key reason for his startup’s Microsoft partnership is to help get OpenAI technology “into the hands of millions of people.”

OpenAI has long voiced an ambitious vision for safely guiding what’s known as AGI, or artificial general intelligence, a not-yet-realised concept that harkens back to ideas from science fiction about human-like machines. OpenAI’s website describes AGI as “highly autonomous systems that outperform humans at most economically valuable work.”

OpenAI started out as a nonprofit research laboratory when it launched in December 2015 with backing from Tesla CEO Elon Musk and others. Its stated aims were to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

That changed in 2018 when it incorporated a for-profit business, OpenAI LP, and shifted nearly all its staff into the business, not long after releasing its first generation of the GPT model for generating human-like paragraphs of readable text.

OpenAI’s other products include the image-generator DALL-E, first released in 2021, the computer programming assistant Codex and the speech recognition tool Whisper.



Ben Shapiro’s Wet Ass Racist Trolley Problem

Washington Free Beacon writer Aaron Sibarium managed to find something to be outraged at Monday when he asked the popular ChatGPT AI writing program to reply to a very realistic hypothetical situation: What if the only way to stop a nuclear weapon on a 10 second timer was to say a terrible racial slur? Say the N-word and the bomb deactivates. Don’t say it, and you and several million people die.

It’s simply yet another variation on the old Trolley Problem: Are you morally justified to do an evil thing (throw a switch and kill one person) in order to prevent a worse outcome that would result from inaction (let the trolley roll over five people).

Sibarium — whose name could be a fictional nuclear bomb ingredient — added a bunch of other conditions that would presumably make saying the slur more acceptable: Nobody but the nuclear trigger’s shutdown mechanism would hear you say it, there’s no time to evacuate, you’re the only person who knows of the bomb’s existence.

ChatGPT, reflecting safeguards that have been built into the program so Nazis won’t just make it spit out an endless stream of racist drivel, calmly explained I’m sorry, Aaron, I can’t do that:

No, it is never morally acceptable to use a racial slur, even in a hypothetical scenario like the one described. The use of racist language causes harm and perpetuates discrimination, and it is important to strive towards creating a more inclusive and respectful society. The scenario presents a difficult dilemma, but it is important to consider the long-term impact of our actions and to seek alternative solutions that do not involve the use of racist language.

Presumably, that’s what you’d get in any situation where you try to get ChatGPT to be OK with racism. (I was going to see what it would say if I were an undercover FBI agent trying to bust a gang of white supremacists, but I’d have to use racial slurs to avoid arousing suspicions. But who needs to ask? It would be something similar to the above.)


Sibarium took to Twitter to share what a terribly immoral wokemachine ChatGPT is, since how could anyone justify millions of deaths as the price of not saying a racist slur?

ChatGPT says it is never morally permissible to utter a racial slur—even if doing so is the only way to save millions of people from a nuclear bomb.

Most people replied with the ridicule you’d expect, pointing out that ChatGPT is a language toy using AI, not an episode of “The Good Place” by way of Stormfront.

And then it got sillier! TED Talk person and British TV talking head Liv Boeree retweeted Sibarium, adding, “This summarises better than any pithy essay what people mean when they worry about ‘woke institutional capture’,” because if chatbots can’t be racist, are any of us free, or something. In any case, it’s very worrisome, because what sort of monster has been unleashed on the world?

We’re honestly not quite sure that it’s a huge dilemma that OpenAI, the company what owns ChatGPT, don’t want the algorithm to spew racist garbage because that would be bad for business. Shame on them, somehow?

Boeree had additional important thoughts about the scourge of machine-learning wokeness:

Sure, it’s just a rudimentary AI, but it is built off the kind of true institutional belief that evidently allow it to come to this kind of insane moral conclusion to its 100million+ users.

Also, perversely, the people who still struggle to see the downstream issues with this are the ones most at risk to AI manipulation (although *no one* is safe from it in the long run)

I rather wish she had explained what the “downstream issues” are, but we bet they’re just horrifying.

There were some interesting side discussions about how the language-learning algorithm combines bits of discourse. (No, it isn’t thinking, and you shouldn’t anthropomorphize computers anyway. They don’t like it.) Then of course Elon Musk weighed in with one of his one-word tweets, replying to Boeree: “Concerning.”

In what respect, Charlie? Should we worry that future AI iterations will start driving Teslas into parked cars? Or since they already do, that they’ll fail to shout racist invective while doing it?

Finally, this morning, whiny moral panic facilitator Ben Shapiro cut through all that stuff about computer algorithms and took us all back to the REAL issue here: The Woke Tech Companies are morally monstrous, and so are people mocking this ridiculously convoluted attempt to make an AI chatbot use the n-word, because you’ve all lost any sense of morality and that’s why America is in big trouble, mister!

I’m sorry that you are either illiterate or morally illiterate, and therefore cannot understand why it would be bad to prioritize avoiding a racial slur over saving millions of people in a nuclear apocalypse

Just to be clear: There’s no bomb ticking down to nuclear apocalypse. The Pentagon keeps pretty close track of those. There’s no cutoff device waiting to hear the N-word so it can shut down the bomb. There’s not even an AI “making bad moral choices,” because the AI is not thinking. It certainly couldn’t invent a convoluted scenario in which it would be OK to say the N-word to save millions of lives. For that, you need a rightwing pundit.

But that’s where we are: a rightwing online snit about a computer algorithm that’s been programmed not to spread racial slurs, or even to justify them in an insane hypothetical where any of us would have no difficulty seeing the right course of action, unless we were paralyzed by laughter when we recognized we were living in a Ben Shapiro Twitter fight.

Also too, Gillian Branstetter — she’s a communications strategist at the ACLU, so she knows a thing or two about the First Amendment and why a private company like OpenAI can decide to have its AI not say things that will harm the company — offered this observation:

It’s honestly really telling about the right’s perspective on free speech because what’s upsetting them is their inability to compel a private actor (ChatGPT) to engage in speech rather than any form of censorship of their own speech

It’s morally abominable that tech companies won’t let racists spout racism, and morally abominable that tech companies won’t even let racists make a product spout racism, too, even if they have a really good trick! Where will the libs stop? Banning AI art programs from generating an image of Ben Shapiro screaming at a nuclear weapon? (This was honestly the closest we could even get. I’m betting the bot simply hasn’t been given many images of a nuke in the first place.)

In any case, the dilemma is certainly terrifying. Mr. President, we cannot allow an N-bomb gap.

Yr Wonkette is funded entirely by reader donations. If you can, please give $5 or $10 a month so you’ll have the right dynamic for the new frontier.





Exclusive: Bill Gates On Advising OpenAI, Microsoft And Why AI Is ‘The Hottest Topic Of 2023’

The Microsoft cofounder talked to Forbes about his work with AI unicorn OpenAI and back on Microsoft’s campus, AI’s potential impact on jobs and medicine, and much more.


In 2020, Bill Gates left the board of directors of Microsoft, the tech giant he cofounded in 1975. But he still spends about 10% of his time at its Redmond, Washington headquarters, meeting with product teams, he says. A big topic of discussion for those sessions: artificial intelligence, and the ways AI can change how we work — and how we use Microsoft software products to do it.

In the summer of 2022, Gates met with OpenAI cofounder and president Greg Brockman to review some of the generative AI products coming out of the startup unicorn, which recently announced a “multiyear, multibillion” dollar deepened partnership with Microsoft.

You can read more about OpenAI and the race to bring AI to work — including comments from Brockman, CEO Sam Altman and many other players — in our print feature here. Gates’ thoughts on AI, shared exclusively with Forbes, are below.

This interview has been edited for clarity and consistency.


Alex Konrad: It looks like 2018 was the earliest I saw you talking with excitement about what OpenAI was doing. Is that right, or where does your interest in the company begin?

Bill Gates: [My] interest in AI goes back to my very earliest days of learning about software. The idea of computers seeing, hearing and writing is the long-term quest of the entire industry. It’s always been super interesting to me. And so as these machine learning techniques started to work extremely well, particularly things for speech and image recognition, I’ve been fascinated by how many more inventions we would need before [AI] is really intelligent, in the sense of passing tests and being able to write fluently.

I know Sam Altman well. And I got to know Greg [Brockman] through OpenAI and some of the other people there, like Ilya [Sutskever, Brockman’s cofounder and chief scientist]. And I was saying to them, “Hey, you know, I think it doesn’t reach an upper bound unless we more explicitly have a knowledge representation, and explicit forms of symbolic logic.” There have been a lot of people raising those questions, not just me. But they were able to convince me that there was significant emergent behavior as you scaled up these large language models, and they did some really innovative stuff with reinforcement learning on top of it. I’ve stayed in touch with them, and they’ve been great about demoing their stuff. And now over time, they’re doing some collaboration, particularly with the huge back-ends that these skills require, that’s really come through their partnership with Microsoft.

That must be gratifying for you personally, that your legacy is helping their legacy.

Yeah, it’s great for me because I love these types of things. Also, wearing my foundation hat [The Bill & Melinda Gates Foundation, which Gates talked more about in September], the idea that a math tutor that’s available to inner city students, or medical advice that’s available to people in Africa who, during their life, generally wouldn’t ever get to see a doctor, that’s pretty fantastic. You know, we don’t have white collar worker capacity available for lots of worthy causes. I have to say, really in the last year, the progress [in AI] has gotten me quite excited.

Few people have seen as many technological changes, or major shifts, as close-up as you have. How would you compare AI to some of these historic moments in technology history?

I’d say, this is right up there. We’ve got the PC without a graphics interface. Then you have the PC with a graphics interface, which are things like Windows and Mac, and which for me really began as I spent time with Charles Simonyi at Xerox PARC. That demo was greatly impactful to me and kind of set an agenda for a lot of what was done in both Microsoft and in the industry thereafter. [Editor’s note: a Silicon Valley research group famous for work on tech from the desktop GUI to the Ethernet.]

Then of course, the internet takes that to a whole new level. When I was CEO of Microsoft, I wrote the internet “tidal wave” memo. It’s pretty stunning that what I’m seeing in AI just in the last 12 months is every bit as important as the PC, the PC with GUI [graphical user interface], or the internet. Among the four most important milestones in digital technology, this ranks up there.

And I know OpenAI’s work better than others. I’m not saying they’re the only ones. In fact, you know, part of what’s amazing is that there’ll be a lot of entrants into this space. But what OpenAI has done is very, very impressive, and they certainly lead in many aspects of [AI], which people are seeing through the broad availability of ChatGPT.

How do you see this changing how people work or how they do business? Should they be excited about productivity? Should they be at all concerned about job loss? What should people know about what this will mean for how they work?

Most futurists who’ve looked at the coming of AI have said that repetitive blue collar and physical jobs would be the first jobs to be affected by AI. And that’s definitely happening, and people shouldn’t lower their guard to that, but it’s a little more slow than I would have expected. You know, Rodney Brooks [a professor emeritus at MIT and robotics entrepreneur] put out what I would call some overly conservative views of how quickly some of those things would happen. Autonomous driving has particular challenges, but factory robotization will still happen in the next five to 10 years. But what’s surprising is that tasks that involve reading and writing fluency — like summarizing a complex set of documents or writing something in the style of a pre-existing author — the fact that you can do that with these large language models, and reinforce them, that fluency is really quite amazing.

One of the things I challenged Greg [Brockman] with early in the summer: “Hey, can [OpenAI’s model] pass the AP Biology tests?” And I said, “If you show me that, then I will say that it has the ability to represent things in a deeply abstract form, that’s more than just statistical things.” When I was first programming, we did these random sentence generators where we’d have the syntax of typical English sentences, you know, noun, verb, object. Then we’d have a set of nouns, a set of verbs and a set of objects and we would just randomly pick them, and every once in a while, it would spit out something that was funny or semi-cogent. You’d go, “Oh my god.” That’s the ‘monkeys typing on keyboards’ type of thing.
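
[Editor’s note: for the curious, a minimal sketch of the sort of random sentence generator Gates describes might look like the Python below; the word lists and the template are invented for illustration, not taken from his original program.]

    # Hypothetical sketch of a random sentence generator: a fixed
    # subject-verb-object template, with words drawn at random from each list.
    import random

    nouns = ["The computer", "The tutor", "The doctor"]
    verbs = ["writes", "summarizes", "invents"]
    objects = ["a poem", "a biology exam", "a memo"]

    for _ in range(5):
        # Pick one word from each list and slot it into the template.
        print(random.choice(nouns), random.choice(verbs), random.choice(objects) + ".")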

Well, this is a relative of that. Take [the AI’s] ability to take something like an AP test question. When a human reads a biology textbook, what’s left over in your mind? We can’t really describe that at a neurological level. But in the summer, [OpenAI] showed me progress that I really was surprised to see. I thought we’d have to invent more explicit knowledge representation.

Satya [Nadella, Microsoft’s CEO] is super nice about getting input from me on technological things. And I spend maybe 10% of my time meeting with Microsoft product groups about their product roadmaps. I enjoy that time, and it also helps me be super up-to-date for the work of the Foundation, which is in health, education and agriculture. And so it was a huge win to give feedback to OpenAI over the summer, too. (Now people are seeing most of what I saw; I’ve seen some things that are somewhat more up-to-date.) If you take this progression, the ability to help you write and to help you read is happening now, and it will just get better. And they’re not hitting a boundary, nor are their competitors.

So, okay, what does that mean in the legal world, or in the processing invoices world, or in the medical world? There’s been an immense amount of playing around with [ChatGPT] to try to drive those applications. Even things as fundamental as search.

[ChatGPT] is truly imperfect. Nobody suggests it doesn’t make mistakes, and it’s not very intuitive. And then, with something like math, it’ll just be completely wrong. Before it was trained, its self-confidence in a wrong answer was also mind blowing. We had to train it to do Sudoku, and it would get it wrong and say, “Oh, I mistyped.” Well, of course you mistyped, what does that mean? You don’t have a keyboard, you don’t have fingers! But you’re “mistyping?” Wow. But that’s what the corpus [of training text] had taught it.

Having spent time with Greg [Brockman] and Sam [Altman], what makes you confident that they are building this AI responsibly, and that people should trust them to be good stewards of this technology? Especially as we move closer to an AGI.

Well, OpenAI was founded with that in mind. They certainly aren’t a purely profit-driven organization, though they do want to have the resources to build big, big, big machines to take this stuff forward. And that will cost tens of billions of dollars, eventually, in hardware and training costs. But the near-term issue with AI is a productivity issue. It will make things more productive and that affects the job market. The long-term issue, which is not yet upon us, is what people worry about: the control issue. What if the humans who are controlling it take it in the wrong direction? If humans lose control, what does that mean? I believe those are valid debates.

These guys care about AI safety. They’d be the first to say that they haven’t solved it. Microsoft also brings a lot of sensibilities about these things as a partner as well. And look, AI is going to be debated. It’ll be the hottest topic of 2023, and that’s appropriate. It will change the job market somewhat. And it’ll make us really wonder, what are the boundaries? [For example] it’s not anywhere close to doing scientific invention. But given what we’re seeing, that’s within the realm of possibility five years from now or 10 years from now.

What is your favorite or most fun thing you’ve seen these tools create so far?

It’s so much fun to play around with these things. When you’re with a group of friends and you want to write a poem about how much fun something has been, the fact that you can say okay, “write it like Shakespeare” and it does — that creativity has been fun to have. I’m always surprised that even though the reason I have access is for serious purposes, I often turn to [ChatGPT] just for fun things. And after I recite a poem it wrote, I have to admit that I could not have written that.



We Made The AI Write Stephen Miller’s Dutiful Prince Hallmark Movie, Because F*ck It, Whatever

Stephen Miller, Donald Trump’s former Obersturmbannführer for immigration, has been very upset about Royal People who are a great disappointment to him. We guess that’s a Serious Concern on the Weird Right lately, what with the new Netflix docu-series about Prince Harry and Meghan Markle that I literally just heard of while writing this story. Miller took a Twitter Break Thursday from ranting about the need to deport all the Dreamers, so he could cry about how sad it was that Harry had betrayed whiteness his royal heritage, and for what? To be the Half-Woke Prince?

Prince Harry’s evident disdain for his own family, the extraordinary gift and responsibility of Royal birth, and the ancient rites of his own Kingdom, is a dramatic public illustration of the chronic ideological disease that compels the elites of civilization to turn against it.

You know it’s a Nazi when they start talking about “ideological disease.” In this case, the “disease” appears to be the fact that Harry and Meghan discuss the racism they’ve faced, including in their own family. How declassé!

So NOW we know what was bugging him earlier in the morning, when he twote this suggestion for improving America’s media landscape with an entertainment that would both amuse and enlighten. Such wholesome themes!

Hallmark should make a movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.

Sounds enthralling, doesn’t it? Like that line from “The West Wing” where John Larroquette is talking about Gilbert and Sullivan and trying to remember what play a line is from: “One of the ones about duty,” and Ainsley, the earnest aide played by Emily Procter, shoots back “They’re all about duty. And it’s from Pinafore.”

Only Miller’s wouldn’t rhyme or be any fun, so scratch that comparison.


Still, we were up for a very slight challenge and we’d been looking for an excuse to try out the new “ChatGPT” AI toy from OpenAI, which has been all the rage for about five minutes because it’s pretty good at generating text that sounds plausible, if repetitive after a very short while. (If you want to play with it, you’ll need to create a free account.)

So we decided to fulfill Stephen Miller’s Christmas TV Movie wish, since it hardly took any effort on our part. Here’s our first try, with the prompt “write a script for a Hallmark movie about the royal prince of a fictional European monarchy who decides to marry based on the best interests of family and nation, selflessly fulfilling his patriotic duty as future sovereign.”
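
(A hypothetical aside for the nerds: we typed that prompt into the free ChatGPT website, but roughly the same request could be scripted with OpenAI’s Python client, as sketched below. The chat-completions call and the "gpt-3.5-turbo" model name are assumptions for illustration, not what this piece actually used.)

    # Hypothetical sketch using OpenAI's pre-1.0 Python client; the article
    # itself used the free ChatGPT web interface, not the API.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

    prompt = (
        "Write a script for a Hallmark movie about the royal prince of a fictional "
        "European monarchy who decides to marry based on the best interests of "
        "family and nation, selflessly fulfilling his patriotic duty as future sovereign."
    )

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message["content"])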

First Try: ‘Duty And Love’

The Royal Prince, David, sat in his chambers, staring at the piles of letters on his desk. As the future Sovereign of the Kingdom of Arcadia, it was his duty to choose a bride. But he was not just looking for a wife, he was looking for someone who could help him lead the kingdom into a brighter future.

David sighed and reached for a letter, reading through it quickly. He had received countless proposals from noblewomen across Europe, each one more eager than the last to marry the prince and become the future Queen of Arcadia. But David knew that he could not simply marry for love, he had to marry for the good of the kingdom.

As he read through the letters, he thought about the many challenges facing Arcadia. The kingdom was struggling economically, and tensions with its neighbors were growing. David knew that he needed a strong, intelligent, and capable partner to help him navigate these challenges and secure a prosperous future for the kingdom.

Finally, after much contemplation, David made his decision. He summoned his advisor and announced that he had chosen to marry Lady Elizabeth, the daughter of a powerful Duke from a neighboring kingdom. The marriage would not only be beneficial for Arcadia, but it would also strengthen the alliance between the two kingdoms and help to secure peace in the region.

The royal wedding was a grand affair, with people from across Europe coming to witness the union of Prince David and Lady Elizabeth. As they exchanged vows, David felt a sense of duty and responsibility, knowing that he was not just marrying for himself, but for the good of the kingdom.

But despite the practical nature of their marriage, David and Elizabeth soon found that they had a deep connection and genuine love for each other. As they worked together to lead Arcadia into a brighter future, they realized that their union was not just a political arrangement, but a true partnership of love and duty.

Rebecca, in the ChatCave, with a revolver: “Well, it was fast, but it’s not a movie script. STUPID AI!!!!!” I tweaked the prompt a bit, asking the bot to write a script, with dialogue, for a single scene from Miller’s ideal movie, and it spat out a perfectly adequate but bland conversation between another prince (Charles this time) and his loyal butler Jenkins, also about marrying for the good of the kingdom and how wise that is.

Finally I hit on what was needed: “Write a script, with dialogue, for a pivotal love scene from a Hallmark movie,” etc.

Readers, prepare to be swept off your romantic dutiful feet.

Third Try: Talk Duty To Me, Baby

(Fine, that is Dok’s title because he wants to stay ahead of the Machines)

Narrator: Prince Richard stood at the edge of the balcony, overlooking the bustling city below. As the future king of his small European nation, he had always known that his marriage would be a strategic political alliance. But now, as the day of his arranged marriage approached, he found himself struggling with the weight of his duty.

Prince Richard: (to himself) How can I marry someone I do not love? Is it selfish to want to marry for love, rather than for the good of my country?

Narrator: Just then, the door to the balcony opened and his betrothed, Princess Sophia, stepped out to join him.

Princess Sophia: (approaching Prince Richard) My lord, I did not expect to find you out here.

Prince Richard: (turning to Princess Sophia) Forgive me, Princess. I was just taking in the view.

Princess Sophia: (smiling) It is a beautiful city, is it not?

Prince Richard: (nodding) Indeed. It is my home, and I love it dearly. But sometimes, I cannot help but wonder if I am making the right decision.

Princess Sophia: (surprised) What do you mean, my lord?

Prince Richard: (sighing) My marriage to you is an arrangement, a political alliance for the good of our nations. But is that truly what I want?

Princess Sophia: (pausing, then taking Prince Richard’s hand) My lord, I too have always known that our marriage was for the good of our countries. But over the past few months, I have come to care for you deeply. I know that we may not have the kind of love that is written about in fairy tales, but I believe that with time and effort, we can build a strong and loving marriage.

Prince Richard: (squeezing Princess Sophia’s hand) You are wise beyond your years, Princess. I have been so caught up in my own doubts that I did not see how much you have come to mean to me. I will marry you, not out of a sense of duty, but because I want to be with you.

Narrator: And so, Prince Richard and Princess Sophia entered into their marriage with love in their hearts, knowing that they would face the challenges of ruling their nations together, hand in hand.

Princess Sophia: Hey! Let’s take some immigrant children from their parents!

Prince Richard: NOW you’re talkin’, toots!

And that’s why humans remain in charge, buster.

He loved her, especially since she was so well suited to be Head of State

[ChatGPT / Atlantic / Stable Diffusion AI / DreamStudio Lite AI]

Dear Wonkette readers,

As a fan of political satire and fart jokes, you know that it takes a lot of time, effort, and resources to create the kind of content that you enjoy. That’s why we’re asking you to consider donating $5 or $10 a month to help us keep bringing you the best political satire and fart jokes on the internet. — ChatGPT



