
This week in AI: AI-powered personalities are all the rage


Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.

Last week during its annual Connect conference, Meta launched a host of new AI-powered chatbots across its messaging apps — WhatsApp, Messenger and Instagram DMs. Available for select users in the U.S., the bots are tuned to channel certain personalities and mimic celebrities including Kendall Jenner, Dwyane Wade, MrBeast, Paris Hilton, Charli D’Amelio and Snoop Dogg.

The bots are Meta’s latest bid to boost engagement across its family of platforms, particularly among a younger demographic. (According to a 2022 Pew Research Center survey, only about 32% of teens aged 13 to 17 say that they ever use Facebook, down from 71% in the center’s 2014-15 survey.) But the AI-powered personalities are also a reflection of a broader trend: the growing popularity of “character-driven” AI.

Consider Character.AI, which offers customizable AI companions with distinct personalities, like Charli D’Amelio as a dance enthusiast or Chris Paul as a pro golfer. This summer, Character.AI’s mobile app pulled in over 1.7 million new installs in less than a week while its web app was topping 200 million visits per month. Moreover, Character.AI claimed that, as of May, users were spending an average of 29 minutes per visit — a figure the company said eclipsed ChatGPT’s by 300% at a time when ChatGPT usage was declining.

That virality attracted backers including Andreessen Horowitz, which poured well over $100 million in venture capital into Character.AI, last valued at $1 billion.

Elsewhere, there’s Replika, the controversial AI chatbot platform, which in March had around 2 million users — 250,000 of whom were paying subscribers.

That’s not to mention Inworld, another AI-driven character success story, which is developing a platform for creating more dynamic NPCs in video games and other interactive experiences. To date, Inworld hasn’t shared much in the way of usage metrics. But the promise of more expressive, organic characters, driven by AI, has landed Inworld investments from Disney and grants from Fortnite and Unreal Engine developer Epic Games.

So clearly, there’s something to AI-powered chatbots with personalities. But what is it?

I’d wager that chatbots like ChatGPT and Claude, while undeniably useful in decidedly professional contexts, don’t hold the same allure as “characters.” They’re not as interesting, frankly — and it’s no surprise. General-purpose chatbots were designed to complete specific tasks, not hold an enlivening conversation.

But the question is, will AI-powered characters have staying power? Meta’s certainly hoping so, considering the resources it’s pouring into its new bot collection. I’m not sure myself — as with most tech, there’s a decent chance the novelty will wear off eventually. And then it’ll be on to the next big thing — whatever that ends up being.

Here are some other AI stories of note from the past few days:

  • Spotify tests AI-generated playlists: References discovered in the Spotify app’s code indicate the company may be developing generative AI playlists users could create using prompts, Sarah reports.
  • How much are artists making from generative AI? Who knows? Some generative AI vendors, like Adobe, have established funds and revenue sharing agreements to pay artists and other contributors to the data sets used to train their generative AI models. But it’s not clear how much these artists can actually earn, TC learned.
  • Google expands AI-powered search: Google opened up its generative AI search experience to teenagers and introduced a new feature to add context to the content that users see, along with an update to help train the search experience’s AI model to better detect false or offensive queries.
  • Amazon launches Bedrock in GA, brings CodeWhisperer to the enterprise: Amazon announced the general availability of Bedrock, its managed platform that offers a choice of generative AI models from Amazon itself and third-party partners through an API. The company also launched an enterprise tier for CodeWhisperer, Amazon’s AI-powered service to generate and suggest code.
  • OpenAI entertains hardware: The Information reports that storied former Apple product designer Jony Ive is in talks with OpenAI CEO Sam Altman about a mysterious AI hardware project. In the meantime, OpenAI — which is planning to soon release a more powerful version of its GPT-4 model with image analysis capabilities — could see its secondary-market valuation soar to $90 billion.
  • ChatGPT gains a voice: In other OpenAI news, ChatGPT evolved into much more than a text-based chatbot, with OpenAI announcing recently that it’s adding new voice and image-based smarts to the mix.
  • The writers’ strike and AI: After almost five months, the Writers Guild of America reached an agreement with Hollywood studios to end the writers’ strike. During the historic strike, AI emerged as a key point of contention between the writers and studios. Amanda breaks down the relevant new contract provisions.
  • Getty Images launches an image generator: Getty Images, one of the largest suppliers of stock images, editorial photos, videos and music, launched a generative AI art tool that it claims is “commercially safer” than other, rival solutions on the market. Prior to the launch of its own tool, Getty had been a vocal critic of generative AI products like Stable Diffusion, which was trained on a subset of its image content library.
  • Adobe brings gen AI to the web: Adobe officially launched Photoshop for the web for all users with paid plans. The web version, which was in beta for almost two years, is now available with Firefly-powered AI tools such as generative fill and generative expand.
  • Amazon to invest billions in Anthropic: Amazon has agreed to invest up to $4 billion in the AI startup Anthropic, the two firms said, as the e-commerce group steps up its rivalry against Microsoft, Meta, Google and Nvidia in the fast-growing AI sector.

More machine learnings

When I was talking with Anthropic CEO Dario Amodei about the capabilities of AI, he seemed to think there were no hard limits that we know of — not that there are none whatsoever, but that he had yet to encounter a (reasonable) problem that LLMs were unable to at least make a respectable effort at. Is it optimism or does he know of what he speaks? Only time will tell.

In the meantime, there’s still plenty of research going on. This project from the University of Edinburgh takes neural networks back to their roots: neurons. Not the complex, subtle neural complexes of humans, but the simpler (yet highly effective) ones of insects.

From the paper, a diagram showing views of the robot and some of its vision system data.

Ants and other small bugs are remarkably good at navigating complex environments, despite their more rudimentary vision and memory capabilities. The team built a digital network based on observed insect neural circuits, and found that it was able to successfully navigate a small robot visually with very little in the way of resources. In time, systems where power and size are particularly limited may be able to use the method. There’s always something to learn from nature!
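To make that concrete, here is a minimal, illustrative sketch of the general idea behind insect-style visual navigation: store coarse snapshots of the view along a training route, then steer toward whichever heading looks most familiar. This is a toy in the spirit of such work, not the Edinburgh team’s actual model; all names and parameters below are hypothetical.

```python
# Toy snapshot-based "visual familiarity" navigation, inspired by insect models.
# All parameters are illustrative.
import numpy as np

def preprocess(view: np.ndarray, size: int = 64) -> np.ndarray:
    """Downsample a 1-D panoramic brightness profile to a tiny vector,
    mimicking an insect's very low-resolution vision."""
    bins = np.array_split(view, size)
    v = np.array([b.mean() for b in bins])
    return (v - v.mean()) / (v.std() + 1e-8)  # normalize against lighting changes

class RouteMemory:
    """Stores coarse views seen while following a training route."""
    def __init__(self):
        self.snapshots = []

    def learn(self, view: np.ndarray) -> None:
        self.snapshots.append(preprocess(view))

    def familiarity(self, view: np.ndarray) -> float:
        """Higher = closer to some remembered view (cheap nearest-match)."""
        v = preprocess(view)
        return max(-np.sum((s - v) ** 2) for s in self.snapshots)

def steer(memory: RouteMemory, panorama: np.ndarray, headings: np.ndarray) -> float:
    """Scan candidate headings by rotating the panorama; pick the most familiar."""
    n = len(panorama)
    scores = [memory.familiarity(np.roll(panorama, int(h / 360 * n))) for h in headings]
    return float(headings[int(np.argmax(scores))])

# Toy usage: memorize one view, then recover the heading after the robot drifts.
rng = np.random.default_rng(0)
world = rng.random(3600)                 # fake 360-degree brightness panorama
memory = RouteMemory()
memory.learn(world)                      # "training walk" stores the on-route view
drifted = np.roll(world, 300)            # robot now faces ~30 degrees off-route
print(steer(memory, drifted, np.arange(-90, 91, 5)))  # prints -30.0
```

The entire “memory” here is a handful of 64-element vectors, which hints at why such schemes appeal for robots with tight power and size budgets.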

Color science is another space where humans lead machines, more or less by definition: we are constantly striving to replicate what we see with better fidelity, but sometimes that fails in ways that in retrospect seem predictable. Skin tone, for example, is imperfectly captured by systems designed around light skin — especially when ML systems with biased training sets come into play. If an imaging system doesn’t understand skin color, it can’t set exposure and color properly.

Images from Sony research on more inclusive skin color estimation.

Sony is aiming to improve these systems with a new metric for skin color that defines it more comprehensively, yet still efficiently, using a hue scale as well as perceived lightness. In the process, the researchers showed that bias in existing systems extends not just to lightness but to skin hue as well.
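For a sense of what a two-dimensional description of skin color looks like in practice, here is a small sketch computing the standard colorimetric ingredients, CIELAB lightness (L*) and a hue angle, from an sRGB pixel. This is the general recipe rather than Sony’s exact method, and the sample pixel values are invented.

```python
# Describe a skin pixel by perceptual lightness plus hue angle, not lightness alone.
import numpy as np

def srgb_to_lab(rgb):
    """Convert an sRGB triplet in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB gamma curve
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (standard sRGB/D65 matrix)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = m @ lin
    xyz /= np.array([0.95047, 1.0, 1.08883])  # normalize by D65 reference white
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])  # L*, a*, b*

def skin_descriptor(rgb):
    """Return (lightness L*, hue angle in degrees) for a pixel."""
    L, a, b = srgb_to_lab(rgb)
    return L, float(np.degrees(np.arctan2(b, a)))

print(skin_descriptor([0.87, 0.72, 0.53]))  # lighter tone: high L*
print(skin_descriptor([0.45, 0.31, 0.21]))  # darker tone: lower L*, its own hue
```

Two pixels with the same L* can still have quite different hue angles, which is exactly the kind of difference a lightness-only scale misses.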

Speaking of fixing photos, Google has a new technique almost certainly destined (in some refined form) for its Pixel devices, which are heavy on the computational photography. RealFill is a generative plug-in that can fill in an image with “what should have been there.” For instance, if your best shot of a birthday party happens to crop out the balloons, you give the system the good shot plus some others from the same scene. It figures out that there “should” be some balloons at the top of the strings and adds them in using information from the other pictures.

Image Credits: Google

It’s far from perfect (they’re still hallucinations, just well informed hallucinations) but used judiciously it could be a really helpful tool. Is it still a “real” photo though? Well, let’s not get into that just now.
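For intuition, here is a rough sketch of that workflow built from off-the-shelf inpainting in Hugging Face’s diffusers library. Per the paper, RealFill’s key step is briefly fine-tuning the model on the reference shots so the fill is grounded in the real scene; that step is only flagged in a comment here, and the file names and prompt are illustrative.

```python
# Rough, RealFill-flavored workflow using a stock inpainting pipeline.
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image
import torch

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

target = Image.open("party_best_shot.png").convert("RGB")  # the shot you like
mask = Image.open("missing_region_mask.png").convert("L")  # white = region to fill

# RealFill's crucial (omitted) step: briefly fine-tune the model on other photos
# of the same scene, so the generated fill reflects what was actually there.
result = pipe(
    prompt="balloons tied to strings at a birthday party",
    image=target,
    mask_image=mask,
).images[0]
result.save("party_filled.png")
```

Without the fine-tuning step this is ordinary generative fill; the reference conditioning is what separates an informed hallucination from a generic one.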

Lastly, machine learning models may prove more accurate than humans in predicting the number of aftershocks following a big earthquake. To be clear (as the researchers emphasize), this isn’t about “predicting” earthquakes, but characterizing them accurately when they happen so that you can tell whether that 5.8 is the type that leads to three more minor quakes within an hour, or only one more after 20 minutes. The latest models are still only decent at it, and only under specific circumstances — but they aren’t wrong, and they can work through large amounts of data quickly. In time these models may help seismologists better predict quakes and aftershocks, but as the scientists note, it’s far more important to be prepared; after all, even knowing one is coming doesn’t stop it from happening.
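For context, the classical baseline such models compete with is the Omori-Utsu law, which says the aftershock rate decays as a power law, n(t) = K / (t + c)^p. A minimal sketch with illustrative, unfitted parameters:

```python
# Expected aftershock counts under the Omori-Utsu decay law (toy parameters).
from scipy.integrate import quad

def omori_rate(t, K=50.0, c=0.1, p=1.1):
    """Aftershocks per day, t days after the mainshock."""
    return K / (t + c) ** p

first_hour, _ = quad(omori_rate, 0, 1 / 24)  # integrate the rate over the window
first_day, _ = quad(omori_rate, 0, 1)
print(f"expected aftershocks in the first hour: {first_hour:.1f}")
print(f"expected aftershocks in the first day:  {first_day:.1f}")
```

The newer models aim to read more out of the data than a three-parameter fit like this can carry.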




Greenhouse Gases Are Altering Oceans ‘Before Our Eyes,’ Says NASA




NASA has shared a stunning yet concerning visualisation of sea surface currents and how they are being altered by global warming. The visualisation depicts the average temperatures of ocean currents and how they vary from one location to another.

The warmer hues such as red, orange, and yellow indicate higher temperatures, and cooler shades like green and blue represent lower temperatures.

“With 70% of the planet covered by water, the seas are important drivers of Earth’s global climate. Yet, increasing greenhouse gases from human activities are altering the ocean before our eyes,” the agency captioned the post.

According to NASA, 90 percent of the planet’s warming occurs within the ocean. Since modern recordkeeping began in 1955, the internal heat of the ocean has steadily increased, contributing significantly to climate change.


The heat stored in the ocean leads to thermal expansion, a process where water expands as it warms. This phenomenon is a major contributor to global sea level rise, accounting for one-third to one-half of the increase.

Scientists say the majority of this heat is concentrated at the surface, within the top 700 meters of the ocean. According to existing records, the past decade has been the warmest for the ocean since at least the 1800s, with 2023 marking the highest recorded ocean temperatures to date.
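As a back-of-the-envelope illustration of how thermal expansion adds up, suppose (as assumed round numbers, not NASA figures) that this top 700-meter layer warms by half a degree Celsius and that seawater expands by roughly 2e-4 per degree:

```python
# Crude estimate of sea level rise from thermal expansion of the upper ocean.
layer_depth_m = 700.0  # heat is concentrated in roughly the top 700 m
warming_c = 0.5        # assumed average warming of that layer (illustrative)
alpha = 2e-4           # approximate volumetric expansion of seawater, per deg C

# With a fixed surface area, a warming column of height h rises by ~alpha * h * dT.
rise_m = alpha * layer_depth_m * warming_c
print(f"rise from expansion of this layer alone: {rise_m * 100:.0f} cm")  # ~7 cm
```

Even a fraction of a degree across such a deep layer translates into centimeters of sea level, which is why expansion accounts for so large a share of the observed rise.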


The warming of the ocean has far-reaching effects. One of the most visible impacts is the rise in sea levels, primarily due to thermal expansion. Warmer waters have also led to widespread coral bleaching, which damages marine ecosystems, and the increased temperatures accelerate the melting of Earth’s major ice sheets.

NASA also says that the warming ocean intensifies hurricanes and affects ocean health and biochemistry, altering marine habitats and disrupting food chains.






NASA Shares Incredible Picture Of ‘Space Potato’ Phobos; It Will Soon Crash Into Mars



Ever seen a space potato? NASA is here to treat you to one. The agency has shared a fascinating image of Phobos, the larger of Mars’ two moons, explaining what makes this object so intriguing.

Measuring just 27 by 22 by 18 kilometres, Phobos orbits about 6,000 km above the red planet’s surface, and it is on a collision course with Mars.

This is the closest any known moon orbits its planet, and scientists estimate that Phobos is likely to crash into Mars within 50 million years. The other likely scenario for Phobos’ end is that it breaks into pieces, eventually forming a ring around Mars.

According to NASA, Phobos is nearing Mars at a rate of about six feet (1.8 metres) every hundred years.


Phobos (left) and Deimos (right). Image: NASA

Describing the image, the agency said that it was taken by the High Resolution Imaging Science Experiment (HiRISE) camera on the Mars Reconnaissance Orbiter, which has been studying Mars since 2006.

Phobos and its twin were discovered just six days apart by astronomer Asaph Hall in 1877.


The moon also has several craters, the most dominant being the roughly 10-km-wide Stickney crater, which Hall named after his wife, Angeline Stickney.

The second moon, Deimos, measures 15 by 12 by 11 kilometres and orbits the red planet every 30 hours. Both moons are named after the mythological sons of Ares, the Greek counterpart of the Roman god Mars. Phobos means fear and Deimos means dread, says NASA. As for their origin, astronomers believe they could be asteroids or debris caught by Mars in the early solar system.
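Those orbital periods are easy to sanity-check with Kepler’s third law, T = 2π√(a³/GM). A quick sketch using standard approximate values for Mars and the two moons’ orbits:

```python
# Orbital periods of Phobos and Deimos from Kepler's third law.
from math import pi, sqrt

GM_MARS = 4.2828e13   # Mars's gravitational parameter, m^3/s^2
A_PHOBOS = 9.376e6    # Phobos's semi-major axis in m (~6,000 km above the surface)
A_DEIMOS = 2.3463e7   # Deimos's semi-major axis in m

def period_hours(a_m: float) -> float:
    return 2 * pi * sqrt(a_m ** 3 / GM_MARS) / 3600

print(f"Phobos: ~{period_hours(A_PHOBOS):.1f} h")  # ~7.7 h: several orbits per Martian day
print(f"Deimos: ~{period_hours(A_DEIMOS):.1f} h")  # ~30 hours, matching the figure above
```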

(Image: NASA)






Rare ‘Gigantic Jets’ Spotted Above The Himalayas, NASA Shares Viral Picture



NASA recently shared a captivating image of gigantic jets soaring upward from a thunderstorm over the Himalayan Mountains, between China and Bhutan. This composite image, featured in NASA’s Astronomy Picture of the Day segment on June 18, reveals four immense jets captured within minutes of each other.

Gigantic jets are a rare and fascinating type of lightning discharge that have only been documented since the early 2000s. Unlike conventional lightning that occurs between clouds or strikes the ground, gigantic jets bridge the gap between thunderstorms and the Earth’s ionosphere, the layer of the atmosphere that is ionised by solar and cosmic radiation, NASA said.

Jets of lightning spotted over the Himalayas. Image: NASA/Li Xuanhua

These jets are unique in their appearance and behavior, differing significantly from traditional lightning phenomena.


Despite their visual grandeur, the precise mechanisms and triggers behind gigantic jets are still under investigation. What is known is that these jets help to balance electrical charges between different layers of the Earth’s atmosphere, playing a crucial role in maintaining the atmospheric electrical circuit.

For those interested in observing this phenomenon, a powerful but distant thunderstorm viewed from a clear vantage point offers the best chance.

As these jets typically shoot upwards from the storm tops into the ionosphere, they can often be seen from hundreds of kilometers away under the right conditions.
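Simple line-of-sight geometry explains that range: an object at altitude h drops below the horizon only at a distance of roughly √(2Rh), where R is Earth’s radius. A quick sketch with illustrative altitudes:

```python
# How far away a luminous feature at a given altitude can still be seen.
from math import sqrt

R_EARTH_KM = 6371.0

def horizon_km(height_km: float) -> float:
    """Distance at which a point at this altitude sits on the observer's horizon."""
    return sqrt(2 * R_EARTH_KM * height_km)

for top_km in (20, 50, 90):  # storm top, mid-jet, near the ionosphere (illustrative)
    print(f"feature at {top_km:2d} km altitude -> visible from ~{horizon_km(top_km):.0f} km")
```

A jet top near the ionosphere stays above the horizon for observers up to roughly a thousand kilometers away, comfortably consistent with sightings from hundreds of kilometers.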



